| Field | Type | Length (min–max) |
|---|---|---|
| id | string | 10–10 |
| title | string | 7–231 |
| abstract | string | 3–2.43k |
| authors | string | 5–21.5k |
| published_date | string | 20–20 |
| link | string | 33–34 |
| markdown | string | 133–1.92M |
2305.01655
Predicting blood pressure under circumstances of missing data: An analysis of missing data patterns and imputation methods using NHANES
The World Health Organization defines cardiovascular disease (CVD) as "a group of disorders of the heart and blood vessels," including coronary heart disease and stroke (WHO 2021). CVD is affected by "intermediate risk factors" such as raised blood pressure, raised blood glucose, raised blood lipids, and obesity. These are predominantly influenced by lifestyle and behaviour, including physical inactivity, unhealthy diets, high intake of salt, and tobacco and alcohol use. However, genetics and social/environmental factors such as poverty, stress, and racism also play an important role. Researchers studying the behavioural and environmental factors associated with these "intermediate risk factors" need access to high-quality and detailed information on diet and physical activity. However, missing data are a pervasive problem in clinical and public health research, affecting both randomized trials and observational studies. Reasons for missing data can vary substantially across studies: loss to follow-up, missed study visits, refusal to answer survey questions, or an unrecorded measurement during an office visit. One method of handling missing values is to simply delete observations for which there is missingness (called Complete Case Analysis). This is rarely used, as deleting the data points containing missing data (listwise deletion) results in a smaller number of samples and thus affects accuracy. Additional methods of handling missing data exist, such as summarizing each variable with its observed values (Available Case Analysis). Motivated by the pervasiveness of missing data in the NHANES dataset, we will conduct an analysis of imputation methods under different simulated patterns of missing data. We will then apply these imputation methods to create a complete dataset upon which we can use ordinary least squares to predict blood pressure from diet and physical activity.
Harish Chauhan, Nikunj Gupta, Zoe Haskell-Craig
2023-05-01T18:15:44Z
http://arxiv.org/abs/2305.01655v1
Predicting blood pressure under circumstances of missing data: An analysis of missing data patterns and imputation methods using NHANES

###### Abstract

The World Health Organization defines cardiovascular disease (CVD) as "a group of disorders of the heart and blood vessels," including coronary heart disease and stroke (World Health Organization 2021). CVD is affected by "intermediate risk factors" such as raised blood pressure, raised blood glucose, raised blood lipids, and obesity. These are predominantly influenced by lifestyle and behaviour, including physical inactivity, unhealthy diets, high intake of salt, and tobacco and alcohol use. However, genetics and social/environmental factors such as poverty, stress, and racism also play an important role. Researchers studying the behavioural and environmental factors associated with these "intermediate risk factors" need access to high-quality and detailed information on diet and physical activity. However, missing data are a pervasive problem in clinical and public health research, affecting both randomized trials and observational studies. Reasons for missing data can vary substantially across studies: loss to follow-up, missed study visits, refusal to answer survey questions, or an unrecorded measurement during an office visit. One method of handling missing values is to simply delete observations for which there is missingness (called Complete Case Analysis). This is rarely used, as deleting the data points containing missing data (listwise deletion) results in a smaller number of samples and thus affects accuracy. Additional methods of handling missing data exist, such as summarizing each variable with its observed values (Available Case Analysis). Motivated by the pervasiveness of missing data in the NHANES dataset, we will conduct an analysis of imputation methods under different simulated patterns of missing data. We will then apply these imputation methods to create a complete dataset upon which we can use ordinary least squares to predict blood pressure from diet and physical activity.

## Methodology

### Data

We used data from the 2013-2014 National Health and Nutrition Examination Survey (NHANES), a study that examined a nationally representative sample of the US population. Health interviews and survey questionnaires were administered to participants of all ages, alongside laboratory tests and a physical examination (Centers for Disease Control and Prevention and National Center for Health Statistics 2017). Data collected included demographic data (age, gender, race and ethnicity, education level, citizenship status, household size and income), measures of body proportions and weight, pulse and blood pressure, history of diabetes, dietary practices (such as use of table salt), measures of caloric and macro-nutrient intake (dietary fiber, fat, etc.), type and duration of physical activity, and blood cholesterol levels. The final sample size for 2013-2014 was 10,175. After limiting the sample to participants aged 20 or older (as some tests were only performed on adults), we had n = 5769 observations. After limiting participants to those without missing data for day 1 systolic blood pressure, we had 5111 rows. High low-density lipoprotein (LDL) cholesterol is considered especially dangerous for CVD as it can build up in blood vessels, causing blockages. We categorized participants as having high LDL if they had more than 160 mg/dL (milligrams per deciliter) in their blood (Goldman, R. & Clark, C.
2021). Exploratory data analysis was conducted by examining pairwise Pearson's correlation coefficients between variables. Additionally, patterns of missingness were plotted.

### Missing Data Mechanisms

Restricting our analysis to 16 variables including the diet column DR1TKCAL (table 1), and to participants for whom we observe data for each variable (no missing data), we simulated missing data according to missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) patterns. A sketch of how these patterns can be simulated is given after the three definitions below.

#### Missingness completely at random (MCAR)

A variable is missing completely at random if the probability of missingness is the same for all units; that is, the missingness is independent of all observed and unobserved variables (Mack C 2018). If data are missing completely at random, then throwing out cases with missing data does not bias your inferences. We simulated MCAR by randomly removing observations from our sample.

#### Missingness at random (MAR)

Most missingness is not completely at random, as can be seen from the data themselves. A more general assumption, missing at random, is that the probability a variable is missing depends only on available information. That is, the missingness is related to the observed but not the unobserved variables. [15] Thus, if sex, age, BMI, and diet are recorded for all the people in the survey, then caloric intake is missing at random if the probability of nonresponse to this question depends only on these other, fully recorded variables. We simulated MAR data by removing data from the diet column DR1TKCAL at twice the probability if the respondent had a value of 2 for the physical activity question.

#### Missingness not at random (MNAR)

Missingness is no longer "at random" if it depends on information that has not been recorded and this information also predicts the missing values. For example, suppose that "surly" people are less likely to respond to the "dietary" question, surliness is predictive of blood pressure, and "surliness" is unobserved. Then, the blood pressure values are not missing at random. A familiar example from medical studies is that if a particular treatment causes discomfort, a patient is more likely to drop out of the study. This missingness is not at random (unless "discomfort" is measured and observed for all patients). If missingness is not at random, it must be explicitly modeled, or else you must accept some bias in your inferences. We simulated MNAR by removing data from the diet column at twice the probability if the respondent had a BMI greater than 25, the definition of overweight. Note that while BMI is present in the larger dataset, we did not include it in our sample for the missingness simulation; thus it takes on the role of an unobserved variable.
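The three missingness patterns can be sketched roughly as follows. This is an illustrative sketch assuming pandas/NumPy, not the authors' code; the column names PAQ650 (vigorous exercise) and BMXBMI (body mass index) are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): simulating MCAR, MAR and MNAR
# missingness in the caloric-intake column DR1TKCAL of an NHANES-style
# pandas DataFrame. PAQ650 and BMXBMI are assumed column names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def simulate_missing(df: pd.DataFrame, mechanism: str, p: float = 0.1) -> pd.DataFrame:
    """Return a copy of df with some DR1TKCAL values set to NaN."""
    out = df.copy()
    n = len(out)
    if mechanism == "MCAR":
        # Same deletion probability for every row.
        prob = np.full(n, p)
    elif mechanism == "MAR":
        # Twice the probability when the (observed) physical-activity answer is 2.
        prob = np.where(out["PAQ650"] == 2, 2 * p, p)
    elif mechanism == "MNAR":
        # Twice the probability when BMI > 25; BMI is excluded from the analysis
        # subset, so it plays the role of an unobserved variable.
        prob = np.where(out["BMXBMI"] > 25, 2 * p, p)
    else:
        raise ValueError(mechanism)
    mask = rng.random(n) < prob
    out.loc[mask, "DR1TKCAL"] = np.nan
    return out
```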
### Imputation methods

We selected the following imputation methods for our analysis.

**Univariate Feature Imputation** imputes missing entries with the mean, median, or most frequent value of the column in which the missing data are located. This strategy can severely distort the distribution of the variable, leading to complications with summary measures including, notably, underestimates of the standard deviation. Moreover, mean imputation distorts relationships between variables by "pulling" estimates of the correlation toward zero.

**Nearest Neighbors Imputation** fills in a missing value by finding the samples in the set "closest" to it and averaging these nearby points. It is certainly more advantageous than univariate feature imputation, but this method is computationally heavy and quite sensitive to outliers.

**Low Rank Approximation** approximates a matrix by one whose rank is less than that of the original matrix, using the Singular Value Decomposition (SVD). This technique iteratively approximates the missing observations from the complete low-rank matrix and then recomputes the SVD. We created the SVD on a training dataset and then chose the rank to minimize error in the test set.

**Multivariate Feature Imputation** is a more sophisticated approach. It models each feature with missing values as a function of the other features, and uses that estimate for imputation. It does so in an iterated round-robin fashion: at each step, a feature column is designated as output Y and the other feature columns are treated as inputs X. A regressor is fit on (X, Y) for known Y. Then, the regressor is used to predict the missing values of Y. This is done for each feature in an iterative fashion.

**Evaluation: Mean Squared Error.** We used the mean squared error (MSE) to evaluate the performance of each imputation method, comparing the imputed values to those in the original subsample.

### Prediction

**OLS Regression on Imputed Data.** In order to understand the association between lifestyle and blood pressure, an important "intermediate" risk factor for CVD, we applied Ordinary Least Squares (OLS) regression in tandem with the imputation methods to our complete dataset. OLS is a type of linear least squares method for estimating the unknown parameters in a linear regression model. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation. When the data contain many more observations than features (as in our case), OLS regression is optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under these conditions OLS significantly outperforms lasso and ridge regression (with \(\lambda>0\)). We first applied the six imputation methods to create a complete dataset without any missing observations. We then split the data 70/30 into training and test sets. We trained the OLS model on the training set, and then applied the model to the test set data to predict blood pressure. We assessed the performance of the various imputation methods using mean squared error (MSE) and root mean squared error (RMSE).

\begin{table} \begin{tabular}{|c|c|} \hline Variable Name & Description \\ \hline \hline RIDAGEYR & Age \\ \hline RIAGENDR & Gender \\ \hline RIDRETH1[3] & Race/Ethnicity \\ \hline DMDCITZN & Citizenship \\ \hline BMXLEG & Leg circumference \\ \hline BPXPULS & Pulse \\ \hline DIQ010 & Diabetes diagnosis \\ \hline DIQ050 & Insulin use \\ \hline HIQ011 & Health insurance \\ \hline PAQ635 & Walking or bicycling \\ \hline PAQ650 & Vigorous exercise \\ \hline PAQ665 & Moderate exercise \\ \hline PAD680 & Sedentary minutes \\ \hline PAQ710 & TV use \\ \hline DR1TKCAL & Caloric intake \\ \hline \end{tabular} \end{table} Table 1: Variables included in the data subset used for missing data analysis.
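The comparison described above can be sketched roughly as follows, assuming the scikit-learn implementations of these methods (SimpleImputer, KNNImputer, IterativeImputer). This is an illustrative sketch, not the authors' pipeline; the low-rank SVD imputer is omitted because scikit-learn has no built-in for it, and `X_true`, `X_miss`, and `y` are hypothetical names for the fully observed subset, its masked copy, and systolic blood pressure.

```python
# Minimal sketch (assumptions, not the authors' code): compare imputers by MSE
# on the artificially masked entries, then fit OLS on an imputed dataset and
# report train/test RMSE.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "most_frequent": SimpleImputer(strategy="most_frequent"),
    "knn": KNNImputer(n_neighbors=5),
    "regression": IterativeImputer(max_iter=10, random_state=0),
}

def imputation_mse(X_true, X_miss):
    """MSE of imputed vs. true values, restricted to the masked entries."""
    mask = np.isnan(X_miss.to_numpy()) & ~np.isnan(X_true.to_numpy())
    scores = {}
    for name, imp in imputers.items():
        X_hat = imp.fit_transform(X_miss)
        scores[name] = mean_squared_error(X_true.to_numpy()[mask], X_hat[mask])
    return scores

def ols_rmse(X_miss, y, imputer):
    """70/30 split, OLS on the imputed features, train/test RMSE."""
    X_imp = imputer.fit_transform(X_miss)
    X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    rmse = lambda X, t: mean_squared_error(t, model.predict(X)) ** 0.5
    return rmse(X_tr, y_tr), rmse(X_te, y_te)
```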
## Results and Discussion

**Exploratory Data Analysis:** Correlation plots showed moderate to low correlation between most variables, with the exception of the dietary intake variables, diastolic and systolic blood pressure measured on two separate occasions, and measures of leg, arm, and waist circumference (figure 1). Dietary variable missingness occurred concurrently, as did missingness in the history of cardiovascular disease questionnaire. Using a chi-squared test, we determined that missing dietary data was independent of health insurance status and gender. Missing dietary data was not independent of physical exercise status (variable PAQ650) (figure 2). Table 2 summarizes the performance of the various imputation methods on the simulated missing data. Our study demonstrated that multivariate regression feature imputation outperformed all other imputation methods independent of the pattern of missingness. Aside from regression, the second-best performance came from the low rank approximation imputation when the data was MAR or MNAR. However, imputation with the mean performed second-best when the data was MCAR, closely competing with KNN's performance.

**Prediction Results:** Table 3 summarizes the linear regression results obtained when using the complete dataset to predict blood pressure. On the training set, the model performed best when the data were imputed using _multivariate feature imputation_. Surprisingly, imputing with the median column values resulted in the best-performing model on the test set.

**Summary:** There are several reasons why we may have obtained this surprising result. For one, we used only a small subset of the features and observations to create the dataset for the simulation analysis. The distribution of data and relationships between features may be quite different in the complete dataset. This could result in the different performances of imputation methods between the complete dataset and the subset. Second, the poor performance of regression imputation in the complete dataset could be explained by the limited correlation between features in the complete dataset. If there is no way to predict one variable from another, then regression imputation is likely to perform poorly. Finally, using the column median to impute missing values essentially smooths the data. It is possible that the OLS model predicts better when the features are "smoother".

## Conclusion

To conclude, we found that multivariate regression imputation performed the best across all three simulated patterns of missingness in this dataset. We were able to predict blood pressure from diet and physical activity with reasonable RMSE. Imputing missing values using the column median produced the prediction model with the best test RMSE. This study has several strengths. First, we assessed the performance of the imputation methods under different patterns of missingness, instead of simply assuming all the data were MCAR. Second, we examined the impact of different imputation methods both on the accuracy of the imputed values and on the ability to predict blood pressure. This study also has several limitations. We only examined a small subset of the features and observations in our simulation study. Additionally, we only created missingness in one column: caloric intake. Further research should simulate different frequencies of missing values and missing patterns in other features.
\begin{table} \begin{tabular}{|l||l|l|l|} \hline \multicolumn{4}{|c|}{Mean Squared Error} \\ \hline Imputation Method & MCAR & MAR & MNAR \\ \hline Mean & 0.831 & 1.149 & 1.045 \\ Median & 0.910 & 1.262 & 1.070 \\ Most-frequent & 1.157 & 1.484 & 1.188 \\ KNN & 0.861 & 1.473 & 0.926 \\ Low Rank Model & 1.021 & 1.060 & 1.010 \\ Regression & **0.560** & **1.013** & **0.853** \\ \hline \end{tabular} \end{table} Table 2: Analysis of Imputation Methods on Simulated Data

Figure 1: Correlation between features

\begin{table} \begin{tabular}{|l||l|l|} \hline \multicolumn{3}{|c|}{Root Mean Squared Error} \\ \hline Imputation Method & Train & Test \\ \hline Mean & 14.516 & 14.783 \\ Median & 14.776 & **14.227** \\ Most-frequent & 14.573 & 14.619 \\ KNN & 14.619 & 14.642 \\ Low Rank Model & 18.232 & 17.854 \\ Regression & **14.361** & 15.088 \\ \hline \end{tabular} \end{table} Table 3: Regressing blood pressure using imputed datasets

Figure 2: Missing data in NHANES dataset
2310.05685
Post-Selection Inference for Sparse Estimation
When the model is not known and parameter testing or interval estimation is conducted after model selection, it is necessary to consider selective inference. This paper discusses this issue in the context of sparse estimation. Firstly, we describe selective inference related to Lasso as per \cite{lee}, and then present polyhedra and truncated distributions when applying it to methods such as Forward Stepwise and LARS. Lastly, we discuss the Significance Test for Lasso by \cite{significant} and the Spacing Test for LARS by \cite{ryan_exact}. This paper serves as a review article. Keywords: post-selective inference, polyhedron, LARS, lasso, forward stepwise, significance test, spacing test.
Joe Suzuki
2023-10-09T12:54:57Z
http://arxiv.org/abs/2310.05685v2
# Post-Selection Inference for Sparse Estimation+

Footnote †: This manuscript is the English translation of an article originally published in Japanese in the Journal of the Japan Statistical Society, Volume 53, Issue 1, September 2023 (pages 139-167)

Joe Suzuki Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama-cho, Toyonaka, Osaka 560-8531, Japan E-mail: [email protected]

**Abstract** When the model is not known and parameter testing or interval estimation is conducted after model selection, it is necessary to consider selective inference. This paper discusses this issue in the context of sparse estimation. Firstly, we describe selective inference related to Lasso as per Lee _et al._ (2016), and then present polyhedra and truncated distributions when applying it to methods such as Forward Stepwise and LARS. Lastly, we discuss the Significance Test for Lasso by Lockhart _et al._ (2014) and the Spacing Test for LARS by Tibshirani _et al._ (2016). This paper serves as a review article.

**Keywords** post-selective inference, polyhedron, LARS, lasso, forward stepwise, significance test, spacing test.

## 1 Introduction

In this study, we explore the discrepancies that arise between conducting parameter tests and estimating confidence intervals when the model is known, and performing inference on the parameters after selecting a model. Furthermore, variables selected by performing model selection tend to be significant in the first place. The question then becomes: how can we eliminate this bias? This issue is known as Post-Selective Inference and has come to be actively discussed, triggered by the works of Berk _et al._ (2013) and Lee _et al._ (2016). As seen in Figure 1, if the model is known, it is permissible to conduct regular parameter estimation, with the \(n\) samples \(y\) moving in \(\mathbb{R}^{n}\). However, if a model \(\hat{M}(y^{obs})\) is obtained from the observed value \(y^{obs}\) after conducting model selection, statistical inference must be performed according to the distribution (truncated distribution) within the set of \(y\) yielding the same model, \(\{y\in\mathbb{R}^{n}|\hat{M}(y)=\hat{M}(y^{obs})\}\) (truncated region). It could be said that the post-selective inference discussed in this paper pertains to this issue of the truncated region and distribution. Indeed, aside from that, it aligns with conventional statistical theory. Although the special feature this time is "Sparse Estimation," the issue of selective inference arises whenever there is a variable selection procedure, not only in sparse estimation but also, for example, when information criteria are applied. In sparse estimation, Lasso (least absolute shrinkage and selection operator, Tibshirani (1996)) is most commonly used. In the case of linear regression (assuming \(n\) samples and \(p\) variables), given \(X\in\mathbb{R}^{n\times p}\), \(y\in\mathbb{R}^{n}\), and a constant \(\lambda>0\), it is formulated as the problem of finding \(\beta=[\beta_{1},\ldots,\beta_{p}]^{\top}\in\mathbb{R}^{p}\) that minimizes \[\frac{1}{2}\|y-X\beta\|_{2}^{2}+\lambda\sum_{j=1}^{p}|\beta_{j}|. \tag{1}\] The larger the \(\lambda\), the fewer non-zero components of \(\beta\) (the fewer selected variables). In particular, even when \(n<p\), in other words, even when \(X^{\top}X\) does not have an inverse matrix, a solution can be found1.

Figure 1: Distribution of the test statistic under the null hypothesis. If the model is known, as in (a), it is permissible to perform regular statistical inference, considering that the \(n\) samples \(y\) move in \(\mathbb{R}^{n}\). However, if a model \(\hat{M}(y^{obs})\) is obtained from the observed value \(y^{obs}\) after conducting model selection, as in (b), statistical inference must be performed according to the distribution (truncated distribution) within the set of \(y\) yielding the same model, \(\{y\in\mathbb{R}^{n}|\hat{M}(y)=\hat{M}(y^{obs})\}\) (truncated region).
Furthermore, because (1) is a convex function of \(\beta\in\mathbb{R}^{p}\), a solution that minimizes (1) can be found efficiently. Here, \(\lambda\) is usually determined from \(X,y\) using methods such as cross-validation. Footnote 1: Uniqueness of the Lasso solution is guaranteed when the samples arise according to a continuous distribution (Tibshirani, 2012). Selective inference, along with sparse estimation, could be described as one of the specialties of Stanford Statistics, and many studies, starting with Lee _et al._ (2016), have been conducted in the context of sparse estimation. Since (1) includes an absolute value term, deriving confidence intervals was originally difficult. On the other hand, another sparse estimation method, SCAD (Fan and Li (2001)), has a method of deriving confidence intervals based on oracle properties, but optimization in large-scale problems is difficult because the objective function is not convex. Therefore, there is a two-step estimation that performs variable selection with Lasso and derives confidence intervals using the selected variables, but inevitably, the variables selected by performing model selection tend to be significant. Several explanatory articles regarding selective inference have already been published. This paper focuses on its theoretical treatment and on sequential model selection. Understanding the essence of the work by Lee _et al._ (2016) is crucial for selective inference. Starting with selective inference assuming linear regression (Section 2), while confirming the theoretical treatment of Lasso (Section 3), we derive the theory of the polyhedra constructing the truncated distribution (Section 4). Although it includes some formal discussions, this part constitutes the preparation for this paper. Discussion of sequential model selection inevitably requires understanding the framework of Forward Stepwise (FS). Even in linear regression, if there are \(p\) candidate explanatory variables, it is necessary to choose the optimal combination from \(2^{p}\) combinations, making the theoretical analysis of variable selection difficult. Section 5 applies FS, which may not be optimal but selects variables top-down one by one, to perform analysis related to variable selection and residuals. LARS by Efron (Efron _et al._ (2004), Section 6) was also conceived by applying FS-like concepts to Lasso. It is a method that shows almost the same performance as Lasso and is considered easy to analyze theoretically. The sequential model selection dealt with in this paper (Sections 7 and 8) is based on LARS. It repeatedly tests whether the coefficient in front of each explanatory variable of the linear regression should be 0 or not. At each step, testing whether the coefficient in front of the variable at that point is 0 or not is performed based on a different distribution that depends on the variables already selected. This framework can be seen as a generalization of the concept of selective inference.
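As a concrete illustration of how the selected model \(\hat{M}(y)=\{j:\hat{\beta}_{j}\neq 0\}\) is obtained from (1) in practice, the following is a minimal sketch assuming the scikit-learn implementation of the Lasso (the paper's own source programs, listed in Table 1, are in R); the data and the value of \(\lambda\) are made up for illustration.

```python
# Minimal sketch (not from the paper): obtaining the selected model
# \hat{M}(y) = {j : \hat{beta}_j != 0} from the Lasso objective (1).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]           # only the first three variables matter
y = X @ beta_true + rng.standard_normal(n)

# scikit-learn minimizes (1/(2n))||y - X beta||^2 + alpha ||beta||_1,
# so alpha = lambda / n in the notation of (1).
lam = 20.0
model = Lasso(alpha=lam / n, fit_intercept=False).fit(X, y)
M_hat = np.flatnonzero(model.coef_)        # selected model \hat{M}(y)
print("selected variables:", M_hat)
```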
The author had a problem awareness that discussions on sequential selective inference, while important, are not well known. It may be a challenging problem that you cannot understand the essence unless you understand all of Stanford Statistics' products, such as selective inference, Lasso, FS, and LARS. In this paper, we explain them in order and reach the peak of Significance Test (Lockhart _et al._ (2014), Section 7) and Spacing Test (Tibshirani _et al._ (2016), Section 8). Section 9 summarizes the overall and mentions future prospects. To maintain self-containedness, a proof (provided an easy proof different from the original paper) was posted in the appendix. Additionally, this paper provides source programs in R language (Table 1). Please utilize appropriately. ## 2 Selective Inference in Linear Regression Let us consider \(n\) samples \((x_{1},y_{1}),\ldots,(x_{n},y_{n})\in\mathbb{R}^{p}\times\mathbb{R}\), where \(x_{i}=[x_{i,1},\ldots,x_{i,p}]\in\mathbb{R}^{p}\) (row vector2), \(i=1,\ldots,n\). We aim to find \(\beta_{0},\beta_{1},\ldots,\beta_{p}\) that minimize Footnote 2: Vectors are typically represented as column vectors. \[L:=\sum_{i=1}^{n}\left(y_{i}-\beta_{0}-\sum_{j=1}^{p}x_{i,j}\beta_{j}\right)^ {2}\] through the method of least squares. Hereafter, we denote by \(X\) the \(n\times p\) matrix whose rows are \(x_{i}\), and by \(y\) the \(n\)-dimensional vector with components \(y_{i}\). Without loss of generality, we assume that each column of \(X\) and \(y\) has been centered, and we exclude the intercept, thereby setting \(\beta_{0}=0\), for further discussion. The solution obtained by setting the derivative of \(L\) with respect to \(\beta=[\beta_{1},\ldots,\beta_{p}]^{\top}\) to zero is denoted as \(\hat{\beta}_{1},\ldots,\hat{\beta}_{p}\). Assume there exists some \(\beta\in\mathbb{R}^{p}\) (the true parameter), such that \[y_{i}-\sum_{j=1}^{p}x_{i,j}\beta_{j}\] for \(i=1,\ldots,n\) are independent and normally distributed with mean \(0\) and known variance \(\sigma^{2}\). Assuming \(X^{\top}X\) is non-singular, the coefficients obtained through the least squares method, \(\hat{\beta}=[\hat{\beta}_{1},\ldots,\hat{\beta}_{p}]^{\top}\), are random variables and, omitting detailed derivation, can be expressed as \[\hat{\beta}=(X^{\top}X)^{-1}X^{\top}y\sim N(\beta,\sigma^{2}(X^{\top}X)^{-1}) \tag{2}\] where \(\sim N(\mu,\Sigma)\) denotes a normal distribution in \(p\) dimensions with mean vector \(\mu\) and \(p\times p\) covariance matrix \(\Sigma\). \begin{table} \begin{tabular}{l|l} \hline Compute truncated distribution & [https://bayesnet.org/books](https://bayesnet.org/books)\_jp/?p=576 \\ Lasso (Figure 5) & [https://bayesnet.org/books](https://bayesnet.org/books)\_jp/3-1.html \\ LARS (Figure 5) & [https://bayesnet.org/books](https://bayesnet.org/books)\_jp/3-4.html \\ General sparse estimation & [https://bayesnet.org/books](https://bayesnet.org/books)\_jp/?page\_id=33 \\ \hline \end{tabular} \end{table} Table 1: Related source codes In linear regression, it is often assumed that among the \(p\) explanatory variables, some are redundant and their parameter \(\beta_{j}\) should be set to \(0\). We can identify redundant parameters by testing the null hypothesis \(H_{0}:\beta_{j}=0\) and alternative hypothesis \(H_{1}:\beta_{j}\neq 0\) for each \(j=1,\ldots,p\). 
If we define \[RSS:=\|y-X\hat{\beta}\|_{2}^{2}\] then \[\frac{RSS}{\sigma^{2}}\sim\chi_{n-p}^{2}\] and using equation (2), if we define \(\hat{\sigma}:=\sqrt{RSS/(n-p)}\), \(B_{j}\) as the \(j\)-th diagonal element of \((X^{\top}X)^{-1}\), and \(SE(\hat{\beta}_{j}):=\sqrt{B_{j}}\hat{\sigma}\), then \[t:=\frac{\hat{\beta}_{j}-\beta_{j}}{SE(\hat{\beta}_{j})}=\frac{(\hat{\beta}_{j }-\beta_{j})/\sqrt{B_{j}}}{\sqrt{RSS/(n-p)}}\] follows a \(t\)-distribution with \(n-p\) degrees of freedom. Under the null hypothesis \(\beta_{j}=0\), if the observed value \(\hat{\beta}_{j}/SE(\hat{\beta}_{j})\) falls outside the \(\alpha/2\) percentile of its null distribution for a significance level \(0<\alpha<1\), we consider the variable significant and decide \(\beta_{j}\neq 0\). For variable selection, methods such as the stepwise method or Lasso are available. We assume the existence of a mapping \[\hat{M}:\mathbb{R}^{n}\rightarrow\mbox{(the set of subsets of $\{1,\ldots,p\}$)}\] where \(\mathcal{P}(S)\) denotes the power set of \(S\). When using these methods to identify the model \(\hat{M}(y)=\{j\in\{1,\ldots,p\}|\hat{\beta}_{j}\neq 0\}\), parameters \(\beta_{j}\) for which \(\hat{\beta}_{j}\neq 0\), \(j\in\hat{M}(y)\), often yield small \(p\)-values. Moreover, parameters that are not selected are presumed to be \(0\). Consequently, it may arise that parameter confidence intervals and testing should be conducted conditionally given the selected model \(M=\hat{M}(y)\). Selective inference advocates for parameter confidence intervals and testing based on the conditional distribution given the selection and applies more broadly than just linear regression, being relevant for statistical estimation involving variable selection in general. Returning to linear regression, let's define the confidence level as \(1-\alpha\) (\(\alpha>0\)), and instead of defining a confidence interval \(C\) as \[\mathbb{P}\left(\beta_{j}^{\hat{M}(y)}\in C\right)\geq 1-\alpha\] it becomes defined under \(M=\hat{M}(y)\) as \[\mathbb{P}\left(\beta_{j}^{M}\in C|M=\hat{M}(y)\right)\geq 1-\alpha\] where we denote the true value of the \(j\)-th parameter assuming the model \(M\) by \(\beta_{j}^{M}\). Indeed, if the model \(M\) differs, the parameter \(\beta^{M}\in\mathbb{R}^{m}\) (\(m:=|M|\)) also differs. Here, writing \(X_{M}\in\mathbb{R}^{n\times m}\) and expressing the submatrix of \(X\) corresponding to the index set \(M\subseteq\{1,\ldots,p\}\), the estimate of \(\beta_{j}^{M}\) is written as \(\hat{\beta}_{j}^{M}=e_{j}^{\top}(X_{M}^{\top}X_{M})^{-1}X_{M}^{\top}y\). Here, let \(e_{j}\in\mathbb{R}^{m}\) be a vector where only the \(j\)-th component is \(1\) and the rest are \(0\). 
Below, denoting \(\eta=X_{M}(X_{M}^{\top}X_{M})^{-1}e_{j}\in\mathbb{R}^{n}\) and assuming \(y^{obs}\) is observed as \(y^{3}\), we determine the confidence interval and \(p\) value of \(\beta_{j}^{M}\) and \(\hat{\beta}_{j}^{M}\) based on the conditional probability \[\eta^{\top}y\mid\left\{\hat{M}(y)=\hat{M}(y^{obs})\right\}.\] At this moment, the confidence interval for \(\beta_{j}^{M}\) at a confidence level of \(1-\alpha\) (\(0<\alpha<1\)) is established by employing \(L,U\) that satisfy \[\mathbb{P}\left(\eta^{\top}y^{\rm obs}\leq\eta^{\top}y|\beta_{j}^{M}=L,\hat{M} (y)=\hat{M}(y^{\rm obs})\right)=\frac{\alpha}{2}\] \[\mathbb{P}\left(\eta^{\top}y\leq\eta^{\top}y^{\rm obs}|\beta_{j}^{M}=U,\hat{M }(y)=\hat{M}(y^{\rm obs})\right)=\frac{\alpha}{2}\] hence, it can be written as, \[\mathbb{P}\left(L\leq\beta_{j}^{M}\leq U|\hat{M}(y)=\hat{M}(y^{\rm obs}) \right)=1-\alpha \tag{3}\] Indeed, the probability of occurrence of \(y\in\mathbb{R}^{n}\) satisfying \(\eta^{\top}y^{\rm obs}\leq\eta^{\top}y\) is determined by \(\beta_{j}^{M}\). Similarly, under the condition \(\hat{M}(y)=\hat{M}(y^{\rm obs})\), the conditional probability \[\mathbb{P}(\eta^{\top}y^{\rm obs}\leq\eta^{\top}y|\hat{M}(y)=\hat{M}(y^{\rm obs })) \tag{4}\] holds. The lower limit of \(\beta_{j}^{M}\) for which (4) becomes at least \(\alpha/2\) is \(L\). Similarly, for the conditional probability \[\mathbb{P}\left(\eta^{\top}y^{\rm obs}\geq\eta^{\top}y\mid\hat{M}(y)=\hat{M}( y^{\rm obs})\right)\] the upper limit of \(\beta_{j}^{M}\) that causes it to be at least \(\alpha/2\) is \(U\). On the other hand, in the case of a two-sided test, the p-value of the statistic \(\hat{\beta}_{j}^{\hat{M}(y^{\rm obs})}\) under the null hypothesis \(\beta_{j}^{\hat{M}(y^{\rm obs})}=0\) can be calculated as, \[2\min\left\{\mathbb{P}\left(\eta^{\top}y\leq\eta^{\top}y^{\rm obs }|\beta_{j}^{M}=0,\hat{M}(y)=\hat{M}(y^{\rm obs})\right)\ \right.,\] \[\left.\mathbb{P}\left(\eta^{\top}y^{\rm obs}\leq\eta^{\top}y| \beta_{j}^{M}=0,\hat{M}(y)=\hat{M}(y^{\rm obs})\right)\right\} \tag{5}\] In fact, generally, when the null distribution is \(f(t)\) and the statistic \(T\) is \(t\), regardless of whether \[\nu:=\int_{t}^{\infty}f(u)du\] is \(0\leq\nu\leq 0.5\) or \(0.5<\nu\leq 1\), \(2\min\{\nu,1-\nu\}\) becomes the \(p\)-value (see Figure 2). Therefore, if we set \(T=\eta^{\top}y^{\rm obs}\) and \(f\) as the conditional density function of \(\eta^{\top}y\) under \(\beta_{j}^{M}=0\) and \(\hat{M}(y)=\hat{M}(y^{\rm obs})\), and let \[\nu=\int_{\eta^{\top}y^{\rm obs}}^{\infty}f(t)dt\] then (5) is obtained. Note that \(\hat{M}\) takes the same value even if \(y\) is different as long as \(\eta^{\top}y\) is the same. Although it seems repetitive, conducting tests and estimating confidence intervals based on the distribution (truncated distribution) that considers \[\left\{y\in\mathbb{R}^{n}\ |\ \hat{M}(y)=\hat{M}(y^{\rm obs})\right\}\] as the universal set, rather than \[\{y\in\mathbb{R}^{n}\}\] is referred to as selective inference. ## 3 Selective Inference in Lasso Given \(X\in\mathbb{R}^{n\times p}\), \(y\in\mathbb{R}^{n}\), for each \(\lambda>0\) we consider \[\frac{1}{2}\|y-X\beta\|_{2}^{2}+\lambda\|\beta\|_{1} \tag{6}\] to minimize over \(\beta\in\mathbb{R}^{p}\), a problem referred to as the Lasso in linear regression. 
Here, \(\|\beta\|_{1}:=\sum_{j=1}^{p}|\beta_{j}|\) (typically, normalization is performed for each \(j=1,\ldots,p\) to ensure that \(\sum_{i=1}^{n}x_{i,j}^{2}=1\) before optimizing).

Figure 2: Regardless of whether the distribution of \(t\) is a truncated distribution, when the null distribution is \(f(u)\) and the statistic \(T\) is \(t\), then whether \(\nu=\int_{t}^{\infty}f(u)du\) satisfies \(0\leq\nu\leq 0.5\) or \(0.5<\nu\leq 1\), the p-value is \(2\min\{\nu,1-\nu\}\).

Even though we seek to minimize, the inclusion of the absolute value function (non-differentiable at the origin) means that when the columns of \(X\) are orthogonal, the Lasso sets \(\beta_{j}=0\) for those \(j\) where the absolute value of the ordinary least squares estimator \(\sum_{i=1}^{n}x_{i,j}y_{i}\) is small (Suzuki, 2020, 2021). Choosing \(j\) such that \(\beta_{j}\neq 0\) can be interpreted as performing variable selection. Unlike the case of ordinary least squares, even when \(n<p\) and the situation is sparse, a solution can be computed under suitable conditions. In cases where \(n<p\), since \(X^{\top}X\) does not have an inverse matrix, applying ordinary least squares is challenging. Moreover, letting \(g(n)\) be a monotonically increasing function of \(n\) that takes positive values, comparing \[\frac{1}{2}\|y-X\beta\|_{2}^{2}+g(n)\sum_{j=1}^{p}I(\beta_{j}\neq 0) \tag{7}\] across models such as \(\{\},\{1\},\{2\},\ldots,\{1,\ldots,p\}\), etc., which totals \(2^{p}\) models, would require immense computation. Here, \(I(A)\) takes the value \(1\) when condition \(A\) is true, and \(0\) otherwise. Both terms in (6) are convex, and methods exist for efficiently searching for solutions. However, due to the non-convexity of the graph in Figure 3 on the right, searching for solutions to (7) is not straightforward. Lee _et al._ (2016) demonstrated that the condition \(\hat{M}(y)=\hat{M}(y^{obs})\) in Lasso variable selection can be expressed, using some \(m\geq 1\), \(A\in\mathbb{R}^{m\times n}\), \(b\in\mathbb{R}^{m}\), as a set of inequalities \(Ay\leq b\).

Figure 3: Left: \(f(x)=|x|\) is convex. However, at the origin, the derivatives from the two sides do not coincide, and it cannot be differentiated. That the subdifferential at \(x=0\) is \([-1,1]\) can be seen as the derivative value increasing from \(-1\) to \(1\) at the kink (\(x=0\)), since the derivatives for \(x<0\) and \(x>0\) are \(-1\) and \(1\) respectively. Right: Not convex. For example, information criteria like AIC or BIC are not convex, as \(g(n)\) becomes a function of the number of non-zero parameters. Therefore, searching for the optimal solution is not efficient.

Generally, for a convex function \(f:\mathbb{R}\to\mathbb{R}\) and \(a\in\mathbb{R}\), when \[f(x)\geq f(a)+z(x-a),\ x\in\mathbb{R}\] is satisfied, the set of such \(z\in\mathbb{R}\) is called the subdifferential of \(f\) at \(a\). If \(f(x)\) is differentiable at \(x=a\), then its subdifferential is the set consisting only of \(f^{\prime}(a)\). Moreover, if \(f(x)=|x|\) and \(a=0\), then its subdifferential is the interval \([-1,1]\). Indeed, \[|x|\geq zx\ \text{for any}\ x\in\mathbb{R}\Longleftrightarrow\left\{\begin{array}{ll}z\leq 1,&x>0\\ z\geq-1,&x<0\end{array}\right.\Longleftrightarrow|z|\leq 1\] is satisfied (see Figure 3). A small numerical check of this soft-thresholding behaviour is sketched below, before we return to subdifferentiating (6).
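The following is a minimal sketch assuming NumPy and orthonormal columns of \(X\) (all numbers are made up for illustration): with \(X^{\top}X=I_{p}\), the minimizer of (6) is the soft-thresholding \(\hat{\beta}_{j}=\mathrm{sign}(z_{j})\max(|z_{j}|-\lambda,0)\) of the ordinary least squares estimates \(z_{j}=\sum_{i}x_{i,j}y_{i}\), so coefficients with small \(|z_{j}|\) are set exactly to zero, and the subgradient condition (8) derived just below can be verified numerically.

```python
# Minimal numerical sketch (illustration under assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 50, 5, 1.0
X, _ = np.linalg.qr(rng.standard_normal((n, p)))        # orthonormal columns
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

z = X.T @ y                                             # OLS estimates, since X^T X = I_p
beta_hat = np.sign(z) * np.maximum(np.abs(z) - lam, 0)  # soft-thresholding

# Check the subgradient condition (8): X^T (X beta - y) + lam * s = 0 with
# s_j = sign(beta_j) on the active set and |s_j| <= 1 on the inactive set.
s = X.T @ (y - X @ beta_hat) / lam
active = beta_hat != 0
assert np.allclose(s[active], np.sign(beta_hat[active]))
assert np.all(np.abs(s[~active]) <= 1 + 1e-12)
print("beta_hat:", np.round(beta_hat, 3))
```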
Therefore, when subdifferentiating (6), we can write \[X^{\top}(X\hat{\beta}-y)+\lambda\hat{s}=0 \tag{8}\] where \(\hat{s}=[\hat{s}_{1},\ldots,\hat{s}_{p}]^{\top}\), and if the solution \(\hat{\beta}_{j}\) of (8) is non-zero (active), \(\hat{s}_{j}=\text{sign}(\hat{\beta}_{j})\), and if \(\hat{\beta}_{j}\) is \(0\) (inactive), \(\hat{s}_{j}\in[-1,1]\). An example of how to find the solution of a function containing an absolute value function is shown in Figure 4. Dividing the index set \(\{1,\ldots,p\}\) into \[\hat{M}:=\{j|\hat{\beta}_{j}\neq 0\}\,\ -\hat{M}:=\{j|\hat{\beta}_{j}=0\}\] Figure 4: For example, in the function on each of the left and right, even though \(x^{2}-3x\) and \(x^{2}+x\) are differentiable, because \(|x|\) is not differentiable at \(x=0\), the subdifferential at \(x=0\) becomes the interval \([-4,-2]\) and \([-1,3]\) respectively. The left does not include \(0\) in that interval, while the right includes \(0\). Therefore, the left does not become a minimum at the origin, whereas the right becomes a minimum. assuming there are \(m\) elements in \(\hat{M}\), and the first and last \(m\) columns of \(X\) correspond to the indices of \(\hat{M},-\hat{M}\) respectively. The same applies to \(\hat{\beta},\hat{s}\). Then, representing them as \[X=[X_{\hat{M}},X_{-\hat{M}}]\,\ \hat{\beta}=\left[\begin{array}{c}\hat{\beta}_{ \hat{M}}\\ \hat{\beta}_{-\hat{M}}\end{array}\right]\,\ \hat{s}=\left[\begin{array}{c}\hat{s}_{ \hat{M}}\\ \hat{s}_{-\hat{M}}\end{array}\right]\] we can concretely write (8) as \[X_{\hat{M}}^{\top}(X_{\hat{M}}\hat{\beta}_{\hat{M}}-y)+\lambda\hat{s}_{\hat{M} }=0 \tag{9}\] \[X_{-\hat{M}}^{\top}(X_{\hat{M}}\hat{\beta}_{\hat{M}}-y)+\lambda\hat{s}_{-\hat{ M}}=0 \tag{10}\] \[\mbox{sign}(\hat{\beta}_{\hat{M}})=\hat{s}_{\hat{M}} \tag{11}\] \[\|\hat{s}_{-\hat{M}}\|_{\infty}\leq 1 \tag{12}\] However, we denote \(\|u\|_{\infty}\) to represent the maximum absolute value of the components of \(u=[u_{1},\ldots,u_{p}]^{\top}\). Also, if we let \(P_{M}^{\perp}:=I-X_{M}(X_{M}^{\top}X_{M})^{-1}X_{M}^{\top}\), \(X_{M}^{+}:=(X_{M}^{\top}X_{M})^{-1}X_{M}^{\top}\), then from (9) and (10) we can write \[\hat{\beta}_{\hat{M}}=(X_{\hat{M}}^{\top}X_{\hat{M}})^{-1}(X_{\hat{M}}^{\top} y-\lambda\hat{s}_{\hat{M}}) \tag{13}\] \[\hat{s}_{-\hat{M}} = -\frac{1}{\lambda}\{X_{-\hat{M}}^{\top}(X_{\hat{M}}\hat{\beta}_ {\hat{M}}-y)\} \tag{14}\] \[= -\frac{1}{\lambda}\{X_{-\hat{M}}^{\top}(X_{\hat{M}}(X_{\hat{M}}^ {\top}X_{\hat{M}})^{-1}(X_{\hat{M}}^{\top}y-\lambda\hat{s}_{\hat{M}})-y)\}\] \[= X_{-\hat{M}}^{\top}(X_{\hat{M}}^{+})^{\top}\hat{s}_{\hat{M}}+ \frac{1}{\lambda}X_{-\hat{M}}^{\top}P_{\hat{M}}^{\perp}y\] Therefore, for \((\hat{M},\hat{s}_{\hat{M}})=(M,s)\) to satisfy (11) and (12), it is necessary and sufficient from (13) and (14) that \[\left\{\begin{array}{l}\mbox{diag}(s)(X_{M}^{\top}X_{M})^{-1}(X_{M}^{\top} y-\lambda s)>0\\ -1<X_{-M}^{\top}(X_{M}^{+})^{\top}s+\frac{1}{\lambda}X_{-M}^{\top}P_{M}^{\perp} y<1\end{array}\right. \tag{15}\] Note that the two inequalities in (15) indicate that each element of the \(m\)-dimensional and \(p-m\)-dimensional vectors, respectively, fall within that range. 
Moreover, this can be written as \[\left[\begin{array}{c}\frac{1}{\lambda}X_{-M}^{\top}P_{\hat{M}}^{\perp}\\ -\frac{1}{\lambda}X_{-M}^{\top}P_{\hat{M}}^{\perp}\\ -\mbox{diag}(s)(X_{M}^{+})^{\top}\end{array}\right]y\leq\left[\begin{array}[] {c}1_{p-m}-X_{-M}^{\top}(X_{M}^{\top})^{+}s\\ 1_{p-m}+X_{-M}^{\top}(X_{M}^{\top})^{+}s\\ -\lambda\mbox{diag}(s)(X_{M}^{\top}X_{M})^{-1}s\end{array}\right]\,\] where \(1_{p-m}\in\mathbb{R}^{p-m}\) is a vector with all components equal to \(1\), and \(\mbox{diag}(s)\) is a diagonal matrix with \(s\in\{-1,1\}^{m}\) as diagonal components. Therefore, \[\{\hat{M}=M,\hat{s}_{\hat{M}}=s\}=\{A(M,s)y\leq b(M,s)\}\] we can observe the existence of \(A(M,s)\in\mathbb{R}^{(2p-m)\times n}\), \(b(M,s)\in\mathbb{R}^{2p-m}\). Furthermore, although \((M,s)\) is conditioned by the sign \(s\), summing over all signs results in a polyhedral domain (Lee _et al._ (2016)). \[\{\hat{M}=M\}=\cup_{s\in\{-1,1\}^{m}}\Big{\{}\hat{M}=M,\hat{s}_{\hat{M}}=s \Big{\}}=\cup_{s\in\{-1,1\}^{m}}\{A(M,s)y\leq b(M,s)\} \tag{16}\] Here, at the stage of calculating the conditional probability in Lasso, it is often assumed that \(p\) is sufficiently larger than \(n\), and it is presumed that the number of selected variables, \(m\), is also reasonably large. Thus, there is a possibility that summing all \(2^{m}\) elements involves considerable computation. However, conditioning not on \(\{\hat{M}=M\}\) but on \(\{\hat{M}=M,\hat{s}=s\}\) weakens the detection power and widens the confidence interval. ## 4 Selective Inference for Distributions Conditioned on Polyhedra Let \(\eta:=X(X^{\top}X)^{-1}e_{j}\). The parameter \(\beta_{j}\) of linear regression is often estimated by the least squares method as \(\hat{\beta}_{j}=\eta^{\top}y\). In the following, considering \(\eta\in\mathbb{R}^{n}\), we will examine the conditional distribution of \(\eta^{\top}y\) under \(\{Ay\leq b\}\). First, let \(\Sigma=\sigma^{2}I_{n}\in\mathbb{R}^{n\times n}\), \(c:=\Sigma\eta(\eta^{\top}\Sigma\eta)^{-1}=\eta/\|\eta\|^{2}\in\mathbb{R}^{n}\), and \(z:=(I_{n}-c\eta^{\top})y\in\mathbb{R}^{n}\). The covariance matrix \(\mathbb{R}^{1\times p}\) of \(\eta^{\top}y\) and \(z\), which follow a normal distribution, is \[\text{Cov}[\eta^{\top}y,z]=\mathbb{E}[\eta^{\top}(y{-}X\beta)(y{-}X\beta)^{ \top}(I_{n}{-}c\eta^{\top})^{\top}]=\eta^{\top}\sigma^{2}\left(I_{n}-\frac{ \eta\eta^{\top}}{\|\eta\|^{2}}\right)=\eta^{\top}\sigma^{2}{-}\eta^{\top} \sigma^{2}=0,\] revealing that \(\eta^{\top}y\) and \(z\) are independent. Moreover, representing the \(j\)th component of a vector by \((\cdot)_{j}\), we have \[Ay\leq b \Longleftrightarrow A(c\eta^{\top}y+z)\leq b\Longleftrightarrow(Ac)_{j}(\eta^{\top}y) \leq b_{j}-(Az)_{j},j=1,\ldots,p\] \[\Longleftrightarrow \left\{\begin{array}{ll}\eta^{\top}y\leq\frac{b_{j}-(Az)_{j}}{ (Ac)_{j}},&(Ac)_{j}>0\\ \eta^{\top}y\geq\frac{b_{j}-(Az)_{j}}{(Ac)_{j}},&(Ac)_{j}<0\\ 0\leq b_{j}-(Az)_{j},&(Ac)_{j}=0\end{array}\right.\] \[\Longleftrightarrow \left\{\begin{array}{ll}\eta^{\top}y\leq\min_{j:(Ac)_{j}>0} \frac{b_{j}-(Az)_{j}}{(Ac)_{j}}\\ \eta^{\top}y\geq\max_{j:(Ac)_{j}<0}\frac{b_{j}-(Az)_{j}}{(Ac)_{j}}\\ 0\leq\min_{j:(Ac)_{j}=0}\{b_{j}-(Az)_{j}\}\end{array}\right.\] thus, \[\{Ay\leq b\}=\{\nu^{-}(z)\leq\eta^{\top}y\leq\nu^{+}(z),\nu^{0}(z)\geq 0\}\] is permissible to write. Where, \[\left\{\begin{array}{l}\nu^{-}(z):=\max_{j:(Ac)_{j}<0}\frac{b_{j}-( Az)_{j}}{(Ac)_{j}}\\ \nu^{+}(z):=\min_{j:(Ac)_{j}>0}\frac{b_{j}-(Az)_{j}}{(Ac)_{j}}\\ \nu^{0}(z):=\min_{j:(Ac)_{j}=0}\{b_{j}-(Az)_{j}\}\end{array}\right. 
\tag{17}\] varies independently of \(\eta^{\top}y\). Additionally, since \(\eta^{\top}y\) and \(z\) are independent, for \(z_{0}\in\mathbb{R}^{n}\) such that \(\nu^{0}(z_{0})\geq 0\), we have \[\eta^{\top}y|\{Ay\leq b,z=z_{0}\}=\eta^{\top}y|\{\nu^{-}(z_{0})\leq\eta^{\top} y\leq\nu^{+}(z_{0})\} \tag{18}\] where the right side of (18) becomes a truncated normal distribution in the interval \([\nu^{-}(z_{0}),\nu^{+}(z_{0})]\) for \(N(\beta_{j},\sigma^{2}\|\eta\|^{2})\). Henceforth, we will use the notation \[F_{\mu,\sigma^{2}}^{[a,b]}(x):=\frac{\Psi((x-\mu)/\sigma)-\Psi((a-\mu)/\sigma )}{\Psi((b-\mu)/\sigma)-\Psi((a-\mu)/\sigma)} \tag{19}\] where \(\Psi\) represents the cumulative probability of the standard normal distribution. Equation (19) becomes the distribution function of the truncated normal distribution. Generally, the value of the distribution function \(F_{X}(x)\) of a random variable \(X\) uniformly distributes in \([0,1]\) when \(x\) varies according to \(F_{X}\). Therefore, when \(\eta^{\top}y\) ranges within \([\nu^{-}(z_{0}),\nu^{+}(z_{0})]\) under \(z=z_{0}\), the value of its distribution function \[F_{\beta_{j},\sigma^{2}\|\eta\|^{2}}^{[\nu^{-}(z),\nu^{+}(z)]}\left(\eta^{\top }y|Ay\leq b,z=z_{0}\right) \tag{20}\] uniformly distributes in \([0,1]\). Here, since \(\eta^{\top}y\) and \(z\) are independent, (20) holds whether \(z=z_{0}\) or not4. Footnote 4: In Lee _et al._ (2016), the probability concerning \(z=z_{0}\) is multiplied and integrated to marginalize it, thus excluding the impact of \(z=z_{0}\). In practice, taking into account (16), it becomes necessary to marginalize over \(s\in\{-1,1\}^{m}\). As a result, we need to consider the logical OR of \(2^{m}\) conditions. When defined as \[F_{\mu}(x):=F_{\mu,\sigma^{2}\|\eta\|^{2}}^{\cup_{s}[\nu^{-}_{s}(z),\nu^{+}_ {s}(z)]}(x)|\cup_{s}\{A_{s}\leq b_{s}\}\] \(F_{\beta_{j}}(\eta^{\top}y^{obs})\) then becomes uniformly distributed in \([0,1]\). Note that \(\nu^{-}_{s}(\cdot)\), \(\nu^{+}_{s}(\cdot)\), \(A_{s}\), and \(b_{s}\) are, respectively, the \(\nu^{-}(\cdot)\), \(\nu^{+}(\cdot)\), \(A\), and \(b\) for \(s\in\{-1,1\}^{m}\). Thus, the range of \(\beta_{j}\) for which \[\frac{\alpha}{2}\leq F_{\beta_{j}}(\eta^{\top}y^{obs})\leq 1-\frac{\alpha}{2}\] holds is the confidence interval at a confidence level of \(1-\alpha\). In this context, note the validity of the following lemma. **Lemma 1**: _When \(\nu^{-}(z^{obs})\leq x\leq\nu^{+}(z^{obs})\) is fixed, \(F_{\mu}(x)\) monotonically decreases with respect to \(\mu\in\mathbb{R}\)._ (Proof can be found in the appendix.) In other words, similar to Section 2, we should determine \(L,U\) such that \[F_{L}(\eta^{\top}y^{obs})=1-\frac{\alpha}{2},\] \[F_{U}(\eta^{\top}y^{obs})=\frac{\alpha}{2}.\] Moreover, the p-value when \(\beta_{0}\) is fixed is given by \[2\min\{F_{0}(\eta^{\top}y^{obs}),1-F_{0}(\eta^{\top}y^{obs})\}.\] However, as seen in (16), computing the logical OR of \(2^{m}\) conditions is not realistic, especially when \(p\) is large. Furthermore, setting the condition with a particular single sign can reduce the detection power. Thus, Duy and Takeuchi (2021) proposed a method for constructing the truncated distribution conditioned by a certain parameter value. First, for each \(j\in\hat{M}(y^{obs})\), let \(c=\Sigma\eta(\eta^{\top}\Sigma\eta)^{-1}\), \(z=(I_{n}-c\eta^{\top})y^{obs}\), and \(y=z+cu\). Note that \(u=\eta^{\top}y\in\mathbb{R}\) is predetermined to move within the range \([u_{\min},u_{\max}]\). 
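Before continuing with the construction of the truncation range, note that the truncated-normal distribution function (19) and the two-sided selective p-value \(2\min\{F_{0}(\eta^{\top}y^{obs}),1-F_{0}(\eta^{\top}y^{obs})\}\) can be evaluated directly from the standard normal CDF. The following is a minimal sketch assuming SciPy (an illustration only; the paper's own programs, listed in Table 1, are in R), with the truncation interval \([\nu^{-},\nu^{+}]\) and the standard deviation \(\sigma\|\eta\|\) taken as given.

```python
# Minimal sketch: the truncated-normal CDF (19) and the two-sided selective
# p-value, with the truncation interval [a, b] and the standard deviation given.
from scipy.stats import norm

def truncated_normal_cdf(x, mu, sd, a, b):
    """F_{mu, sd^2}^{[a, b]}(x) as in (19)."""
    num = norm.cdf((x - mu) / sd) - norm.cdf((a - mu) / sd)
    den = norm.cdf((b - mu) / sd) - norm.cdf((a - mu) / sd)
    return num / den

def selective_p_value(stat, sd, a, b, mu0=0.0):
    """Two-sided p-value for H_0: beta_j^M = mu0 given the selection event."""
    F = truncated_normal_cdf(stat, mu0, sd, a, b)
    return 2 * min(F, 1 - F)

# Example with made-up numbers: eta^T y_obs = 1.8, truncated to [0.5, 3.0],
# standard deviation sigma * ||eta|| = 1.
print(selective_p_value(1.8, 1.0, 0.5, 3.0))
```

The confidence limits \(L,U\) can then be obtained by solving \(F_{\mu}(\eta^{\top}y^{obs})=1-\alpha/2\) and \(F_{\mu}(\eta^{\top}y^{obs})=\alpha/2\) for \(\mu\) with a scalar root finder, using the monotonicity stated in Lemma 1.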
Then, specify the range of \(u\in[u_{\min},u_{\max}]\) such that \(\hat{M}(y)=\hat{M}(y^{obs})\) and determine the truncated distribution. The reason for seeking \(u\) in the range of the line \(z+cu\) with this method is because \(\eta^{\top}y,z\) are independent, and there is no need to explore with \(y=z^{\prime}+cu\) for \(z\) different from \(z^{\prime}\). ## 5 Selective Inference in Forward Stepwise In this section, we introduce a method of sequentially selecting variables in linear regression (Forward Stepwise, FS). Although it does not optimize the squared error, it facilitates theoretical analysis. Finally, we determine the polyhedron when applying FS. Let's define, when \(M_{k}\subseteq\{1,\ldots,p\}\), \(k=1,2,\ldots,p\), \[P_{k}:=X_{M_{k}}(X_{M_{k}}^{\top}X_{M_{k}})^{-1}X_{M_{k}}^{\top},\] \[P_{k}^{\perp}:=I_{n}-X_{M_{k}}(X_{M_{k}}^{\top}X_{M_{k}})^{-1}X_{M_{k}}^{\top}.\] In the following, we utilize that \(P_{k}^{2}=P_{k}\), \((P_{k}^{\perp})^{2}=P_{k}^{\perp}\), and \(P_{k}^{\perp}P_{k}=O\). Departing from Lasso and applying the least squares method, in case of performing variable selection that minimizes information criteria such as AIC or BIC, if there are \(p\) variables in total, there is a method that compares all \(2^{p}\) models \(M\subseteq\{1,\ldots,p\}\). However, this would be computationally intensive. FS, instead, selects a variable \(j_{1}\) that minimizes the squared error, then selects \(j_{2}\) that minimizes the squared error for the variable set \(\{j_{1},j_{2}\}\), and so on, adding one variable \(j_{k}\) at each \(k\) without backtracking, and halts if the value of the information criterion increases in a naive search method. FS, while not always selecting the optimal model, is often used in analyzing the performance of information criteria. The procedure of FS (Forward Stepwise) can be described as follows. Initially, define \(M_{0}=\{\}\) (empty set), \(r_{0}=y\), and \(P_{0}^{\perp}=I_{n}\). Generally, for \(k\geq 1\) and \(j\not\in M_{k-1}\), define the coefficient \[\hat{\beta}_{j}:=\frac{(P_{k-1}^{\perp}X_{j})^{\top}r_{k-1}}{\|P_{k-1}^{\perp} X_{j}\|_{2}^{2}}, \tag{21}\] and the residual \[q_{j,k-1}:=r_{k-1}-P_{k-1}^{\perp}X_{j}\hat{\beta}_{j}=\left\{I_{n}-\frac{P_{k -1}^{\perp}X_{j}(P_{k-1}^{\perp}X_{j})^{\top}}{\|P_{k-1}^{\perp}X_{j}\|_{2}^{2} }\right\}r_{k-1}, \tag{22}\] minimize the squared norm \(\|q_{j,k-1}\|_{2}^{2}\) to determine \(j_{k}\), and set \(M_{k}:=M_{k-1}\cup\{j_{k}\}\). Furthermore, \[r_{k}:=q_{j_{k},k-1}, \tag{23}\] and increment \(k:=k+1\) for the next cycle. To provide a supplementary note since the notation using \(P_{k},P_{k}^{\perp}\) in FS is distinctive: At first, select \(j\) that minimizes the residual \(q_{j,0}\), denoting that \(j\) as \(j_{1}\), and determine the residual \(r_{1}\). Next, coordinate-transform \(X\) to \(P_{1}^{\perp}X\). Then, from among \(P_{1}^{\perp}X_{j}\), \(j\neq j_{1}\), choose \(j\) that minimizes the squared norm of the residual \(q_{j,1}\) as \(j_{2}\), and find the residual \(r_{2}\). This process is repeated. Up to steps \(k=1,2,\ldots,p\), the residual \(P_{k}^{\perp}y\) when \(X\) is set as \(X_{M_{k}}\) is obtained. That is, \[r_{k-1}=P_{k-1}^{\perp}y \tag{24}\] is valid for \(k=1,2,\ldots,p\). In the subsequent analysis, a polyhedron in the case of Forward Selection (FS) is sought. 
Initially, \[\|q_{j,k-1}\|_{2}^{2}=\left\|\left\{I-\frac{P_{k-1}^{\perp}X_{j}(P_{k-1}^{ \perp}X_{j})^{\top}}{\|P_{k-1}^{\perp}X_{j}\|_{2}^{2}}\right\}r_{k-1}\right\|_ {2}^{2}=r_{k-1}^{\top}r_{k-1}-\frac{r_{k-1}^{\top}P_{k-1}^{\perp}X_{j}(P_{k-1} ^{\perp}X_{j})^{\top}r_{k-1}}{\|P_{k-1}^{\perp}X_{j}\|_{2}^{2}} \tag{25}\] is established. Stemming from (25), the sign of \(X_{j_{k}}^{\top}P_{k-1}^{\perp}r_{k-1}\) coincides with the sign of \(\hat{\beta}_{j_{k}}\), denoted as \(s_{k}\). Therefore, \[s_{k}=\text{sign}\left(X_{j_{k}}^{\top}P_{k-1}^{\perp}r_{k-1}\right) \tag{26}\] holds true. From (24), for each \(j\not\in M_{k-1}\), we have the following equivalencies, \[\|q_{j_{k},k-1}\|_{2}^{2}\leq\|q_{j,k-1}\|_{2}^{2}\] \[\Longleftrightarrow \left|\frac{X_{j_{k}}^{\top}P_{k-1}^{\perp}r_{k-1}}{\|X_{j_{k}}^{ \top}P_{k-1}^{\perp}\|_{2}}\right|\geq\left|\frac{X_{j}^{\top}P_{k-1}^{\perp}r _{k-1}}{\|X_{j}^{\top}P_{k-1}^{\perp}\|_{2}}\right|\] \[\Longleftrightarrow \left|\frac{X_{j_{k}}^{\top}(P_{k-1}^{\perp})^{2}y}{\|X_{j_{k}}^{ \top}P_{k-1}^{\perp}\|_{2}}\right|\geq\left|\frac{X_{j}^{\top}(P_{k-1}^{\perp} )^{2}y}{\|X_{j}^{\top}P_{k-1}^{\perp}\|_{2}}\right|\] \[\Longleftrightarrow s_{k}\frac{X_{j_{k}}^{\top}P_{k-1}^{\perp}}{\|X_{j_{k}}^{ \top}P_{k-1}^{\perp}\|_{2}}y\geq\frac{X_{j}^{\top}P_{k-1}^{\perp}}{\|X_{j}^{ \top}P_{k-1}^{\perp}\|_{2}}y\,\ s_{k}\frac{X_{j_{k}}^{\top}P_{k-1}^{\perp}}{\|X_{j_{k}}^{\top}P_{k-1}^{ \perp}\|_{2}}y\geq-\frac{X_{j}^{\top}P_{k-1}^{\perp}}{\|X_{j}^{\top}P_{k-1}^{ \perp}\|_{2}}y\] will hold true. Therefore, for \(j\not\in M_{k}\), \[\left(\pm\frac{X_{j}^{\top}P_{k-1}^{\perp}}{\|X_{j}^{\top}P_{k-1}^{\perp}\|_{ 2}}-s_{k}\frac{X_{j_{k}}^{\top}P_{k-1}^{\perp}}{\|X_{j_{k}}^{\top}P_{k-1}^{ \perp}\|_{2}}\right)y\leq 0\] Consequently, employing some \(A_{k}\in\mathbb{R}^{2(p-k)\times n}\) and \(0=b_{k}\in\mathbb{R}^{2(p-k)}\), a polyhedron \(A_{k}y\leq b_{k}\) can be constructed Tibshirani _et al._(2016). Moreover, if this procedure is executed up to the \(m\)th step, summing the rows of this inequality on both sides for \(k=1,\ldots,m\) and from \(\sum_{k=1}^{m}2(p-k)=2pm-m^{2}-m\), we can derive \(Ay\leq b\) where \(A\in\mathbb{R}^{(2pm-m^{2}-m)\times n},b\in\mathbb{R}^{2pm-m^{2}-m}\). Loftus and Taylor (2015, 2014) generalized the aforementioned FS concept and proposed a method to divide \(p\) variables into \(G\) groups. Furthermore, extending Loftus and Taylor (2015), Yang _et al._(2016) has considered a method applicable to Group Lasso. ## 6 Selective Inference in LARS The LARS (Least Angle Regression) to be discussed in this section is a variable selection method similar yet distinct to Lasso. Both FS (Forward Selection) and LARS are considered theoretically amenable to analysis. In the following, we define the procedure of LARS and determine the polyhedron of the truncation region. This polyhedron will also be necessary when considering sequential selective inference in Section 8. In Lasso, with \(\lambda\geq 0\) taken on the horizontal axis5, and the solutions \(\hat{\beta}_{j}(\lambda)\), \(j=1,\ldots,p\) for each \(\lambda\) obtained from (6) taken on the vertical axis, the \(p\) curves formed are referred to as the solution path. For example, Figure 5 applies sparse estimation of linear regression to U.S. crime data6. LARS, as FS does, proceeds through a greedy search variable selection method and, sharing similar properties with Lasso but being theoretically more manageable, is often used when analytically approximating Lasso. 
While LARS requires \(O(p^{3})\) computation time, Lasso has the merit of computational efficiency. Footnote 5: There are packages that allow displaying \(\log\lambda\) or \(\|\beta\|_{1}\) on the horizontal axis. Footnote 6: [https://web.stanford.edu/~hastie/StatLearnSparsity/data.html](https://web.stanford.edu/~hastie/StatLearnSparsity/data.html)

Figure 5: Coefficients for each \(\lambda\) (Lasso/LARS). U.S. crime data is stored in the text file crime.txt, and the crime rate per one million people is used as the dependent variable. Lasso and LARS were performed to select the explanatory variables. As a result, the same solution path was observed for each \(\lambda\).

In LARS, firstly, a piecewise linear function \(\beta:[0,\infty)\rightarrow\mathbb{R}^{p}\) is constructed as follows (see Figure 6). Assuming \(k\geq 1\), given \(r_{1}=y,r_{2},\ldots,r_{k-1}\in\mathbb{R}^{n}\), \(\lambda_{1}:=\max_{j}|X_{j}^{\top}y|>\lambda_{2}>\ldots>\lambda_{k-1}\geq 0\), \(\beta_{1}=0,\beta_{2},\ldots,\beta_{k-1}\in\mathbb{R}^{p}\), \(M_{k-1}=\{j_{1},\ldots,j_{k-1}\}\subseteq\{1,\ldots,p\}\) (where \(j_{1}\) attains \(\max_{j}|X_{j}^{\top}y|\)), for each \(k=2,3,\ldots\), the following is performed. Note that \(P_{k-1}=X_{M_{k-1}}(X_{M_{k-1}}^{\top}X_{M_{k-1}})^{-1}X_{M_{k-1}}^{\top}\).

1. Define the following functions in the range \(\lambda\leq\lambda_{k-1}\): \[\beta(\lambda)=\beta_{k-1}+\left(1-\frac{\lambda}{\lambda_{k-1}}\right)\left[\begin{array}{c}X_{M_{k-1}}^{+}r_{k-1}\\ 0\end{array}\right] \tag{27}\] \[r(\lambda)=y-X\beta(\lambda) \tag{28}\]
2. Include in \(M_{k-1}\) the index \(j_{k}\not\in M_{k-1}\) for which the absolute value of \(X_{j}^{\top}r(\lambda_{k})\) attains the maximum value \(\lambda_{k}\) (\(\leq\lambda_{k-1}\)), to form \(M_{k}\).
3. Extend the range of the functions \(\beta(\lambda)\), \(r(\lambda)\) from \(\lambda_{k-1}\leq\lambda\) to \(\lambda_{k}\leq\lambda\), and let \(\beta_{k}:=\beta(\lambda_{k})\), \(r_{k}:=r(\lambda_{k})\).

Note that in (27), we have written the case where the first \(k-1\) components are non-zero to prevent the notation from becoming cumbersome (in fact, no component swapping or the like is performed). According to this procedure, for \(\lambda\leq\lambda_{k}\) and each \(j\in M_{k}\), \[X_{j}^{\top}r(\lambda)=\pm\lambda \tag{29}\] holds. Indeed, from (27) (28), \[r(\lambda) = y-X\beta(\lambda)=y-X\left\{\beta_{k}+\left(1-\frac{\lambda}{\lambda_{k}}\right)\left[\begin{array}{c}X_{M_{k}}^{+}r_{k}\\ 0\end{array}\right]\right\}\] \[= r_{k}-\left(1-\frac{\lambda}{\lambda_{k}}\right)P_{k}r_{k}=P_{k}^{\bot}r_{k}+\frac{\lambda}{\lambda_{k}}P_{k}r_{k}\] is established. Note that from the second step of the algorithm above, \(X_{j}^{\top}r_{k}=\pm\lambda_{k}\), and \(X_{j}^{\top}P_{k}=(P_{k}X_{j})^{\top}=X_{j}^{\top}\) holds for \(j\in M_{k}\). This means that \[X_{j}^{\top}r(\lambda)=X_{j}^{\top}\left(P_{k}^{\bot}r_{k}+\frac{\lambda}{\lambda_{k}}P_{k}r_{k}\right)=0+\frac{\lambda}{\lambda_{k}}X_{j}^{\top}r_{k}=\pm\lambda\] In other words, as the value of \(\lambda\) decreases, once \(j\in M\) becomes active, it continues to satisfy (29) until \(\lambda\) becomes \(0\), and does not become inactive thereafter. In selective inference within LARS, variables are sequentially selected, similar to FS. However, we consider the truncation region and truncated distribution when deciding to stop the selection at any given stage. The polytope of LARS is constructed as follows.
Firstly, by substituting \(\lambda=\lambda_{k}\) into equation (27), we establish \[r_{k}=y-X\beta(\lambda_{k})=y-X\left\{\beta_{k-1}+\left(1-\frac{\lambda_{k}}{ \lambda_{k-1}}\right)\right\}\left[\begin{array}{c}X_{M_{k-1}}^{+}r_{k-1}\\ 0\end{array}\right]=r_{k-1}-\left(1-\frac{\lambda_{k}}{\lambda_{k-1}}\right)P_ {k-1}r_{k-1} \tag{30}\] Also, from equation (29), \(r_{k-1}\), \(\lambda_{k-1}\) must satisfy \(X_{h}^{\top}r_{k-1}=s_{h}\lambda_{k-1}\), \(h\in M_{k-1}\) Suzuki (2020, 2021), namely, defining \(s_{M_{k-1}}:=\left[s_{1},\ldots,s_{k-1}\right]^{\top}\), \[X_{M_{k-1}}^{\top}r_{k-1}=\lambda_{k-1}s_{M_{k-1}} \tag{31}\] must hold. And, pay attention to the following lemma. **Lemma 2**: \[X_{j}^{\top}r_{k}=s_{k}\lambda_{k}\Longleftrightarrow\frac{X_{j}^{\top}P_{k -1}^{\bot}}{s_{k}-X_{j}^{\top}(X_{M_{k-1}}^{+})^{\top}s_{M_{k-1}}}y=\lambda_{k}\] (32) (Refer to the appendix for the proof.) Then, \[c_{k}(j,s):=\frac{P_{k-1}^{\bot}X_{j}}{s-X_{j}^{\top}(X_{M_{k-1}}^{+})^{\top}s _{M_{k-1}}} \tag{33}\] let's define as above. From (32), we have \[X_{j_{k}}^{\top}r_{k}=s_{k}\lambda_{k}\Longleftrightarrow c_{k}(j_{k},s_{k}) ^{\top}y=\lambda_{k}\] Also, \[c_{k}(j,s)^{\top}y\leq\lambda_{k-1}\,\ (j,s)\in C_{k} \tag{34}\] the set of \((j,s)\not\in M_{k-1}\times\{-1,1\}\) for which holds, we denote by \(C_{k}\) (the competitors), and the set of \((j,s)\not\in M_{k-1}\times\{-1,1\}\) for which (34) does not hold, we write as \(D_{k}\). It holds that \(C_{k}\cup D_{k}=\overline{M}_{k}\times\{-1,1\}\), and \[c_{k}(j,s)^{\top}y\geq\lambda_{k-1}\,\ (j,s)\in D_{k} \tag{35}\] \[c_{k}(j_{k},s_{k})^{\top}y\geq c_{k}(j,s)^{\top}y\,\ (j,s)\in C_{k}\backslash(j _{k},s_{k}) \tag{36}\] are satisfied Tibshirani _et al._ (2016). Furthermore, because \(\lambda\geq 0\), \[c_{k}(j_{k},s_{k})^{\top}y\geq 0 \tag{37}\] holds. That is to say, because the value of \(\lambda\) is being lowered from \(\lambda_{k-1}\), if \(c_{k}(j,s)^{\top}y\geq\lambda_{k-1}\), it is excluded from the objects to be compared for the maximum value (becoming an element of \(C_{k}\)). Therefore, \(c_{k}(j_{k},s_{k})^{\top}y\leq\lambda_{k-1}\). At each \(k\), the polyhedron is formed by at most \(|C_{k}|+2(p-k+1)\) inequalities of these four types and the rows of \(A,b\) are formed by adding them for \(k=1,\ldots,p\). However, this polyhedron is the one under the condition of active sets \(M_{1},\ldots,M_{k}\), the corresponding signs \(s_{M_{1}},\ldots,s_{M_{k}}\), and competitors \(C_{1},\ldots,C_{k}\). Of course, if the logical sum is taken for possible \(\{C_{i}\}\), the polyhedron related to \(\{M_{i}\}\) and \(\{s_{M_{i}}\}\) is formed, and further, if the logical sum is taken for possible signs \(\{s_{M_{i}}\}\), the polyhedron related to \(\{M_{i}\}\) is formed. On the other hand, in the case of Lasso, the condition for \((j_{k},s_{k})\) to become active from non-active is the same. However, even if it becomes active once, it can become non-active Tibshirani (2012). ## 7 Significance Test for Lasso In this section, we consider sequential statistical tests that do not depend on selective inference using the polyhedra dealt with so far. When \(X^{\top}X=I_{p}\), the statistic \(T_{k}\), which signifies whether the \(k\)-th variable is significant or not, can be calculated only from \(\lambda_{k}\), \(\lambda_{k+1}\), and \(\sigma^{2}\). This test is shown to have performance asymptotically equivalent to the Spacing Test using a polyhedron, as discussed in Section 8. 
In AIC and BIC, for a variable set \(M\subseteq\{1,\ldots,p\}\) and \(j\not\in M\), it is common to test whether it is significant to add variable \(j\) by comparing the difference in the \(RSS\) values of \(M\) and \(M\cup\{j\}\) with 2 or \(\log n\). However, when variables are selected sequentially by FS and variable \(j_{k}\) is added to the first \(k-1\) selected variables, basing such a decision on the value \[R_{k}:=\frac{RSS_{M_{k-1}}-RSS_{M_{k-1}\cup\{j_{k}\}}}{\sigma^{2}},\] that is, the difference of the RSS values (cf. \(r_{k}\) defined in (23)) normalized by \(\sigma^{2}\), is problematic. Figure 7(a) shows a \(Q-Q\) plot, against the \(\chi^{2}\) distribution with one degree of freedom, of the values of \(R_{1}\) for the first variable selected by FS, obtained by performing, \(n\) times, a linear regression in which all \(p\) slope coefficients are 0. Since FS selects the one variable that reduces the RSS the most among the \(p\) variables, \(R_{1}\) does not follow \(\chi^{2}_{1}\), even though that variable is not significant. Figure 7: (a) Values of \(R_{1}\) for the first variable selected by FS among \(p\) variables, and (b) values of \(T_{1}\) for the first variable selected in the Significance Test for Lasso, along with the \(Q-Q\) plot of the \(\chi^{2}\) values, schematically representing the results of the simulation. The statistic \(R_{1}\) is distributed with larger values than \(\chi^{2}_{1}\), while the statistic \(T_{1}\) follows a distribution similar to \(Exp(1)\). Thus, Lockhart _et al._ (2014) proposed a test method called the Significance Test for Lasso. Below, we assume as a null hypothesis that \(M_{k-1}\) contains all the indices of the non-zero components of the true \(\beta\), denoted \(\beta^{*}\). In Lasso, let \(M_{k-1}\) be the active set just before \(\lambda_{k}\). That is, at \(\lambda=\lambda_{k}\), among the components of \(\hat{\beta}(\lambda_{k})\in\mathbb{R}^{p}\), those indexed by \(M_{k-1}\) are non-zero, and \[\frac{1}{2}\|y-X\beta\|_{2}^{2}+\lambda_{k}\|\beta\|_{1} \tag{38}\] is minimized by \(\beta=\hat{\beta}(\lambda_{k})\), in which the component of the newly selected \(j_{k}\in M_{k}\backslash M_{k-1}\) is also \(0\) at \(\lambda=\lambda_{k}\). Assuming \(M_{k-1}\subsetneq M_{k}\), similarly at \(\lambda=\lambda_{k+1}\), among the components of \(\hat{\beta}(\lambda_{k+1})\in\mathbb{R}^{p}\), those indexed by \(M_{k}\) are non-zero, and \[\frac{1}{2}\|y-X\beta\|_{2}^{2}+\lambda_{k+1}\|\beta\|_{1}\] is minimized by \(\beta=\hat{\beta}(\lambda_{k+1})\), in which the component of \(j_{k+1}\in M_{k+1}\backslash M_{k}\) is \(0\) at \(\lambda=\lambda_{k+1}\). Lockhart _et al._ (2014) derived the distribution under the null hypothesis of the statistic \[T_{k}:=\left(\langle y,X\hat{\beta}(\lambda_{k+1})\rangle-\langle y,X_{M_{k-1 }}\tilde{\beta}_{M_{k-1}}(\lambda_{k+1})\rangle\right)/\sigma^{2} \tag{39}\] to test for a significant difference between this \(\hat{\beta}(\lambda_{k+1})\in\mathbb{R}^{p}\) and \(\tilde{\beta}=\tilde{\beta}_{M_{k-1}}(\lambda_{k+1})\in\mathbb{R}^{k-1}\), which minimizes \[\frac{1}{2}\|y-X_{M_{k-1}}\tilde{\beta}\|_{2}^{2}+\lambda_{k+1}\|\tilde{\beta} \|_{1}. \tag{40}\] In general, in Lasso, there is no guarantee that the variable set \(M\) will expand monotonically even if the value of \(\lambda\) is decreased. Under \(M_{k-1}\subsetneq M_{k}\), Lockhart _et al._ (2014) set \[\omega_{k}=\left\|(X_{M_{k}}^{+})^{\top}s_{M_{k}}-(X_{M_{k-1}}^{+})^{\top}s_{M _{k-1}}\right\|_{2} \tag{41}\] and derived the following. 
**Lemma 3**: \[T_{k}=\omega_{k}^{2}\lambda_{k}(\lambda_{k}-\lambda_{k+1})/\sigma^{2}.\] (42) (For a proof, please refer to the appendix.) In the following, we discuss the case where \(X^{\top}X=I_{p}\). For a generalization to the general case, please refer to Lockhart _et al._ (2014). In the case of \(X^{\top}X=I_{p}\), \[\omega_{k} = \left\|(X_{M_{k}}^{+})^{\top}s_{M_{k}}-(X_{M_{k-1}}^{+})^{\top}s_ {M_{k-1}}\right\|_{2}\] \[= \left\|X_{M_{k}}s_{M_{k}}-X_{M_{k-1}}s_{M_{k-1}}\right\|_{2}=\|s_ {j_{k}}X_{j_{k}}\|\] \[= \|X_{j_{k}}\|=1,\] thus, \[T_{k}=\lambda_{k}(\lambda_{k}-\lambda_{k+1})/\sigma^{2} \tag{43}\] holds. Also, when we let \(U_{j}=X_{j}^{\top}y\) and label the variables so that \(|U_{1}|\geq|U_{2}|\geq\ldots\geq|U_{p}|\), we can write \(\lambda_{j}=|U_{j}|\), \(j=1,\ldots,p\). In this case, \[T_{k}=|U_{k}|(|U_{k}|-|U_{k+1}|)/\sigma^{2}\] can be written. On the other hand, if \(V_{1}\geq V_{2}\geq\ldots\geq V_{p}\) denote \(p\) independent random variables following a \(\chi_{1}\) distribution, arranged in decreasing order, then for fixed \(r\leq p\), as \(p\to\infty\), \[(V_{1}(V_{1}-V_{2}),V_{2}(V_{2}-V_{3}),\ldots,V_{r}(V_{r}-V_{r+1}))\xrightarrow {d}(Exp(1),Exp(1/2),\ldots,Exp(1/r)) \tag{44}\] is derived (Lockhart _et al._ (2014)). Here, \(\xrightarrow{d}\) denotes convergence in distribution, and \(Exp(\alpha)\) is an exponential distribution such that \(Z\sim Exp(\alpha)\Longrightarrow\mathbb{E}[Z]=\alpha\). Using the statistic \(T_{k}\) for Lasso variable selection, with \(M^{*}\) denoting the true variable set, a test is constructed whose null hypothesis \(H_{0}\) is that the currently selected variable set \(M_{k}\) contains \(M^{*}\) as a subset. If all necessary variables have already been selected, the variables that remain unselected are merely noise, which is exactly the situation expressed by the null hypothesis. Lockhart _et al._ (2014) specifically proved the following proposition, and constructed a statistical test based on this null distribution. **Proposition 1** (Lockhart _et al._ (2014)): _Under the assumption that_ \[\lim_{p\to\infty}\left(\min_{j\in M^{*}}|\beta_{j}^{*}|-\sigma\sqrt{2\log p} \right)=\infty,\] _the following hold as \(p\to\infty\), where \(k_{*}\) denotes the number of elements of \(M^{*}\):_ 1. _The probability of the event_ \(\min_{j\in M^{*}}|U_{j}|>\max_{j\notin M^{*}}|U_{j}|\) _converges to 1._ 2. \((T_{k_{*}+1},T_{k_{*}+2},\ldots,T_{k_{*}+r})\xrightarrow{d}\big{(}Exp(1),Exp\left( \frac{1}{2}\right),\ldots,Exp\left(\frac{1}{r}\right)\big{)}\)_._ **Proof of Proposition 1:** Let \(\theta:=\min_{j\in M^{*}}\beta_{j}^{*}\). By assumption, there exists \(c_{p}\) such that both \(c_{p}-\sigma\sqrt{2\log p}\) and \(\theta-c_{p}\) tend to infinity as \(p\to\infty\). 
Since \(X^{\top}X=I_{p}\) and the \(U_{j}\sim N(\beta_{j}^{*},\sigma^{2})\) are independent, for each \(j\in M^{*}\), we have \[\mathbb{P}\left(|U_{j}|\leq c_{p}\right)=\Phi\left(\frac{c_{p}-\beta_{j}^{*}} {\sigma}\right)-\Phi\left(\frac{-c_{p}-\beta_{j}^{*}}{\sigma}\right)\leq\Phi \left(\frac{c_{p}-\beta_{j}^{*}}{\sigma}\right)\to 0,\] \[\mathbb{P}\left(\min_{j\in M^{*}}|U_{j}|>c_{p}\right)=\prod_{j\in M^{*}} \mathbb{P}(|U_{j}|>c_{p})\to 1,\] \[\mathbb{P}\left(\max_{j\notin M^{*}}|U_{j}|\leq c_{p}\right)=\left(1-2\Phi^{c }\left(\frac{c_{p}}{\sigma}\right)\right)^{p-k_{*}}\to 1,\] where \(\beta_{j}^{*}=0\), \(j\notin M^{*}\), and \[\Phi^{c}(u)=\frac{1}{\sqrt{2\pi}}\int_{u}^{\infty}e^{-x^{2}/2}dx<\frac{1}{ \sqrt{2\pi}}\int_{u}^{\infty}\frac{x}{u}e^{-x^{2}/2}dx=\frac{e^{-u^{2}/2}}{u \sqrt{2\pi}},\quad u>0,\] \[1-2\Phi^{c}(\sqrt{2\log p})>1-\frac{2e^{-\log p}}{\sqrt{2\pi\log p}}=1-\frac{2}{p \sqrt{2\pi\log p}},\] \[\left(1-2\Phi^{c}(\sqrt{2\log p})\right)^{p-k_{*}}>\left\{\left(1-\frac{2}{p \sqrt{2\pi\log p}}\right)^{\sqrt{2\pi}p\sqrt{\log p}}\right\}^{\frac{p-k_{*}}{ p\sqrt{2\pi\log p}}}\to 1.\] Let us denote the event \(\min_{j\in M^{*}}|U_{j}|>\max_{j\notin M^{*}}|U_{j}|\) by \(B\). Then \(\mathbb{P}(B)\to 1\), and therefore, for any event \(E\), \(\mathbb{P}(E|B)\rightarrow\mathbb{P}(E)\). Under event \(B\), \(T_{k_{*}+i}=V_{i}(V_{i}-V_{i+1})\), so when \(\mathbb{P}(B)\to 1\), \(T_{k_{*}+i}\) converges in probability to \(V_{i}(V_{i}-V_{i+1})\). From (44), this implies the proposition. ## 8 Spacing Test for LARS Lastly, we introduce a simplified Spacing Test, discussed with respect to the selective inference for LARS in Section 6 (Tibshirani _et al._ (2016)), and discuss its relation with the Significance Test of Section 7 (Lockhart _et al._ (2014)). Here, Tibshirani _et al._ (2016) constructed a framework that uniformly handles sequential model selection, including not only Lasso but also Forward Selection (FS) and others. In order to deal with such sequential model selection in a unified way, instead of assuming the linear regression model \(\theta=X\beta^{*}\), \(X\in\mathbb{R}^{n\times p}\), \(\beta^{*}\in\mathbb{R}^{p}\), one lets \[y=\theta+\epsilon,\quad\epsilon\sim N(0,\sigma^{2}I_{n}),\] and considers a test regarding the null hypothesis \(v^{\top}\theta=0\) for some \(v\in\mathbb{R}^{n}\). Regarding LARS, \(v\) is set as \(v:=(X^{+}_{M_{k-1}})^{\top}e_{k}\), and \(\theta\) is used as the mean of \(y\) under the null hypothesis, which is \(X\beta^{*}\). Under the null hypothesis, the \(k-1\) components of \(\beta^{*}\) are non-zero, while the rest are zero; hence, \(v^{\top}\theta\) becomes the \(k\)-th component of \(\beta^{*}\), i.e., \(\beta^{*}_{j_{k}}\). That is, the Spacing Test, like the Significance Test, uses a null hypothesis that, given the variables selected so far, the coefficients of any further selected variables become zero (there is no need to select more variables). Below, we present the polyhedron, null hypothesis, and truncated distribution. First, we use the following fact. 
**Lemma 4** (Tibshirani _et al._ (2016)): _Equations (36) and (37) are equivalent to the following four conditions:_ \[c_{1}(j_{1},s_{1})^{\top}y\geq c_{2}(j_{2},s_{2})^{\top}y\geq \cdots\geq c_{k}(j_{k},s_{k})^{\top}y\geq 0 \tag{45}\] \[c_{k}(j_{k},s_{k})^{\top}y\geq c_{k+1}^{*}:=\max_{(j,s)\in S_{k} ^{+}}c_{k+1}(j,s)^{\top}y\] (46) \[c_{l}(j_{l},s_{l})^{\top}y\geq\min_{(j,s)\in S_{l}^{-}}c_{l+1}(j,s)^{\top}y\,\ l=1,\ldots,k\] (47) \[c_{l}(j_{l},s_{l})^{\top}y\geq\max_{(j,s)\in S_{l}^{0}}c_{k}(j,s )^{\top}y\,\ l=1,\ldots,k \tag{48}\] _where we have set:_ \[S_{k}^{+}:=\{(j,s)|j\not\in M_{k},c_{k}(j,s)^{\top}c_{k}(j_{k},s_{k})\leq\|c_{k}(j _{k},s_{k})\|_{2}^{2},c_{k}(j,s)^{\top}y\leq c_{k}(j_{k},s_{k})^{\top}y\}\] \[S_{l}^{-}:=\{(j,s)|j\not\in M_{l},c_{l}(j,s)^{\top}c_{l}(j_{l},s_{l})\geq\|c_{l }(j_{l},s_{l})\|_{2}^{2},c_{l}(j,s)^{\top}y\leq c_{l}(j_{l},s_{l})^{\top}y\}\] \[S_{l}^{0}:=\{(j,s)|j\not\in M_{l},c_{l}(j,s)^{\top}c_{l}(j_{l},s_{l})=\|c_{l}( j_{l},s_{l})\|_{2}^{2},c_{l}(j,s)^{\top}y\leq c_{l}(j_{l},s_{l})^{\top}y\}\] (See the appendix for proof.) Furthermore, it has been proposed to approximate equations (45)-(48) by \(k+1\) conditions from equations (45) and (46). In fact, when \(X^{\top}X=I_{p}\), \(c_{l}(j,s)=X_{j}\), and for \(j\not\in M_{l}\), \(c_{l}(j,s)^{\top}c_{l}(j_{l},s_{l})=X_{j}^{\top}X_{l}=0\) holds. This means that both \(S_{l}^{-}\) and \(S_{l}^{0}\) are empty sets and equations (47) and (48) are unconditionally satisfied. In the general situation where \(X^{\top}X\neq I_{p}\), it becomes an accurate approximation of the polyhedron, but computationally advantageous as the matrix \(A\) has significantly fewer rows. In practice, the constraint conditions in equations (47) and (48) are often not applied, and deleting them does not significantly alter the geometric characteristics of the set. First, we define the row vectors \(A_{1},\ldots,A_{k+1}\) of matrix \(A\) as \[A_{l}=-c_{l}(j_{l},s_{l})^{\top}+c_{l+1}(j_{l+1},s_{l+1})^{\top}\,\ l=1,\ldots,k-1\] \[A_{k}=A_{k+1}=-c_{k}(j_{k},s_{k})^{\top}\] and let \(b_{1}=\ldots=b_{k}=0\), \(b_{k+1}=c_{k+1}^{*}\). Constructing the polyhedron \(A_{l}y\leq b_{l}\), \(l=1,\ldots,k+1\), and under the null hypothesis \(\beta_{j_{k}}=0\), the distribution followed by the statistic \[T_{k}:=\frac{\Psi\left(\lambda_{k-1}\frac{\omega_{k}}{\sigma}\right)-\Psi \left(\lambda_{k}\frac{\omega_{k}}{\sigma}\right)}{\Psi\left(\lambda_{k-1} \frac{\omega_{k}}{\sigma}\right)-\Psi\left(c_{k+1}^{*\top}y\frac{\omega_{k}}{ \sigma}\right)} \tag{49}\] (spacing test) was derived. Here, \(\omega_{k}\) is given in (41), and \(\eta^{\top}y=\lambda_{k}\) is divided by \(\sigma\) and multiplied by \(\omega_{k}\) because \(\omega_{k}\) is the reciprocal of \(\|\eta\|_{2}\), and to normalize the magnitude of \(\eta\in\mathbb{R}^{n}\) so that \(\eta^{\top}y/(\sigma\|\eta\|_{2})\) follows the standard normal distribution. **Lemma 5**: \(\|\eta\|_{2}=\omega_{k}^{-1}\)__ (See the appendix for proof.) **Proposition 2**: _Assuming the null hypothesis \(H_{0}:\beta_{j_{k}}=0\), the conditional probability of event \(\{T_{k}^{*}\leq\alpha\}\) under the polyhedron \(\{Ay\leq b\}\) is \(\alpha\)._ \[\mathbb{P}_{H_{0}}(T_{k}\leq\alpha|Ay\leq b)=\alpha \tag{50}\] For a two-sided test, use \(2\min\{T_{k},1-T_{k}\}\) instead of \(T_{k}\). **Proof of Proposition 2:** Throughout, to simplify the notation, we write \(c_{l}:=c_{l}(j_{l},s_{l})\), \(l=1,\ldots,k\). First, let \(\eta:=c_{k}\). 
If \(h<k\), then \[(P_{h-1}^{\perp}X_{j_{h}})^{\top}P_{k-1}^{\perp}X_{j_{k}} =X_{j_{h}}^{\top}P_{h-1}^{\perp}P_{k-1}^{\perp}X_{j_{k}}\] \[=X_{j_{h}}^{\top}P_{k-1}^{\perp}X_{j_{k}}=0 \tag{51}\] implying that \(c_{h}^{\top}c_{k}=0\) and, since \(c=\eta/\|\eta\|_{2}^{2}\), we get \[A_{1}c=\cdots=A_{k-2}c=0,\] \[A_{k-1}c=1,\] \[A_{k}c=A_{k+1}c=-1.\] Therefore, there is only one \(j\) such that \((Ac)_{j}>0\). Also, there are two \(j\) such that \((Ac)_{j}<0\), but since \(A_{k}=A_{k+1}=-c_{k}\), \(b_{k}=0\), and \(b_{k+1}=c_{k+1}^{*}\), from equation (17) we have \[\nu^{+}(z) =\frac{b_{k-1}-A_{k-1}z}{A_{k-1}c}\] \[=0-\{-c_{k-1}^{\top}+c_{k}^{\top}\}(I_{n}-\frac{\eta\eta^{\top}}{ \|\eta\|_{2}^{2}})y\] \[=(c_{k-1}^{\top}-c_{k}^{\top}+c_{k}^{\top}\frac{c_{k}c_{k}^{\top }}{\|c_{k}\|_{2}^{2}})y=\lambda_{k-1} \tag{52}\] and \[\nu^{-}(z) =\frac{b_{k+1}-A_{k+1}z}{A_{k+1}c}\] \[=c_{k+1}^{*}-(-c_{k}^{\top})\left(I_{n}-\frac{\eta\eta^{\top}}{ \|\eta\|_{2}^{2}}\right)y\] \[=c_{k+1}^{*}-(-c_{k}^{\top})\left(I_{n}-\frac{c_{k}c_{k}^{\top}}{ \|c_{k}\|_{2}^{2}}\right)y=c_{k+1}^{*} \tag{53}\] are valid, and the value in equation (20) is uniformly distributed in \([a,b]=[\nu^{-}(z),\nu^{+}(z)]\). The value of \(z_{0}\) at that time is \[z_{0}=\left(I_{n}-\frac{c_{k}c_{k}^{\top}}{\|c_{k}\|_{2}^{2}}\right)y.\] Tibshirani _et al._ (2016) proposes an approximation due to the complexity of obtaining the value of \(c_{k+1}^{*}{}^{\top}y\) in equation (49), which slightly weakens the detection power, as \[T_{k}^{sp}:=\frac{\Psi\left(\lambda_{k-1}\frac{\omega_{k}}{\sigma}\right)- \Psi\left(\lambda_{k}\frac{\omega_{k}}{\sigma}\right)}{\Psi\left(\lambda_{k-1 }\frac{\omega_{k}}{\sigma}\right)-\Psi\left(\lambda_{k+1}\frac{\omega_{k}}{ \sigma}\right)}\] and proves that \[\mathbb{P}_{H_{0}}(T_{k}^{sp}\leq\alpha|Ay\leq b)\leq\alpha. \tag{54}\] Because \[\lambda_{k+1}=\max_{(j,s):j\not\in M_{k}}c_{k+1}(j,s)\geq\max_{(j,s)\in S_{k}^ {+}}c_{k+1}(j,s)=c_{k+1}^{*}\] and \(T_{k}\leq T_{k}^{sp}\), equation (54) holds. Furthermore, under the assumption that \(\omega_{k}\lambda_{k+1}\xrightarrow{P}\infty\) and \(\omega_{k}^{2}\lambda_{k-1}(\lambda_{k-1}-\lambda_{k})\xrightarrow{P}\infty\), it was demonstrated that there exists a relationship \[-\log T_{k}^{sp}-T_{k}^{sig}\xrightarrow{P}0, \tag{55}\] where \(T_{k}^{sig}\) is the \(T_{k}\) in equation (39) for the Significance Test in Lasso, and \(\xrightarrow{P}\) denotes convergence in probability. ## 9 Conclusion We have discussed selective inference in sparse estimation above. There are several explanations about sparse estimation, and one might feel that it is somewhat "old news." In this paper, we tried to follow the conservative mainstream flow (Stanford Statistics) that includes selective inference, Lasso, FS, and LARS. The Significance Test in Section 7 Lockhart _et al._ (2014) and the Spacing Test in Section 8 Tibshirani _et al._ (2016) are considered to be difficult to understand (the latter presupposes an understanding of the former). And while they are highly cited7, hardly any follow-up research has emerged even after a decade. The honest motive for writing this paper is a belief that if readers can grasp the essence of them through this commentary, it might lead to research opportunities, suspecting that there might be "buried treasure" that remains unearthed. Footnote 7: As of March 1, 2023, their citation counts on Google Scholar are 812 and 506, respectively. 
Although the number of theoretical papers is decreasing, results applying to specific problems in machine learning and other fields have been presented. Applications to clustering Gao _et al._ (2020), to estimating the MMD (maximum mean discrepancy) for assessing differences in distributions Yamada _et al._ (2019), to HSIC-Lasso applications Yamada _et al._ (2018), and others like Duy and Takeuchi (2021) mentioned in Section 4, have been presented at top conferences in machine learning. Furthermore, Tasaka and Suzuki (2022) proposes a method for determining the value of \(\lambda\) in Fused Lasso by utilizing the Spacing Test. ## Appendix A Appendix ### Proof of Lemma 1 Since \(F_{\mu}(x)\) is a truncated normal distribution, when the probability density function is denoted as \(f_{\mu}(x)\), regarding the likelihood ratio, \[\mu<\lambda,y<z\Longrightarrow\frac{f_{\lambda}(z)}{f_{\mu}(z)}>\frac{f_{ \lambda}(y)}{f_{\mu}(y)}\] holds. Indeed, taking the logarithm on both sides and subtracting the right side from the left side, we get \[-\frac{(z-\lambda)^{2}}{2\sigma^{2}}+\frac{(z-\mu)^{2}}{2\sigma^{2}}+\frac{(y -\lambda)^{2}}{2\sigma^{2}}-\frac{(y-\mu)^{2}}{2\sigma^{2}}=\frac{(z-y)( \lambda-\mu)}{\sigma^{2}}>0\] At this time, by integrating both sides of \(f_{\lambda}(z)f_{\mu}(y)>f_{\lambda}(y)f_{\mu}(z)\), we have \[f_{\lambda}(z)F_{\mu}(x) =\int_{-\infty}^{x}f_{\lambda}(z)f_{\mu}(y)dy>\int_{-\infty}^{x}f_{ \lambda}(y)f_{\mu}(z)dy=f_{\mu}(z)F_{\lambda}(x)\] \[F_{\mu}(x)(1-F_{\lambda}(x)) =\int_{x}^{\infty}f_{\lambda}(z)F_{\mu}(x)dz>\int_{x}^{\infty}f_{ \mu}(z)F_{\lambda}(x)dz=F_{\lambda}(x)(1-F_{\mu}(x))\] This implies that \(F_{\mu}(x)>F_{\lambda}(x)\). ### Proof of Lemma 2 From (30) and (31), for \(j\not\in M_{k-1}\), we can transform \[X_{j}^{\top}r_{k}=s_{k}\lambda_{k} \Longleftrightarrow X_{j}^{\top}\left\{r_{k-1}-\left(1-\frac{ \lambda_{k}}{\lambda_{k-1}}\right)P_{k-1}r_{k-1}\right\}=s_{k}\lambda_{k}\] \[\Longleftrightarrow X_{j}^{\top}P_{k-1}^{\perp}r_{k-1}=\lambda_{k} \left\{s_{k}-X_{j}^{\top}(X_{M_{k-1}}^{+})^{\top}s_{M_{k-1}}\right\}\] \[\Longleftrightarrow\frac{X_{j}^{\top}P_{k-1}^{\perp}}{s_{k}-X_{ j}^{\top}(X_{M_{k-1}}^{+})^{\top}s_{M_{k-1}}}y=\lambda_{k}\] In the last transformation, we used \[r_{k}=P_{k-1}^{\perp}r_{k-1}+\frac{\lambda_{k}}{\lambda_{k-1}}P_{k-1}r_{k-1}\] \[P_{k-1}^{\perp}r_{k-1}=P_{k-1}^{\perp}P_{k-2}^{\perp}r_{k-2}=P_{k-1}^{\perp}r _{k-2}=\cdots=P_{k-1}^{\perp}y\] ### Proof of Lemma 3 From (38), the component of \(\hat{\beta}(\lambda_{k})\) corresponding to \(M_{k-1}\) is non-zero. By setting the derivative of \[\frac{1}{2}\|y-X_{M_{k-1}}\beta\|_{2}^{2}+\lambda_{k}\|\beta\|_{1}\] to zero, we have \[-X_{M_{k-1}}^{\top}(y-X_{M_{k-1}}\beta)+s_{M_{k-1}}\lambda_{k}=0\] for some \(\beta\in\mathbb{R}^{k-1}\). Therefore, using this \(\beta\) we can write \[X\hat{\beta}(\lambda_{k})=X_{M_{k-1}}\beta=X_{M_{k-1}}(X_{M_{k-1}}^{\top}X_{M _{k-1}})^{-1}(X_{M_{k-1}}^{\top}y-\lambda_{k}s_{M_{k-1}})=P_{k-1}y-\lambda_{k} (X_{M_{k-1}^{\top}})^{+}s_{M_{k-1}}.\] Similarly, \[X\hat{\beta}(\lambda_{k+1})=P_{k}y-\lambda_{k+1}(X_{M_{k}^{\top}})^{+}s_{M_{k}}\] holds. 
Also, the component of \(\tilde{\beta}_{M_{k-1}}(\lambda_{k+1})\) corresponding to \(M_{k-1}\) is non-zero, and for some \(\tilde{\beta}\in\mathbb{R}^{k-1}\) satisfying \[\frac{1}{2}\|y-X_{M_{k-1}}\tilde{\beta}\|_{2}^{2}+\lambda_{k+1}\|\tilde{\beta }\|_{1},\] when differentiated and set to zero, we have \[X_{M_{k-1}}\tilde{\beta}_{M_{k-1}}(\lambda_{k+1})=P_{k-1}y-\lambda_{k+1}(X_{M_{k -1}^{\top}})^{+}s_{M_{k-1}}.\] Substituting these into the definition of \(T_{k}\) (39), we obtain the following equation: \[T_{k}=y^{\top}(P_{k}-P_{k-1})y/\sigma^{2}-\lambda_{k+1}y^{\top}\left\{(X_{M_{k }}^{\top})^{+}s_{M_{k}}-(X_{M_{k-1}}^{\top})^{+}s_{M_{k-1}}\right\}/\sigma^{2}.\] (A.1) On the other hand, when \(f_{k}(\lambda):=P_{k}y-\lambda(X_{M_{k}}^{\top})^{+}s_{M_{k}}\), by continuity of the Lasso solution path, we have \(f_{k-1}(\lambda_{k})=f_{k}(\lambda)\), i.e., \[P_{k-1}y-\lambda_{k}(X_{M_{k-1}}^{\top})s_{M_{k-1}}=P_{k}y-\lambda_{k}(X_{M_{k }}^{\top})s_{M_{k}}.\] Therefore, \[(P_{k}-P_{k-1})y=\lambda_{k}\left((X_{M_{k}}^{+})^{\top}s_{M_{k}}-(X_{M_{k-1}} ^{+})^{\top}s_{M_{k-1}}\right)\] (A.2) \[y^{\top}(P_{k}-P_{k-1})y=\lambda_{k}^{2}\left\|(X_{M_{k}}^{+})^{\top}s_{M_{k}} -(X_{M_{k-1}}^{+})^{\top}s_{M_{k-1}}\right\|_{2}^{2}\] (A.3) hold, where we used \((P_{k}-P_{k-1})^{2}=P_{k}-P_{k-1}\). Finally, taking the inner product of both sides of (A.2) with \(y\) and using (A.3), we obtain \[y^{\top}\left((X_{M_{k}}^{+})^{\top}s_{M_{k}}-(X_{M_{k-1}}^{+})^{\top}s_{M_{k- 1}}\right)=\lambda_{k}\left\|(X_{M_{k}}^{+})^{\top}s_{M_{k}}-(X_{M_{k-1}}^{+}) ^{\top}s_{M_{k-1}}\right\|_{2}^{2}\] (A.4) Substituting (A.3) and (A.4) into (A.1), the lemma is obtained. ### Proof of Lemma 4 \[h_{k}(j,s):=\frac{c_{k}(j,s)-\frac{c_{k}(j,s)^{\top}c_{k}(j_{k},s_{k})}{\|c_{ k}(j_{k},s_{k})\|_{2}^{2}}c_{k}(j_{k},s_{k})}{1-\frac{c_{k}(j,s)^{\top}c_{k}(j_{ k},s_{k})}{\|c_{k}(j_{k},s_{k})\|_{2}^{2}}}\] We have that, \[c_{k}(j_{k},s_{k})^{\top}y\geq c_{k}(j,s)^{\top}y\] \[\Longleftrightarrow c_{k}(j_{k},s_{k})^{\top}y\left\{1-\frac{c_{k}(j,s)^{\top}c_{k}(j_{ k},s_{k})}{\|c_{k}(j_{k},s_{k})\|_{2}^{2}}\right\}\geq c_{k}(j,s)^{\top}y-\frac{c_{k}(j,s )^{\top}c_{k}(j_{k},s_{k})}{\|c_{k}(j_{k},s_{k})\|_{2}^{2}}c_{k}(j_{k},s_{k})^ {\top}y\] \[\Longleftrightarrow \left\{\begin{array}{ll}c_{k}(j_{k},s_{k})^{\top}y\geq h_{k}(j, s)^{\top}y,&c_{k}(j,s)^{\top}c_{k}(j_{k},s_{k})\leq\|c_{k}(j_{k},s_{k})\|_{2}^{2} \\ c_{k}(j_{k},s_{k})^{\top}y\leq h_{k}(j,s)^{\top}y,&c_{k}(j,s)^{\top}c_{k}(j_{k}, s_{k})\geq\|c_{k}(j_{k},s_{k})\|_{2}^{2}\\ c_{k}(j_{k},s_{k})^{\top}y\geq c_{k}(j,s)^{\top}y,&c_{k}(j,s)^{\top}c_{k}(j_{k}, s_{k})=\|c_{k}(j_{k},s_{k})\|_{2}^{2}\end{array}\right.\] Therefore, \[c_{k}(j_{k},s_{k})^{\top}y\geq c_{k}(j,s)^{\top}y\,\ (j,s)\neq(j_{k},s_{k})\] \[\Longleftrightarrow\ \left\{\begin{array}{l}c_{k}(j_{k},s_{k})^{\top}y\geq c_{k}(j_{k},-s _{k})^{\top}y\Longleftrightarrow c_{k}(j_{k},s_{k})^{\top}y\geq 0\\ c_{k}(j_{k},s_{k})^{\top}y\geq\max_{c_{k}(j,s)^{\top}c_{k}(j_{k},s_{k})\leq\|c _{k}(j_{k}.s_{k})\|_{2}^{2}}h_{k}(j,s)^{\top}y\\ c_{k}(j_{k},s_{k})^{\top}y\leq\min_{c_{k}(j,s)^{\top}c_{k}(j_{k},s_{k})\geq\|c _{k}(j_{k}.s_{k})\|_{2}^{2}}h_{k}(j,s)^{\top}y\\ c_{k}(j_{k},s_{k})^{\top}y\geq\max_{c_{k}(j,s)^{\top}c_{k}(j_{k},s_{k})=\|c_{k }(j_{k}.s_{k})\|_{2}^{2}}c_{k}(j,s)^{\top}y\end{array}\right.\] (A.5) holds (Lockhart _et al._ (2014)). Next, we consider the following lemma. 
**Lemma 6** (Lockhart _et al._ (2014)): \[h_{k}(j,s)=c_{k+1}(j,s)\] (The proof follows after the proof of Lemma 5) Consequently, equations (34), (36), and (37) are equivalent to the following five conditions Tibshirani _et al._ (2016): \[c_{l}(j_{l},s_{l})^{\top}y\leq\lambda_{l-1},\ l=1,\ldots,k\] (A.6) \[c_{l}(j_{l},s_{l})^{\top}y\geq 0,\ l=1,\ldots,k\] (A.7) \[c_{l}(j_{l},s_{l})^{\top}y\geq\max_{(j,s)\in S_{l}^{+}}c_{l+1}(j,s)^{\top}y,\ l=1,\ldots,k\] (A.8) \[c_{l}(j_{l},s_{l})^{\top}y\geq\min_{(j,s)\in S_{l}^{-}}c_{l+1}(j,s)^{\top}y\,\ l=1,\ldots,k\] (A.9) \[c_{l}(j_{l},s_{l})^{\top}y\geq\max_{(j,s)\in S_{l}^{0}}c_{k}(j,s)^{\top}y\,\ l=1,\ldots,k\] (A.10) First, since \(\lambda_{l}=c_{l}(j_{l},s_{l})^{\top}y\geq c_{l+1}(j_{l+1},s_{l+1})^{\top}y\), equations (A.6) and (A.7) are equivalent to equation (45). Additionally, in general \[c_{l}(j_{l},s_{l})\geq c_{l}(j_{l+1},s_{l+1})=\max_{(j,s):j\not\in M_{l}}c_{l+ 1}(j,s)^{\top}y\geq\max_{(j,s)\in S_{l}^{+}}c_{l+1}(j,s)^{\top}y\] therefore, condition (A.8) can be omitted for \(l=1,\ldots,k-1\). Thus, the proposition is obtained. ### Proof of Lemma 5 \[X_{M_{k}}^{\top}X_{M_{k}}\left[\begin{array}{c}z_{1}\\ z_{2}\end{array}\right]=\left[\begin{array}{cc}X_{M_{k-1}}^{\top}X_{M_{k-1 }}&X_{M_{k-1}}^{\top}X_{j_{k}}\\ X_{j_{k}}^{\top}X_{M_{k-1}}&X_{j_{k}}^{\top}X_{j_{k}}\end{array}\right]\left[ \begin{array}{c}z_{1}\\ z_{2}\end{array}\right]=\left[\begin{array}{c}s_{M_{k-1}}\\ s_{j_{k}}\end{array}\right]\] \[\Longleftrightarrow\ \left\{\begin{array}{l}z_{1}=(X_{M_{k-1}}^{\top}X_{M_{k-1 }})^{-1}s_{M_{k-1}}-X_{M_{k-1}}^{+}X_{j_{k}}z_{2}\\ z_{2}=\frac{s_{k}-s_{M_{k-1}}X_{M_{k-1}}^{+}X_{j_{k}}}{X_{j_{k}}^{\top}P_{k-1}^{ \bot}X_{j_{k}}}\end{array}\right.\] Hence, the following transformation can be made: \[\omega_{k}^{2} = \|(X_{M_{k}}^{\top})^{+}s_{M_{k}}-(X_{M_{k-1}}^{\top})^{+}s_{M_{k-1} }\|_{2}^{2}\] \[= s_{M_{k}}^{\top}(X_{M_{k}}^{\top}X_{M_{k}})s_{M_{k}}-s_{M_{k-1}}^{ \top}(X_{M_{k-1}}^{\top}X_{M_{k-1}})s_{M_{k-1}}\] \[= s_{M_{k-1}}^{\top}z_{1}+s_{k}^{\top}z_{2}-s_{M_{k-1}}^{\top}(X_{M _{k-1}}^{\top}X_{M_{k-1}})s_{M_{k-1}}\] \[= [s_{k}-s_{M_{k-1}}^{\top}X_{M_{k-1}}^{+}X_{j_{k}}]z_{2}\] \[= \frac{\{s_{k}-s_{M_{k-1}}X_{M_{k-1}}^{+}X_{j_{k}}\}^{2}}{X_{j_{k} }^{\top}P_{k-1}^{\bot}X_{j_{k}}}=\|\eta\|_{2}^{-2}\] ### Proof of Lemma 6 From (33), we define \[\theta_{k,j}:=\frac{X_{j_{k}}^{\top}P_{k-1}^{\bot}X_{j}}{X_{j_{k}}^{\top}P_{k -1}^{\bot}X_{j_{k}}}\] and need to show \[\frac{X_{j}^{\top}P_{k-1}^{\bot}y-\theta_{k,j}X_{j_{k}}^{\top}P_{k-1y}^{\bot}} {\{s-s_{M_{k-1}}(X_{M_{k-1}})^{+}X_{j}\}-\theta_{k,j}\{s_{k}-s_{M_{k-1}}(X_{M_ {k-1}})^{+}X_{j_{k}}\}}=\frac{X_{j}^{\top}P_{k}^{\bot}y}{s-s_{M_{k}}(X_{M_{k}} )^{+}X_{j}}\] (A.11) (Lockhart _et al._ (2014)). Since \(\theta_{k,j}\) is the coefficient of \(X_{j_{k}}\) when \(X_{j}\) is the dependent variable and \(X_{M_{k}}\) are the independent variables, \(\theta_{k,j}\) becomes the \(j_{k}\)-th component of \(X_{M_{k}}^{+}X_{j}\). 
Thus, \[X_{M_{k}}^{+}X_{j}=[\theta_{M_{k-1},j},\theta_{k,j}]^{\top} \Longleftrightarrow X_{M_{k-1}}\theta_{M_{k-1},j}+X_{j_{k}}\theta_{k,j}=P_{k}X_{j}\] \[\Longleftrightarrow \theta_{M_{k-1},j}=X_{M_{k-1}}^{+}(P_{k}X_{j}-\theta_{k,j}X_{j_{k }})=X_{M_{k-1}}^{+}(X_{j}-\theta_{k,j}X_{j_{k}})\] Therefore, the denominator of the left-hand side of (A.11) is \[s-[s_{M_{k-1}},s_{k}]^{\top}\left[\begin{array}{c}X_{M_{k-1}}^{+}(X_{j}- \theta_{k,j}X_{j_{k}})\\ \theta_{k,j}\end{array}\right]=s-s_{M_{k}}^{\top}\left[\begin{array}{c} \theta_{M_{k-1},j}\\ \theta_{k,j}\end{array}\right]=s-s_{M_{k}}^{\top}(X_{M_{k}})^{+}X_{j}\] which matches the denominator on the right-hand side. Also, since \(P_{k-1}^{\bot}(X_{j}-\theta_{k,j}X_{j_{k}})=P_{k}^{\bot}X_{j}\), the numerators on both sides of (A.11) match as well. **Acknowledgments**: We express our gratitude to the handling editor, Professor Hidetoshi Matsui, and two other reviewers, for their numerous comments in great detail. We take this opportunity to extend our thanks.
2304.00662
Averaging operators on $q$-deformed Witt and $q$-deformed $W(2,2)$ algebras
The aim of this paper is to give some constructions results of averaging operators on Hom-Lie algebras. The homogeneous averaging operators on $q$-deformed Witt and $q$-deformed $W(2,2)$ Hom-algebras are classified. As applications, the induced Hom-Leibniz algebra structures are obtained and their multiplicativity conditions are also given.
Ismail Laraiedh, Sergei Silvestrov
2023-04-03T00:15:43Z
http://arxiv.org/abs/2304.00662v1
# Averaging operators on \(q\)-deformed Witt and \(q\)-deformed \(W(2,2)\) algebras ###### Abstract The aim of this paper is to give some constructions results of averaging operators on Hom-Lie algebras. The homogeneous averaging operators on \(q\)-deformed Witt and \(q\)-deformed \(W(2,2)\) Hom-algebras are classified. As applications, the induced Hom-Leibniz algebra structures are obtained and their multiplicativity conditions are also given. 0 Footnote 0: _Keywords_: Hom-Lie algebra, averaging operator, \(q\)-deformed Witt Hom-algebra, \(q\)-deformed \(W(2,2)\) Hom-algebra 0 Footnote 0: _Keywords_: Hom-Lie algebra, averaging operator, \(q\)-deformed Witt Hom-algebra, \(q\)-deformed \(W(2,2)\) Hom-algebra ## 1 Introduction The investigations of various quantum deformations or \(q\)-deformations of Lie algebras began a period of rapid expansion in 1980's stimulated by introduction of quantum groups motivated by applications to the quantum Yang-Baxter equation, quantum inverse scattering methods and constructions of the quantum deformations of universal enveloping algebras of semi-simple Lie algebras. Various \(q\)-deformed Lie algebras have appeared in physical contexts such as string theory, vertex models in conformal field theory, quantum mechanics and quantum field theory in the context of deformations of infinite-dimensional algebras, primarily the Heisenberg algebras, oscillator algebras and Witt and Virasoro algebras. In [5, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 52, 53, 54], it was in particular discovered that in these \(q\)-deformations of Witt and Visaroro algebras and some related algebras, some interesting \(q\)-deformations of Jacobi identities, extending Jacobi identity for Lie algebras, are satisfied. This has been one of the initial motivations for the development of general quasi-deformations and discretizations of Lie algebras of vector fields using more general \(\sigma\)-derivations (twisted derivations) in [39]. Hom-Lie algebras and more general quasi-Hom-Lie algebras were introduced first by Hartwig, Larsson and Silvestrov [39], where the general quasi-deformations and discretizations of Lie algebras of vector fields using more general \(\sigma\)-derivations (twisted derivations) and a general method for construction of deformations of Witt and Virasoro type algebras based on twisted derivations have been developed, initially motivated by the \(q\)-deformed Jacobi identities observed for the \(q\)-deformed algebras in physics, along with \(q\)-deformed versions of homological algebra and discrete modifications of differential calculi. Hom-Lie algebras, Hom-Lie color algebras and more general quasi-Lie algebras and color quasi-Lie algebras where introduced first in [48, 49, 75]. Quasi-Lie algebras and color quasi-Lie algebras encompass within the same algebraic framework the quasi-deformations and discretizations of Lie algebras of vector fields by \(\sigma\)-derivations obeying twisted Leibniz rule, and the well-known generalizations of Lie algebras such as color Lie algebras, the natural generalizations of Lie algebras and Lie superalgebras. In quasi-Lie algebras, the skew-symmetry and the Jacobi identity are twisted by deforming twisting linear maps, with the Jacobi identity in quasi-Lie and quasi-Hom-Lie algebras in general containing six twisted triple bracket terms. 
In Hom-Lie algebras, the bilinear product satisfies the non-twisted skew-symmetry property as in Lie algebras, and the Hom-Lie algebras Jacobi identity has three terms twisted by a single linear map, reducing to the Lie algebras Jacobi identity when the twisting linear map is the identity map. Hom-Lie admissible algebras have been considered first in [58], where in particular the Hom-associative algebras have been introduced and shown to be Hom-Lie admissible, that is leading to Hom-Lie algebras using commutator map as new product, and in this sense constituting a natural generalization of associative algebras as Lie admissible algebras. Since the pioneering works [39, 47, 48, 49, 50, 58], Hom-algebra structures expanded into a popular area with increasing number of publications in various directions. Hom-algebra structures of a given type include their classical counterparts and open broad possibilities for deformations, Hom-algebra extensions of cohomological structures and representations, formal deformations of Hom-associative and Hom-Lie algebras, Hom-Lie admissible Hom-coalgebras, Hom-coalgebras, Hom-Hopf algebras [6, 26, 36, 47, 51, 59, 60, 61, 71, 72, 78, 80]. Hom-Lie algebras, Hom-Lie superalgebras and color Hom-Lie algebras and their \(n\)-ary generalizations have been further investigated in various aspects for example in [1, 2, 3, 44, 45, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 64, 68, 69, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 85, 86, 87, 88]. In the 1930s, the notion of averaging operator was explicitly defined by Kolmogoroff and Kampe de Feriet [41, 63]. Then G. Birkhoff [27] continued its study and showed that a positive bounded projection in the Banach algebra \(C(X)\), the algebra of scalar valued continuous functions on a compact Hausdorff space \(X\), onto a fixed range space is an idempotent averaging operator. In 1954, S. T. C. Moy [65] made the connection between averaging operators and conditional expectation. Furthermore, she studied the relationship between integration theory and averaging operators in turbulence theory and probability. Then her results were extended by G. C. Rota [70]. During the same period, the idempotent averaging operators on \(C_{\infty}(X)\), the algebra of all real valued continuous functions on a locally compact Hausdorff space \(X\) that vanish at the infinity, were characterized by J. L. Kelley [43]. In this century, while averaging operators continued to find many applications in its traditional areas of analysis and applied areas [37], their algebraic study has been deepened and generalized. J. L. Loday [55] defined the diassociative algebra as the enveloping algebra of the Leibniz algebra by analogy with the associative algebra as the enveloping algebra of the Lie algebra. More precisely, an averaging operator on an algebra \(A\) over a field \(\mathbb{K}\) is a linear map \(P:A\to A\) satisfying the averaging relation: \[P(x)P(y)=P(P(x)y)=P(xP(y)).\] M. Aguiar in [4] showed that a diassociative algebra can be derived from an averaging associative algebra by defining two new operations \(x\dashv y:=xP(y)\) and \(x\vdash y:=P(x)y\). An analogue process gives a Leibniz algebra from an averaging Lie algebra by defining a new operation \(\{x,y\}:=[P(x),y]\) and derives a (left) permutative algebra from an averaging commutative associative algebra. 
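As a small numerical illustration of the averaging relation and of the two derived operations just mentioned, one can take the classical conditional-expectation example: on the commutative algebra \(\mathbb{R}^{6}\) with pointwise multiplication, let \(P\) replace a vector by its averages over the blocks of a fixed partition. The block structure, the helper names and the use of NumPy below are illustrative assumptions, not taken from the references.

```python
import numpy as np

# Blocks of a partition of {0,...,5}; P replaces a vector by its block averages,
# i.e. P is the conditional expectation onto functions constant on each block.
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

def P(f):
    out = np.empty_like(f, dtype=float)
    for b in blocks:
        out[b] = f[b].mean()
    return out

rng = np.random.default_rng(1)
x, y = rng.standard_normal(6), rng.standard_normal(6)

# Averaging relation P(x)P(y) = P(P(x)y) = P(xP(y)) for the pointwise product.
lhs = P(x) * P(y)
mid = P(P(x) * y)
rhs = P(x * P(y))
print(np.allclose(lhs, mid), np.allclose(mid, rhs))   # True True

# The two derived operations of the diassociative construction:
left_action  = P(x) * y     # x |- y := P(x) y
right_action = x * P(y)     # x -| y := x P(y)
```

Because \(P(x)\) is constant on each block, both \(P(P(x)y)\) and \(P(xP(y))\) reduce blockwise to \(P(x)P(y)\), which is exactly what the assertions above check.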
In general, an averaging operator was defined on any binary operad and this kind of process was systematically studied in [66] by relating the averaging actions to a special construction of binary operads called duplicators [67, 84]. The purpose of this paper is to give some constructions results of averaging operators on Hom-Lie algebras and to classify the homogeneous averaging operators on \(q\)-deformed Witt and \(q\)-deformed \(W(2,2)\) algebras. Then the induced Leibniz algebra structures are obtained. Section 2 contains some necessary important basic notions, notations and examples on \(\mathbb{Z}\)-graded Hom-Lie algebras which will be used in next sections and we study the multiplicativity conditions of \(q\)-deformed Witt and \(q\)-deformed \(W(2,2)\) Hom algebras. Next, we present some useful methods for constructions of averaging operator on Hom-Lie algebras. In section 3, we classify the homogeneous averaging operators on the \(q\)-deformed Witt Hom-algebra \(\mathcal{V}^{q}\) and we give the induced Hom-Leibniz algebras from the averaging operators on the \(q\)-deformed Witt Hom-algebra \(\mathcal{V}^{q}\). In section 4, we classify the homogeneous averaging operators on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\). Also, we give the induced Hom-Leibniz algebras from the averaging operators on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\). ## 2 Constructions of averaging operators on Hom-Lie algebras In this section, firstly, we review some important basic notions, notations and examples on \(\mathbb{Z}\)-graded Hom-Lie algebras which will be used in next sections. Then, we present some useful methods for constructions of averaging operator on Hom Lie algebras. In this article, all linear spaces are over a field \(\mathbb{K}\) of characteristic zero. A linear operator \(T:A\mapsto A\) on a \(\mathbb{Z}\)-graded linear space \(A=\bigoplus_{j\in\mathbb{Z}}V_{j}\), is said to respect the grading of the linear space \(A\) if for any \(i\in\mathbb{Z}\) there exists \(j\in\mathbb{Z}\) such that \(T(A_{i})\subseteq A_{j}\). The linear operator respecting grading is said to be homogeneous of degree \(\deg T\in\mathbb{Z}\) if \(T(A_{i})\subseteq A_{i+\deg T}\) for all \(i\in\mathbb{Z}\), and \(T\) is said to be even if \(\deg T=0\), that is \(T(A_{i})\subseteq A_{i}\) for all for all \(i\in\mathbb{Z}\). ### Hom-algebras, Hom-Lie algebras and multiplicativity Hom-algebras in general are triples \((A,[\cdot,\cdot],\alpha)\) consisting of a linear space \(A\), bilinear product \([\cdot,\cdot]:A\times A\mapsto A\) and a linear map (linear space homomorphism) \(\alpha:A\mapsto A\). If, moreover, the linear map \(\alpha:A\to A\) is an algebra endomorphism, meaning that it satisfies for all \(x,y\in A\) the multiplicativity property \[\alpha([x,y])=[\alpha(x),\alpha(y)], \tag{2.1}\] then the Hom-Lie algebra is called _multiplicative_. Within specific classes of Hom-algebras, it is important to characterize multiplicative and non-multiplicative Hom-algebras belonging to the class. 
A Hom-algebra, \((A,[\cdot,\cdot],\alpha)\) is said to be \(\mathbb{Z}\)_-graded_ if the linear space \(A\) is \(\mathbb{Z}\)-graded, \[A=\bigoplus_{j\in\mathbb{Z}}A_{j},\] the bilinear product \([\cdot,\cdot]\) is \(\mathbb{Z}\)-graded, that is, for all \(m,n\in\mathbb{Z}\), \[[A_{m},A_{n}]\subseteq A_{m+n},\] and the linear operator \(\alpha\) respects the \(\mathbb{Z}\)-grading of the linear space \(A\), that is, for any \(i\in\mathbb{Z}\) there exists \(j\in\mathbb{Z}\) such that \(\alpha(A_{i})\subseteq A_{j}\). In any \(\mathbb{Z}\)-graded Hom-algebra (\(A=\bigoplus_{j\in\mathbb{Z}}A_{j},[\cdot,\cdot],\alpha\)), and the following inclusions hold: \[[\alpha(A_{m}),\alpha(A_{n})]\subseteq[A_{m+k},A_{n+k}]\subseteq A _{m+n+2k},\] \[\alpha([A_{m},A_{n}])\subseteq\alpha(A_{m+n})\subseteq A_{m+n+k},\] \[[\alpha(A_{m}),\alpha(A_{n})]\cap\alpha([A_{m},A_{n}])\subseteq A _{m+n+2k}\cap A_{m+n+k}=\left\{\begin{array}{ll}A_{m+n},&\mbox{if $k=0$}\\ 0,&\mbox{if $k\neq 0$}\end{array},\right.\] \[\left.\begin{array}{ll}\ker([\cdot,\cdot])\cap((\ker(\alpha) \times A)\cup(A\times\ker(\alpha)))\\ \subseteq M_{A,[\cdot,\cdot],\alpha}=\{(x,y)\in A\times A\mid[\alpha(x), \alpha(y)]=\alpha([x,y])\}\end{array}\right.\] These inclusions directly yield the following handy conditions for checking whether \(\mathbb{Z}\)-graded Hom-algebras are multiplicative or non-multiplicative, based on an interaction between the bilinear product \([\cdot,\cdot]\), the twisting map \(\alpha\), the \(\mathbb{Z}\)-grading of \(A\) and elements of its homogeneous subspaces \(A_{j},j\in\mathbb{Z}\) in the \(\mathbb{Z}\)-grading direct decomposition. **Theorem 2.1**.: _Let \((A=\bigoplus_{n\in\mathbb{Z}}A_{n},[\cdot,\cdot],\alpha)\) be a \(\mathbb{Z}\)-graded Hom-algebra where \(\alpha\) is a linear operator homogeneous of degree \(\deg\alpha=k\in\mathbb{Z}\)._ 1. _The Hom-algebra_ \((A,[\cdot,\cdot],\alpha)\) _is not multiplicative, if and only if_ \[\exists\ m,n\in\mathbb{Z},x_{m}\in A_{m},x_{n}\in A_{n}:[\alpha(x_{m}),\alpha( x_{n})]\neq\alpha([x_{m},x_{n}]).\] _The Hom-algebra_ \((A,[\cdot,\cdot],\alpha)\) _is multiplicative if and only if_ \[\forall\ m,n\in\mathbb{Z},x_{m}\in A_{m},x_{n}\in A_{n}:[\alpha(x_{m}),\alpha( x_{n})]=\alpha([x_{m},x_{n}]).\] 2. 
\((A,[\cdot,\cdot],\alpha)\) _is not multiplicative, if and only if the strict inclusion takes place_ \[\exists\ m,n\in\mathbb{Z}:\{(x_{m},x_{n})\in A_{m}\times A_{n}\mid[\alpha(x_{m }),\alpha(x_{n})]=\alpha([x_{m},x_{n}])\}\subsetneq A_{m}\times A_{n},\] _or equivalently if and and only if_ \[\exists\ m,n\in\mathbb{Z}:\] \[A_{m}\times A_{n}\setminus\{(x_{m},x_{n})\in A_{m}\times A_{n}\mid[\alpha(x_ {m}),\alpha(x_{n})]=\alpha([x_{m},x_{n}])\}\neq\emptyset\] _._ * _If_ \((A,[\cdot,\cdot],\alpha)\) _is multiplicative, then one of the following alternatives holds:_ * _linear operator_ \(\alpha\) _is even, that is homogeneous of degree_ \(k=0\)_;_ * _linear operator_ \(\alpha\) _is homogeneous of degree_ \(k\neq 0\) _and_ \[\forall\ m,n\in\mathbb{Z}:\ [\alpha(A_{m}),\alpha(A_{n})]\times\alpha([A_{m},A_{ n}])=\{0\}\times\{0\}=\{(0,0)\},\] _by linearity of_ \(\alpha\) _and bilinearity of_ \([\cdot,\cdot]\) _equivalent to_ \([\alpha(\cdot),\alpha(\cdot)]=\alpha([\cdot,\cdot])=0\)_._ * _If_ \(k\neq 0\)_, then_ \((A,[\cdot,\cdot],\alpha)\) _is not multiplicative if and only if_ \[\exists\ m,n\in\mathbb{Z}:[\alpha(A_{m}),\alpha(A_{n})]\times\alpha([A_{m},A_ {n}])\neq\{0\}\times\{0\}=\{(0,0)\}.\] _If_ \(k\neq 0\)_, then_ \((A,[\cdot,\cdot],\alpha)\) _is multiplicative if and only if_ \[\forall\ m,n\in\mathbb{Z}:[\alpha(A_{m}),\alpha(A_{n})]\times\alpha([A_{m},A_ {n}])=\{0\}\times\{0\}=\{(0,0)\},\] _that is_ \(\forall\ m,n\in\mathbb{Z}:[\alpha(A_{m}),\alpha(A_{n})]=\alpha([A_{m},A_{n}])= \{0\},\) _which is the same as_ \[[A,A]\subseteq\ker(\alpha),\quad\alpha(A)\subseteq\ker([\cdot,\cdot])=\{(x,y )\in A\times A\mid[x,y]=0\}.\] _or equivalently, for elements of the homogeneous subspaces, if and only if_ \[\forall\ m,n\in\mathbb{Z},x_{m}\in A_{m},x_{n}\in A_{n}:[\alpha(x_{m}),\alpha( x_{n})]=\alpha([x_{m},x_{n}])=0.\] If \(\dim A_{m}=1\) for all \(m\in\mathbb{Z}\) and \(\{x_{m}\in A_{m},m\in\mathbb{Z}\}\) is a homogeneous basis of the \(\mathbb{Z}\)-graded linear space \(A=\bigoplus\limits_{m\in\mathbb{Z}}A_{m}\), then for all \(m,n\in\mathbb{Z}\), \[\alpha(x_{m})=\alpha_{m+k,m}x_{m+k},\quad\text{for some unique $\alpha_{m+k,m}\in\mathbb{K}$}\] \[\alpha(x_{m+n})=\alpha_{m+n+k,m+n}x_{n+m+k},\quad\text{for some unique $\alpha_{m+n+k,m+n}\in\mathbb{K}$}\] \[[x_{m},x_{n}]=c_{m,n}^{m+n}x_{m+n},\quad\text{for some unique $c_{m,n}^{m+n}\in\mathbb{K}$}\] \[[\alpha(x_{m}),\alpha(x_{n})]=\alpha_{m+k,m}\alpha_{n+k,n}c_{m+k, n+k}^{m+n+2k}x_{m+n+2k},\] \[\alpha([x_{m},x_{n}])=c_{m,n}^{m+n}\alpha(x_{m+n})=c_{m,n}^{m+n} \alpha_{m+n+k,m+n}x_{m+n+k}.\] **Corollary 2.2**.: _Let \(\mathcal{A}=(A=\bigoplus\limits_{n\in\mathbb{Z}}A_{n},[\cdot,\cdot],\alpha)\) be a \(\mathbb{Z}\)-graded Hom-algebra where \(\alpha\) is a homogeneous linear operator of degree \(\deg\alpha=k\in\mathbb{Z}.\) If \(\dim A_{m}=1\) for all \(m\in\mathbb{Z}\) and \(\{x_{m}\in A_{m},m\in\mathbb{Z}\}\) is a homogeneous basis of the \(\mathbb{Z}\)-graded linear space \(A=\oplus_{m\in\mathbb{Z}}A_{m}\), then for all \(m,n\in\mathbb{Z}\),_ * _If_ \(\deg\alpha=k\neq 0\)_, then_ \(\mathcal{A}\) _is multiplicative if and only if for all_ \(m,n\in\mathbb{Z}\)_,_ \[\alpha_{m+k,m}\alpha_{n+k,n}c_{m+k,n+k}^{m+n+2k}=c_{m,n}^{m+n}\alpha_{m+n+k,m+n }=0,\] _which is equivalent to_ \(\left\{\begin{array}{l}\alpha_{m+n+k,m+n}=0,\quad\text{if}\quad c_{m,n}^{m+n }\neq 0\\ \alpha_{m+k,m}=0\text{ or }\alpha_{n+k,n}=0,\quad\text{if}\quad c_{m+k,n+k}^{m+n+2k} \neq 0\end{array}\right.\)__ _._ 2. 
_If_ \(\deg\alpha=k=0\)_, then_ \(\mathcal{A}\) _is multiplicative if and only if, for all_ \(m,n\in\mathbb{Z}\)_,_ \[\alpha_{m,m}\alpha_{n,n}c_{m,n}^{m+n}=c_{m,n}^{m+n}\alpha_{m+n,m+n},\] _that is, if and only if, for all_ \(m,n\in\mathbb{Z}\)_,_ \[c_{m,n}^{m+n}(\alpha_{m,m}\alpha_{n,n}-\alpha_{m+n,m+n})=0,\] _or equivalently if and only if, for all_ \(m,n\in\mathbb{Z}\)_,_ \[\alpha_{m,m}\alpha_{n,n}=\alpha_{m+n,m+n},\quad\text{if}\quad c_{m,n}^{m+n}\neq 0.\] **Definition 2.3** ([39, 58]).: Hom-Lie algebras are Hom-algebras \((A,[\cdot,\cdot],\alpha)\) consisting of a linear space \(A\) over a field \(\mathbb{K}\), a bilinear map \([\cdot,\cdot]\colon A\times A\to A\) and a linear map \(\alpha:A\to A\), satisfying for all \(x,y,z\in A\), \[[x,y]=-[y,x], \qquad\text{(Skew-symmetry identity)} \tag{2.2}\] \[[\alpha(x),[y,z]]+[\alpha(y),[z,x]]+[\alpha(z),[x,y]]=0. \qquad\text{(Hom-Jacobi identity)} \tag{2.3}\] **Definition 2.4** ([48, 58]).: Hom-Leibniz algebras are Hom-algebras \((A,[\cdot,\cdot],\alpha)\) consisting of a linear space \(A\) over a field \(\mathbb{K}\), a bilinear map \([\cdot,\cdot]\colon A\times A\to A\) and a linear map \(\alpha:A\to A\) satisfying for all \(x,y,z\in A\), \[[\alpha(x),[y,z]]=[[x,y],\alpha(z)]+[\alpha(y),[x,z]].\qquad\text{(Hom- Leibniz identity)} \tag{2.4}\] When, moreover, the linear map \(\alpha:A\to A\) satisfies multiplicativity (2.1), that is when \(\alpha\) is an algebra endomorphism, the Hom-Leibniz algebra \((A,[\cdot,\cdot],\alpha)\) is called multiplicative. _Remark 2.5_.: Skewsymmetric Hom-algebras are Hom-algebras satisfying the skewsymmetry axiom (2.2), and hence the Hom-Lie algebras form a special subclass of skewsymmetric Hom-algebras in which, moreover, the Hom-Jacobi identity (2.3) holds. In skewsymmetric Hom-algebras, however, there is no requirement of any relation between the linear operation \(\alpha\) and the bilinear operation \([\cdot,\cdot]\). In this sense, skewsymmetric Hom-algebras can be seen and studied simply as arbitrary pairs of skewsymmetric algebras and linear operators on them. However, this is not the case in Hom-Lie or Hom-Leibniz algebras, where the linear and bilinear operations are related via the Hom-Jacobi and Hom-Leibniz identities in nontrivial ways. _Remark 2.6_.: Every skewsymmetric Hom-Leibniz algebra is a Hom-Lie algebra, every Hom-Lie algebra is a skewsymmetric Hom-Leibniz algebra, but not every Hom-Leibniz algebra is skewsymmetric. Thus the class of Hom-Lie algebras coincides with the intersection of the class of Hom-Leibniz algebras and the class of skewsymmetric Hom-algebras, and is moreover properly included in each of these two classes. 
_Example 2.7_.: For \(q\in\mathbb{K}\setminus\{0\}\) and \(n\in\mathbb{Z}\), the \(q\)-numbers \(\{n\}\) defined by \[\{n\}=\left\{\begin{array}{ll}\frac{1-q^{n}}{1-q},&\text{for $q\neq 1$}\\ n,&\text{for $q=1$}\end{array}\right.\] have the following properties \[\begin{array}{l}\{m+1\}=1+q\{m\}=\{m\}+q^{m},\ \{m+n\}=\{m\}+q^{m}\{n\},\ q^{m}\{-m \}=-\{m\},\\ \{m\}=0\ \Leftrightarrow\ q^{m}=1.\end{array} \tag{2.5}\] The linear space \({\cal V}^{q}\) with a basis \(\{L_{n}|n\in{\mathbb{Z}}\}\) equipped with the bilinear operation \([\cdot,\cdot]\) and a linear map \(\alpha\) on \({\cal V}^{q}\) on the basis, for all \(m,n\in{\mathbb{Z}}\), by \[\begin{array}{l}[L_{m},L_{n}]=(\{m\}-\{n\})L_{m+n},\\ \alpha(L_{n})=(1+q^{n})L_{n}.\end{array} \tag{2.6}\] Then, \(({\cal V}^{q},[\cdot,\cdot],\alpha)\) is a Hom-Lie algebra [39, 75, 76], called the \(q\)_-deformed Witt Hom-Lie algebra_ or \(q\)_-Witt Hom-Lie algebra_. There is a natural \({\mathbb{Z}}\)-grading on \({\cal V}^{q}\), \[{\cal V}^{q}=\bigoplus_{n\in{\mathbb{Z}}}{\cal V}^{q}_{n},\ {\cal V}^{q}_{n}={ \mathbb{K}}L_{n},\ n\in{\mathbb{Z}}.\] _Example 2.8_.: If, in Example 2.7, the linear operator \(\alpha\), homogeneous of degree \(k\), is defined for all \(n\in{\mathbb{Z}}\), by \(\alpha=\alpha_{k}(L_{n})=(1+q^{n-k})L_{n+k}\), then \(({\cal V}^{q},[\cdot,\cdot],\alpha_{k})\) are \({\mathbb{Z}}\)-graded Hom-Lie algebras for all \(k\in{\mathbb{Z}}\). _Example 2.9_.: For \(q\neq 0\) and \(n\in{\mathbb{Z}}\), let \([n]\) denote the \(q\)-number \[[n]=[n]_{q}=\left\{\begin{array}{l}\frac{q^{n}-q^{-n}}{q-q^{-1}},\ \mbox{if}\ q\neq\pm 1\\ n,\ \mbox{if}\ q=1\\ (-1)^{n-1}n=(-1)^{n+1}n=-(-1)^{n}n,\ \mbox{if}\ q=-1.\end{array}\right.\] Note that these \(q\)-numbers are invariant under transformation replacing \(q\) by \(q^{-1}\), and satisfy for all \(m,n\in{\mathbb{Z}}\), \[\left\{\begin{array}{l}[-n]=-[n],\ q^{n}[m]-q^{m}[n]=[m-n],\ q^{-n}[m]+q^{m} [n]=[m+n],\ \mbox{for all}\ q\in{\mathbb{K}}\setminus\{0\}\\ [n]=0\ \Leftrightarrow\ q^{2n}=1,\ \mbox{for all}\ q\neq\pm 1\\ [n]=n=0\ \Rightarrow\ n=0,q^{2n}=1^{0}=1,\ \mbox{for}\ q=1\\ [n]=(-1)^{n-1}n=0\ \Rightarrow\ n=0,q^{2n}=(-1)^{0}=1,\ \mbox{for}\ q=-1.\end{array}\right.\] Note that if \(q=\pm 1\), then \(q^{2n}=1\) for all \(n\in{\mathbb{Z}}\), while \([n]=\left\{\begin{array}{l}n,\ \mbox{if}\ q=1\\ (-1)^{n-1}n,\ \mbox{if}\ q=-1\\ \end{array}\right.=0\) only for \(n=0\). Let \({\cal W}^{q}\) be a linear space with basis \(\{L_{n},W_{n}|n\in{\mathbb{Z}}\}\), and a bilinear operation on \({\cal W}^{q}\) is defined on the basis, for all \(m,n\in{\mathbb{Z}}\), by \[[L_{m},L_{n}]=[m-n]L_{m+n},\ \ [L_{m},W_{n}]=[m-n]W_{m+n}, \tag{2.7}\] and with other brackets obtained by skew-symmetry or equal to \(0\). The linear map \(\alpha\) on \({\cal W}^{q}\) is defined, for all \(n\in{\mathbb{Z}}\), by \[\alpha(L_{n})=(q^{n}+q^{-n})L_{n},\ \ \alpha(W_{n})=(q^{n}+q^{-n})W_{n}.\] It was proved in [83] that the triple \(({\cal W}^{q},[\cdot,\cdot],\alpha)\) forms a Hom-Lie algebra, which is called the \(q\)_-deformed \(W(2,2)\) Hom-Lie algebra_. By defining \(\deg(L_{n})=\deg(W_{n})=n\), we obtain that \({\cal W}^{q}\) is \({\mathbb{Z}}\)-graded Hom-Lie algebra, namely \({\cal W}^{q}=\bigoplus_{n\in{\mathbb{Z}}}{\cal W}^{q}_{n}\) with \({\cal W}^{q}_{n}=\mbox{span}_{{\mathbb{K}}}\{L_{n},W_{n}\}\). Note that \({\cal W}^{q}\) is not multiplicative since \(\alpha\) is not a homomorphism of Hom-Lie algebras. 
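The Hom-Lie axioms in Example 2.7 can be verified mechanically on basis elements. The following is a minimal sketch (our own illustration; the helper names and the choice \(q=3/2\) are arbitrary): it checks skew-symmetry and the Hom-Jacobi identity (2.3) for \(\mathcal{V}^{q}\) in exact rational arithmetic.

```python
from fractions import Fraction

q = Fraction(3, 2)        # an arbitrary q != 0, 1 (for q = 1 one uses {n} = n instead)

def qnum(n):
    # {n} = (1 - q^n)/(1 - q)
    return (1 - q**n) / (1 - q)

def bracket(m, n):
    # [L_m, L_n] = ({m} - {n}) L_{m+n}, returned as (coefficient, index)
    return qnum(m) - qnum(n), m + n

def alpha_coeff(n):
    # alpha(L_n) = (1 + q^n) L_n
    return 1 + q**n

def hom_jacobi(m, n, p):
    # coefficient of L_{m+n+p} in
    # [alpha(L_m),[L_n,L_p]] + [alpha(L_n),[L_p,L_m]] + [alpha(L_p),[L_m,L_n]]
    total = Fraction(0)
    for a, b, c in ((m, n, p), (n, p, m), (p, m, n)):
        inner_coeff, inner_idx = bracket(b, c)
        outer_coeff, _ = bracket(a, inner_idx)
        total += alpha_coeff(a) * inner_coeff * outer_coeff
    return total

for m, n, p in [(1, 2, 3), (-2, 0, 5), (4, -1, -3)]:
    assert bracket(m, n)[0] == -bracket(n, m)[0]   # skew-symmetry (2.2)
    assert hom_jacobi(m, n, p) == 0                # Hom-Jacobi identity (2.3)
print("Hom-Lie axioms hold on the tested triples")
```

The same template, with the structure constants (2.7), can be used for the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) of Example 2.9.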
_Example 2.10_.: In Example 2.9, if the homogeneous linear operator of degree \(k\) is defined by \(\beta(L_{n})=(q^{n-k}+q^{k-n})L_{n+k}\), \(\beta(W_{n})=(q^{n-k}+q^{k-n})W_{n+k}\) for all \(n\in\mathbb{Z}\), then \((\mathcal{W}^{q},[\cdot,\cdot],\beta)\) is a \(\mathbb{Z}\)-graded Hom-Lie algebra. **Proposition 2.11**.: _For any \(k\in\mathbb{Z}\), the Hom-Lie algebra \((\mathcal{V}^{q},[\cdot,\cdot],\alpha_{k})\) is multiplicative if and only if \(k=0\) and \(q=-1\)._ Proof.: If \(k=\deg\alpha_{k}=\deg\alpha_{0}=0\), then by Corollary 2.2 (ii), \[(\mathcal{V}^{q},[\cdot,\cdot],\alpha)\ \ \ \text{is multiplicative}\ \Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ (\{m\}-\{n\})\big{(}(1+q^{m})(1+q^{n})-(1+q^{m+n })\big{)}=0\ \Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ (\{m\}-\{n\})(q^{n}+q^{m})=0\ \Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}(q^{n}-q^{m})(q^{n}+q^{m})=0,& \text{for}\ q\neq 1\\ (m-n)(q^{n}+q^{m})=(m-n)\cdot 2=0,&\text{for}\ q=1\end{array}\right.\Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}q^{2(n-m)}=1,& \text{for}\ q\neq 1\\ m-n=0,&\text{for}\ q=1\end{array}\right.\Leftrightarrow\] \[q\neq 1,\forall\ p\in\mathbb{Z}:\ q^{2p}=1\Leftrightarrow q\neq 1,q ^{2}=1\ (\text{for}\ p=1)\Leftrightarrow\ q=-1.\] If \(k=\deg\alpha_{k}\neq 0\), then by Corollary 2.2 (i), \[(\mathcal{V}^{q},[\cdot,\cdot],\beta)\ \ \ \text{is multiplicative}\ \Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}(\{m\}-\{n\})(1+q^{m+n-k})=0; \\ (1+q^{m-k})(1+q^{n-k})(\{m+k\}-\{n+k\})=0,&\Leftrightarrow\\ \forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}(q^{n}-q^{m})(1+q^{m+n-k})=0, \text{ if }q\neq 1\\ (m-n)\cdot 2=0,\text{ if }q=1\end{array}\right.;\\ \left\{\begin{array}{ll}(1+q^{m-k})(1+q^{n-k})(q^{n+k}-q^{m+k})=0,&\text{if }q \neq 1\\ 4\cdot(m-n)=0,&\text{if }q=1\end{array}\right.\Leftrightarrow\end{array}\right.\] \[q\neq 1\ \text{and}\ \forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}q^{n}=q^{m} \text{ or }q^{n+m-k}=-1;\\ q^{m-k}=-1\text{ or }q^{n-k}=-1\text{ or }q^{n}=q^{m}\end{array}\right.\Leftrightarrow\] \[q\neq 1\ \text{and}\ \forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}q^{m-n}=1\\ q^{n+m-k}=-1;&\text{if }q^{m-n}\neq 1,\ q^{m-k}\neq-1\\ q^{n+m-k}=-1;&\text{if }q^{m-n}\neq 1,\ q^{m-k}\neq-1,\end{array}\right.\Leftrightarrow\] \[q\neq 1\ \text{and}\ \forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}q^{m-n}=1;\\ \left\{\begin{array}{ll}q^{n+m-k}=-1\\ q^{m-k}=-1,&\text{if }q^{m-n}\neq 1,\ q^{n-k}\neq-1\\ q^{n+m-k}=-1;&\text{if }q^{m-n}\neq 1,\ q^{m-k}\neq-1,\end{array}\right. \Leftrightarrow\end{array}\right.\] \[q\neq 1\ \text{and}\ \forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{ll}q^{m-n}=1;\\ q^{n}=1\text{ for }q^{m}\neq 1,q^{k}\neq-1;\\ q^{m}=1\text{ for }q^{n}\neq 1,q^{k}\neq-1,\end{array}\right. \tag{2.8}\] If \(q\neq 1\) and \(q^{k}=-1\), then (2.8) reduces to \(q\neq 1\) and \(\forall\ m,n\in\mathbb{Z}:q^{m-n}=1\), which does not hold because \(q^{m-n}=-1\neq 1\) when \(m-n=k\). If \(q\neq 1\), \(q^{k}\neq-1\) and \(q^{k}=1\), the (2.8) does not hold since for \(m=2k+1\) and \(n=k\), \[q^{m-n}=q^{k+1}=q\neq 1,\ q^{m}=q^{2k+1}=q\neq 1,\ q^{n}=q^{k+1}=q\neq 1,\] and if \(q\neq 1\), \(q^{k}\neq-1\) and \(q^{k}\neq 1\), then (2.8) does not hold since for \(m=2k\) and \(n=k\), \[q^{m-n}=q^{k}\neq 1,\ q^{m}=q^{2k}\neq 1,\ q^{n}=q^{k}=q\neq 1.\] Hence, if \(k=\deg\alpha_{k}\neq 0\), then \((\mathcal{V}^{q},[\cdot,\cdot],\alpha_{k})\) is not multiplicative for any \(q\). 
**Proposition 2.12**.: _The \(q\)-deformed \(W(2,2)\) Hom-Lie algebra \((\mathcal{W}^{q},[\cdot,\cdot],\alpha)\) is multiplicative if and only if \(q^{2}=-1\)_(_which is equivalent to \(q=\pm i\) if there exists \(i\in\mathbb{K}\) such that \(i^{2}=-1\), for example when \(\mathbb{K}\) is algebraically closed field, like \(\mathbb{C}\)_)_._ Proof.: For all \(n,m\in\mathbb{Z}\), we have \[[\alpha(L_{m}),\alpha(L_{n})]-\alpha([L_{m},L_{n}])=[(q^{m}+q^{-m })L_{m},(q^{n}+q^{-n})L_{n}]-\alpha([m-n]L_{m+n})\] \[\quad=(q^{m}+q^{-m})(q^{n}+q^{-n})[m-n]L_{m+n}-[m-n](q^{m+n}+q^{- m-n})L_{m+n}\] \[\quad=[m-n](q^{m-n}+q^{n-m})L_{m+n}=\left\{\begin{array}{l} \frac{(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})}{q-q^{-1}}L_{m+n},\ \mbox{if}\ q\neq\pm 1\\ 2(m-n)L_{m+n},\ \mbox{if}\ q=1\\ 2(n-m)L_{m+n}\ \mbox{if}\ q=-1\end{array}\right.,\] \[[\alpha(L_{m}),\alpha(W_{n})]-\alpha([L_{m},W_{n}])=[(q^{m}+q^{- m})L_{m},(q^{n}+q^{-n})W_{n}]-\alpha([m-n]W_{m+n})\] \[\quad=(q^{m}+q^{-m})(q^{n}+q^{-n})[m-n]W_{m+n}-[m-n](q^{m+n}+q^{- m-n})W_{m+n}\] \[\quad=[m-n](q^{m-n}+q^{n-m})W_{m+n}=\left\{\begin{array}{l} \frac{(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})}{q-q^{-1}}W_{m+n},\ \mbox{if}\ q\neq\pm 1\\ 2(m-n)W_{m+n},\ \mbox{if}\ q=1\\ 2(n-m)W_{m+n}\ \mbox{if}\ q=-1\end{array}\right.,\] \[\alpha([W_{m},W_{n}])-[\alpha(W_{m}),\alpha(W_{n})]=0.\] So, by Theorem 2.1 (i), \[(\mathcal{W}^{q},[\cdot,\cdot],\alpha)\ \mbox{is multiplicative}\ \Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{l}(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})=0,\ \mbox{if}\ q\neq\pm 1\\ m-n=0,\ \mbox{if}\ q=\pm 1\end{array}\right.\Leftrightarrow\] \[q\neq\pm 1\ \mbox{and}\ \forall\ m,n\in\mathbb{Z}:\ (q^{2(m-n)}-q^{2(n-m)})=0\ \Leftrightarrow\] \[q\neq\pm 1\ \mbox{and}\ \forall\ m,n\in\mathbb{Z}:\ q^{4(m-n)}=1 \Leftrightarrow q\neq\pm 1\ \mbox{and}\ \forall\ p\in\mathbb{Z}:\ q^{4p}=1\Leftrightarrow\] \[q\neq\pm 1\ \mbox{and}\ \forall\ p\in\mathbb{Z}:\ q^{4}=1\ \Leftrightarrow\ q^{2}=-1.\] \[\Leftrightarrow q=\pm i\ \mbox{if}\ \exists\ i\in\mathbb{K}:\ i^{2}=-1 \ \mbox{(for example if}\ \mathbb{K}\ \mbox{is algebraically closed)}.\qed\] **Proposition 2.13**.: _The Hom-Lie algebra \((\mathcal{W}^{q},[\cdot,\cdot],\beta)\) is not multiplicative for any \(q\in\mathbb{K}\setminus\{0\}\)._ Proof.: By Theorem 2.1 (iv), \[(\mathcal{W}^{q},[\cdot,\cdot],\beta)\ \mbox{is multiplicative}\ \Leftrightarrow\ \forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{l}\beta([L_{m},L_{n}])=[\beta(L_{m}),\beta(L_{n})]=0\\ \beta([L_{m},W_{n}])=[\beta(L_{m}),\beta(W_{n})]=0,\end{array}\right. 
\Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{l}(q^{m-k}+q^{k-m})(q^{n-k}+q^{k- n})[m-n]L_{m+n+2k}=0\\ (q^{m+n-k}+q^{k-m-n})[m-n]L_{m+n+k}=0,\\ (q^{m-k}+q^{k-m})(q^{n-k}+q^{k-n})[m-n]W_{m+n+2k}=0\\ (q^{m+n-k}+q^{k-m-n})[m-n]W_{m+n+k}=0,\end{array}\right.\Leftrightarrow\] \(q\neq\pm 1\) and \(\forall\ m,n\in\mathbb{Z}:\left\{\begin{array}{l}(q^{m-k}+q^{k-m})(q^{n-k}+q^{k-n })(q^{m-n}+q^{n-m})=0,\\ (q^{m+n-k}+q^{k-m-n})(q^{m-n}+q^{n-m})=0,\end{array}\right.\) which does not hold since for \(m=k+1,n=k\) it reduces to the impossible \[q\neq\pm 1\ \text{and}\ \left\{\begin{array}{l}2(q+q^{-1})^{2}=0,\\ (q^{k+1}+q^{-(k+1)})(q+q^{-1})=0,\end{array}\right.\] Hence \((\mathcal{W}^{q},[\cdot,\cdot],\beta)\) is not multiplicative for any \(q\in\mathbb{K}\setminus\{0\}.\) ### Averaging operators on Hom-algebras **Definition 2.14**.: An averaging operator on a Hom-algebra \((A,[\cdot,\cdot],\alpha)\) over \(\mathbb{K}\) is a linear operator \(P:A\to A,\) satisfying for all \(x,y\in A,\) \[\alpha\circ P=P\circ\alpha, \text{(commutativity of $P$ with $\alpha$)} \tag{2.9}\] \[[P(x),P(y)]=P([P(x),y])=P([x,P(y)]), \text{(averaging operator axiom)} \tag{2.10}\] _Remark 2.15_.: In skewsymmetric Hom-algebras, and thus in the Hom-Lie algebras in particular, the skewsymmetry of multiplication (2.2) implies that (2.10) is equivalent to \[[P(x),P(y)]=P([P(x),y]),\ \ \forall\ x,y\in A. \tag{2.11}\] **Proposition 2.16**.: _If \(P\) is an averaging operator on a Hom-algebra \(\mathcal{A}=(A,[\cdot,\cdot],\alpha)\), then_ * \((P(A),[\cdot,\cdot],\alpha)\) _is a Hom-subalgebra of the Hom-Lie algebra_ \((A,[\cdot,\cdot],\alpha)\)_;_ * \([P(A),ker(P)]\subseteq ker(P)\) _and_ \([ker(P),P(A)]\subseteq ker(P)\)_._ * _If_ \(P\) _is surjective, that is if_ \(P(A)=A\)_, then_ \(ker(P)\) _is a two-sided Hom-ideal in the Hom-algebra_ \(\mathcal{A}\)_, meaning that_ \([A,ker(P)]\subseteq ker(P)\)_,_ \([ker(P),A]\subseteq ker(P)\) _and_ \(\alpha(ker(P))\subseteq ker(P)\)_._ Proof.: Since \(P\) is a linear operator, \(P(A)\) is a linear subspace of \(A.\) * By (2.9), \(P\) and \(\alpha\) commute, and hence \(\alpha(P(x))=P(\alpha(x))\in P(A).\) Since, for any \(x,y\in P(A),\) there exist \(x^{\prime},y^{\prime}\in A\) such that \(x=P(x^{\prime}),\ y=P(y^{\prime}),\) the averaging operator axiom (2.10) yields \([x,y]=[P(x^{\prime}),P(y^{\prime})]=P([P(x^{\prime}),y^{\prime}])\in P(A).\) * Let \(x=P(x^{\prime})\) for some \(x^{\prime}\in A.\) If \(y^{\prime}\in ker(P),\) then (2.10) yields \(P([x,y^{\prime}])=P([P(x^{\prime}),y^{\prime}])=[P(x^{\prime}),P(y^{\prime})]=0,\) and hence, \([P(A),ker(P)]\subseteq ker(P).\) Let \(y=P(y^{\prime})\) for some \(y^{\prime}\in A.\) If \(x^{\prime}\in ker(P),\) then (2.10) yields \(P([x^{\prime},y])=P([x^{\prime},P(y^{\prime})])=[P(x^{\prime}),P(y^{\prime})]=0,\) and hence \([ker(P),P(A)]\subseteq ker(P).\) * The first two inclusions are a special case of (ii), and \(\alpha(ker(P))\subseteq ker(P)\) follows from commutativity of \(\alpha\) and \(P\) The defining axioms of Hom-Leibniz algebras and Hom-Lie algebras are multilinear in their arguments and are inherited by Hom-subalgebras. 
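As a toy illustration of Definition 2.14 and Proposition 2.16 (not taken from the paper), consider the three-dimensional Lie algebra with basis \(e_{1},e_{2},e_{3}\), brackets \([e_{1},e_{2}]=e_{2}\) and \(e_{3}\) central, viewed as a Hom-Lie algebra with \(\alpha=\mathrm{id}\), and let \(P\) be the idempotent projection onto \(\mathrm{span}\{e_{1},e_{2}\}\) along \(\mathrm{span}\{e_{3}\}\). The short check below verifies the averaging operator axiom (2.10) on a sample of elements.

```python
from itertools import product

def bracket(x, y):
    """[e1, e2] = e2, e3 central; elements are coefficient triples (a1, a2, a3)."""
    a1, a2, _ = x
    b1, b2, _ = y
    return (0, a1 * b2 - a2 * b1, 0)

def P(x):
    """Idempotent projection onto span{e1, e2} along span{e3}."""
    return (x[0], x[1], 0)

samples = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -1, 3), (1, 4, -2)]
for x, y in product(samples, repeat=2):
    lhs = bracket(P(x), P(y))            # [P(x), P(y)]
    mid = P(bracket(P(x), y))            # P([P(x), y])
    rhs = P(bracket(x, P(y)))            # P([x, P(y)])
    assert lhs == mid == rhs, (x, y)
print("the projection P satisfies the averaging axiom (2.10) on all samples")

# Consistency with Proposition 2.16: im(P) = span{e1, e2} is a Hom-subalgebra, and
# [P(A), ker P] = [span{e1, e2}, span{e3}] = 0 is contained in ker P = span{e3}.
```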
**Corollary 2.17**.: _Let \(P\) be an averaging operator on a Hom-algebra \((A,[\cdot,\cdot],\alpha).\) Then, if \((A,[\cdot,\cdot],\alpha)\) is a skewsymmetric Hom-algebra, or a Hom-Leibniz algebra or a Hom-Lie algebra, then \((P(A),[\cdot,\cdot],\alpha)\) is also a skewsymmetric Hom-algebra, or a Hom-Leibniz algebra or a Hom-Lie algebra respectively._ **Proposition 2.18**.: _If \(\mathcal{A}=(A,[\cdot,\cdot],\alpha)\) is a Hom-Leibniz algebra and \(P\) is an averaging operator on \(\mathcal{A}\), then with \(\{\cdot,\cdot\}:A\times A\to A\) defined for all \(x,y\in A\) by \(\{x,y\}=[P(x),y],\)_ * _the triple_ \(\mathcal{A}^{\prime}=(A,\{\cdot,\cdot\},\alpha)\) _is a Hom-Leibniz algebra;_ * _If_ \(\mathcal{A}=(A,[\cdot,\cdot],\alpha)\) _is a Hom-Leibniz algebra, then_ \(\mathcal{A}^{\prime}=(A,\{\cdot,\cdot\},\alpha)\) _is a Hom-Lie algebra if and only if_ \([P(x),y]=-[P(y),x]\) _for all_ \(x,y\in A\)_;_ * _If_ \(\mathcal{A}=(A,[\cdot,\cdot],\alpha)\) _is a Hom-Lie algebra, and the averaging linear operator_ \(P\) _is surjective, that is_ \(P(A)=A\)_, then_ \(\mathcal{A}^{\prime}=(A,\{\cdot,\cdot\},\alpha)\) _is a Hom-Lie algebra._ Proof.: Let \(x,y,z\in A\), Then (2.4) in \(\mathcal{A}^{\prime}\) is proved as follows: \[\begin{array}{l}\{\alpha(x),\{y,z\}\}-\{\{x,y\},\alpha(z)\}-\{\alpha(y),\{x,z\}\}\\ =[P(\alpha(x)),[P(y),z]]-[P([P(x),y],\alpha(z)]-[P(\alpha(y)),[P(x),P(z)]]\\ \stackrel{{(\alpha,\text{ $P$ commute})}}{{=}}\\ =[\alpha(P(x)),[P(y),z]]-[[P(x),P(y)],\alpha(z)]-[\alpha(P(y)),[P(x),z]]\stackrel{{ (\mathcal{A}\text{ is Hom-Leibniz algebra})}}{{=0}}.\qed\end{array}\] **Proposition 2.19**.: _Let \(\{P_{j}\}_{1\leq j\leq n}\) be a finite set of averaging operators on a Hom-Lie algebra \((A,[\cdot,\cdot],\alpha)\), and \(\{\lambda_{j}\}_{1\leq j\leq n}\subseteq\mathbb{K}.\) Then_ * _The operator_ \(S=\sum_{j=1}^{n}\lambda_{j}P_{j}\) _is an averaging operator on_ \(A\) _if_ \[\sum_{\begin{subarray}{c}j,k=1\\ i\neq j\end{subarray}}^{n}\lambda_{j}\lambda_{k}P_{j}([P_{k}(x),y])=\sum_{ \begin{subarray}{c}j,k=1\\ j\neq k\end{subarray}}^{n}\lambda_{j}\lambda_{k}[P_{j}(x),P_{k}(y)]\] * _If_ \(P_{j}\circ P_{k}=P_{k}\circ P_{j}\) _for_ \(1\leq k,j\leq n\)_, then_ \(T=\underset{j=1}{\overset{n}{\prod}}P_{j}=P_{1}\circ\cdots\circ P_{n}\) _is an averaging operator._ * _If_ \(P_{j}\circ P_{k}=P_{k}\circ P_{j}\) _for_ \(1\leq k,j\leq n\)_, then for any polynomial_ \(F\in\mathbb{K}[t_{1},\ldots,t_{n}]\) _with zero constant term_ \(F(0,\ldots,0)=0\)_, the operator_ \(F(P_{1},\ldots,P_{n})\) _is an averaging operator._ * _If an averaging operator_ \(P\) _is invertible, then_ \(P^{-1}\) _is an averaging operator._ Proof.: (i) The map \(S\) is a linear operator on \(A\) as a linear combination of the linear operators \(\{P_{j}\}_{1\leq j\leq n}\), and \(\alpha\circ S=S\circ\alpha\), since \(\alpha\circ P_{j}=P_{j}\circ\alpha,\ 1\leq j\leq n\). For all \(x,y\in A\). 
\[S([S(x),y])= (\sum_{j=1}^{n}\lambda_{j}P_{j})([(\sum_{k=1}^{n}\lambda_{k}P_{k} )(x),y])=(\sum_{j=1}^{n}\lambda_{j}P_{j})([\sum_{k=1}^{n}\lambda_{k}P_{k}(x),y])\] \[= \sum_{j,k=1}^{n}\lambda_{j}\lambda_{k}P_{j}([P_{k}(x),y])\] \[= \sum_{j=1}^{n}\lambda_{j}^{2}P_{j}([P_{k}(x),y])+\sum_{\begin{subarray} {c}j,k=1\\ j\neq k\end{subarray}}^{n}\lambda_{j}\lambda_{k}P_{j}([P_{k}(x),y])\] \[= \sum_{j=1}^{n}\lambda_{j}^{2}[P_{j}(x),P_{j}(y)]+\sum_{ \begin{subarray}{c}j,k=1\\ j\neq k\end{subarray}}^{n}\lambda_{j}\lambda_{k}([P_{j}(x),P_{k}(y)])\] \[= \sum_{j=1}^{n}[\lambda_{j}P_{j}(x),\lambda_{j}P_{j}(y)]+\sum_{ \begin{subarray}{c}j,k=1\\ j\neq k\end{subarray}}^{n}([\lambda_{j}P_{j}(x),\lambda_{k}P_{k}(y)])\] \[= [\sum_{j=1}^{n}\lambda_{j}P_{j},\sum_{k=1}^{n}\lambda_{k}P_{k}]=[S (x),S(y)].\] (ii) The operator \(T_{n}=P_{1}\circ\cdots\circ P_{n}\) is linear as a composition of the linear operators, and also \(\alpha\circ T_{n}=T_{n}\circ\alpha\) since \(\alpha\circ P_{j}=P_{j}\circ\alpha\) for all \(1\leq j\leq n\). For \(n=1\), \(T_{1}=\prod_{j=1}^{n}P_{j}=P_{1}\) is an averaging operator. Suppose that for \(n=k\) the statement holds, that is \(T_{k}=P_{1}\circ\cdots\circ P_{k}\) is an averaging operator. Then, for all \(x,y\in A\), \[T_{k+1}([T_{k+1}(x),y])= T_{k}\circ P_{k+1}([T_{k}(P_{k+1}(x)),y])\] \[(T_{k},P_{k+1\text{ commute}})= T_{k}(P_{k+1}([P_{k+1}(T_{k}(x)),y]))\] \[(P_{k+1\text{ is an averaging operator}})= T_{k}([P_{k+1}(T_{k}(x)),P_{k+1}(y)])\] \[(T_{k},P_{k+1\text{ commute}})= T_{k}([T_{k}(P_{k+1}(x)),P_{k+1}(y)])\] \[(T_{k}\text{ is an averaging operator})= [T_{k}(P_{k+1}(x)),T_{k}(P_{k+1}(y))]\] \[= [T_{k+1}(x),T_{k+1}(y)].\] proving that \(T_{n}\) is averaging operator for \(n=k+1\), which completes the proof by the principle of mathematical induction. (iii) Since \(F(P_{1},\ldots,P_{n})\) is a linear combination of compositions of averaging operators, it is also an averaging operator by (ii) and (i). (iv) It is clear that if \(P\) is invertible, then \(P^{-1}\) is a linear map of \(A\) and \(\alpha\circ P^{-1}=P^{-1}\circ\alpha\). Let \(x,y\in A\). Since \(P\) is surjective, there exists \(y^{\prime}\in A\) such that \(P(x^{\prime})=P^{-1}(x)\). Then, \[P(P^{-1}([P^{-1}(x),y])) =[P^{-1}(x),y]=[P^{-1}(x),P(P^{-1}(y)]\] \[=[P(x^{\prime}),P(P^{-1}(y)]=P([P(x^{\prime}),P^{-1}(y)])=P([P^{-1 }(x),P^{-1}(y)]).\] Since \(P\) is injective, we have \(P^{-1}([P^{-1}(x),y])=[P^{-1}(x),P^{-1}(y)]\). _Remark 2.20_.: By Proposition 2.19, if \(P\) and \(Q\) are two averaging operators on a Hom-Lie algebra \((A,[\cdot,\cdot],\alpha)\), then 1. for any \(\lambda\in\mathbb{K}\), \(\lambda P\) is an averaging operator on \(A\); 2. If \(P([Q(x),y])+Q([P(x),y])=[Q(x),P(y)]+[P(x),Q(y)]\) holds for all \(x,y\in A\), then \(P+Q\) is an averaging operator. 3. If \(P\circ Q=Q\circ P\), then \(P\circ Q\) is an averaging operator; 4. for any polynomial \(F\in\mathbb{K}[t]\) with zero constant term \(F(0)=0\), the operator \(F(P)\) is an averaging operator. _Remark 2.21_.: It is known that \(P\) is an averaging operator on a Hom-Lie algebra \((A,[\cdot,\cdot],\alpha)\) if and only if \(\lambda P\) is an averaging operator on \(A\) for \(\lambda\in\mathbb{K}^{*}=\mathbb{K}\setminus\{0\}\). So the set of averaging operators on any Hom-Lie algebra carries an action of \(\mathbb{K}^{*}\) by scalar multiplication. Next we provide the necessary and sufficient conditions for an idempotent linear operator to be an averaging operator. 
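Before turning to idempotent operators, here is a quick numerical sanity check (not from the paper) of Proposition 2.18: for an averaging operator \(P\) on a Lie algebra (so \(\alpha=\mathrm{id}\)), the bracket \(\{x,y\}=[P(x),y]\) obeys the Hom-Leibniz identity (2.4). The example uses the two-dimensional algebra \([e_{1},e_{2}]=e_{2}\) and the projection \(P\) onto \(\mathrm{span}\{e_{1}\}\) along \(\mathrm{span}\{e_{2}\}\), which one checks directly to be an averaging operator.

```python
from itertools import product

def bracket(x, y):
    """[e1, e2] = e2; elements are coefficient pairs (a1, a2)."""
    a1, a2 = x
    b1, b2 = y
    return (0, a1 * b2 - a2 * b1)

def P(x):
    """Projection onto span{e1} along span{e2}; an averaging operator for this algebra."""
    return (x[0], 0)

def new_bracket(x, y):
    """Induced bracket {x, y} = [P(x), y] of Proposition 2.18."""
    return bracket(P(x), y)

def sub(x, y):
    return (x[0] - y[0], x[1] - y[1])

samples = [(1, 0), (0, 1), (2, -1), (-3, 5)]
for x, y, z in product(samples, repeat=3):
    # Hom-Leibniz identity (2.4) with alpha = id:
    # {x, {y, z}} - {{x, y}, z} - {y, {x, z}} = 0
    res = sub(new_bracket(x, new_bracket(y, z)), new_bracket(new_bracket(x, y), z))
    res = sub(res, new_bracket(y, new_bracket(x, z)))
    assert res == (0, 0), (x, y, z)
print("{x, y} = [P(x), y] satisfies the Hom-Leibniz identity on all samples")
```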
**Definition 2.22**.: An idempotent operator on a Hom-Lie algebra \((A,[\cdot,\cdot],\alpha)\) over \(\mathbb{K}\) is a linear map \(P:A\to A\) satisfying \(\alpha\circ P=P\circ\alpha\) and \(P^{2}=P\). _Remark 2.23_.: Recall that there is a bijection \[\{\text{idempotent linear operators on }A\}\leftrightarrow\{\text{direct sum decompositions }A=A_{0}\oplus A_{1}\}\] where \(A_{0}=im(P)\) and \(A_{1}=ker(P)\). The linear map \(P\) corresponding to \(A=A_{0}\oplus A_{1}\) is called the projection onto \(A_{0}\) along \(A_{1}\). If \(P\) is the projection onto \(A_{0}\) along \(A_{1}\), then \(I-P\) is the projection onto \(A_{1}\) along \(A_{0}\) since \((I-P)^{2}=I-2P+P^{2}=I-2P+P=I-P\) and \(im(I-P)=ker(P),\ Ker(I-P)=im(P)\). **Proposition 2.24**.: _Let \((A,[\cdot,\cdot],\alpha)\) be a Hom-Lie algebra and let \(P:A\to A\) be an idempotent linear map. Let \(A=A_{0}\oplus A_{1}\) be the corresponding linear decomposition. Then \(P\) is an averaging operator if and only if_ \[[A_{0},A_{0}]\subseteq A_{0},\quad[A_{0},A_{1}]\subseteq A_{1}. \tag{2.12}\] Proof.: For any \(x,y\in A\), denote \(x=x_{0}+x_{1}\) and \(y=y_{0}+y_{1}\) with \(x_{i},y_{i}\in A_{i},i=0,1\). Suppose \(P\) is an averaging operator. Then from \(P(A)=A_{0}\) and \([P(x),P(y)]=P([P(x),y])\), we obtain \([A_{0},A_{0}]\subseteq A_{0}\). Then we have \[[P(x),P(y)] = [x_{0},y_{0}],\] \[P([P(x),y]) = P([x_{0},y_{0}]+[x_{0},y_{1}])=P([x_{0},y_{0}])+P([x_{0},y_{1}]) =[x_{0},y_{0}]+P([x_{0},y_{1}]).\] Thus from (2.11) we obtain \(P([x_{0},y_{1}])=0\) for all \(x_{i},y_{i}\in A_{i},i=0,1\). Therefore (2.12) holds since \(A_{1}=kerP\) by the definition of \(P\). Conversely, suppose (2.12) holds. Then we have \[P([P(x),y])=P([x_{0},y_{0}]+[x_{0},y_{1}])=P([x_{0},y_{0}])+P([x_{0},y_{1}])=[x _{0},y_{0}]=[P(x),P(y)].\] Thus \(P\) is an averaging operator. **Corollary 2.25**.: _An idempotent endomorphism \(P:A\to A\) is an averaging operator._ Proof.: Let \(A_{0}:=imP\) and \(A_{1}:=kerP.\) Then we have \(A=A_{0}\oplus A_{1}\) and \(P\) is the projection to \(A_{0}\) along \(A_{1}.\) Since \(A_{1}\) is an ideal of \(A,\) then (2.12) holds. Hence \(P\) is an averaging operator. ## 3 On homogeneous averaging operators on \(q\)-deformed Witt Hom-algebra In this section we classify the homogeneous averaging operators on the \(q\)-deformed Witt algebra \(\mathcal{V}^{q}\) and we give the induced Hom-Leibniz algebras from the averaging operators on the \(q\)-deformed Witt algebra \(\mathcal{V}^{q}\) and its multiplicativity condition is studied. **Definition 3.1**.: A homogeneous operator \(F\) with degree \(d\in\mathbb{Z}\) on the \(q\)-deformed Witt Hom-algebra \(\mathcal{V}^{q}\) is a linear operator on \(\mathcal{V}^{q}\) satisfying \(F(\mathcal{V}^{q}_{m})\subseteq\mathcal{V}^{q}_{m+d}\) for all \(m\in\mathbb{Z}.\) Therefore, a homogeneous averaging operator \(P_{d}\) with degree \(d\) on the \(q\)-deformed Witt Hom-algebra \(\mathcal{V}^{q}\) is an averaging operator on \(\mathcal{V}^{q}\) of the following form \[P_{d}(L_{m})=f(m+d)L_{m+d},\quad\forall\ m\in\mathbb{Z}, \tag{3.1}\] where \(f\) is a \(\mathbb{K}\)-valued function defined on \(\mathbb{Z}.\) Let \(P_{d}\) be a homogeneous averaging operator with degree \(d\) on the \(q\)-deformed Witt Hom-algebra \((\mathcal{V}^{q},[\cdot,\cdot],\alpha)\) satisfying (3.1). 
Then by (2.6) and (2.11), \[[P_{d}(L_{m}),P_{d}(L_{n})] =[f(m+d)L_{m+d},f(n+d)L_{n+d}]\] \[=f(m+d)f(n+d)(\{m+d\}-\{n+d\})L_{m+n+2d},\] \[P_{d}([P_{d}(L_{m}),L_{n}]) =P_{d}([f(m+d)L_{m+d},L_{n}])\] \[=f(m+d)f(m+n+2d)(\{m+d\}-\{n\})L_{m+n+2d}.\] We see that the function \(f\) satisfies for all \(m,n\in\mathbb{Z},\) \[f(m+d)f(n+d)(\{m+d\}-\{n+d\})=f(m+d)f(m+n+2d)(\{m+d\}-\{n\}),\] or equivalently, after changing \(m\to m-d\) and \(n\to n-d,\) \[f(m)\big{(}f(n)(\{m\}-\{n\})-f(m+n)(\{m\}-\{n-d\})\big{)}=0. \tag{3.2}\] **Lemma 3.2**.: _If \(P_{d}\) is a non-zero averaging operator on the \(q\)-deformed Witt Hom-algebra \((\mathcal{V}^{q},[\cdot,\cdot],\alpha)\) with degree \(d\), then \(\alpha\circ P_{d}=P_{d}\circ\alpha\) if and only if \(q^{d}=1.\)_ Proof.: For all \(m\in\mathbb{Z},\) \[\alpha\circ P_{d}(L_{m}) =\alpha(f(m+d)L_{m+d})=f(m+d)\alpha(L_{m+d})=f(m+d)(1+q^{m+d})L_{ m+d},\] \[P_{d}\circ\alpha(L_{m}) =P_{d}(\alpha(L_{m}))=P_{d}((1+q^{m})L_{m})=(1+q^{m})P_{d}(L_{m})= (1+q^{m})f(m+d)L_{m+d},\] \[\alpha\circ P_{d}(L_{m}) =P_{d}\circ\alpha(L_{m})\ \Leftrightarrow\forall\ m\in\mathbb{Z}:\ f(m+d)(q^{d}-1)=0,\] is equivalent to \(q^{d}=1\) when \(P_{d}\neq 0,\) as in this case \(f(m+d)\neq 0\) for some \(m\in\mathbb{Z}.\) ### Case 1: \(q=1\) When \(q=1\), equation (3.2) becomes for all \(m,n\in\mathbb{Z}\), \[f(m)\big{(}f(n)(m-n)-f(m+n)(m-n+d)\big{)}=0. \tag{3.3}\] Plugging \(n=0\) in (3.3), we have \[f(m)\big{(}mf(0)-(m+d)f(m)\big{)}=0. \tag{3.4}\] **Subcase 1:**\(d=0\) **Proposition 3.3**.: _With the notations as above, the averaging operator \(P_{0}\) with degree \(d=0\) is given by_ \[f(m)=\mu f(0)+\nu\delta_{m,0},\ \ \forall\ m\in\mathbb{Z},\ for\ some\ (\nu,\mu)\in\mathbb{K}\times\{0,1\},\] _where for any \(x,y\in\mathbb{K},\)\(\delta_{x,y}=\left\{\begin{array}{ll}1\ \mbox{if}\ x=y\\ 0\ \mbox{if}\ x\neq y.\end{array}\right.\)_ Proof.: When \(d=0\), (3.4) becomes \(mf(m)(f(0)-f(m))=0.\) Hence \(f(m)=\mu f(0)+\nu\delta_{m,0}\) for some \(\nu\in\mathbb{K}\) and \(\mu\in\{0,1\}.\) **Subcase 2:**\(d\in\mathbb{Z}^{*}\) **Proposition 3.4**.: _With the notations as above, when the degree \(d\in\mathbb{Z}^{*}\) and \(f(0)=0\), we have_ \[f(m)=\nu\delta_{m+d,0},\ \ \forall\ m\in\mathbb{Z},\ where\ \nu\in\mathbb{K},\] Proof.: When \(d\neq 0\) and \(f(0)=0\), then by equation (3.4), we have for all \(m\in\mathbb{Z}\), \((m+d)f^{2}(m)=0.\) Thus the function \(f\) satisfies for any \(m\in\mathbb{Z}\), \(f(m)=\nu\delta_{m+d,0}\) for some \(\nu\in\mathbb{K}\). **Proposition 3.5**.: _With the notations as above, when the degree \(d\in\mathbb{Z}^{*}\) and \(f(0)\neq 0\), we have_ \[f(m)=\mu\frac{m}{m+d}f(0)\delta_{m,\mathbb{Z}\setminus\{-d\}},\ \ \forall\ m\in\mathbb{Z},\ where\ \mu\in\{0,1\},\] _where \(\delta_{m,\mathbb{Z}\setminus\{-d\}}=\left\{\begin{array}{ll}1\ \mbox{if}\ m\neq-d\\ 0\ \mbox{if}\ m=-d.\end{array}\right.\)_ Proof.: When \(d\neq 0\) and \(f(0)\neq 0\), it follows from equation (3.4) for \(m=-d\) that \(f(-d)=0\). 
Moreover for \(m\neq-d\) in equation (3.4), we have \(f(m)=\mu\frac{m}{m+d}f(0),\ \ \mbox{ for some }\mu\in\{0,1\}.\) Therefore for all \(m\in\mathbb{Z}\), \(f(m)=\mu\frac{m}{m+d}f(0)\delta_{m,\mathbb{Z}\setminus\{-d\}},\ \ where\ \mu\in\{0,1\}.\) ### Case 2: \(q\neq 1\) and \(q^{d}=1\) **Proposition 3.6**.: _With the notations as above, Suppose that the degree \(d\) satisfying \(q\neq 1\) and \(q^{d}=1\), then we have_ \[f(m)=\mu f(0)+\nu\delta_{q^{m},1},\ \ \forall\ m\in\mathbb{Z},\ where\ (\nu,\mu)\in \mathbb{K}\times\{0,1\}.\] Proof.: When \(q\neq 1\) and \(q^{d}=1\), (3.2) becomes for all \(m,n\in\mathbb{Z}\), \[f(m)\big{(}f(n)(\{m\}-\{n\})-f(m+n)(\{m\}-\{n\})\big{)}=0. \tag{3.5}\] Taking \(n=0\) in (3.5) yields \(\{m\}f(m)(f(0)-f(m))=0.\) Hence \(f(m)=\mu f(0)+\nu\delta_{q^{m},1}\) for some \(\nu\in\mathbb{K}\) and \(\mu\in\{0,1\}\). **Theorem 3.7**.: _The homogeneous averaging operator \(P_{d}\) with degree \(d\) on the \(q\)-deformed Witt Hom-algebra \((\mathcal{V}^{q},[\cdot,\cdot],\alpha)\) must be one of the following operators, given for all \(m\in\mathbb{Z}\), by_ \[P_{d}^{1}(L_{m}) =\beta+\nu\delta_{m+2d,0}L_{m+d},\qquad\text{ for }q=1\text{ and }d\in\mathbb{Z},\] \[P_{d}^{2}(L_{m}) =\mu\frac{m+d}{m+2d}\gamma\delta_{m+d,\mathbb{Z}\setminus\{-d \}}L_{m+d},\qquad\text{ for }q=1\text{ and }d\in\mathbb{Z},\] \[P_{d}^{3}(L_{m}) =\beta+\nu\delta_{q^{m},1}L_{m+d},\qquad\text{ for }q\neq 1\text{ and }q^{d}=1,\] _where \(\beta,\nu\in\mathbb{K}\), \(\gamma\in\mathbb{K}^{*}\) and \(\mu\in\{0,1\}\)._ Proof.: Directly by combining Lemma 3.2 and Propositions 3.3, 3.4, 3.5 and 3.6. Now, using the construction given in Proposition 2.18, we have the following: **Theorem 3.8**.: _The homogeneous averaging operators obtained in Theorem 3.7 for the \(q\)-deformed Witt Hom-algebra \((\mathcal{V}^{q},[\cdot,\cdot],\alpha)\), give rise to the following Hom-Leibniz algebras on the underlying linear space \(\mathcal{V}^{q}\),_ \[\{L_{m},L_{n}\}^{1} =(\beta+\nu\delta_{m+d,0})(m-n)L_{m+n},\qquad\forall m,n\in \mathbb{Z},\text{ where }q=1\text{ and }d\in\mathbb{Z},\] \[\{L_{m},L_{n}\}^{2} =(\mu\frac{m}{m+d}\gamma\delta_{m,\mathbb{Z}\setminus\{-d\}})(m- n)L_{m+n},\qquad\forall m,n\in\mathbb{Z},\text{ where }q=1\text{ and }d\in\mathbb{Z},\] \[\{L_{m},L_{n}\}^{3} =(\beta+\nu\delta_{q^{m},1})(\{m\}-\{n\})L_{m+n},\qquad\forall m,n\in\mathbb{Z},\text{ where }q\neq 1\text{ and }q^{d}=1,\] _where \(\beta,\nu\in\mathbb{K}\), \(\gamma\in\mathbb{K}^{*}\) and \(\mu\in\{0,1\}\)._ Proof.: By Proposition 2.18, the Hom-Leibniz algebra induced by \(P_{d}^{i},\ i=1,2,3\), is given for \(m,n\in\mathbb{Z}\) by \[\{L_{m},L_{n}\}^{1}=[P_{d}^{1}(L_{m}),L_{n}]=[(\beta+\nu\delta_{m +2d,0}L_{m+d},L_{n}]=(\beta+\nu\delta_{m+2d,0})(m+d-n)L_{m+n+d}\] \[=(\beta+\nu\delta_{m+d,0})(m-n)L_{m+n},\] \[\{L_{m},L_{n}\}^{2}=[P_{d}^{2}(L_{m}),L_{n}]=[\mu\frac{m+d}{m+2d} \gamma\delta_{m+d,\mathbb{Z}\setminus\{-d\}}L_{m+d},L_{n}]\] \[=(\mu\frac{m+d}{m+2d}\gamma\delta_{m+d,\mathbb{Z}\setminus\{-d\}})(m+d-n)L_{m+n+d}=( \mu\frac{m}{m+d}\gamma\delta_{m,\mathbb{Z}\setminus\{-d\}})(m-n)L_{m+n},\] \[\{L_{m},L_{n}\}^{3}=[P_{d}^{3}(L_{m}),L_{n}]=[(\beta+\nu\delta_{q^{m},1})L_{m+d },L_{n}]=(\beta+\nu\delta_{q^{m},1})(\{m\}-\{n\})L_{m+n+d}\] \[=(\beta+\nu\delta_{q^{m},1})(\{m\}-\{n-d\})L_{m+n}=(\beta+\nu\delta_{q^{m},1}) (\{m\}-(\{n\}+q^{n}\{-d\})L_{m+n}\] \[(\{-d\}=0)=(\beta+\nu\delta_{q^{m},1})(\{m\}-\{n\})L_{m+n}.\qed\] **Proposition 3.9**.: _The non-trivial Hom-Leibniz algebra \((\mathcal{V}^{q},\ \{\cdot,\cdot\}^{i},\alpha)\) induced by 
\(P_{d}^{i}\) is multiplicative if and only if \(i=3\) and \(q=-1\)._ Proof.: By Corollary 2.2 item (ii): \((\mathcal{V}^{q},\{\cdot,\cdot\}^{1},\alpha)\) is multiplicative \(\Leftrightarrow\forall\ m,n\in\mathbb{Z}:\ 2(\beta+\nu\delta_{m+d,0})(m-n)=0\)\(\Leftrightarrow\) \(\forall\ m\in\mathbb{Z}:\ (\beta+\nu\delta_{m+d,0})=0\Leftrightarrow\beta=\nu=0 \Leftrightarrow\{\cdot,\cdot\}^{1}=0.\) \((\mathcal{V}^{q},\{\cdot,\cdot\}^{2},\alpha)\) is multiplicative \(\Leftrightarrow\)\(\forall\ m,n\in\mathbb{Z}:\ 2\mu\frac{m}{m+d}\gamma\delta_{m,\mathbb{Z}\setminus\{-d\}})(m-n)=0\)\(\Leftrightarrow\)\(\forall\ m\in\mathbb{Z}:\ \mu\frac{m}{m+d}\gamma\delta_{m,\mathbb{Z}\setminus\{-d\}})=0\)\(\Leftrightarrow\)\(\mu=0\)\(\Leftrightarrow\)\(\{\cdot,\cdot\}^{2}=0.\) \((\mathcal{V}^{q},\{\cdot,\cdot\}^{3},\alpha)\) is multiplicative \(\Leftrightarrow\) \(\forall\ m,n\in\mathbb{Z}:\ (\beta+\nu\delta_{q^{m},1})(\{m\}-\{n\})\big{(}(1+q^{m+n})-(1+ q^{m})(1+q^{n})\big{)}=0\)\(\Leftrightarrow\) \(\forall\ m\in\mathbb{Z}:\ (\beta+\nu\delta_{q^{m},1})(q^{n}-q^{m})(q^{n}+q^{m})=0\)\(\Leftrightarrow\) \(\forall\ m\in\mathbb{Z}:\ (\beta+\nu\delta_{q^{m},1})(q^{2n}-q^{2m})=0\)\(\Leftrightarrow\) \(\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{l}(q^{2(n-m)})=1,\ \mbox{or}\\ (\beta+\nu\delta_{q^{m},1})=0,\end{array}\right.\) \(\Leftrightarrow\) \(\left\{\begin{array}{l}q^{2p}=1,\ \forall p\in\mathbb{Z},\ \mbox{or}\\ \forall\ m\in\mathbb{Z}:\ \beta+\nu\delta_{q^{m},1}=0,\end{array}\right.\) ## 4 On homogeneous averaging operators on _q_-deformed _W_(2,2) Hom-algebra In this section we classify the homogeneous averaging operators on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}.\) Also, we give the induced Hom-Leibniz algebras from the averaging operators on the \(q\)-deformed \(W(2,2)\) Hom-algebra and its multiplicativity condition is studied. **Definition 4.1**.: A homogeneous operator \(F\) with degree \(d\in\mathbb{Z}\) on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) is a linear operator on \(\mathcal{W}^{q}\) satisfying \(F(\mathcal{W}^{q}_{m})\subseteq\mathcal{W}^{q}_{m+d}\) for all \(m\in\mathbb{Z}\). Hence a homogeneous averaging operator \(P_{d}\) with degree \(d\) on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) is an averaging operator on \(\mathcal{W}^{q}\) with the following form: \[P_{d}(L_{m})=f_{1}(m+d)L_{m+d}+g_{1}(m+d)W_{m+d}, \tag{4.1}\] \[P_{d}(W_{m})=f_{2}(m+d)L_{m+d}+g_{2}(m+d)W_{m+d}, \tag{4.2}\] where \(f_{i}\) and \(g_{i}\) are \(\mathbb{K}\)-valued functions defined on \(\mathbb{Z}\). Let \(P_{d}\) be a homogeneous averaging operator of degree \(d\) on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) satisfying equations (4.1) and (4.2). 
Then, by equations (2.7) and (2.11), \[[P_{d}(L_{m}),P_{d}(L_{n})]=[f_{1}(m+d)L_{m+d}+g_{1}(m+d)W_{m+d},f_ {1}(n+d)L_{n+d}+g_{1}(n+d)W_{n+d}]\\ =f_{1}(m+d)f_{1}(n+d)[m-n]L_{m+n+2d}+f_{1}(m+d)g_{1}(n+d)[m-n]W_{n+ m+2d}\\ -g_{1}(m+d)f_{1}(n+d)[n-m]W_{m+n+2d},\] \[P_{d}([P_{d}(L_{m}),L_{n}])=P_{d}([f_{1}(m+d)L_{m+d}+g_{1}(m+d)W_ {m+d},L_{n}])\\ =f_{1}(m+d)[m+d-n]P_{d}(L_{m+n+d})-g_{1}(m+d)[n-m-d]P_{d}(W_{m+n+ d})\\ =f_{1}(m+d)f_{1}(m+n+2d)[m+d-n]L_{m+n+2d}\\ +f_{1}(m+d)g_{1}(m+n+2d)[m+d-n]W_{m+n+2d}\\ -g_{1}(m+d)f_{2}(m+n+2d)[n-m-d]L_{m+n+2d}\\ -g_{1}(m+d)g_{2}(m+n+2d)[n-m-d]W_{m+n+2d},\] we see that the functions \(f_{i}\) and \(g_{i}\) satisfy, for all \(m,n\in\mathbb{Z}\), the following equations: \[f_{1}(m)f_{1}(n)[m-n]-f_{1}(m)f_{1}(m+n)[m+d-n]+g_{1}(m)f_{2}(m+n )[n-m-d]=0, \tag{4.3}\] \[f_{1}(m)g_{1}(n)[m-n]-f_{1}(n)g_{1}(m)[n-m]-f_{1}(m)g_{1}(m+n)[m +d-n]\\ +g_{1}(m)g_{2}(m+n)[n-m-d]=0. \tag{4.4}\] and from \[[P_{d}(L_{m}),P_{d}(W_{n})]=[f_{1}(m+d)L_{m+d}+g_{1}(m+d)W_{m+d}, f_{2}(n+d)L_{n+d}+g_{2}(n+d)W_{n+d}]\\ =f_{1}(m+d)f_{2}(n+d)[m-n]L_{m+n+2d}+f_{1}(m+d)g_{2}(n+d)[m-n]W_{ m+n+2d}\\ -g_{1}(m+d)f_{2}(n+d)[n-m]W_{m+n+2d},\] \[P_{d}([P_{d}(L_{m}),W_{n}])=P_{d}([f_{1}(m+d)L_{m+d}+g_{1}(m+d)W_ {m+d},W_{n}])\\ =f_{1}(m+d)f_{2}(m+n+2d)[m+d-n]L_{m+n+2d}\\ +f_{1}(m+d)g_{2}(m+n+2d)[m+d-n]W_{m+n+2d},\] we see that the functions \(f_{i}\) and \(g_{i}\) satisfy, for all \(m,n\in\mathbb{Z}\), the following equations: \[f_{1}(m)f_{2}(n)[m-n]-f_{1}(m)f_{2}(m+n)[m+d-n]=0, \tag{4.5}\] \[f_{1}(m)g_{2}(n)[m-n]-g_{1}(m)f_{2}(n)[n-m]-f_{1}(m)g_{2}(m+n)[m +d-n]=0. \tag{4.6}\] In the same way, from \[[P_{d}(W_{m}),P_{d}(W_{n})]=[f_{2}(m+d)L_{m+d}+g_{2}(m+d)W_{m+d},f_{2}(n+d)L_{n+d}+g_{2}(n+d)W_{n+d}]\\ =[f_{2}(m+d)f_{2}(n+d)[m-n]L_{n+m+2d}+f_{2}(m+d)g_{2}(n+d)[m-n]W_ {m+n+2d}\\ -g_{2}(m+d)f_{2}(n+d)[n-m]W_{m+n+2d},\] \[P_{d}([P_{d}(W_{m}),W_{n}])=P_{d}([f_{2}(m+d)L_{m+d}+g_{2}(m+d)W_{m+d},W_{n}]\\ =f_{2}(m+d)f_{2}(m+n+2d)[m+d-n]L_{m+n+2d}+f_{2}(m+d)g_{2}(m+n+2d)W_ {m+n+2d},\] \[P_{d}([W_{m},P_{d}(W_{m})]=P_{d}([W_{m},f_{2}(n+d)L_{n+d}+g_{2}(n+d)W_{n+d}])\] \[-f_{2}(n+d)f_{2}(m+n+2d)[m-n-d]L_{m+n+2d}\] \[-f_{2}(n+d)g_{2}(m+n+2d)[m-n-d]W_{m+n+2d},\] we see that the functions \(f_{i}\) and \(g_{i}\) satisfy for all \(m,n\in\mathbb{Z}\), the following equations: \[f_{2}(m)f_{2}(n)[m-n]-f_{2}(m)f_{2}(m+n)[m+d-n]=0, \tag{4.7}\] \[f_{2}(m)g_{2}(n)[m-n]-f_{2}(n)g_{2}(m)[n-m]-f_{2}(m)g_{2}(m+n)[m+ d-n]=0. \tag{4.8}\] Similarly from \[[P_{d}(W_{m}),P_{d}(L_{n})]=[f_{2}(m+d)L_{m+d}+g_{2}(m+d)W_{m+d}, f_{1}(n+d)L_{n+d}+g_{1}(n+d)W_{n+d}]\] \[\quad=f_{2}(m+d)f_{1}(n+d)[m-n]L_{m+n+2d}\] \[\quad\quad+f_{2}(m+d)g_{1}(n+d)[m-n]W_{m+n+2d}-g_{2}(m+d)f_{1}(n+ d)[m-n]W_{m+n+2d},\] \[P_{d}([P_{d}(W_{m}),L_{n}])=P_{d}([f_{2}(m+d)L_{m+d}+g_{2}(m+d)W _{m+d},L_{n}])\] \[\quad=f_{2}(m+d)[m+d-n](f_{1}(m+n+2d)L_{m+n+2d}+g_{1}(m+n+2d)L_{ m+n+2d})\] \[\quad\quad g_{2}(m+d)[n-d-m](f_{2}(m+n+2d)L_{m+n+2d}+g_{2}(m+n+2d )W_{m+n+2d}),\] we see that the functions \(f_{i}\) and \(g_{i}\) satisfy, for all \(m,n\in\mathbb{Z}\), \[f_{2}(m)f_{1}(n)[m-n]-f_{2}(m)f_{1}(m+n)[m+d-n]+g_{2}(m)f_{2}(m+n )[n-m-d]=0, \tag{4.9}\] \[f_{2}(m)g_{1}(n)[m-n]-g_{2}(m)f_{1}(n)[n-m]-f_{2}(m)g_{1}(m+n)[m+ d-n]\] (4.10) \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+g_{ 2}(m)g_{2}(m+n)[n-m-d]=0.\] **Lemma 4.2**.: _If \(P_{d}\) be a non zero averaging operator on \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) with degree \(d\). 
Then \(\alpha\circ P_{d}=P_{d}\circ\alpha\) if and only if \(q^{d}=1\)._ Proof.: For all \(m\in\mathbb{Z}\), we have \[\alpha\circ P_{d}(L_{m}) =\alpha(P_{d}(L_{m}))=\alpha(f_{1}(m+d)L_{m+d}+g_{1}(m+d)W_{m+d})\] \[=f_{1}(m+d)(q^{m+d}+q^{-(m+d)})L_{m+d}+g_{1}(m+d)(q^{m+d}+q^{-(m+ d)})W_{m+d},\] \[P_{d}\circ\alpha(L_{m}) =P_{d}(\alpha(L_{m}))=P_{d}((q^{m}+q^{-m})L_{m})\] \[=f_{1}(m+d)(q^{m}+q^{-m})L_{m}+g_{1}(m+d)(q^{m}+q^{-m})W_{m}.\] Then \(\forall m\in\mathbb{Z},\ \alpha\circ P_{d}(L_{m})=P_{d}\circ\alpha(L_{m})\) if and only if \(q^{m}+q^{-m}=q^{m+d}+q^{-m-d}\). Similarly, \(\forall m\in\mathbb{Z},\ \alpha\circ P_{d}(W_{m})=P_{d}\circ\alpha(W_{m})\) if and only if \(q^{m}+q^{-m}=q^{m+d}+q^{-m-d}\). Thus, 1. if \(q^{d}=1\), it is clear that \(\alpha\circ P_{d}=P_{d}\circ\alpha\), 2. if \(q^{d}\neq 1\), we have \[\alpha\circ P_{d}=P_{d}\circ\alpha\] \[\iff \forall m\in\mathbb{Z},\ q^{m}+q^{-m}=q^{m+d}+q^{-m-d}\iff \forall m\in\mathbb{Z},\ q^{m}(1-q^{d})=q^{-m-d}(1-q^{d})\] \[\iff \forall m\in\mathbb{Z},\ q^{m}=q^{-m-d}\iff\forall m\in\mathbb{Z}, \ q^{2m+d}=1,\] this implies for \(m=0\), \(q^{d}=1\) which impossible since \(q^{d}\neq 1\) ### Case 1: \(q\neq-1,1\) and \(q^{d}=1\) #### Subcase 1: \(q^{2m}=1\) Take \(n=0\) in (4.3)-(4.10). For \(q^{d}=1\) and \(q^{2m}=1\), the functions \(f_{1},\ f_{2},\ g_{1}\) and \(g_{2}\) satisfy \[f_{1}(m)=\nu_{1},\ f_{2}(m)=\nu_{2},\ g_{1}(m)=\nu_{3},\ g_{2}(m)=\nu_{4},\ \nu_{i}\in\mathbb{K}.\] Then we have the following Proposition. **Proposition 4.3**.: _If \(P_{d}\) is an averaging operator on \(\mathcal{W}^{q}\) with degree \(d\), where \(q^{d}=1,q^{2m}=1\), then_ \[\left\{\begin{array}{ll}f_{1}(m)=\nu_{1},&f_{2}(m)=\nu_{2},\\ g_{1}(m)=\nu_{3},&g_{2}(m)=\nu_{4},\end{array}\right.\quad\text{where }\nu_{i}\in \mathbb{K}.\] #### Subcase 2: \(q^{2m}\neq 1\). Taking \(n=0\) in (4.7) yields \[\begin{array}{ll}f_{2}(m)f_{2}(0)[m]=f_{2}^{2}(m)[m+d]=f_{2}^{2}(m)(q^{-d}[m ]+q^{m}[d])&=f_{2}^{2}(m)[m].\\ &\text{(since }q^{d}=1)\end{array}\] This gives \(f_{2}(m)(f_{2}(0)-f_{2}(m))=0\). Hence, \[f_{2}(m)=\mu_{1}f_{2}(0),\quad\mu_{1}\in\{0,1\}.\] Then, we have the following Proposition. **Proposition 4.4**.: _If \(P_{d}\) is an averaging operators on \(\mathcal{W}^{q}\) with degree \(d\) such that \(q^{d}=1\), \(q^{2m}\neq 1\) and \(f_{2}(0)=0,\) then_ * _if_ \(f_{1}(0)=0\)_, then_ \(f_{1}(m)=0,\ f_{2}(m)=0,\ g_{1}(m)=\gamma,\ g_{2}(m)=0,\) _where_ \(\gamma\in\mathbb{K}\)_;_ * _if_ \(f_{1}(0)\neq 0\)_, then_ * \(f_{1}(m)=f_{1}(0),\ f_{2}(m)=0,\ g_{1}(m)=\gamma,\ g_{2}(m)=0\)_, where_ \(\gamma\in\mathbb{K};\)__ * \(f_{1}(m)=f_{1}(0),\ f_{2}(m)=0,\ g_{1}(m)=0,\ g_{2}(m)=f_{1}(0)\)_, where_ \(\gamma\in\mathbb{K};\)__ * \(f_{1}(m)=0,\ f_{2}(m)=0,\ g_{1}(m)=\gamma,\ g_{2}(m)=f_{1}(0)\)_, where_ \(\gamma\in\mathbb{K}.\)__ Proof.: Let \(q^{d}=1\), \(q^{2m}\neq 1\) and \(f_{2}(0)=0\) as assumed. Taking \(n=0\) in (4.3) yields \[\begin{array}{ll}f_{1}(m)f_{1}(0)[m]=&f_{1}^{2}(m)[m+d]-g_{1}(m)f_{2}(m)[-m -d]\\ =&f_{1}^{2}(m)[m]+g_{1}(m)f_{2}(m)[m].\quad\text{(since }q^{d}=1)\end{array}\] Since \(q^{2m}\neq 0\), we get \(f_{1}(m)f_{1}(0)=f_{1}^{2}(m)+g_{1}(m)f_{2}(m)\), and since \(f_{2}(m)=0\), we obtain \(f_{1}(m)(f_{1}(0)-f_{1}(m))=0\). Thus, \(f_{1}(m)=\mu_{2}f_{1}(0)\), \(\mu_{2}\in\{0,1\}\). 
Setting \(n=0\) in (4.9) yields \[\begin{array}{ll}f_{2}(m)g_{1}(0)[m]=&g_{2}(m)f_{1}(0)[-m]+f_{2}(m)g_{1}(m)[m +d]-g_{2}^{2}(m)[-m-d]\\ =&-g_{2}(m)f_{1}(0)[m]+f_{2}(m)g_{1}(m)[m]+g_{2}^{2}(m)[m].\quad\text{(since }q^{d}=1)\end{array}\] Since \(q^{2m}\neq 1\), we have \(f_{2}(m)g_{1}(0)=-g_{2}(m)f_{1}(0)+f_{2}(m)g_{1}(m)+g_{2}^{2}(m)\), from which together with \(f_{2}(m)=0\), we get \(g_{2}(m)=\mu_{3}f_{1}(0)\), \(\kappa_{3}\in\{0,1\}\). Taking \(n=0\) in (4.4) gives \[f_{1}(m)g_{1}(0)[m]= f_{1}(0)g_{1}(m)[-m]+f_{1}(m)g_{1}(m)[m+d]-g_{1}(m)g_{2}(m)[-m-d]\] \[= -f_{1}(0)g_{1}(m)[m]+f_{1}(m)g_{1}(m)[m]+g_{1}(m)g_{2}(m)[m],\quad \text{(since $q^{d}=1$)}\] from which with \(q^{2m}\neq 1\) we get \(f_{1}(m)g_{1}(0)=-f_{1}(0)g_{1}(m)+f_{1}(m)g_{1}(m)+g_{1}(m)g_{2}(m)\), and with \(f_{1}(m)=\mu_{2}f_{1}(0)\) and \(g_{2}(m)=\mu_{3}f_{1}(0)\), we obtain \(g_{1}(m)(\mu_{2}+\mu_{3}-1)f_{1}(0)=\mu_{2}f_{1}(0)g_{1}(0)\). Then, we have the two cases: * if \(f_{1}(0)=0\), then \(g_{1}(m)=\gamma\), * if \(f_{1}(0)\neq 0\) we have \(\mu_{2}g_{1}(0)=(\mu_{2}+\mu_{3}-1)g_{1}(m)\) then for \(\mu_{2}=1\) and \(\mu_{3}=0\) gives \(g_{1}(0)=0\). Then \(\left\{\begin{array}{ll}g_{1}(m)=0&\text{ if }(\mu_{2},\mu_{3})\in\{(0,0),(1,1)\},\\ g_{1}(m)=\gamma&\text{ if }(\mu_{2},\mu_{3})\in\{(1,0),(0,1)\}.\end{array}\right.\) **Proposition 4.5**.: _If \(P_{d}\) is the averaging operator on \(\mathcal{W}^{q}\) with degree \(d\) satisfying \(q^{d}=1\), \(q^{2m}\neq 1\) and \(f_{2}(0)\neq 0\), then \(\left\{\begin{array}{ll}f_{1}(m)=\gamma,&f_{2}(m)=f_{2}(0),\\ g_{1}(m)=\frac{\gamma f_{1}(0)-\gamma^{2}}{f_{2}(0)},&g_{2}(m)=f_{1}(0)-\gamma, \end{array}\right.\) where \(\gamma\in\mathbb{K}\)._ Proof.: Let \(q^{d}=1,\ q^{2m}\neq 1\) and \(f_{2}(0)\neq 0\), as assumed. Taking \(n=0\) in (4.5) yields \[f_{1}(m)f_{2}(0)[m]= f_{1}(m)f_{2}(m)[m+d]\] \[= f_{1}(m)f_{2}(m)[m].\quad\text{(since $q^{d}=1$)}\] Since \(q^{2m}\neq 1\) we have \(f_{1}(m)f_{2}(0)=f_{1}(m)f_{2}(m)\). This together with \(f_{2}(m)=\mu_{1}f_{2}(0)\) gives \(f_{1}(m)(\mu_{1}-1)=0\). Then \(f_{1}(m)=\left\{\begin{array}{ll}0&if&\mu=0,\\ \gamma&if&\mu_{1}=1.\end{array}\right.\) Taking \(n=0\) in (4.8) yields \[f_{2}(m)g_{2}(0)[m]= f_{2}(0)g_{2}(m)[-m]+f_{2}(m)g_{2}(m)[m+d]\] \[= -f_{2}(0)g_{2}(m)[m]+f_{2}(m)g_{2}(m)[m].\quad\text{(since $q^{d}=1$)}\] Since \(q^{2m}\neq 1\), we have \(f_{2}(m)g_{2}(0)=-f_{2}(0)g_{2}(m)+f_{2}(m)g_{2}(m)\). This, with \(f_{2}(m)=\mu_{1}f_{2}(0)\), gives \(\mu_{1}f_{2}(0)g_{2}(0)=-f_{2}(0)g_{2}(m)+\mu_{1}f_{2}(0)g_{2}(m)\). Then * if \(\mu_{1}=0\) we have \(g_{2}(m)=0\), * if \(\mu_{1}=1\) we have \(g_{2}(0)=0\). Taking \(n=0\) in (4.3) yields \[f_{1}(m)f_{1}(0)[m]= f_{1}^{2}(m)[m+d]-g_{1}(m)f_{2}(m)[-m-d]\] \[= f_{1}^{2}(m)[m]+g_{1}(m)f_{2}(m)[m].\quad\text{(since $q^{d}=1$)}\] Since \(q^{2m}\neq 1\), we have \(f_{1}(m)f_{1}(0)=f_{1}^{2}(m)+g_{1}(m)f_{2}(m)\). This, with \(f_{2}(m)=\mu_{1}f_{2}(0)\), \(f_{1}(m)=\gamma\) and \(\mu_{1}=1\), yields \(g_{1}(m)=\frac{\gamma f_{1}(0)-\gamma^{2}}{f_{2}(0)}\). 
Taking \(n=0\) in (4.6) yields \[f_{1}(m)g_{2}(0)[m]= f_{2}(0)g_{1}(m)[-m]+f_{1}(m)g_{2}(m)[m+d]\] \[= -f_{2}(0)g_{1}(m)[m]+f_{1}(m)g_{2}(m)[m].\quad\text{(since $q^{d}=1$)}\] Since \(q^{2m}\neq 0\) we have \(f_{1}(m)g_{2}(0)=-f_{2}(0)g_{1}(m)+f_{1}(m)g_{2}(m).\) This together with \(g_{2}(0)=0\) and \(f_{1}(m)=g_{2}(m)=0\) for \(\mu_{1}=0\) gives \(g_{1}(m)=0.\) Taking \(n=0\) in the equation (4.10), we have \[f_{2}(m)f_{1}(0)[m]= f_{2}(m)f_{1}(m)[m+d]-g_{2}(m)f_{2}(m)[-m-d]\] \[= f_{2}(m)f_{1}(m)[m]+g_{2}(m)f_{2}(m)[m].\quad\text{(since $q^{d}\neq 1$)}\] Then \(f_{2}(m)f_{1}(0)=f_{2}(m)f_{1}(m)+g_{2}(m)f_{2}(m).\) This, together with \(f_{1}(m)=\gamma\) for \(\mu_{1}=1,\) gives \(g_{2}(m)=f_{1}(0)-\gamma.\) **Theorem 4.6**.: _Homogeneous averaging operators on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) with degree \(d\) such that \(q^{d}=1\) and \(q\neq-1,1\) must be one of the following operators, given for all \(m\in\mathbb{Z}\), by_ \[\left\{\begin{array}{l}P_{d}^{1}(L_{m})=\nu_{1}\delta_{q^{2m},1}L_{m+d}+(\nu _{3}\delta_{q^{2m},1}+\gamma)W_{m+d},\\ P_{d}^{1}(W_{m})=\nu_{2}\delta_{q^{2m},1}L_{m+d}+\nu_{4}\delta_{q^{2m},1}W_{m+ d},\end{array}\right.\] \[\left\{\begin{array}{l}P_{d}^{2}(L_{m})=(\nu_{1}\delta_{q^{2m},1}+\beta)L_{m +d}+(\nu_{3}\delta_{q^{2m},1}+\gamma)W_{m+d},\\ P_{d}^{2}(W_{m})=\nu_{2}\delta_{q^{2m},1}L_{m+d}+\nu_{4}\delta_{q^{2m},1}W_{m+ d},\end{array}\right.\] \[\left\{\begin{array}{l}P_{d}^{3}(L_{m})=(\nu_{1}\delta_{q^{2m},1}+\beta)L_{ m+d}+\nu_{3}\delta_{q^{2m},1}W_{m+d},\\ P_{d}^{3}(W_{m})=\nu_{2}\delta_{q^{2m},1}L_{m+d}+(\nu_{4}\delta_{q^{2m},1}+ \beta)W_{m+d},\end{array}\right.\] \[\left\{\begin{array}{l}P_{d}^{4}(L_{m})=(\nu_{1}\delta_{q^{2m},1}+\gamma)L_{ m+d}+\nu_{3}\delta_{q^{2m},1}W_{m+d},\\ P_{d}^{4}(W_{m})=\nu_{2}\delta_{q^{2m},1}L_{m+d}+(\nu_{4}\delta_{q^{2m},1}+ \beta)W_{m+d},\end{array}\right.\] \[\left\{\begin{array}{l}P_{d}^{5}(L_{m})=(\nu_{1}\delta_{q^{2m},1}+\gamma)L _{m+d}+(\nu_{3}\delta_{q^{2m},1}+\frac{\gamma\theta-\gamma^{2}}{\beta})W_{m+ d},\\ P_{d}^{5}(W_{m})=(\nu_{2}\delta_{q^{2m},1}+\beta)L_{m+d}+(\nu_{4}\delta_{q^{2m},1}+\theta-\gamma)W_{m+d},\end{array}\right.\] _where \(\gamma,\theta,\nu_{1},\nu_{2},\nu_{3},\nu_{4}\in\mathbb{K}\) and \(\beta\in\mathbb{K}^{*}\)._ Proof.: Directly by combining Lemma 4.2 and Propositions4.3-4.5. **Theorem 4.7**.: _The homogeneous averaging operators on the \(q\)-deformed \(W(2,2)\) Hom-algebra \(\mathcal{W}^{q}\) with of degree \(d\) such that \(q^{d}=1\) and \(q\neq-1,1\) obtained in Theorem 4.6 provide the following Hom-Leibniz algebras on the underlying linear space \(\mathcal{W}^{q}:\)_ 1. 
\(\{L_{m},L_{n}\}^{1}=\nu_{1}\delta_{q^{2m},1}[m-n]L_{m+n}+(\nu_{3}\delta_{q^{2m},1}+\gamma)[m-n]W_{m+n}\)__ \(\{L_{m},W_{n}\}^{1}=\nu_{1}\delta_{q^{2m},1}[m-n]W_{m+n}\)__ \(\{W_{m},L_{n}\}^{1}=\nu_{2}[m-n]\delta_{q^{2m},1}L_{m+n}+\nu_{4}\delta_{q^{2m},1}[m-n]W_{m+n}\)__ \(\{W_{m},W_{n}\}^{1}=\nu_{2}\delta_{q^{2m},1}[m-n]L_{m+n},\)__ * \(\{L_{m},L_{n}\}^{2}=(\nu_{1}\delta_{q^{2m},1}+\beta)[m-n]L_{m+n}+(\nu_{3}\delta_{q ^{2m},1}+\gamma)[m-n]W_{m+n}\) \(\{L_{m},W_{n}\}^{2}=(\nu_{1}\delta_{q^{2m},1}+\beta)[m-n]W_{m+n}\) \(\{W_{m},L_{n}\}^{2}=\nu_{2}[m-n]\delta_{q^{2m},1}L_{m+n}+\nu_{4}\delta_{q^{2m },1}[m-n]W_{m+n}\) \(\{W_{m},W_{n}\}^{2}=\nu_{2}\delta_{q^{2m},1}[m-n]W_{m+n}\), * \(\{L_{m},L_{n}\}^{3}=(\nu_{1}\delta_{q^{2m},1}+\beta)[m-n]L_{m+n}+\nu_{3}\delta_ {q^{2m},1}[m-n]W_{m+n}\) \(\{L_{m},W_{n}\}^{3}=(\nu_{1}\delta_{q^{2m},1}+\beta)[m-n]W_{m+n}\) \(\{W_{m},L_{n}\}^{3}=\nu_{2}[m-n]\delta_{q^{2m},1}L_{m+n}+(\nu_{4}\delta_{q^{2 m},1}+\beta)[m-n]W_{m+n}\) \(\{W_{m},W_{n}\}^{3}=\nu_{2}\delta_{q^{2m},1}[m-n]W_{m+n}\), * \(\{L_{m},L_{n}\}^{4}=(\nu_{1}\delta_{q^{2m},1}+\gamma)[m-n]L_{m+n}+\nu_{3} \delta_{q^{2m},1}[m-n]W_{m+n}\) \(\{L_{m},W_{n}\}^{4}=(\nu_{1}\delta_{q^{2m},1}+\gamma)[m-n]W_{m+n}\) \(\{W_{m},L_{n}\}^{4}=\nu_{2}[m-n]\delta_{q^{2m},1}L_{m+n}+(\nu_{4}\delta_{q^{2 m},1}+\beta)[m-n]W_{m+n}\) \(\{W_{m},W_{n}\}^{4}=\nu_{2}\delta_{q^{2m},1}[m-n]W_{m+n}\), * \(\{L_{m},L_{n}\}^{5}=(\nu_{1}\delta_{q^{2m},1}+\gamma)[m-n]L_{m+n}+(\nu_{3} \delta_{q^{2m},1}+\frac{\gamma\theta-\gamma^{2}}{\beta}[m-n]W_{m+n}\) \(\{L_{m},W_{n}\}^{5}=(\nu_{1}\delta_{q^{2m},1}+\gamma)[m-n]W_{m+n}\) \(\{W_{m},L_{n}\}^{5}=(\nu_{2}\delta_{q^{2m},1}+\beta)[m-n]L_{m+n}+(\nu_{4} \delta_{q^{2m},1}+\theta-\gamma)[m-n]W_{m+n}\) \(\{W_{m},W_{n}\}^{5}=(\nu_{2}\delta_{q^{2m},1}+\beta)[m-n]W_{m+n}\)_, _where \(\nu_{i},\gamma,\theta\in\mathbb{K}\) and \(\beta\in\mathbb{K}^{*}\)._ Proof.: We demonstrate a proof of (i). The others are proved analogously. For any \(m,n\in\mathbb{Z}\), \[\{L_{m},L_{n}\}^{1}=[P_{d}^{1}(L_{m}),L_{n}]=[\nu_{1}\delta_{q^{2 m},1}L_{m+d}+(\nu_{3}\delta_{q^{2m},1}+\gamma)W_{m+d},L_{n}]\] \[\quad=\nu_{1}[m+d-n]\delta_{q^{2m},1}L_{m+n+d}+(\nu_{3}\delta_{q^ {2m},1}+\gamma)[m+d-n]W_{m+n+d}\] \[\quad=\nu_{1}(m-n)\delta_{q^{2m},1}L_{m+n}+(\nu_{3}\delta_{q^{2m },1}+\gamma)[m-n]W_{m+n},\] \[\{L_{m},W_{n}\}^{1}=[P_{d}^{1}(L_{m}),W_{n}]=[\nu_{1}\delta_{q^{2 m},1}L_{m+d}+(\nu_{3}\delta_{q^{2m},1}+\gamma)W_{m+d},W_{n}]\] \[\quad=\nu_{1}(m+d-n)\delta_{q^{2m},1}L_{m+n+d},=\nu_{1}[m-n]\delta _{q^{2m},1}L_{m+n},\] \[\{W_{m},L_{n}\}^{1}=[P_{d}^{1}(W_{m}),L_{n}]=[\nu_{1}\delta_{q^{2 m},1}L_{m+d}+\nu_{4}\delta_{q^{2m},1}W_{m+d},L_{n}]\] \[\quad=\nu_{2}[m+d-n]\delta_{q^{2m},1}L_{m+n+d}+\nu_{4}\delta_{q^{ 2m},1}[m+d-n]W_{m+n+d}\] \[\quad=\nu_{2}(m-n)\delta_{q^{2m},1}L_{m+n}+\nu_{4}\delta_{q^{2m},1 }[m-n]W_{m+n},\] \[\{W_{m},W_{n}\}^{1}=[P_{d}^{1}(W_{m}),W_{n}]\] \[\quad=[\nu_{2}\delta_{q^{2m},1}L_{m+d}+\nu_{4}\delta_{q^{2m},1}W_{ m+d},W_{n}]=\nu_{2}(m+d-n)\delta_{q^{2m},1}W_{m+n+d}\] \[\quad=\nu_{2}(m-n)\delta_{q^{2m},1}W_{m+n}.\qed\] **Proposition 4.8**.: _The Hom-Leibniz algebras \((\mathcal{W}^{q},\ \{\cdot,\cdot\}^{i},\alpha)\) for \(i\in\{1,\cdots,5\}\) given in Theorem 4.7 items_ (i)_-_(v) _are respectively multiplicatives if and only if_ * \(q^{2}=-1\) _or_ \(\nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0\); * \(q^{2}=-1\)_;_ * \(q^{2}=-1\)_;_ * \(q^{2}=-1\)_;_ * \(q^{2}=-1\) * \(q^{2}=-1\). Proof.: We prove (i), the others are proved analogously. 
For any \(m,n\in\mathbb{Z}\), we have \[\alpha(\{L_{m},L_{n}\}^{1})-\{\alpha(L_{m}),\alpha(L_{n})\}^{1}\] \[\quad=\alpha\Big{(}\nu_{1}\delta_{q^{2m},1}[m-n]L_{m+n}+(\nu_{3} \delta_{q^{2m},1}+\gamma)[m-n]W_{m+n}\Big{)}\] \[\quad\quad-\{(q^{m}+q^{-m})L_{m},(q^{n}+q^{-n})L_{n}\}^{1}\] \[\quad=\nu_{1}\delta_{q^{2m},1}[m-n](q^{m+n}+q^{-m-n})L_{m+n}\] \[\quad\quad+(\nu_{3}\delta_{q^{2m},1}+\gamma)[m-n](q^{m+n}+q^{-m-n} )W_{m+n}\] \[\quad\quad-(q^{m}+q^{-m})(q^{n}+q^{-n})\Big{(}(\nu_{1}\delta_{q^{ 2m},1}+\beta)[m-n]L_{m+n}\] \[\quad\quad+(\nu_{3}\delta_{q^{2m},1}+\gamma)[m-n]W_{m+n}\Big{)}\] \[\quad=\nu_{1}\delta_{q^{2m},1}[m-n](q^{m-n}+q^{n-m})L_{m+n}+(\nu_ {3}\delta_{q^{2m},1}+\gamma)[m-n](q^{m-n}+q^{n-m})W_{m+n}\] \[\quad=-(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})\big{(}\frac{\nu_{1} \delta_{q^{2m},1}}{q-q^{-1}}L_{m+n}-\frac{\nu_{3}\delta_{q^{2m},1}+\gamma}{q- q^{-1}}W_{m+n}\big{)}\] \[\quad=-(q^{2(m-n)}-q^{2(n-m)})\big{(}\frac{\nu_{1}\delta_{q^{2m},1}}{q-q^{-1}}L_{m+n}+\frac{\nu_{3}\delta_{q^{2m},1}+\gamma}{q-q^{-1}}W_{m+n} \big{)},\] \[\alpha(\{L_{m},W_{n}\}^{1})-\{\alpha(L_{m}),\alpha(W_{n})\}^{1}\] \[\quad=\alpha\Big{(}(\nu_{1}\delta_{q^{2m},1}][m-n]W_{m+n}-\{(q^{m} +q^{-m})L_{m},(q^{n}+q^{-n})W_{n}\}^{2}\] \[\quad=\nu_{1}\delta_{q^{2m},1}[m-n](q^{m+n}+q^{-m-n})W_{m+n}-\nu_ {1}\delta_{q^{2m},1}[m-n](q^{m}+q^{-m})(q^{n}+q^{-n})W_{m+n}\] \[\quad=-(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})\big{(}\frac{\nu_{1} \delta_{q^{2m},1}}{q-q^{-1}}W_{m+n}\big{)}\] \[\quad=-(q^{2(m-n)}-q^{2(n-m)})\big{(}\frac{\nu_{1}\delta_{q^{2m},1}}{q-q^{-1}}W_{m+n}\big{)},\] \[\alpha(\{W_{m},L_{n}\}^{1})-\{\alpha(W_{m}),\alpha(L_{n})\}^{1}\] \[\quad=\alpha\Big{(}\nu_{2}\delta_{q^{2m},1}[m-n]L_{m+n}+\nu_{4} \delta_{q^{2m},1}[m-n]W_{m+n}\Big{)}\] \[\quad\quad-\{(q^{m}+q^{-m})W_{m},(q^{n}+q^{-n})L_{n}\}^{3}\] \[\quad=\nu_{2}\delta_{q^{2m},1}[m-n](q^{m+n}+q^{-m-n})L_{m+n}\] \[\quad\quad+\nu_{4}\delta_{q^{2m},1}[m-n](q^{m+n}+q^{-m-n})W_{m+n}\] \[\quad\quad-(q^{m}+q^{-m})(q^{n}+q^{-n})\Big{(}(\nu_{2}\delta_{q^ {2m},1}+\beta)[m-n]L_{m+n}\] \[\quad\quad+\nu_{4}\delta_{q^{2m},1}[m-n]W_{m+n}\Big{)}\] \[\quad=\nu_{2}\delta_{q^{2m},1}[m-n](q^{m-n}+q^{n-m})L_{m+n}+\nu_ {4}\delta_{q^{2m},1}[m-n](q^{m-n}+q^{n-m})W_{m+n}\] \[\quad=-(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})\big{(}\frac{\nu_{2} \delta_{q^{2m},1}}{q-q^{-1}}L_{m+n}-\frac{\nu_{4}\delta_{q^{2m},1}}{q-q^{-1}}W_ {m+n}\big{)}\] \[\quad=-(q^{2(m-n)}-q^{2(n-m)})\big{(}\frac{\nu_{2}\delta_{q^{2m},1}}{q-q^{-1}}L_{m+n}+\frac{\nu_{4}\delta_{q^{2m},1}}{q-q^{-1}}W_{m+n}\big{)},\] \[\alpha(\{W_{m},W_{n}\}^{1})-\{\alpha(W_{m}),\alpha(W_{n})\}^{1}\] \[=\alpha\Big{(}(\nu_{2}\delta_{q^{2m},1})[m-n]W_{m+n}-\{(q^{m}+q^{-m})L_{m},(q^{n}+q ^{-n})W_{n}\}^{4}\] \[=\nu_{2}\delta_{q^{2m},1}[m-n](q^{m+n}+q^{-m-n})L_{m+n}-\nu_{2} \delta_{q^{2m},1}[m-n](q^{m}+q^{-m})(q^{n}+q^{-n})L_{m+n}\] \[=-(q^{m-n}-q^{n-m})(q^{m-n}+q^{n-m})\big{(}\frac{\nu_{2}\delta_{q^{ 2m},1}}{q-q^{-1}}L_{m+n}\big{)}\] \[=-(q^{2(m-n)}-q^{2(n-m)})\big{(}\frac{\nu_{2}\delta_{q^{2m},1}}{q- q^{-1}}L_{m+n}\big{)}.\] So, by Theorem 2.1 (i), \[(\mathcal{W}^{q},\{\cdot,\cdot\}^{1},\alpha)\text{ is multiplicative }\Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{l}q^{2(m-n)}-q^{2(n-m)}=0,or\\ \nu_{1}\delta_{q^{2m},1}=\nu_{3}\delta_{q^{2m},1}+\gamma=\\ \nu_{1}\delta_{q^{2m},1}=\nu_{2}\delta_{q^{2m},1}=\nu_{4}\delta_{q^{2m},1}=0, \end{array}\right.,\ \Leftrightarrow\] \[\forall\ m,n\in\mathbb{Z}:\ \left\{\begin{array}{l}q^{4(m-n)}=0,or\\ \nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0,\end{array}\right.,\ \Leftrightarrow\] 
\[\left\{\begin{array}{l}\forall p\in\mathbb{Z},\ q^{4p}=1,or\\ \nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0,\end{array}\right.,\ \Leftrightarrow\] \[\left\{\begin{array}{l}q^{4}=1,or\\ \nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0,\end{array}\right.,\ \Leftrightarrow\] \[\left\{\begin{array}{l}q^{2}=-1,or\\ \nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0,\end{array}\right.\ \Leftrightarrow\] \[q=\pm i\text{ if }\exists\ i\in\mathbb{K}:\ i^{2}=-1\text{ (for example if }\mathbb{K}\text{ is algebraically closed),}\] \[or\ \nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0.\] **Case 2: \((q,d)\in\{1\}\times\mathbb{Z}\cup\{-1\}\times 2\mathbb{Z}\)** _Remark 4.9_.: The equations (4.3)-(4.10) are equivalent for \((q,d)\in\{1\}\times\mathbb{Z}\) and for \((q,d)\in\{-1\}\times 2\mathbb{Z}\). **Subcase 1: \(m=0\) and \(d=0\)** Take \(n=0\) in (4.3)-(4.10). For \(d=0\) and \(m=0\), the functions \(f_{1},\ f_{2},\ g_{1}\) and \(g_{2}\) satisfy \[f_{1}(0)=\nu_{1},\ f_{2}(0)=\nu_{2},\ g_{1}(0)=\nu_{3},\ g_{2}(0)=\nu_{4},\ \nu_{i}\in\mathbb{K}.\] Then we have the following Proposition. **Proposition 4.10**.: _If \(P_{0}\) is an averaging operator on \(\mathcal{W}^{q}\) with degree \(d=0\), then_ \[\left\{\begin{array}{l}f_{1}(0)=\nu_{1},\ \ f_{2}(0)=\nu_{2},\\ g_{1}(0)=\nu_{3},\ \ g_{2}(0)=\nu_{4},\end{array}\right.\ \text{ where }\nu_{i}\in\mathbb{K}.\] **Subcase 2: \(m\neq 0\) and \(d=0\)** Taking \(n=0\) in (4.7) yields \(f_{2}(m)(f_{2}(0)-f_{2}(m))=0.\) Hence, \[f_{2}(m)=\mu_{1}f_{2}(0),\quad\mu_{1}\in\{0,1\}.\] Then, we have the following Proposition. **Proposition 4.11**.: _If \(P_{0}\) is an averaging operator on \(\mathcal{W}^{q}\) with degree \(d=0\) such that \(m\neq 0\) and \(f_{2}(0)=0,\) then_ * _if_ \(f_{1}(0)=0\)_, then_ \(f_{1}(m)=0,\ f_{2}(m)=0,\ g_{1}(m)=\gamma,\ g_{2}(m)=0,\) _where_ \(\gamma\in\mathbb{K}\)_;_ * _if_ \(f_{1}(0)\neq 0\)_, then_ * \(f_{1}(m)=f_{1}(0),\ f_{2}(m)=0,\ g_{1}(m)=\gamma,\ g_{2}(m)=0\)_, where_ \(\gamma\in\mathbb{K};\)__ * \(f_{1}(m)=f_{1}(0),\ f_{2}(m)=0,\ g_{1}(m)=0,\ g_{2}(m)=f_{1}(0)\)_, where_ \(\gamma\in\mathbb{K};\)__ * \(f_{1}(m)=0,\ f_{2}(m)=0,\ g_{1}(m)=\gamma,\ g_{2}(m)=f_{1}(0)\)_, where_ \(\gamma\in\mathbb{K}.\)__ Proof.: Let \(m\neq 0\), \(d=0\) and \(f_{2}(0)=0\) as assumed.
Taking \(n=0\) in (4.3) we obtain \(f_{1}(m)(f_{1}(0)-f_{1}(m))=0.\) Thus, \(f_{1}(m)=\mu_{2}f_{1}(0),\)\(\mu_{2}\in\{0,1\}.\) Setting \(n=0\) in (4.9) yields \[mf_{2}(m)g_{1}(0)= -mg_{2}(m)f_{1}(0)+mf_{2}(m)g_{1}(m)+mg_{2}^{2}(m).\] Since \(m\neq 0\), we have \(f_{2}(m)g_{1}(0)=g_{2}(m)f_{1}(0)-f_{2}(m)g_{1}(m)-g_{2}^{2}(m),\) from which together with \(f_{2}(m)=0,\) we get \[g_{2}(m)=\mu_{3}f_{1}(0),\quad\mu_{3}\in\{0,1\}.\] Taking \(n=0\) in (4.4) and from \(m\neq 0\) we get \(f_{1}(m)g_{1}(0)=-f_{1}(0)g_{1}(m)+f_{1}(m)g_{1}(m)+g_{1}(m)g_{2}(m),\) and with \(f_{1}(m)=\mu_{2}f_{1}(0)\) and \(g_{2}(m)=\mu_{3}f_{1}(0),\) we obtain \(g_{1}(m)(\mu_{2}+\mu_{3}-1)f_{1}(0)=\mu_{2}f_{1}(0)g_{1}(0).\) Then, we have the two cases: * if \(f_{1}(0)=0\), then \(g_{1}(m)=\gamma,\) * if \(f_{1}(0)\neq 0\) we have \(\mu_{2}g_{1}(0)=(\mu_{2}+\mu_{3}-1)g_{1}(m).\) Then, \(\left\{\begin{array}{ll}g_{1}(m)=0&\mbox{ if }(\mu_{2},\mu_{3})\in\{(0,0),(1,1)\},\\ g_{1}(m)=\gamma&\mbox{ if }(\mu_{2},\mu_{3})\in\{(1,0),(0,1)\}.\end{array}\right.\) **Proposition 4.12**.: _If \(P_{d}\) is the averaging operator on \(\mathcal{W}^{q}\) with degree \(d=0\) such that \(m\neq 0\) and \(f_{2}(0)\neq 0\), then \(\left\{\begin{array}{ll}f_{1}(m)=\gamma,&f_{2}(m)=f_{2}(0),\\ g_{1}(m)=\frac{\gamma f_{1}(0)-\gamma^{2}}{f_{2}(0)},&g_{2}(m)=f_{1}(0)-\gamma, \end{array}\right.\) where \(\gamma\in\mathbb{K}.\)_ Proof.: Let \(d=0,\ m\neq 0\) and \(f_{2}(0)\neq 0,\) as assumed. Taking \(n=0\) in (4.5) yields \(mf_{1}(m)f_{2}(0)=mf_{1}(m)f_{2}(m).\) Since \(m\neq 0\) we have \(f_{1}(m)f_{2}(0)=f_{1}(m)f_{2}(m).\) This together with \(f_{2}(m)=\mu_{1}f_{2}(0)\) gives \(f_{1}(m)(\mu_{1}-1)=0.\) Then \(f_{1}(m)=\left\{\begin{array}{ll}0&if\quad\mu=0,\\ \gamma&if\quad\mu_{1}=1.\end{array}\right.\) Taking \(n=0\) in (4.8) yields \(mf_{2}(m)g_{2}(0)=-mf_{2}(0)g_{2}(m)+mf_{2}(m)g_{2}(m).\) Since \(m\neq 0\), we have \(f_{2}(m)g_{2}(0)=-f_{2}(0)g_{2}(m)+f_{2}(m)g_{2}(m).\) This, with \(f_{2}(m)=\mu_{1}f_{2}(0),\) gives \(\mu_{1}f_{2}(0)g_{2}(0)=-f_{2}(0)g_{2}(m)+\mu_{1}f_{2}(0)g_{2}(m).\) Then * if \(\mu_{1}=0\) we have \(g_{2}(m)=0\), * if \(\mu_{1}=1\) we have \(g_{2}(0)=0\). Taking \(n=0\) in (4.3) yields \(f_{1}(m)f_{1}(0)=mf_{1}^{2}(m)+mg_{1}(m)f_{2}(m).\) Since \(m\neq 0\), we have \(f_{1}(m)f_{1}(0)=f_{1}^{2}(m)+g_{1}(m)f_{2}(m).\) This, with \(f_{2}(m)=\mu_{1}f_{2}(0)\), \(f_{1}(m)=\gamma\) and \(\mu_{1}=1\), yields \(g_{1}(m)=\frac{\gamma f_{1}(0)-\gamma^{2}}{f_{2}(0)}.\) Taking \(n=0\) in (4.6) yields \(mf_{1}(m)g_{2}(0)=-mf_{2}(0)g_{1}(m)+mf_{1}(m)g_{2}(m).\) Since \(m\neq 0\) we have \(f_{1}(m)g_{2}(0)=-f_{2}(0)g_{1}(m)+f_{1}(m)g_{2}(m).\) This together with \(g_{2}(0)=0\) and \(f_{1}(m)=g_{2}(m)=0\) for \(\mu_{1}=0\) gives \(g_{1}(m)=0\). Taking \(n=0\) in the equation (4.10) yields \(mf_{2}(m)f_{1}(0)=mf_{2}(m)f_{1}(m)+mg_{2}(m)f_{2}(m).\) Then \(f_{2}(m)f_{1}(0)=f_{2}(m)f_{1}(m)+g_{2}(m)f_{2}(m).\) This, together with \(f_{1}(m)=\gamma\) for \(\mu_{1}=1\), gives \(g_{2}(m)=f_{1}(0)-\gamma\). **Theorem 4.13**.: _The Homogeneous averaging operators on the \(q\)-deformed \(W(2,2)\) Homalgebra \(\mathcal{W}^{q}\) with degree \(d=0\). 
must be one of the following operators, given for all \(m\in\mathbb{Z}\), by_ \[\left\{\begin{array}{l}P_{0}^{1}(L_{m})=\nu_{1}\delta_{m,0}L_{m}+(\nu_{3} \delta_{m,0}+\gamma)W_{m},\\ P_{0}^{1}(W_{m})=\nu_{2}\delta_{m,0}L_{m}+\nu_{4}\delta_{m,0}W_{m},\end{array}\right.\] \[\left\{\begin{array}{l}P_{0}^{2}(L_{m})=(\nu_{1}\delta_{m,0}+\beta)L_{m}+( \nu_{3}\delta_{m,0}+\gamma)W_{m},\\ P_{0}^{2}(W_{m})=\nu_{2}\delta_{m,0}L_{m}+\nu_{4}\delta_{m,0}W_{m},\end{array}\right.\] \[\left\{\begin{array}{l}P_{0}^{3}(L_{m})=(\nu_{1}\delta_{m,0}+\beta)L_{m}+ \nu_{3}\delta_{m,0}W_{m},\\ P_{0}^{3}(W_{m})=\nu_{2}\delta_{m,0}L_{m}+(\nu_{4}\delta_{m,0}+\beta)W_{m}, \end{array}\right.\] \[\left\{\begin{array}{l}P_{0}^{4}(L_{m})=(\nu_{1}\delta_{m,0}+\gamma)L_{m}+ \nu_{3}\delta_{m,0}W_{m}\\ P_{0}^{4}(W_{m})=\nu_{2}\delta_{m,0}L_{m}+(\nu_{4}\delta_{m,0}+\beta)W_{m}, \end{array}\right.\] \[\left\{\begin{array}{l}P_{0}^{5}(L_{m})=(\nu_{1}\delta_{m,0}+\gamma)L_{m}+ (\nu_{3}\delta_{m,0}+\frac{\gamma\theta-\gamma^{2}}{\beta})W_{m},\\ P_{0}^{5}(W_{m})=(\nu_{2}\delta_{m,0}+\beta)L_{m}+(\nu_{4}\delta_{m,0}+\theta- \gamma)W_{m}.\end{array}\right.\] _where \(\gamma,\theta,\nu_{1},\nu_{2},\nu_{3},\nu_{4}\in\mathbb{K}\) and \(\beta\in\mathbb{K}^{*}\)._ Proof.: Directly by combining Lemma 4.2 and Propositions 4.10-4.12. **Theorem 4.14**.: _The homogeneous averaging operators on the \(q\)-deformed \(W(2,2)\) Homalgebra \(\mathcal{W}^{q}\) with of degree \(d=0\) obtained in Theorem 4.6 provide the following Hom-Leibniz algebras on the underlying linear space \(\mathcal{W}^{q}:\)_ * \(\{L_{m},L_{n}\}^{1}=\nu_{1}\delta_{m,0}[m-n]L_{m+n}+(\nu_{3}\delta_{m,0}+ \gamma)[m-n]W_{m+n}\)__ \(\{L_{m},W_{n}\}^{1}=\nu_{1}\delta_{m,0}[m-n]W_{m+n}\)__ \(\{W_{m},L_{n}\}^{1}=\nu_{2}[m-n]\delta_{m,0}L_{m+n}+\nu_{4}\delta_{m,0}[m-n]W_{m +n}\)__ \(\{W_{m},W_{n}\}^{1}=\nu_{2}\delta_{m,0}[m-n]L_{m+n},\)__ * \(\{L_{m},L_{n}\}^{2}=(\nu_{1}\delta_{m,0}+\beta)[m-n]L_{m+n}+(\nu_{3}\delta_{m,0}+ \gamma)[m-n]W_{m+n}\)__ \(\{L_{m},W_{n}\}^{2}=(\nu_{1}\delta_{m,0}+\beta)[m-n]W_{m+n}\)__ \[\begin{array}{l}\{W_{m},L_{n}\}^{2}=\nu_{2}[m-n]\delta_{m,0}L_{m+n}+\nu_{4} \delta_{m,0}[m-n]W_{m+n}\\ \{W_{m},W_{n}\}^{2}=\nu_{2}\delta_{m,0}[m-n]W_{m+n},\end{array}\] 3. \(\{L_{m},L_{n}\}^{3}=(\nu_{1}\delta_{m,0}+\beta)[m-n]L_{m+n}+\nu_{3}\delta_{m,0}[m -n]W_{m+n}\\ \{L_{m},W_{n}\}^{3}=(\nu_{1}\delta_{m,0}+\beta)[m-n]W_{m+n}\\ \{W_{m},L_{n}\}^{3}=\nu_{2}[m-n]\delta_{m,0}L_{m+n}+(\nu_{4}\delta_{m,0}+\beta )[m-n]W_{m+n}\\ \{W_{m},W_{n}\}^{3}=\nu_{2}\delta_{m,0}[m-n]W_{m+n},\end{array}\) 4. \(\{L_{m},L_{n}\}^{4}=(\nu_{1}\delta_{m,0}+\gamma)[m-n]L_{m+n}+\nu_{3}\delta_{m, 0}[m-n]W_{m+n}\\ \{L_{m},W_{n}\}^{4}=(\nu_{1}\delta_{m,0}+\gamma)[m-n]W_{m+n}\\ \{W_{m},L_{n}\}^{4}=\nu_{2}[m-n]\delta_{m,0}L_{m+n}+(\nu_{4}\delta_{m,0}+\beta )[m-n]W_{m+n}\\ \{W_{m},W_{n}\}^{4}=\nu_{2}\delta_{m,0}[m-n]W_{m+n},\end{array}\) 5. \(\{L_{m},L_{n}\}^{5}=(\nu_{1}\delta_{m,0}+\gamma)[m-n]L_{m+n}+(\nu_{3}\delta_{m,0}+\frac{\gamma\theta-\gamma^{2}}{\beta}[m-n]W_{m+n}\\ \{L_{m},W_{n}\}^{5}=(\nu_{1}\delta_{m,0}+\gamma)[m-n]W_{m+n}\\ \{W_{m},L_{n}\}^{5}=(\nu_{2}\delta_{m,0}+\beta)[m-n]L_{m+n}+(\nu_{4}\delta_{m,0}+\theta-\gamma)[m-n]W_{m+n}\\ \{W_{m},W_{n}\}^{5}=(\nu_{2}\delta_{m,0}+\beta)[m-n]W_{m+n},\end{array}\) _where \(\nu_{i},\gamma,\theta\in\mathbb{K}\ and\ \beta\in\mathbb{K}^{*}\)._ Proof.: We demonstrate a proof of (i). The others are proved analogously. 
For any \(m,n\in\mathbb{Z}\), \[\begin{array}{l}\{L_{m},L_{n}\}^{1}=[P_{0}^{1}(L_{m}),L_{n}]=[\nu_{1} \delta_{m,0}L_{m}+(\nu_{3}\delta_{m,0}+\gamma)W_{m},L_{n}]\\ \quad=\nu_{1}[m-n]\delta_{m,0}L_{m+n}+(\nu_{3}\delta_{m,0}+\gamma)[m-n]W_{m+n},\\ \{L_{m},W_{n}\}^{1}=[P_{d}^{1}(L_{m}),W_{n}]=[\nu_{1}\delta_{m,0}L_{m}+(\nu_{ 3}\delta_{m,0}+\gamma)W_{m},W_{n}]\\ \quad=\nu_{1}[m-n]\delta_{m,0}L_{m+n},\\ \{W_{m},L_{n}\}^{1}=[P_{0}^{1}(W_{m}),L_{n}]=[\nu_{1}\delta_{m,0}L_{m}+\nu_{4 }\delta_{m,0}W_{m},L_{n}]\\ \quad=\nu_{2}(m-n)\delta_{m,0}L_{m+n}+\nu_{4}\delta_{m,0}[m-n]W_{m+n},\\ \{W_{m},W_{n}\}^{1}=[P_{0}^{1}(W_{m}),W_{n}]\\ \quad=[\nu_{2}\delta_{m,0}L_{m}+\nu_{4}\delta_{m,0}W_{m},W_{n}]=\nu_{2}(m-n) \delta_{m,0}W_{m+n}.\qed\end{array}\] **Proposition 4.15**.: _The Hom-Leibniz algebras \((\mathcal{W}^{q},\ \{\cdot,\cdot\}^{i},\alpha)\) induced by \(P_{0}^{i}\) for all \(i\in\{1,\ldots,5\}\) is multiplicative if and only if \(i=1\) and \(\nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0\)._ Proof.: For any \(m,n\in\mathbb{Z}\), we have \[\begin{array}{l}\alpha(\{L_{m},L_{n}\}^{1})-\{\alpha(L_{m}),\alpha(L_{n}) \}^{1}\\ \quad=\alpha\Big{(}\nu_{1}\delta_{m,0}[m-n]L_{m+n}+(\nu_{3}\delta_{m,0}+ \gamma)[m-n]W_{m+n}\Big{)}\\ \quad\quad-\{2q^{m}L_{m},2q^{N}L_{n}\}^{1}\\ \quad=-2q^{m+n}[m-n](\nu_{1}\delta_{m,0}L_{m+n}+(\nu_{3}\delta_{m,0}+\gamma)W_ {m+n})\\ \alpha(\{L_{m},W_{n}\}^{1})-\{\alpha(L_{m}),\alpha(W_{n})\}^{1}\\ \quad=\alpha\Big{(}(\nu_{1}\delta_{m,0})[m-n]W_{m+n}-\{2q^{m}L_{m},2q^{n}W_{n} \}^{1}\\ \quad=2q^{m+n}\nu_{1}\delta_{m,0}[m-n]W_{m+n}-4q^{m+n}\nu_{1}\delta_{m,0}[m-n]W_ {m+n}\end{array}\] \[=-2q^{m+n}\nu_{1}\delta_{m,0}[m-n]W_{m+n}\] \[\alpha(\{W_{m},L_{n}\}^{1})-\{\alpha(W_{m}),\alpha(L_{n})\}^{1}\] \[\quad=2q^{m+n}[m-n](\nu_{2}\delta_{m,0}L_{m+n}+\nu_{4}\delta_{m,0} W_{m+n})-4q^{m+n}[m-n](\nu_{2}\delta_{m,0}L_{m+n}+\nu_{4}\delta_{m,0}W_{m+n})\] \[\alpha(\{W_{m},W_{n}\}^{1})-\{\alpha(W_{m}),\alpha(W_{n})\}^{1}\] \[\quad=-2q^{m+n}\nu_{2}\delta_{m,0}[m-n]L_{m+n}.\] So, by Theorem 2.1 (i), \(({\cal W}^{q},\{\cdot,\cdot\}^{1},\alpha)\) is multiplicative \(\Leftrightarrow\forall m\in\mathbb{Z},\ \nu_{1}\delta_{m,0}=\nu_{3}\delta_{m,0}+\gamma=\nu_{2}\delta_{m,0}=\nu_{4} \delta_{m,0}=0\Leftrightarrow\nu_{1}=\nu_{2}=\nu_{3}=\nu_{4}=\gamma=0\). Similarly, for all \(i\in\{2,\ldots,5\}\) we prove that the Hom-Leibniz algebras \(({\cal W}^{q},\{\cdot,\cdot\}^{i},\alpha)\) are not multiplicative.
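As a cross-check of the classification (not part of the paper), the sketch below takes the simplest operator occurring above, \(P(L_{m})=\gamma W_{m}\) and \(P(W_{m})=0\), that is \(P_{0}^{1}\) of Theorem 4.6 (and its analogue in Theorem 4.13) with \(\nu_{1}=\dots=\nu_{4}=0\) and degree \(d=0\), and verifies for the generic value \(q=3\) (so that \(q^{d}=1\)) that it commutes with \(\alpha\), satisfies the averaging axiom (2.10), and induces the bracket \(\{L_{m},L_{n}\}=\gamma[m-n]W_{m+n}\) of Theorem 4.7 (i). The bracket and twist are coded as quoted in the computations above, with exact rational arithmetic.

```python
from fractions import Fraction

q, gamma = Fraction(3), Fraction(2)      # generic q (q != +-1), degree d = 0 so q^d = 1

def qbr(x):
    """Symmetric q-number [x] = (q^x - q^{-x})/(q - q^{-1})."""
    return (q**x - q**(-x)) / (q - q**(-1))

def clean(v):
    return {k: c for k, c in v.items() if c != 0}

def bracket(x, y):
    """[L_m, L_n] = [m-n] L_{m+n}, [L_m, W_n] = [m-n] W_{m+n}, [W_m, W_n] = 0,
    extended by skewsymmetry; elements are dicts {('L' or 'W', m): coefficient}."""
    out = {}
    for (s, m), a in x.items():
        for (t, n), b in y.items():
            if s == 'W' and t == 'W':
                continue
            letter = 'L' if (s, t) == ('L', 'L') else 'W'
            out[(letter, m + n)] = out.get((letter, m + n), 0) + a * b * qbr(m - n)
    return clean(out)

def alpha(x):
    return clean({(s, m): c * (q**m + q**(-m)) for (s, m), c in x.items()})

def P(x):
    """P(L_m) = gamma W_m, P(W_m) = 0: the operator P_0^1 with nu_1 = ... = nu_4 = 0."""
    return clean({('W', m): gamma * c for (s, m), c in x.items() if s == 'L'})

basis = [(s, m) for s in 'LW' for m in range(-3, 4)]
for bx in basis:
    x = {bx: Fraction(1)}
    assert alpha(P(x)) == P(alpha(x))                     # P commutes with alpha
    for by in basis:
        y = {by: Fraction(1)}
        assert bracket(P(x), P(y)) == P(bracket(P(x), y)) == P(bracket(x, P(y)))

# Induced Hom-Leibniz bracket: {L_m, L_n} = [P(L_m), L_n] = gamma*[m-n] W_{m+n}
m, n = 2, -1
induced = bracket(P({('L', m): Fraction(1)}), {('L', n): Fraction(1)})
assert induced == {('W', m + n): gamma * qbr(m - n)}
print("P is an averaging operator on W^q and induces {L_m, L_n} = gamma*[m-n] W_{m+n}")
```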
2306.01305
Cosmological Phase Transitions in Composite Higgs Models
We investigate cosmological phase transitions in various composite Higgs models consisting of four-dimensional asymptotically-free gauge field theories. Each model may lead to a confinement-deconfinement transition and a phase transition associated with the spontaneous breaking of a global symmetry that realizes the Standard Model Higgs field as a pseudo-Nambu-Goldstone boson. Based on the argument of universality, we discuss the order of the phase transition associated with the global symmetry breaking by studying the renormalization group flow of the corresponding linear sigma model at finite temperature, which is calculated by utilizing the $\epsilon$-expansion technique at the one-loop order. Our analysis indicates that some composite Higgs models accommodate phenomenologically interesting first-order phase transitions. We also explore the confinement-deconfinement transition in a UV-completed composite Higgs model based on a $Sp(2N_c)$ gauge theory. It is found that the first-order phase transition is favored when the number of degrees of freedom for the $Sp(2N_c)$ gauge field is much larger than that of matter fields in the fundamental representation of $Sp(2N_c)$. We comment on the gravitational wave signal generated by the confinement-deconfinement transition and its detectability at future observations. Our discussions motivate further studies on phase transitions in composite Higgs models with the use of lattice simulations.
Kohei Fujikura, Yuichiro Nakai, Ryosuke Sato, Yaoduo Wang
2023-06-02T07:05:53Z
http://arxiv.org/abs/2306.01305v1
# Cosmological Phase Transitions in Composite Higgs Models ###### Abstract We investigate cosmological phase transitions in various composite Higgs models consisting of four-dimensional asymptotically-free gauge field theories. Each model may lead to a confinement-deconfinement transition and a phase transition associated with the spontaneous breaking of a global symmetry that realizes the Standard Model Higgs field as a pseudo-Nambu-Goldstone boson. Based on the argument of universality, we discuss the order of the phase transition associated with the global symmetry breaking by studying the renormalization group flow of the corresponding linear sigma model at finite temperature, which is calculated by utilizing the \(\epsilon\)-expansion technique at the one-loop order. Our analysis indicates that some composite Higgs models accommodate phenomenologically interesting first-order phase transitions. We also explore the confinement-deconfinement transition in a UV-completed composite Higgs model based on a \(Sp(2N_{c})\) gauge theory. It is found that the first-order phase transition is favored when the number of degrees of freedom for the \(Sp(2N_{c})\) gauge field is much larger than that of matter fields in the fundamental representation of \(Sp(2N_{c})\). We comment on the gravitational wave signal generated by the confinement-deconfinement transition and its detectability at future observations. Our discussions motivate further studies on phase transitions in composite Higgs models with the use of lattice simulations. + Footnote †: preprint: UT-Komaba/23-5 + Footnote †: preprint: UT-Komaba/23-5 ## I Introduction To unveil the nature of the observed Higgs particle is a key to understand our Universe. Especially, it is a critical issue to answer the question of whether the Higgs boson is made of more fundamental constituents or not. The study of a composite Higgs boson has been initiated by Georgi and Kaplan [1; 2; 3; 4], where the Higgs boson is identified as a pseudo-Nambu-Goldstone boson (pNGB) arising from the spontaneous breaking of a continuous global symmetry triggered by the dynamics of a new confining gauge field theory, just like the pion as a pNGB associated with the chiral symmetry breaking in the ordinary QCD (for reviews, see _e.g._ refs. [5; 6]). An unbroken subgroup of global symmetry is gauged and identified as the \(SU(2)_{L}\times U(1)_{Y}\) symmetry of the Standard Model (SM). Thanks to the pNGB nature, the SM-like Higgs boson can be naturally lighter than other composite states. If the new gauge theory contains only fermions and no scalars, it can provide a solution to the naturalness problem of the electroweak scale and also explain its smallness by dimensional transmutation. Various composite Higgs models with different patterns of global symmetry breaking have been proposed so far, as summarized in table 1 of ref. [6], while the authors of refs. [7; 8] discussed four-dimensional UV descriptions based on purely fermionic gauge theories. If a composite Higgs model consisting of a new confining gauge field theory is realized in nature, it may show two (distinct) phase transitions in the early Universe: (i) a confinement-deconfinement transition and (ii) a phase transition associated with the spontaneous breaking of a global symmetry. The ordinary electroweak phase transition may follow from those phase transitions or simultaneously take place. 
Although the effect of a new strongly-interacting sector in the model on the electroweak phase transition has been extensively investigated in the literature [9; 10; 11; 12], including the application to electroweak baryogenesis [13; 14; 15; 16; 17], the dynamics of the phase transitions (i), (ii) has been largely unexplored in the context of a composite Higgs model, while its understanding is essential to clarify the evolution of our Universe and may bring about new applications to cosmological issues as well as further experimentally testable predictions of the model.1 In particular, if either of the phase transitions (i), (ii) is of the first-order, it proceeds through the nucleation and expansion of true vacuum bubbles, which provides a significant departure from thermal equilibrium, where bubble collisions [32; 33; 34; 35], sound waves [36; 37; 38; 39] and plasma turbulence [40; 41; 42; 43; 44; 45] may generate an observable stochastic gravitational wave (GW) background. In addition, if there exists a conserved global charge such as a dark baryon number with a finite density, _i.e._ a non-vanishing chemical potential, a macroscopic compact object, called a dark quark nugget, can be formed and become a suitable dark matter candidate [46] (see also refs. [47; 48] for another production mechanism of dark matter utilizing a first-order phase transition). In a general context, dark confinement and chiral phase transitions in QCD-like theories have been studied in refs. [49; 50; 51; 52]. The strongly-interacting system of a composite Higgs model does not admit a first-principle analytical calculation to explore the phase transitions (i), (ii).2 The only possible direct approach is a numerical lattice simulation, as performed for the electroweak and QCD phase transitions, revealing that they are likely to be smooth crossover transitions [55; 56; 57; 58].3 However, various theoretical approaches have been discussed to clarify the nature of phase transitions for ordinary QCD and QCD-like gauge theories. The most famous attempt is to use the argument of universality (see ref. [60] for a detailed discussion; we will also review it in the next section). This approach assumes that long-wavelength fluctuations are dominant during a phase transition, which is presumably realized when the phase transition is of the second-order or weakly first-order. Under this assumption, the phase transition may be insensitive to the microscopic physics, which enables us to determine the order of the phase transition by studying the effective linear sigma model that respects the symmetries of the system. The advantage of this approach is that it can be applied to a wide range of systems, including the ferromagnetic system in condensed matter physics, and the result is largely model-independent because the analysis only depends on the space dimension as well as the underlying symmetries. There are attempts based on the argument of universality for the ordinary QCD [61; 62] and QCD-like theories [63; 64]. In the present paper, we discuss the order of the phase transition associated with the global symmetry breaking in each of various composite Higgs models by studying the renormalization group (RG) flow of the corresponding linear sigma model at finite temperature, which is calculated by using the so-called \(\epsilon\)-expansion technique at the one-loop order.
It turns out that several composite Higgs models favor the first-order phase transitions within the framework of the argument of universality. Footnote 2: The dynamics of a phase transition in a weakly-interacting system can be directly studied by means of the equilibrium thermal field theory with the imaginary time formalism [53] unless the phase transition is of the second-order or weakly first-order (see ref. [54] for an excellent review). To study a confinement-deconfinement phase transition in a composite Higgs model, one needs to specify its UV description. Here, our benchmark model is given by a four-dimensional asymptotically-free \(Sp(2N_{c})\) gauge field theory presented in ref. [7]. Unfortunately, it is difficult to analyze the confinement-deconfinement transition for such a \(Sp(2N_{c})\) gauge field theory with dynamical matter fields by using a method other than a lattice simulation. Then, our approach is to take the large \(N_{c}\) limit with a fixed number of flavors and assume that the system is well-described by the pure \(Sp(2N_{c})\) gauge theory at finite temperature. In this case, the first-order confinement-deconfinement phase transition is favored from direct lattice calculations [65; 66], and the dynamics of the phase transition may be described by a phenomenological effective theory, called the Polyakov loop model, that is constructed in terms of the Polyakov loop as an appropriate order parameter of the phase transition [67; 68]. By utilizing the result of lattice simulations for the pure \(SU(N_{c})\) gauge theory, which is justified at least in the large \(N_{c}\) and zero-temperature limit, one can quantitatively analyze the first-order confinement-deconfinement phase transition by the Polyakov loop model. In particular, the use of the Polyakov loop model enables us to derive the gravitational wave spectrum generated by the first-order confinement-deconfinement transition. We will discuss its detectability at future observations. The rest of the present paper is organized as follows. In section II, we determine the order of the phase transition associated with the global symmetry breaking in each of different composite Higgs models by assuming the argument of universality and considering the corresponding linear sigma model. In section III, we focus on a \(Sp(2N_{c})\) gauge theory with the large \(N_{c}\) limit and study the confinement-deconfinement transition by utilizing the Polyakov loop model. The gravitational wave spectrum generated by the first-order phase transition is calculated. Section IV is devoted to conclusions and discussions. In appendix A, we perform the analysis of the phase transition associated with global symmetry breaking \(U(4)\) (\(SU(4)\)) \(\to Sp(4)\) by utilizing the Nambu-Jona-Lasinio model. ## II Global symmetry breaking To discuss the order of the phase transition associated with the spontaneous breaking of a global symmetry \(\mathcal{G}\) to its subgroup \(\mathcal{H}\) in a composite Higgs model, we assume the argument of universality. Despite strong interactions, the dynamics of the phase transition is then described by the corresponding linear sigma model whose form is solely determined by the symmetry breaking pattern \(\mathcal{G}\rightarrow\mathcal{H}\), and as reviewed here, we can investigate the order of the phase transition by studying the RG flow of the linear sigma model at finite temperature, calculated by the \(\epsilon\)-expansion technique. 
Since most of the proposed composite Higgs models have global symmetry breaking patterns whose associated phase transitions have already been explored by using the same technique in the literature, their results will be summarized (see table 2). We will also present a new analysis of the order of the phase transition in a composite Higgs model with global symmetry breaking \(SO(N)\to SO(M)\times SO(N-M)\) proposed for \(N=9\) and \(M=4\) in ref. [69]. ### \(SO(N)\to SO(N-1)\) We begin with the most familiar symmetry breaking pattern, \(SO(N)\to SO(N-1)\), which is realized for \(N=5\) in the minimal composite Higgs model [72] and for \(N=9\) in the composite two Higgs doublet model [73], to describe the outline of the analysis of the phase transition based on the argument of universality. The symmetry breaking \(SO(N)\to SO(N-1)\) can be described by introducing an order parameter \(\Phi_{a}\) (\(a=1,2,\cdots,N\)) in the fundamental representation of \(SO(N)\). We now assume that the phase transition dynamics is dominated by the order parameter and its thermal fluctuation. In this theory, the temporal direction is compactified with period \(\beta=1/T\), where \(T\) is the ambient temperature. The phase transition dynamics is assumed to be dominated by the long-distance physics whose length scale is longer than \(\beta\). In this case, the temporal direction can be integrated out, and one obtains the three-dimensional effective action for the order parameter.4 Under these assumptions, the phase transition dynamics may be described by the following three-dimensional effective Lagrangian: Footnote 4: If the underlying theory is weakly-coupled, one can perturbatively derive the three-dimensional thermal effective theory from the original short-distance physics. See e.g. refs. [71; 807; 808; 81; 82; 83] for the detailed procedure of dimensional reduction in the equilibrium finite-temperature field theory. \[\mathcal{L}_{E}= \frac{1}{2}\partial_{i}\Phi_{a}\partial_{i}\Phi_{a}+\frac{1}{2}m^ {2}(T)\Phi_{a}\Phi_{a}+\frac{\lambda_{3}}{4}(\Phi_{a}\Phi_{a})^{2}\] \[+\mathcal{O}(\Phi_{a}\Phi_{a})^{3}. \tag{1}\] Here, \(i=1,2,3\) denotes the space index, while we truncate operators of \(\mathcal{O}(\Phi_{a}\Phi_{a})^{3}\). \(m^{2}(T)\) and \(\lambda_{3}\) represent the temperature-dependent mass and coupling whose precise values and expressions generically depend on the short-distance physics. In a strongly-coupled system, we cannot perturbatively calculate these quantities from the underlying theory, and hence, we treat \(m^{2}(T)\) and \(\lambda_{3}\) as free parameters. However, \(\lambda_{3}>0\) is required for the stability of the potential.5 The \(SO(N)\) symmetry is spontaneously broken to \(SO(N-1)\) at zero temperature, \(m^{2}(T=0)<0\) (\(\langle\Phi_{a}\rangle\neq 0\)), while it is restored at high temperature, \(m^{2}(T)>0\) (\(\langle\Phi_{a}\rangle=0\)). Footnote 5: One cannot exclude the possibility of \(\lambda_{3}<0\). In this case, one needs to include higher order terms of \(\mathcal{O}(\Phi_{a}\Phi_{a})^{3}\) and the mean field analysis indicates the first-order phase transition. Let us first neglect the fluctuation of the order parameter and discuss the order of the phase transition in terms of the so-called mean field analysis. In this case, the phase transition is not of the first order since \(\langle\Phi_{a}\rangle\) continuously vanishes at the critical temperature \(T_{C}\) defined by \(m^{2}(T_{C})=0\) as long as \(m^{2}(T)\) is an analytic function of \(T\).
However, near the critical temperature \(T=T_{C}\), the IR fluctuation of \(\Phi_{a}\) is in general nonperturbatively large. In particular, the coupling constant \(\lambda_{3}\) has mass dimension one in three dimensions, and the effect of the IR fluctuation scales by some power of the dimensionless ratio, \(\lambda_{3}/m(T)\), using the standard power counting [84; 85] (for \(N\simeq 1\)), where the mass of the order parameter \(m(T)\) plays the role of an IR cutoff. As it comes close to the critical temperature, the effect is obviously unsuppressed, leading to the IR divergence. Note that the appearance of the IR divergence is a generic feature of the critical phenomena in three dimensions as indicated in ref. [86]. Therefore, the mean field analysis cannot be justified near the critical temperature, and one needs a more sophisticated analysis which can deal with the IR fluctuation to determine the order of the phase transition. When the second-order phase transition takes place in a simple system such as the Ising model, it has been experimentally known that the correlation length diverges, and consequently, the system exhibits self-similarity at the critical temperature in the long-distance limit [60]. Self-similarity at long-distance scales implies that the system experiences the scale invariance at the IR, which may correspond to the existence of an attractive IR fixed point of coupling constants in the effective theory. Then, we may argue that if there exists a stable IR fixed point in the effective theory, the system shows the second-order phase transition at the critical temperature (critical point). In order to find the presence of an IR fixed point including the effect of the IR fluctuation, one needs the RG analysis. Interestingly, as originally found by Wilson [87], the analysis could successfully reproduce singular behaviors of thermodynamical quantities at the second-order phase transition, which are described in terms of critical exponents. Following this argument, we here assume that the existence of an attractive IR fixed point corresponds to the occurrence of the second-order phase transition. On the other hand, if there is no stable IR fixed point, one expects that the second-order phase transition does not take place. Indeed, if coupling constants in the effective theory flow to an unstable region of the potential, it is conceivable that the _fluctuation-induced_ first order phase transition takes place as examined in refs. [88; 71] for generic scalar models. Here, the terminology of fluctuation-induced is added since the fluctuation drives the first-order phase transition, which cannot be seen in the mean field analysis. We will encounter this situation in the next subsection. For \(N>2\), the effective Lagrangian of Eq. (1) is known as the Heisenberg model which describes the phase transition of the Heisenberg ferromagnetic system in condensed matter physics. The RG analysis has been carried out by utilizing the \(\epsilon\)-expansion technique at the one-loop level in refs. [71; 87]. In the \(\epsilon\)-expansion, one calculates loop corrections to \(\lambda_{3}\) in \(4-\epsilon\) dimensions instead of directly working in three dimensions. Using the standard \(\overline{\text{MS}}\) subtraction, the RG equation at \(m^{2}(T=T_{C})=0\) is given by [89] \[\beta_{\lambda_{3}}\equiv\mu\frac{\partial\lambda_{3}}{\partial\mu}=-\epsilon \lambda_{3}+(N+8)\frac{\lambda_{3}^{2}}{8\pi^{2}}\,, \tag{2}\] where \(\mu\) is the renormalization scale. 
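As a quick numerical illustration of Eq. (2) (not part of the original analysis; the values of \(N\), \(\epsilon\) and the UV starting couplings below are arbitrary choices for illustration), one can integrate the one-loop flow toward the IR and observe that different starting values approach a common value:

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta_lambda(lam, N, eps):
    """One-loop beta function of Eq. (2): mu d(lambda_3)/d(mu)."""
    return -eps * lam + (N + 8) * lam**2 / (8 * np.pi**2)

def flow_to_ir(lam0, N=5, eps=1.0, s_max=30.0):
    """Flow towards the IR, parameterized by s = -log(mu/mu_0)."""
    sol = solve_ivp(lambda s, y: [-beta_lambda(y[0], N, eps)],
                    (0.0, s_max), [lam0], rtol=1e-9)
    return sol.y[0, -1]

if __name__ == "__main__":
    # Different (hypothetical) UV couplings converge to a common IR value.
    for lam0 in (0.5, 2.0, 10.0):
        print(f"lambda_3(UV) = {lam0:5.2f}  ->  lambda_3(IR) = {flow_to_ir(lam0):.4f}")
```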
It can be seen that there exists a stable fixed point at \(\lambda_{3}^{*}=8\pi^{2}\epsilon/(N+8)\), which is called the Wilson-Fisher fixed point [90]. We finally obtain the result in three dimensions by the extrapolation of \(\epsilon\to 1\). Since the attractive IR fixed point exists, the phase transition associated with the symmetry breaking \(SO(N)\to SO(N-1)\) is expected to be of the second-order, and its property is characterized by the fixed point. For this symmetry breaking pattern, the presence of the IR fixed point has been also reported by the analysis of \(1/N\) expansion [91]. ### \(SU(2N)\left(U(2N)\right)\to Sp(2N)\) We next consider phase transitions in composite Higgs models whose symmetry breaking patterns are given by \(SU(2N)\to Sp(2N)\) with \(N=2\)[7; 74; 75; 76] and \(N=3\)[77; 74]. Such a global symmetry breaking is realized in a QCD-like theory with \(2N\) flavors of quarks belonging to the pseudo-real representation under a given gauge group [92]. For \(N=2\), ref. [7] has presented a UV completed composite Higgs model that contains the top partner and satisfies the requirement of anomaly matching. In this case, one can discuss the chiral phase transition by using the Nambu-Jona-Lasinio (NJL) model as well as the argument of universality. We will describe the discussion of the phase transition based on the NJL model in appendix A. The spontaneous breaking \(SU(2N)\to Sp(2N)\) can be described by an order parameter which belongs to the second-rank anti-symmetric tensor representation of \(SU(2N)\), \(\Phi_{ab}=-\Phi_{ba}\) (\(a,b=1,2,\cdots,2N\)). This field transforms as \(\Phi\to U\Phi U^{T}\) under the \(SU(2N)\), where \(U\) denotes a \(SU(2N)\) matrix. If \(\Phi_{ab}\) gets a vacuum expectation value (VEV) of the form \(\Phi_{ab}\propto J_{ab}\) where \(J_{ab}\) is the invariant tensor of \(Sp(2N)\), the \(SU(2N)\) symmetry is broken to \(Sp(2N)\). As in the case of the previous subsection, one can write down the three-dimensional effective theory of the current system as \[\begin{split}\mathcal{L}_{E}=&\,\text{Tr}\left( \partial_{i}\Phi^{\dagger}\partial_{i}\Phi\right)+m^{2}(T)\text{Tr}\left(\Phi ^{\dagger}\Phi\right)+\frac{u}{4}\left(\text{Tr}\left[\Phi^{\dagger}\Phi \right]\right)^{2}\\ &+\frac{v}{4}\text{Tr}\left[\Phi^{\dagger}\Phi\right]^{2}+c(T) \left(\text{Pf}(\Phi)+\text{h.c.}\right),\end{split} \tag{3}\] where we have neglected higher-dimensional operators of \(\mathcal{O}(\Phi^{\dagger}\Phi)^{3}\), and \(\text{Pf}(\Phi)\) denotes pfaffian of \(\Phi_{ab}\) leading to the \(U(1)\) breaking by the axial anomaly. If \(c\left(T_{C}\right)=0\), the flavor symmetry is enhanced to \(\mathcal{G}=U(2N)\). Therefore, the universality class of the system is affected by the (non-)presence of the axial anomaly. At zero temperature, the VEV \(\Phi_{ab}\propto J_{ab}\) is realized for \(m^{2}(T=0)<0\)[93; 94]. The stability of the potential requires \(u>0\) and \(u+v/N>0\). The phase transition has been investigated in ref. [63] by using the \(\epsilon\)-expansion technique at the one-loop order in the context of a \(SU(2)\) gauge theory with \(2N\) flavors of quarks belonging to the fundamental representation. Let us first discuss the case without the axial anomaly, i.e. \(c(T_{C})=0\), where the symmetry breaking pattern is \(U(2N)\to Sp(2N)\). 
The RG equations of the effective Lagrangian (3) are given by [63] \[\begin{split}\beta_{u}\equiv\mu\frac{\partial u}{\partial\mu}=- \epsilon u+\frac{2N^{2}-N+4}{\pi^{2}}u^{2}+\frac{4N-2}{\pi^{2}}uv+\frac{3}{2 \pi^{2}}v^{2}\,,\\ \beta_{v}\equiv\mu\frac{\partial v}{\partial\mu}=-\epsilon v+ \frac{4N-5}{2\pi^{2}}v^{2}+\frac{6}{\pi^{2}}uv\,.\end{split} \tag{4}\] In this case, there is no stable IR fixed point for \(N>1\), and the RG flow drives \(v\) into the unstable region. This instability indicates that the phase transition is of the fluctuation-induced first-order. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\mathcal{G}\to\mathcal{H}\) & PT dynamics & Model & Order \\ \hline \(SO(N)\to SO(N-1)\) & [70; 71] & \(N=5\)[72] & 2nd \\ & & \(N=9\)[73] & 2nd \\ \hline \(SO(9)\to SO(5)\times SO(4)\) & This work & [69] & 1st \\ \hline \(SU(2N)\left(U(2N)\right)\to Sp(2N)\) & [63] & \(N=2\)[7; 74; 75; 76] & anomaly \\ & & \(N=3\)[74; 77] & 1st \\ \hline \(SU(N)\left(U(N)\right)\to SO(N)\) & [64] & \(N=5\)[74; 78; 79] & 1st \\ \hline \end{tabular} \end{table} Table 1: The order of a phase transition associated with spontaneous breaking of a global symmetry. The first column denotes the symmetry breaking pattern, \(\mathcal{G}\to\mathcal{H}\). The second column gives a reference performing the analysis based on the argument of universality, while the third column summarizes the corresponding composite Higgs models. The fourth column shows the order of the phase transition, and “anomaly” indicates that the order of the phase transition depends on the restoration of the axial anomaly. See the main text for detailed discussions. We next discuss the case with \(c(T_{C})\neq 0\), where the symmetry breaking pattern is \(SU(2N)\to Sp(2N)\). In the effective Lagrangian, the most relevant operators (except for the mass term) are considered to give the dominant effect on the phase transition dynamics. For \(N>4\), the \(\text{Pf}\left(\Phi\right)\) term becomes less relevant compared to the \(u\) and \(v\) operators and thus we can neglect it. Therefore, the fluctuation-induced first-order phase transition takes place for \(N>4\). For \(N=3\), the anomaly term is relevant and behaves as a cubic term of \(\Phi\) leading to the first-order phase transition. On the other hand, the order of the phase transition for \(N=2\) strongly depends on the effect of the anomaly. Since \(SU(4)\) and \(Sp(4)\) are locally isomorphic to \(SO(6)\) and \(SO(5)\), it is speculated in ref. [63] that the linear sigma model of Eq. (3) falls into the \(SO(6)\) universality class which shows the second-order phase transition as discussed in the previous subsection. In summary, the phase transition is of the first order for \(N>2\), while it can be of the first-order or the second-order for \(N=2\) depending on the effect of the anomaly. ### \(SU(N)\left(U(N)\right)\to SO(N)\) The littlest Higgs model [74; 78; 79] shows a symmetry breaking pattern, \(SU(N)\to SO(N)\) with \(N=5\). This breaking pattern is realized in a QCD-like theory with \(N\) quarks belonging to the real representation under a given gauge group [92]. The symmetry breaking \(SU(N)\to SO(N)\) can be described by an order parameter which belongs to the second-rank symmetric tensor representation of \(SU(N)\), \(\Phi_{ab}=\Phi_{ba}\) (\(a,b=1,2,\cdots,N\)).
The three-dimensional effective Lagrangian is given by \[\begin{split}\mathcal{L}_{E}=&\,\text{Tr}\left( \partial_{i}\Phi^{\dagger}\partial_{i}\Phi\right)+m^{2}(T)\text{Tr}\left(\Phi ^{\dagger}\Phi\right)+\frac{u}{4}\left(\text{Tr}\left[\Phi^{\dagger}\Phi \right]\right)^{2}\\ &+\frac{v}{4}\text{Tr}\left[\Phi^{\dagger}\Phi\right]^{2}+c(T) \left(\det(\Phi)+\text{h.c.}\right),\end{split} \tag{5}\] A diagonal VEV, \(\Phi_{ab}\propto\delta_{ab}\), drives \(SU(N)\to SO(N)\) (or \(U(N)\to SO(N)\) if the t'Hooft operator is absent). For \(N>4\), the determinant operator becomes less relevant compared to the \(u\) and \(v\) operators. Since we are mostly interested in the case of \(N=5\), we neglect the t'Hooft term in the following analysis. The RG analysis with the \(\epsilon\)-expansion technique at the one-loop order leads to \[\begin{split}\beta_{u}&=-\epsilon u+\frac{N^{2}+N +8}{24\pi^{2}}u^{2}+\frac{N+1}{6\pi^{2}}uv+\frac{1}{8\pi^{2}}v^{2}\,,\\ \beta_{v}&=-\epsilon v+\frac{1}{2\pi^{2}}uv+\frac{2 N+5}{24\pi^{2}}v^{2}\,,\end{split} \tag{6}\] which indicate that there is no stable IR fixed point, and the RG flow drives the \(u\) and \(v\) couplings into the unstable region. This signals the fluctuation-induced first order phase transition. This result is in agreement with that of ref. [64]. ### \(SO(N)\to SO(M)\times SO(N-M)\) A composite Higgs model with global symmetry breaking \(SO(N)\to SO(M)\times SO(N-M)\) with \(M=\lfloor N/2\rfloor\) has been proposed for \(N=9\) in ref. [69]. Such a symmetry breaking pattern is realized by introducing the real bi-fundamental scalar field \(\Phi_{ab}\), which satisfies the symmetric and traceless conditions, \(\Phi_{ab}=\Phi_{ba}\) and \(\text{Tr}\Phi=0\). The effective Lagrangian is \[\begin{split}\mathcal{L}_{E}=&\,\frac{1}{2}\text{ Tr}\left(\partial_{i}\Phi\right)^{2}+\frac{1}{2}m^{2}(T)\text{Tr}\left(\Phi^{2} \right)\\ &+\frac{u}{4!}\left(\text{Tr}\Phi^{2}\right)^{2}+\frac{v}{4!} \text{Tr}\left(\Phi^{4}\right).\end{split} \tag{7}\] One may introduce the determinant operator, \(\det\Phi\), but it is irrelevant for \(N=9\). Hence we omit this term to analyze the phase transition dynamics. The potential is bounded from below when \(v>0\) and \(u+v/N>0\) are satisfied. Let us now discuss the order of the phase transition, which is new to the best of our knowledge. By using the \(\epsilon\)-expansion technique, RG equations for \(N>6\) at the one-loop order are \[\begin{split}\beta_{u}=&-\epsilon u+\frac{1}{16\pi ^{2}}\bigg{[}\frac{2}{3}\left(N^{2}+N+14\right)u^{2}\\ &+\left(\frac{8}{3}N+4-\frac{8}{N}\right)uv+\left(2+\frac{12}{N^{2 }}\right)v^{2}\bigg{]},\\ \beta_{v}=&-\epsilon v+\frac{1}{16\pi^{2}}\left[16uv+ \left(\frac{4}{3}N+6-\frac{24}{N}\right)v^{2}\right].\end{split} \tag{8}\] Figure 1: The RG flow of the \(u\) and \(v\) couplings in the effective Lagrangian (7). The potential is not bounded from below in the gray colored region. Unstable IR fixed points exist at \((u,v)=(3\pi^{2}/13,0)\) and \((u,v)=(0,0)\) (Gaussian fixed point). Figure 1 shows RG evolutions of the \(u\) and \(v\) couplings for the case of \(N=9\). We can see that there exist unstable IR fixed points at \(u=3\pi^{2}/13,\ v=0\) and \(u=v=0\) (Gaussian fixed point). The figure indicates that the RG flow drives \(u\) and \(v\) into the unstable region of the potential, unless bare couplings are tuned to be the values at unstable fixed points. 
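As a numerical cross-check of this behavior (not part of the original analysis), Eq. (8) can be integrated directly toward the IR; the sketch below, starting from hypothetical bare couplings, simply reports whether the flow reaches the region where the potential of Eq. (7) is no longer bounded from below:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, EPS = 9, 1.0   # case studied in the text, extrapolated to eps = 1

def betas(u, v):
    """One-loop beta functions of Eq. (8)."""
    pre = 1.0 / (16 * np.pi**2)
    bu = -EPS * u + pre * ((2/3) * (N**2 + N + 14) * u**2
                           + (8*N/3 + 4 - 8/N) * u * v
                           + (2 + 12/N**2) * v**2)
    bv = -EPS * v + pre * (16 * u * v + (4*N/3 + 6 - 24/N) * v**2)
    return bu, bv

def unstable(s, y):
    """Event: stability conditions of Eq. (7) (v > 0 and u + v/N > 0) are violated."""
    u, v = y
    return min(v, u + v / N)
unstable.terminal = True

def flow(u0, v0, s_max=60.0):
    """Flow towards the IR: s = -log(mu/mu_0)."""
    rhs = lambda s, y: [-b for b in betas(*y)]
    return solve_ivp(rhs, (0.0, s_max), [u0, v0], events=unstable,
                     rtol=1e-9, atol=1e-12)

if __name__ == "__main__":
    for u0, v0 in [(1.0, 1.0), (2.0, 0.5), (0.5, 3.0)]:   # generic bare couplings
        sol = flow(u0, v0)
        hit = len(sol.t_events[0]) > 0
        u_f, v_f = sol.y[:, -1]
        print(f"start (u,v)=({u0},{v0}):  stability violated = {hit}, "
              f"end point (u,v)=({u_f:.3f},{v_f:.3f})")
```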
Therefore, the symmetry breaking \(SO(N)\to SO(M)\times SO(N-M)\) with \(N=9,\ M=4\) is expected to accommodate the fluctuation-induced first-order phase transition. ### Comments on the universality argument Our analysis to find the order of the phase transition associated with the spontaneous breaking of a global symmetry is based on two important assumptions: (i) the effect of an explicit breaking of the global symmetry on the phase transition dynamics is negligible, and (ii) the most important excitation is the order parameter during the thermal phase transition. In reality, it is unclear that both of the assumptions are justified. Indeed, the explicit breaking is required to give the observed SM-like Higgs boson mass. Moreover, the \(SU(2)_{W}\times U(1)_{Y}\) gauge fields always couple to the order parameter. It is usually challenging to take account of these effects in the universality argument. Ref. [95] has proposed the use of gauge moose [96] to realize desired global symmetry breaking patterns in composite Higgs models. In this construction, a global symmetry breaking is realized by gauging a subgroup of an enlarged global symmetry. In such a case, the effect of gauge fields is clearly essential, and one may not justify our assumption (i). We do not consider this construction in the present discussion. Finally, let us comment on an issue of the \(\epsilon\)-expansion technique. We have obtained our results by the extrapolation of \(\epsilon\to 1\), which is not fully justified in general. For example, the inclusion of higher order corrections may change our conclusion. One of the powerful methods to overcome this difficulty is to utilize the conformal bootstrap technique [97] because it does not rely on the perturbative calculation. The use of the conformal bootstrap approach is beyond the scope of the present paper and left for a future study. ## III Confinement transition We now take a benchmark model proposed in ref. [7] where the strong dynamics of a \(Sp(2N_{c})\) gauge theory induces the spontaneous breaking of its flavor symmetry to realize the pNGB Higgs, and investigate the confinement-deconfinement phase transition. By taking the large \(N_{c}\) limit with a fixed number of flavors, the system is assumed to be well-described by the pure \(Sp(2N_{c})\) gauge theory at finite temperature. Then, we discuss that the dynamics of the phase transition can be described by the Polyakov loop model which is constructed in terms of the Polyakov loop identified as an order parameter of the phase transition [67; 68]. By using the result of lattice simulations, one can quantitatively analyze the phase transition. We can derive a GW spectrum generated by the first-order confinement-deconfinement transition and discuss its discovery prospect. ### Polyakov loop In a pure Yang-Mills theory with a unitary gauge group \(G\), such as \(SU(N_{c})\) and \(Sp(2N_{c})\), we can define a gauge invariant operator called Polyakov loop [98], \[l_{P}\equiv\frac{1}{\dim(G)}\text{Tr}_{c}\left[\mathbf{L}_{P}\right], \tag{9}\] where \(\text{Tr}_{c}\) and \(\dim(G)\) denote the trace taken over the color space and the number of dimensions of the gauge group \(G\) in the fundamental representation, respectively. For example, \(\dim(Sp(2N_{c}))=2N_{c}\) and \(\dim(SU(N_{c}))=N_{c}\). The thermal Wilson line \(\mathbf{L}_{P}\) is defined as \[\mathbf{L}_{P}\equiv\mathcal{P}\exp\left[\int_{0}^{\beta}\text{d}\tau\,T^{a}A_{4} ^{a}(\tau,\mathbf{x})\right]. 
\tag{10}\] Here, \(\mathbf{x},\ \tau,\ \beta\equiv 1/T\) are three spatial coordinates, the Euclidean time and the inverse of the ambient temperature, respectively. A \(\dim(G)\times\dim(G)\) matrix \(T^{a}\) is a generator of the color gauge group \(G\) in the fundamental representation, and \(A_{4}^{a}\) is the Euclidean temporal component of the gauge field. \(\mathcal{P}\) denotes the path-ordering along the temporal direction. In the current normalization, the Euclidean temporal component of the covariant derivative of a matter field transforming under the fundamental representation is given by \(D_{4}\equiv\partial_{4}-A_{4}\). The thermal average of the Polyakov loop \(l_{P}\) behaves as \(\langle l_{P}\rangle\propto e^{-\beta\Delta F}\) where \(\Delta F\) represents the free energy of an isolated test quark relative to the energy without the quark (see refs. [99; 100] for the path-integral derivation of this behavior in the finite temperature field theory). Then, if the expectation value of the Polyakov loop vanishes, \(\langle l_{P}\rangle=0\), an isolated test quark costs an infinite free energy, which is identified with the confinement phase. On the other hand, if \(\langle l_{P}\rangle\neq 0\), the free energy of an isolated test quark is finite, and hence, the system is identified with the deconfinement phase. Therefore, \(l_{P}\) can be regarded as a good order parameter for the confinement-deconfinement phase transition. There exists a global symmetry to distinguish the confinement and deconfinement phases. In the equilibrium thermal field theory (with the imaginary time formalism), the gauge field satisfies a periodic boundary condition for the Euclidean time, \(A_{\mu}(\tau,\mathbf{x})=A_{\mu}(\tau+\beta,\mathbf{x})\). The gauge transformation is \[A_{\mu}(\tau,\mathbf{x})\to U(\tau,\mathbf{x})(A_{\mu}(\tau,\mathbf{x})+i\partial_{\mu})U ^{\dagger}(\tau,\mathbf{x})\,, \tag{11}\] with \(U(\tau,\mathbf{x})\in G\). A remarkable feature is that the periodicity of the gauge field does not necessarily result in the periodicity of \(U(\tau,\mathbf{x})\). The periodicity of the gauge field only requires that \(U(\tau,\mathbf{x})\) satisfy the following twisted boundary condition [101]: \[U(\tau+\beta,\mathbf{x})=zU(\tau,\mathbf{x})\,. \tag{12}\] Here, \(z\) is an element of the center of the group \(G\), i.e., it commutes with every element of \(G\). We can explicitly write it as \(z=e^{i2\pi k/N_{c}}\) (\(k=1,2,\cdots,N_{c}-1\)) for \(G=SU(N_{c})\) and \(z=-1\) for \(G=Sp(2N_{c})\), in addition to the trivial transformation, \(z=1\). The gauge transformation (11) with the condition (12) is conventionally called the center transformation. The non-trivial center transformation acts on the Polyakov loop as \(l_{P}\to z\,l_{P}\) with \(z\neq 1\), which shows that \(l_{P}\) is charged under the center transformation.6 Hence, the confinement phase \(\langle l_{P}\rangle=0\) is regarded as the center symmetric phase, while the deconfinement phase \(\langle l_{P}\rangle\neq 0\) is identified as the broken phase. Footnote 6: From a modern perspective of generalized symmetries, the center symmetry can be understood as a one-form symmetry [102]. When we introduce massless dynamical matter fields, the center symmetry is explicitly broken or preserved depending on their representations under the gauge group \(G\).
For example, a scalar (or fermion) field in the fundamental representation \(\phi(\tau,\mathbf{x})\) transforms as \(\phi(\tau,\mathbf{x})\to U(\tau,\mathbf{x})\phi(\tau,\mathbf{x})\). The boundary condition of \(\phi(\tau,\mathbf{x})\) in the equilibrium finite temperature field theory is given by \(\phi(\tau+\beta,\mathbf{x})=\pm\phi(\tau,\mathbf{x})\) where \(+\) and \(-\) correspond to the cases of the scalar and fermion, respectively. This is incompatible with the twisted boundary condition (12) so that the center symmetry is explicitly broken. Thus, when massless dynamical matter fields in the fundamental representation are introduced, \(l_{P}\) is no longer a good order parameter for the confinement-deconfinement phase transition. With a sizable effect of explicit breaking, \(l_{P}\) acquires a non-vanishing value at any non-zero temperature, which usually makes the transition a smooth crossover, where thermodynamical quantities are smooth functions of the temperature, rather than a phase transition.7 In practice, by performing the path integral with respect to dynamical matter fields, one obtains a contribution to the Polyakov loop potential which explicitly breaks the center symmetry. A good example is the ordinary QCD. As we will discuss later, the \(SU(3)\) pure Yang-Mills theory leads to the first-order confinement-deconfinement transition. However, the lattice simulation of QCD including (highly improved staggered) light quarks indicates that the confinement-deconfinement transition is a crossover rather than a first-order phase transition [104]. On the other hand, dynamical matter fields in the adjoint representation do not break the center symmetry because the gauge transformation is the same as that of the gauge field which is compatible with the twisted boundary condition (12). Thus, in this case, \(l_{P}\) remains a good order parameter for the phase transition. Footnote 7: This situation is very similar to the crossover in a ferromagnetic system with an external magnetic field, as argued in ref. [103]. Since our benchmark composite Higgs model [7] contains \(N_{f}=4\) Weyl fermions in the fundamental representation of the gauge group \(G=Sp(2N_{c})\), the center symmetry is explicitly broken. Another source of explicit breaking arises when the model accommodates top partners by introducing vector-like colored and hypercharged fermions in the two (or higher) index representation under \(Sp(2N_{c})\).8 Therefore, in the benchmark model, the Polyakov loop \(l_{P}\) is not a good order parameter in general. Footnote 8: Fermion matter contents in generic composite Higgs models accommodating top partners are summarized in ref. [105] by considering the requirement of the t’Hooft anomaly matching condition. ### Large \(N_{c}\) limit The analysis of the confinement-deconfinement transition in our benchmark composite Higgs model is generically challenging due to non-perturbative effects. One of the successful ideas to overcome this difficulty is to take the number of colors \(N_{c}\) to be large and perform the large \(N_{c}\) expansion, as originally proposed by t’Hooft [106]. The large \(N_{c}\) limit is defined as \(N_{c}\to\infty\) with a fixed t’Hooft coupling \(\lambda\equiv g^{2}N_{c}\), where \(g\) is the gauge coupling. At large \(N_{c}\), t’Hooft showed that Feynman diagrammatic calculations remain tractable, and the \(N_{c}\) dependence of a diagram is determined by its topology (see e.g. ref. [107] for the basic argument of the large \(N_{c}\) expansion at zero temperature).
Let us now consider \(N_{f}\) flavors of quarks in the fundamental representation of the gauge group, and take the large \(N_{c}\) limit with a fixed \(N_{f}\). The contribution to the vacuum energy from those quarks scales as \(\mathcal{O}(N_{c}N_{f})\), while that of the gauge field is given by \(\mathcal{O}(N_{c}^{2})\)[107]. Hence, the quark contribution is sub-dominant and suppressed by a factor of \(N_{f}/N_{c}\) compared to the gauge field contribution, at least in the zero-temperature field theory. This feature may be preserved for the deconfinement phase even in the finite-temperature field theory since the number of degrees of freedom is the same as that of the zero-temperature field theory. Thus, the Polyakov loop potential may be dominated by the gauge field and scale as \(\mathcal{O}(N_{c}^{2})\), while the matter contribution is given by \(\mathcal{O}(N_{c}N_{f})\) and negligible for \(N_{f}/N_{c}\ll 1\). Since Weyl fermions introduced to accommodate top partners are in the two-index anti-symmetric (or higher) representation under the gauge group \(G\) and in the fundamental representation under the ordinary \(SU(3)_{C}\), their number of degrees of freedom is larger than that of the gauge field. Thus, such fermions give unsuppressed effects on the center symmetry at a large \(N_{c}\), and we cannot discuss the dynamics of the confinement-deconfinement phase transition since no information is available from lattice simulations or first-principle approaches. For this reason, we concentrate on the large \(N_{c}\) analysis of the confinement-deconfinement transition in a composite Higgs model without top partners. We have argued that the Polyakov loop potential may be dominated by the gauge field contribution at a large \(N_{c}\). This enables us to construct the effective field theory of the confinement-deconfinement phase transition in terms of the Polyakov loop with the input of lattice simulations for a pure Yang-Mills theory. The first attempt was made by Pisarski in refs. [67; 68] for the confinement-deconfinement transition in the \(SU(N_{c})\) pure Yang-Mills theory (see also ref. [101] for the analysis of confinement-deconfinement phase transitions for general gauge theories based on the argument of universality). Let us apply this effective approach to the case of \(G=Sp(2N_{c})\). The center symmetry is \(Z_{2}\) whose transformation is defined by \(l_{P}\to-l_{P}\), and hence, the potential of \(l_{P}\) must be symmetric under the \(Z_{2}\) transformation. Following ref. [67], we postulate the simplest polynomial Polyakov loop potential as \[\frac{V_{\text{pure}}(l_{P})}{T^{4}}=-\frac{a(T)}{2}l_{P}^{2}+b(T )l_{P}^{4}+c(T)l_{P}^{6}+d(T)l_{P}^{8}\,, \tag{13}\] where \(a(T)\), \(b(T)\), \(c(T)\) and \(d(T)\) denote temperature-dependent couplings that are undetermined within the effective theory and require the input of lattice simulations or a first-principle approach. At a critical temperature \(T=T_{C}\), when \(b(T_{C})>0\) and \(a(T_{C})=0\) are satisfied for an arbitrary \(c(T_{C})\), the above potential takes the same form as that of the Ising model, and the phase transition is of the second order as long as the argument of universality and the form of the Polyakov loop potential (13) are valid (see Sec. II.1 for the discussion of the argument of universality). On the other hand, the first-order phase transition is realized when \(b(T_{C})<0\) and \(c(T_{C})>0\).9 Hence, the actual order of the phase transition depends on microscopic physics.
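To make the two regimes concrete (an illustration with hypothetical parameter values, dropping the \(l_{P}^{8}\) term; cf. the remark that follows), the sketch below scans the truncated potential of Eq. (13) and tracks the location of its global minimum: for \(b>0\) the minimum moves away from \(l_{P}=0\) continuously as \(a\) crosses zero, whereas for \(b<0\) and \(c>0\) it jumps discontinuously.

```python
import numpy as np

def V(l, a, b, c):
    """Polyakov loop potential of Eq. (13), truncated at l_P^6, in units of T^4."""
    return -0.5 * a * l**2 + b * l**4 + c * l**6

def global_min(a, b, c, l_max=1.5, n=20001):
    """Location of the global minimum on a dense grid (l_P >= 0)."""
    l = np.linspace(0.0, l_max, n)
    return l[np.argmin(V(l, a, b, c))]

if __name__ == "__main__":
    # b > 0: the order parameter turns on continuously as a(T) crosses zero
    print("b = +1, c = +0.5 (second-order-like):")
    for a in (-0.2, -0.05, 0.0, 0.05, 0.2):
        print(f"  a = {a:+.2f}  ->  <l_P> = {global_min(a, 1.0, 0.5):.3f}")
    # b < 0, c > 0: <l_P> jumps discontinuously (here near a = -b^2/(2c) = -0.5)
    print("b = -1, c = +1 (first-order-like):")
    for a in (-0.8, -0.6, -0.5, -0.4, -0.2):
        print(f"  a = {a:+.2f}  ->  <l_P> = {global_min(a, -1.0, 1.0):.3f}")
```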
In this discussion, the last term of the potential is irrelevant, but it might be needed to reproduce the result of lattice simulations. Footnote 9: \(a(T_{C})=b(T_{C})=0\) corresponds to the phase boundary of the first and second-order phase transitions, called tricritical point. For a small number of colors, there exist lattice simulations for the confinement-deconfinement transition in the \(Sp(2N_{c})\) pure Yang-Mills theory [65; 66]. It has been found that the phase transition is of the second-order for \(N_{c}=1\), while it is of the first order for \(N_{c}=2,3\). There is also interesting theoretical progress to clarify the order of the confinement-deconfinement phase transition in the \(Sp(2N_{c})\) pure Yang-Mills theory. The functional RG approach reveals that the phase transition is of the first order [66] for \(N_{c}=2\). Ref. [108] showed that the first-order confinement-deconfinement phase transition is confirmed for \(N_{c}>1\) by studying the quantum phase transition in supersymmetric gauge theories under the conjecture that the thermal confinement-deconfinement transition is smoothly connected by the quantum phase transition against gluino mass deformation. The similar behavior has been found in the \(SU(N_{c})\) pure Yang-Mills theory. A lattice simulation has reported that the confinement-deconfinement phase transition in the \(SU(2)\) pure Yang-Mills theory is of the second-order, and its universality class is in good agreement with that of the three-dimensional Ising model [109] (note that \(SU(2)=Sp(2)\) in our notation). On the other hand, the phase transition becomes the first order when the number of colors is sufficiently large, \(N_{c}>2\)[110; 111; 112; 113; 114]. Some theoretical approaches also indicate the first-order phase transition [115; 116]. It is conceivable that pure Yang-Mills theories with large color degrees of freedom generally lead to the first order phase transitions, and the nature of the phase transitions is adequately captured by the large \(N_{c}\) expansion, independent of the detail structure of \(G\). In fact, at zero temperature, the large \(N_{c}\) behaviors of pure Yang-Mills theories with \(G=SU(N_{c}),SO(N_{c}),Sp(2N_{c})\) are equivalent in the sense that the expectation value of the Wilson loop is the same, as explicitly demonstrated in ref. [117]. Although there is no direct proof of this similarity in the finite-temperature field theory, we assume that it is maintained at finite temperature. That is, thermodynamical properties of confinement-deconfinement transitions in pure Yang-Mills theories are independent of \(G\) in the large \(N_{c}\) limit. We utilize this assumption and the input of lattice simulations for the \(SU(N_{c})\) pure Yang-Mills theory to determine the parameters in Eq. (13) for \(G=Sp(2N_{c})\). Let us fit the Polyakov loop potential under the assumption that the potential for the \(Sp(2N_{c})\) pure Yang-Mills theory has the same form as that of \(SU(N_{c})\) at a large \(N_{c}\) except for the center symmetry.10 Refs. [52; 118] have argued that the result of lattice simulations for the first-order confinement-deconfinement transition can be phenomenologically described by the four and six-dimensional Polyakov loop potential in Eq. (13) at a large \(N_{c}\). Following ref. 
[118], the Polyakov loop potential for the \(SU(8)\) pure Yang-Mills theory is fitted by the following parameterization, Footnote 10: Since the center symmetries of the \(Sp(2N_{c})\) and \(SU(N_{c})\) pure Yang-Mills theories are \(Z_{2}\) and \(Z_{N_{c}}\), respectively, a replacement \(l_{P}^{2}\to\|l_{P}\|^{2}\) is required. However, this replacement does not affect the following analysis because thermodynamical quantities do not change. \[a_{8}(T) = a_{80}+a_{81}\left(\frac{T_{\text{con}}}{T}\right)+a_{82}\left( \frac{T_{\text{con}}}{T}\right)^{2} \tag{14}\] \[\quad+a_{83}\left(\frac{T_{\text{con}}}{T}\right)^{3}+a_{84}\left( \frac{T_{\text{con}}}{T}\right)^{4},\] \[a_{80}=28.7,\ a_{81}=-69.8,\ a_{82}=134,\ a_{83}=-180, \tag{15}\] \[a_{84}=56.1,\ b_{8}=90.5,\ c_{8}=157,\ d_{8}=-68.9, \tag{16}\] where \(T_{\rm con}\) is the critical temperature of the confinement-deconfinement phase transition at which two free energy minima of \(\langle l_{P}\rangle=0\) and \(\langle l_{P}\rangle\neq 0\) separated by a potential barrier are degenerate. In order to translate the above fitting into the case of the \(Sp(2N_{c})\) pure Yang-Mills theory, it is needed to take account of the change of color degrees of freedom. Since the number of degrees of freedom for \(Sp(2N_{c})\) is \(N_{c}(2N_{c}+1)\) while that of \(SU(N_{c})\) is \(N_{c}^{2}-1\), the Polyakov loop potential for \(Sp(2N_{c})\) in Eq. (13) may be fitted by \[a(T)=a_{0}+a_{1}\left(\frac{T_{\rm con}}{T}\right)+a_{2}\left( \frac{T_{\rm con}}{T}\right)^{2}\] \[\qquad\qquad+a_{3}\left(\frac{T_{\rm con}}{T}\right)^{3}+a_{4} \left(\frac{T_{\rm con}}{T}\right)^{4}, \tag{17}\] \[a_{i}(T)=\frac{N_{c}(2N_{c}+1)}{63}a_{8i}(T)\quad(i=0,\cdots,4),\] (18) \[b=\frac{N_{c}(2N_{c}+1)}{63}b_{8},\] (19) \[c=\frac{N_{c}(2N_{c}+1)}{63}c_{8},\ d=\frac{N_{c}(2N_{c}+1)}{63} d_{8}. \tag{20}\] Here, we have assumed the color dependence of the potential as \(V_{\rm pure}\propto N_{c}^{2}-1\) for \(SU(N_{c})\) and \(V_{\rm pure}\propto N_{c}(2N_{c}+1)\) for \(Sp(2N_{c})\), which is justified at least for the \(SU(N_{c})\) pure Yang-Mills theory with a large \(N_{c}\)[52]. The total Polyakov loop potential is schematically decomposed as \[V_{P}(l_{P},T)=V_{\rm pure}(l_{P},T,N_{c})+V_{\rm matter}(l_{P},T,N_{c},N_{f})\,, \tag{21}\] where \(V_{\rm pure}\) and \(V_{\rm matter}\) represent contributions from the gauge field in Eq. (13) and from dynamical matter fields in the fundamental representation, respectively. The first term in the potential preserves the center symmetry, while the second term breaks it explicitly. It is clear that \(V_{\rm pure}(l_{P},T,N_{c})\propto 2N_{c}^{2}\) at a large \(N_{c}\). We assume that the second term is proportional to \(N_{f}N_{c}\) in the large \(N_{c}\) limit with a fixed \(N_{f}\). ### Gravitational wave signals We now discuss GW signals generated from the cosmological first-order phase transition associated with the confinement of the \(Sp(2N_{c})\) gauge theory in the large \(N_{c}\) limit (see refs. [119; 120; 121] and references therein for reviews of GW signals generated by the first-order phase transition). Since new fields introduced in the composite Higgs model possess SM gauge quantum numbers, we assume that they share the same temperature as that of the SM thermal plasma. When the cosmic temperature is high enough, the Polyakov loop potential has the center symmetry breaking minimum at \(\langle l_{P}\rangle\neq 0\) corresponding to the deconfinement phase. 
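Before following the cosmological history, the fitted pure-glue potential defined by Eqs. (13) and (17)-(20) can be collected into a single function for later numerical estimates. The sketch below is a direct transcription, with the \(SU(8)\) coefficients taken verbatim from Eqs. (15)-(16); the matter contribution \(V_{\rm matter}\) of Eq. (21) is not modeled here.

```python
import numpy as np

# SU(8) fit coefficients quoted in Eqs. (15)-(16) (taken verbatim from the text)
A8 = (28.7, -69.8, 134.0, -180.0, 56.1)
B8, C8, D8 = 90.5, 157.0, -68.9

def sp_coeffs(Nc):
    """Rescaling of Eqs. (18)-(20): SU(8) -> Sp(2Nc) by the ratio of gluon d.o.f."""
    r = Nc * (2 * Nc + 1) / 63.0          # dim Sp(2Nc) / dim SU(8)
    return tuple(r * a8 for a8 in A8), r * B8, r * C8, r * D8

def V_pure_over_T4(lP, T_over_Tcon, Nc):
    """Pure-gauge Polyakov loop potential of Eq. (13) divided by T^4,
    with a(T) given by the polynomial of Eq. (17)."""
    a, b, c, d = sp_coeffs(Nc)
    x = 1.0 / T_over_Tcon                 # x = T_con / T
    aT = sum(ai * x**i for i, ai in enumerate(a))
    return -0.5 * aT * lP**2 + b * lP**4 + c * lP**6 + d * lP**8

if __name__ == "__main__":
    lP = np.linspace(0.0, 1.0, 5)
    print(V_pure_over_T4(lP, T_over_Tcon=1.1, Nc=9))   # just above T_con, Nc = 9
```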
As the temperature cools down due to the cosmic expansion, a metastable local minimum appears at \(\langle l_{P}\rangle=0\). At \(T=T_{\rm con}\), two minima are degenerate and separated by a potential barrier. For a lower temperature \(T<T_{\rm con}\), the center symmetry preserving minimum \(\langle l_{P}\rangle=0\) becomes energetically favorable. Bubbles then nucleate at some nucleation temperature \(T_{n}\) when tunneling takes place. The nucleation temperature can be roughly estimated in the following way. The tunneling probability per unit time and per unit volume is expressed as \[\Gamma=\mathcal{A}e^{-S_{E}}, \tag{22}\] where \(S_{E}\) is the Euclidean action evaluated on the classical configuration called the bounce, and \(\mathcal{A}\) is a prefactor arising from fluctuations around the bounce configuration. When the temperature is high enough, \(S_{E}\) is obtained from the \(O(3)\)-symmetric bounce configuration [122], \[S_{E}=\frac{S_{3}}{T},\quad S_{3}=\int\mathrm{d}\bar{r}\,4\pi \bar{r}^{2}\left[\frac{1}{2}\left(\frac{dl_{P}^{B}}{d\bar{r}}\right)^{2}+V_{P} (l_{P}^{B},T)\right]. \tag{23}\] Here, the length scale is normalized by the temperature, \(\bar{r}\equiv rT\), where \(r\) is the radial coordinate in three-dimensional polar coordinates. In the above expression, \(l_{P}^{B}\) is the solution of the following differential equation: \[\frac{\mathrm{d}^{2}l_{P}^{B}}{\mathrm{d}\bar{r}^{2}}+\frac{2}{ \bar{r}}\frac{\mathrm{d}l_{P}^{B}}{\mathrm{d}\bar{r}}-\frac{\partial V_{P}(l_{P} ^{B},T)}{\partial l_{P}^{B}}=0\,, \tag{24}\] under the boundary conditions, \[\left.\frac{\mathrm{d}l_{P}^{B}}{\mathrm{d}\bar{r}}\right|_{\bar{r} =0}=0\,,\quad l_{P}^{B}(\bar{r}\rightarrow\infty)=l_{PF}\,. \tag{25}\] Here, \(l_{PF}\neq 0\) is the position of the metastable local minimum of \(V_{P}\). By dimensional analysis, we set \(\mathcal{A}\sim T^{4}\). The Hubble parameter during the radiation-dominated era is \[H^{2}(T)=\frac{\rho_{\rm rad}}{3M_{\rm Pl}^{2}}\,,\quad\rho_{ \rm rad}=\frac{\pi^{2}}{30}g_{*}(T)T^{4}\,, \tag{26}\] where \(M_{\rm Pl}\simeq 2.4\times 10^{18}\,\mathrm{GeV}\) denotes the reduced Planck mass, and \(g_{*}(T)=g_{\rm SM*}(T)+g_{\rm new*}(T)\) is the effective number of relativistic species of the thermal plasma before the confinement-deconfinement transition. Here, \(g_{\rm SM*}\simeq 106.75\)[123] is the value for the SM sector, while \(g_{\rm new*}\) is that of the new sector introduced in the composite Higgs model. In the large \(N_{c}\) limit, \(g_{\rm new*}\simeq 4N_{c}^{2}+\mathcal{O}(N_{c}N_{f})\), where the factor of 2 comes from the two polarization degrees of freedom of the gauge field. The nucleation temperature can be roughly estimated by \(\Gamma(T_{n})=H^{4}(T_{n})\). Assuming that \(S_{3}(T)/T\) is a monotonic function around \(T=T_{n}\), the condition leads to \[\left.\frac{S_{3}}{T}\right|_{T=T_{n}}=137-2\log\left(\frac{g_{* }(T_{n})}{100}\right)-4\log\left(\frac{T_{n}}{1\,\mathrm{TeV}}\right). \tag{27}\] As the right hand side only depends on the cosmic temperature logarithmically, we find \(T_{n}\simeq T_{\rm con}\). The critical temperature \(T_{\rm con}\) is around the confinement scale, which is set to be \(T_{\rm con}=1\,{\rm TeV}\). The condition is less sensitive to the precise values of \(T_{\rm con}\) and \(g_{*}\) due to the logarithmic dependence. In order to compute GW signals, one needs to estimate the amount of the released energy of the first-order phase transition transferred into the bulk kinetic energy of the fluid and the mean separation of bubbles.
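Before turning to these quantities, note that the nucleation condition (27) is straightforward to solve once \(S_{3}/T\) is known as a function of temperature. The sketch below uses a placeholder model for \(S_{3}(T)/T\) (not derived from the potential above) that diverges as \(T\to T_{\rm con}\), merely to illustrate the procedure and the statement \(T_{n}\simeq T_{\rm con}\); the constant \(A\) and the value of \(g_{*}\) (illustrated for \(N_{c}=9\)) are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

T_CON = 1.0                    # confinement temperature in TeV (set to 1 TeV in the text)
G_STAR = 106.75 + 4 * 9**2     # g_SM* + g_new* ~ 4 Nc^2, illustrated for Nc = 9

def rhs(Tn):
    """Right-hand side of the nucleation condition, Eq. (27); Tn in TeV."""
    return 137.0 - 2.0 * np.log(G_STAR / 100.0) - 4.0 * np.log(Tn / 1.0)

def s3_over_t(T, A=2.0):
    """Placeholder model for S3/T (NOT from the paper): diverges as T -> T_con
    from below, as expected when the two minima become degenerate at T_con."""
    return A / (1.0 - T / T_CON)**2

if __name__ == "__main__":
    Tn = brentq(lambda T: s3_over_t(T) - rhs(T), 0.5 * T_CON, 0.999 * T_CON)
    print(f"nucleation temperature T_n ~ {Tn:.3f} TeV  (T_con = {T_CON} TeV)")
```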
The ratio of the amount of the released energy to the energy density of the fluid is parameterized as \[\alpha\equiv\left[\frac{\Delta V_{P}-\frac{1}{4}T\frac{\partial\Delta V_{P}}{ \partial T}}{\rho_{\rm rad}}\right]_{T=T_{n}}\,. \tag{28}\] Here, \(\Delta V_{P}\equiv V_{P}(l_{PF})-V_{P}(l_{Pt})\) with \(l_{Pt}\equiv l_{P}^{B}(\bar{r}\to 0)\) being the tunneling point obtained by solving the bounce equation. The mean bubble separation normalized by the Hubble parameter is roughly estimated by the rate of the bubble nucleation probability, \[\widetilde{\beta}\equiv-\frac{1}{H(t_{n})}\frac{\rm d}{\rm d t}\left(\frac{S_ {3}}{T}\right)_{t=t_{n}}, \tag{29}\] where \(t\) is the cosmic time and \(t_{n}\) is the time when bubbles are nucleated. Using the relation \({\rm d}T/{\rm d}t=-TH(T)\), Eq. (29) can be rewritten as \[\widetilde{\beta}=\left.T\frac{\rm d}{\rm dT}\left(\frac{S_{3}}{T}\right) \right|_{T=T_{n}}. \tag{30}\] The other important quantity is the terminal bubble wall velocity \(v_{w}\). In weakly-coupled theories, the terminal bubble wall velocity can be estimated by computing the friction from the thermal plasma [124; 125; 126; 127]. On the other hand, in strongly-coupled theories, it is challenging to explicitly compute the friction. For this reason, we here simply assume that Jouguet detonation bubbles, where \(v_{w}>c_{s}=1/\sqrt{3}\) with \(c_{s}\) being the sound speed, are realized. Then the bubble wall velocity is determined by the following relation [128]: \[v_{w}=\frac{\sqrt{2\alpha/3+\alpha^{2}}+\sqrt{1/3}}{1+\alpha}. \tag{31}\] Note that if the actual wall velocity is slower than \(c_{s}\), the resultant GW signals are suppressed. The total amplitude of GW signals can be schematically decomposed as \[\Omega_{\rm GW}=\Omega_{\rm coll}+\Omega_{\rm sound}+\Omega_{\rm tur}, \tag{32}\] where \(\Omega_{\rm coll},\ \Omega_{\rm sound}\) and \(\Omega_{\rm tur}\) are contributions from bubble collisions, the sound wave and turbulence of the thermal plasma, respectively. In our analysis, we simply assume that most of the released energy is converted into the hot thermal plasma by frictions acting on the wall. Then, the dominant contribution comes from sound waves or turbulence of the thermal plasma. In this case, numerical calculations [119] reveal that the contribution of sound waves is considerably larger than that of turbulence. Hence, we focus on the contribution of sound waves in our analysis. The contribution to \(\Omega_{\rm sound}\) is estimated by numerical calculations [38] and given by \[\Omega_{\rm sound}h^{2}=\Omega_{\rm peak}h^{2}\left(\frac{f}{f_{ \rm sound}}\right)^{3}\left(\frac{7}{4+3(f/f_{\rm sound})^{2}}\right)^{\frac{7} {2}},\] \[\Omega_{\rm peak}h^{2}=2.65\times 10^{-16}\left(\frac{10^{5}}{ \widetilde{\beta}}\right)\left(\frac{\kappa_{\rm sound}^{2}}{10^{-5}}\right) \left(\frac{\alpha}{1+\alpha}\right)^{2}\left(\frac{100}{g_{*}}\right)^{\frac{1 }{3}}, \tag{33}\] \[f_{\rm sound}=1.9\times 10^{1}\,{\rm Hz}\left(\frac{\widetilde{ \beta}}{10^{5}}\right)\left(\frac{1}{v_{w}}\right)\left(\frac{T_{n}}{1\,{\rm TeV }}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}}.\] Here, \(h,\ \kappa_{\rm sound}\) and \(f\) are the dimensionless Hubble parameter at present time, the fraction of the released energy injected into the energy of GW signals and the frequency of GW signals, respectively. 
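Equations (31) and (33) translate directly into code. In the sketch below, \(\kappa_{\rm sound}\) is treated as an external input (its decomposition is given next in the text) with a placeholder value, and \(\alpha\), \(T_{n}\) and \(g_{*}\) are illustrative assumptions; only \(\widetilde{\beta}=4.3\times 10^{4}\) is taken from the value quoted later for \(N_{c}=9\).

```python
import numpy as np

def v_jouguet(alpha):
    """Jouguet detonation wall velocity, Eq. (31)."""
    return (np.sqrt(2.0 * alpha / 3.0 + alpha**2) + np.sqrt(1.0 / 3.0)) / (1.0 + alpha)

def omega_sound_h2(f, alpha, beta_tilde, kappa_sound, Tn_TeV, g_star):
    """Sound-wave GW spectrum of Eq. (33); frequency f in Hz."""
    vw = v_jouguet(alpha)
    omega_peak = (2.65e-16 * (1e5 / beta_tilde) * (kappa_sound**2 / 1e-5)
                  * (alpha / (1.0 + alpha))**2 * (100.0 / g_star)**(1.0 / 3.0))
    f_sound = 1.9e1 * (beta_tilde / 1e5) * (1.0 / vw) * Tn_TeV * (g_star / 100.0)**(1.0 / 6.0)
    x = f / f_sound
    return omega_peak * x**3 * (7.0 / (4.0 + 3.0 * x**2))**3.5

if __name__ == "__main__":
    # alpha, kappa_sound, Tn and g_star below are placeholders; beta_tilde = 4.3e4
    # is the value quoted in the text for Nc = 9.
    alpha, beta_tilde, kappa_s, Tn, gstar = 0.1, 4.3e4, 3e-3, 1.0, 430.0
    for f in (1.0, 10.0, 100.0):   # Hz
        print(f"f = {f:6.1f} Hz :  Omega_sw h^2 = "
              f"{omega_sound_h2(f, alpha, beta_tilde, kappa_s, Tn, gstar):.2e}")
```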
The fraction \(\kappa_{\rm sound}\) can be further decomposed as \[\kappa_{\rm sound}=\sqrt{\tau_{\rm sound}}\,\kappa\,, \tag{34}\] where \(\tau_{\rm sound}\) and \(\kappa\) are the sound-wave period normalized by the inverse of the Hubble parameter and the efficiency coefficient, respectively. Refs. [129] have pointed out that a suppression factor arises if the sound-wave period is shorter than the inverse of the Hubble parameter during the phase transition. The sound-wave period \(\tau_{\rm sound}\) is expressed as [129; 130; 131] (see also [132]) \[\tau_{\rm sound}=\min\left\{1,\frac{(8\pi)^{\frac{1}{3}}{\rm Max}\{v_{w},c_{s} \}}{\widetilde{\beta}\widetilde{U}_{f}}\right\}\,, \tag{35}\] with \(\bar{U}_{f}\) being the root-mean-square four velocity of the thermal plasma [38] which is approximately given by \[\bar{U}_{f}^{2}\simeq\frac{3}{4}\frac{\alpha}{1+\alpha}\kappa\,. \tag{36}\] For the Jouguet detonation bubble, the efficiency coefficient can be fitted by [128] \[\kappa=\frac{\sqrt{\alpha}}{0.135+\sqrt{0.98+\alpha}}\,. \tag{37}\] Let us discuss the detectability of GW signals generated by the first-order confinement-deconfinement phase transition in our benchmark composite Higgs model at a large \(N_{c}\). By calculating the bounce action (23) and evaluating the nucleation temperature (27), we find \(\alpha\) and \(\widetilde{\beta}\). Since the latent heat is proportional to \(4N_{c}^{2}\) for the \(Sp(2N_{c})\) pure Yang-Mills theory, \(\alpha=\mathcal{O}(0.1)\) is realized. It turns out that the duration of the phase transition is maximized at \(N_{c}=9\) which leads to \(\widetilde{\beta}=4.3\times 10^{4}\). With this optimized parameter set, the peak amplitude of GW signals is \(\Omega_{\rm peak}h^{2}\simeq 1.4\times 10^{-15}\) which receives a strong suppression due to a large \(\widetilde{\beta}\), while the peak frequency is \(f_{\rm sound}\sim 10\,\)Hz. Unfortunately, such GW signals are too weak to be detected by future-planned experiments. ## IV Discussion In the present paper, we have discussed cosmological phase transitions in various composite Higgs models, each of which may show a confinement-deconfinement transition and a phase transition associated with the spontaneous breaking of a global symmetry that realizes the SM Higgs field as a pNGB. To determine the order of the phase transition for a global symmetry breaking, we have assumed the argument of universality and studied the effective linear sigma model. The effect of infrared fluctuations on the phase transition dynamics was taken into account by the RG analysis with \(\epsilon\)-expansion at the one-loop order. For a confinement-deconfinement phase transition, we took the UV-completed model proposed in ref. [7] as a benchmark. The model consists of a strongly-coupled \(Sp(2N_{c})\) gauge theory. Although the presence of dynamical matter fields in the fundamental representation makes it difficult to investigate the phase transition, taking a large number of color degrees of freedom with a fixed number of flavors, we have argued that the effect of dynamical matter fields is subdominant, and the first-order confinement-deconfinement phase transition takes place, as it is favored in the \(Sp(2N_{c})\) pure Yang-Mills theory for \(N_{c}>1\). The amplitude of GW signals generated by the first-order phase transition is not within the reach of future-planned experiments. So far, we have separately discussed the phase transition associated with a global symmetry breaking and the confinement-deconfinement transition. 
If the size of a gauge group is sufficiently small, it is possible to construct the effective theory which simultaneously describes those phase transitions in terms of the Polyakov loop and the quark condensate, assuming that the global symmetry breaking is realized by the NJL mechanism. This Polyakov-Nambu-Jona-Lasinio (PNJL) model has been applied to the chiral and confinement phase transitions in the ordinary QCD [133] and QCD-like theories [51] (see also ref. [134] for an excellent review). The usage of the PNJL model is promising to simultaneously analyze both phase transitions in a UV-completed composite Higgs model. One cannot apply this approach with a single Polyakov loop \(l_{P}\) alone in contrast to the Polyakov loop model when the number of color degrees of freedom is large, as argued in ref. [51]. Then, the analysis of two phase transitions requires an extension of the PNJL model such as the matrix model approach [135; 136; 137; 138]. ## Acknowledgement We would like to thank Hiromasa Watanabe for valuable discussions. KF is supported by JSPS Grant-in-Aid for Research Fellows Grant No. 22J00345. YN is supported by Natural Science Foundation of China No. 12150610465. The work of RS is supported in part by JSPS KAKENHI No. 23K03415. ## Appendix A NJL analysis of \(SU(4)\left(U(4)\right)\to Sp(4)\) We here consider a UV completed model proposed in ref. [7] and discuss the phase transition associated with global symmetry breaking, \(SU(4)\left(U(4)\right)\to Sp(4)\). As discussed in ref. [7], the phase transition is induced by four-Fermi interactions. We extend their analysis by including the effect of thermal fluctuations of new quarks to determine the order of the phase transition. The gauge group is \(G=Sp(2N_{c})\), and the model contains four Weyl fermions \(Q_{i}\) (\(i=1,\cdots,4\)) in the fundamental representation. Since the number of flavors is even and the fundamental representation of \(Sp(2N_{c})\) is pseudo-real, the theory does not suffer from both Witten and chiral anomalies. We introduce the following four-Fermi interactions of the NJL model: \[\mathcal{L}_{Q}= \frac{G_{1}}{2N_{c}}\left(Q_{i}^{a}Q_{ja}\bar{Q}_{j}^{b}\bar{Q}_ {ib}\right)\] \[+\frac{G_{2}}{8N_{c}}\left(Q_{i}^{a}Q_{ja}Q_{k}^{b}Q_{lb}\epsilon^ {ijkl}+{\rm h.c.}\right). \tag{10}\] Here, \(\epsilon^{ijkl}\) is the Levi-Civita symbol, and \(\bar{Q}\) denotes the complex conjugate of \(Q\). The contraction with respect to \(a,b\) is taken by the \(Sp(2N_{c})\) invariant metric tensor \(J_{ab}\) whose matrix form is defined by \[J=\begin{pmatrix}0&\mathbf{1}_{N_{c}\times N_{c}}\\ -\mathbf{1}_{N_{c}\times N_{c}}&0\end{pmatrix}, \tag{11}\] where \(\mathbf{1}_{N_{c}\times N_{c}}\) denotes the \(N_{c}\times N_{c}\) unit matrix. The interactions (10) possess the \(SU(4)\) flavor symmetry whose transformation is defined as \(Q_{i}\to U_{ij}^{(4)}Q_{j}\) where \(U_{ij}^{(4)}\) is a \(SU(4)\) matrix. For \(G_{2}=0\), the flavor symmetry is enhanced to \(U(4)\). We evaluate the mean-field values of chiral condensate parameterized by auxiliary fields \(M_{ij}\equiv\langle Q_{i}^{a}Q_{ja}\rangle/N_{c}\). At the tree level, the potential for \(M_{ij}\) is \[V_{\rm tree}^{\rm NJL}(M_{ij})= \frac{G_{1}N_{c}}{2}{\rm Tr}\left[MM^{\dagger}\right]\] \[+\frac{G_{2}N_{c}}{8}\left(\epsilon^{ijkl}M_{ij}M_{kl}+{\rm h.c.} \right). \tag{12}\] Following the original analysis of the NJL model, we decompose \(Q_{i}^{a}Q_{ja}\) into mean-field values and fluctuations, \(Q_{i}^{a}Q_{ja}\to M_{ij}+Q_{i}^{a}Q_{ja}\). 
Then, the effective interactions of \(Q\) with \(M\) are described by \[\mathcal{L}_{\rm int}= -V_{\rm tree}^{\rm NJL}+\frac{G_{1}}{2}\left\{\mathrm{Tr}\left[Q^{a}Q_{a}M^{\dagger}\right]+\mathrm{h.c.}\right\}\] \[+\frac{G_{2}}{4}\left(\epsilon^{ijkl}Q_{i}Q_{j}M_{kl}+\mathrm{h.c.}\right). \tag{10}\] Since \(M_{ij}\) is a \(4\times 4\) complex anti-symmetric matrix, using the \(SU(4)\) symmetry, one can parametrize the matrix as \[M=\begin{pmatrix}0&M_{1}&0&0\\ -M_{1}&0&0&0\\ 0&0&0&M_{2}\\ 0&0&-M_{2}&0\end{pmatrix}, \tag{11}\] where \(M_{1}\) and \(M_{2}\) are generally complex. By integrating out the \(Q\) fields, one obtains the zero-temperature effective potential up to the one-loop order, \[V_{\rm zero}^{\rm NJL} =V_{\rm tree}^{\rm NJL}+V_{\rm one-loop}^{\rm NJL}\,,\] \[V_{\rm tree}^{\rm NJL} =N_{c}G_{1}\left(|M_{1}|^{2}+|M_{2}|^{2}\right)+N_{c}G_{2}(M_{1}M_{2}+\mathrm{h.c.})\,,\] \[V_{\rm one-loop}^{\rm NJL} =-4N_{c}\int\frac{\mathrm{d}^{3}k}{(2\pi)^{3}}\sum_{i=1,2}\sqrt{k^{2}+|m_{i}|^{2}}\,,\] \[m_{1} \equiv e^{i\phi_{1}}\left|G_{1}M_{1}^{*}+G_{2}M_{2}\right|,\] \[m_{2} \equiv e^{i\phi_{2}}\left|G_{1}M_{2}^{*}+G_{2}M_{1}\right|\,.\] Here, \(e^{i\phi_{1,2}}\) are arbitrary phases. The one-loop effective potential is UV divergent and requires renormalization. Following ref. [51], we regularize the UV divergence by inserting a sharp three-dimensional momentum cutoff \(\Lambda_{\rm 3D}\). Such a regularization scheme is different from that of ref. [7], where a four-dimensional momentum cutoff is introduced. With our regularization scheme, the one-loop effective potential in Eq. (10) is evaluated as \[V_{\rm one-loop}^{\rm NJL}= -\sum_{i=1,2}\frac{N_{c}\Lambda_{\rm 3D}^{4}}{4\pi^{2}}\bigg{[}(2+\xi_{i}^{2})\sqrt{1+\xi_{i}^{2}}\] \[+\frac{\xi_{i}^{4}}{2}\log\Bigg{(}\frac{\sqrt{1+\xi_{i}^{2}}-1}{\sqrt{1+\xi_{i}^{2}}+1}\Bigg{)}\bigg{]},\quad\xi_{i}^{2}\equiv\frac{|m_{i}|^{2}}{\Lambda_{\rm 3D}^{2}}\,. \tag{12}\] It is useful to rewrite \(V_{\rm tree}^{\rm NJL}\) in terms of \(m_{1,2}\), \[V_{\rm tree}^{\rm NJL}=\frac{N_{c}}{G_{1}^{2}-G_{2}^{2}}\left[G_{1}\left(|m_{1}|^{2}+|m_{2}|^{2}\right)-G_{2}\left(m_{1}m_{2}+\mathrm{h.c.}\right)\right]. \tag{13}\] Since \(m_{1}\) and \(m_{2}\) are complex variables, \(V_{\rm zero}^{\rm NJL}\) depends on three real fields, \(|m_{1,2}|\) and the relative phase of \(m_{1,2}\). A potential for the relative phase comes from the second term of Eq. (13), and is minimized for \(m_{1}m_{2}=\pm|m_{1}||m_{2}|\) for \(G_{2}/(G_{1}^{2}-G_{2}^{2})\gtrless 0\), respectively. Focusing on the minimum of the relative phase of \(m_{1,2}\), \(V_{\rm tree}^{\rm NJL}\) can be reexpressed as \[V_{\rm tree}^{\rm NJL}=\frac{N_{c}G_{1}\Lambda_{\rm 3D}^{2}}{G_{1}^{2}-G_{2}^{2}}\left(\xi_{1}^{2}+\xi_{2}^{2}\right)-\left|\frac{2N_{c}G_{2}\Lambda_{\rm 3D}^{2}}{G_{1}^{2}-G_{2}^{2}}\right|\xi_{1}\xi_{2}\,. \tag{14}\] Therefore, we effectively need to consider two fields \(|m_{1,2}|\) in our analysis. This discussion is applicable when we include the thermal effect, since the thermal potential of \(m_{1,2}\) only depends on \(|m_{1,2}|\) (see the concrete expression (12)). To investigate the parameter region where the chiral symmetry breaking, \(\xi_{1}=\xi_{2}\neq 0\), is realized, we numerically evaluate the zero-temperature potential, \(\bar{V}_{\rm zero}^{\rm NJL}(\bar{G}_{1},\bar{G}_{2},\xi_{1},\xi_{2})\equiv V_{\rm zero}^{\rm NJL}/(N_{c}\Lambda_{\rm 3D}^{4})\), with \(\bar{G}_{1}\equiv G_{1}\Lambda_{\rm 3D}^{2}\) and \(\bar{G}_{2}\equiv G_{2}\Lambda_{\rm 3D}^{2}\). Fig.
2 displays the phase diagram in terms of \(\bar{G}_{1}\) and \(\bar{G}_{2}\). Figure 2: The phase diagram in terms of \(\bar{G}_{1}\) and \(\bar{G}_{2}\). Symmetries of the ground state are \(SU(4)\) and \(Sp(4)\) for the white and blue colored regions, respectively. In the red colored region, the zero-temperature effective potential is not bounded from below. We can see from the figure that the chiral symmetry breaking, \(SU(4)\to Sp(4)\), takes place for sufficiently large four-Fermi interactions, while it does not for small interactions. In fact, the chiral symmetry breaking takes place when the following conditions are satisfied: \[\bar{G}_{1}>|\bar{G}_{2}|\,,\quad 2\pi^{2}\left(\frac{\bar{G}_{1}-|\bar{G}_{2}|}{\bar{G}_{1}^{2}-\bar{G}_{2}^{2}}\right)<1\,. \tag{15}\] The first condition is required for stability. We focus on the parameter region satisfying the conditions (15) in the following analysis. We shall discuss the chiral phase transition dynamics by including the effect of thermal fluctuations. Thermal corrections can be calculated by using the standard imaginary-time formulation of thermal field theory (see e.g. refs. [139, 54] for reviews). The thermal effective potential is given by \[V_{\rm th}^{\rm NJL}=-8N_{c}T\int\frac{\mathrm{d}^{3}k}{(2\pi)^{3}}\sum_{i=1,2}\log\left(1+\exp\left[-\frac{E_{k}^{i}}{T}\right]\right),\] \[E_{k}^{i}\equiv\sqrt{k^{2}+\Theta(\Lambda_{\rm 3D}-k)\Lambda_{\rm 3D}^{2}\xi_{i}^{2}}\,, \tag{16}\] where \(k\) denotes the magnitude of the three-dimensional momentum, and \(\Theta(x)\) is the Heaviside step function. In the above expression of the thermal potential, we introduce a 3D momentum cutoff scale, following refs. [50; 51]. Although the thermal potential is UV finite due to the Boltzmann suppression, this cutoff treatment may be required because we introduce it for the zero-temperature potential (see footnote 11). However, we stress that the first-order chiral phase transition takes place for sufficiently large Fermi constants if we do not impose this cutoff treatment for the finite-temperature effective potential. This conclusion is also found in ref. [51]. Footnote 11: For example, in an \(SU(3)\) gauge theory with fermions in the adjoint representation, it is found in ref. [50] that this treatment is required to obtain a clear distinction between the confinement-deconfinement and chiral phase transitions. In the numerical analysis, it is convenient to parameterize the total effective potential, \(V_{\rm tot}^{\rm NJL}=V_{\rm zero}^{\rm NJL}+V_{\rm th}^{\rm NJL}\), as \[\bar{V}_{\rm tot}^{\rm NJL}(\bar{G}_{1},\bar{G}_{2},\tilde{T},\xi_{1},\xi_{2})\equiv\frac{V_{\rm tot}^{\rm NJL}}{N_{c}\Lambda_{\rm 3D}^{4}}=\tilde{T}^{4}\bar{V}_{\rm th}^{\rm NJL}+\bar{V}_{\rm zero}^{\rm NJL}(\bar{G}_{1},\bar{G}_{2},\xi_{1},\xi_{2})\,, \tag{12}\] where \[\bar{V}_{\rm th}^{\rm NJL}=-\frac{4}{\pi^{2}}\int_{0}^{\infty}\mathrm{d}t\,t^{2}\sum_{i=1,2}\log\left(1+\exp\left[-\sqrt{t^{2}+\Theta\left(1/\tilde{T}-t\right)\frac{\xi_{i}^{2}}{\tilde{T}^{2}}}\right]\right), \tag{13}\] with \(\tilde{T}\equiv T/\Lambda_{\rm 3D}\). We numerically evaluate \(\bar{V}_{\rm tot}^{\rm NJL}\) within the range of \(-50\leq\bar{G}_{i}\leq 50\) (\(i=1,2\)), and investigate the temperature dependence of the potential minima. We find that there is no parameter region that leads to a first-order phase transition. Let us comment on the result of the current NJL analysis by comparing it with that of the analysis based on the argument of universality.
In section II.2, we have seen that the fluctuation-induced first-order phase transition is expected to take place for the symmetry breaking pattern \(U(2N)\to Sp(2N)\) for \(N\geq 2\). For \(\bar{G}_{2}=0\), the Lagrangian of the NJL model (1) possesses the enlarged \(U(4)\) symmetry, but the chiral phase transition obtained in this model is not of first order, which is in tension with the result based on the universality argument. In the NJL analysis, one takes into account the effect of thermal fluctuations of quarks on the chiral condensate, as we have explicitly done here, while thermal fluctuations of the chiral condensate itself are not considered because the mean-field approximation is assumed. Indeed, a second-order phase transition takes place in the analysis based on the argument of universality if we neglect fluctuations of the chiral condensate. Therefore, in the NJL model, one may not capture the important effect originating from fluctuations of the chiral condensate, which plays a central role in determining the order of the phase transition. It may be interesting to study the chiral phase transition by using the quark-meson model [140; 141], because thermal fluctuations of the chiral condensate, as well as fluctuations of quarks coupled to the chiral condensate, may be adequately included by the functional RG method.
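As a rough numerical illustration of the potential scan described above, the following minimal Python sketch evaluates the dimensionless potential \(\bar{V}_{\rm tot}^{\rm NJL}\) along the symmetric direction \(\xi_{1}=\xi_{2}\) and tracks the location of its global minimum as the temperature is lowered. The choice of couplings (\(\bar{G}_{1}=30\), \(\bar{G}_{2}=0\)), the temperatures, the grid, and the restriction to the symmetric slice are illustrative assumptions of this sketch, not part of the original analysis.

```python
import numpy as np
from scipy.integrate import quad

def v_zero(xi1, xi2, g1, g2):
    """Dimensionless zero-temperature potential V_zero / (N_c Lambda^4).
    g1, g2 are the dimensionless couplings G_i * Lambda^2 (Eqs. (12) and (14))."""
    tree = g1 / (g1**2 - g2**2) * (xi1**2 + xi2**2) \
         - abs(2.0 * g2 / (g1**2 - g2**2)) * xi1 * xi2
    loop = 0.0
    for xi in (xi1, xi2):
        r = np.sqrt(1.0 + xi**2)
        loop += -(1.0 / (4.0 * np.pi**2)) * (
            (2.0 + xi**2) * r
            + 0.5 * xi**4 * np.log((r - 1.0) / (r + 1.0) + 1e-300)
        )
    return tree + loop

def v_thermal(xi1, xi2, t):
    """Dimensionless thermal piece T~^4 * bar{V}_th with the 3D cutoff (Eq. (13))."""
    def integrand(x, xi):
        cut = xi**2 / t**2 if x < 1.0 / t else 0.0
        return x**2 * np.log(1.0 + np.exp(-np.sqrt(x**2 + cut)))
    total = 0.0
    for xi in (xi1, xi2):
        total += quad(integrand, 0.0, 50.0, args=(xi,),
                      points=[1.0 / t], limit=200)[0]
    return -t**4 * 4.0 / np.pi**2 * total

def global_minimum(g1, g2, t, grid=np.linspace(0.0, 3.0, 61)):
    """Global minimum of the potential on the symmetric slice xi1 = xi2 = xi."""
    vals = [v_zero(xi, xi, g1, g2) + v_thermal(xi, xi, t) for xi in grid]
    i = int(np.argmin(vals))
    return grid[i], vals[i]

# Illustrative cooling scan: the couplings satisfy the conditions (15),
# so a symmetry-breaking minimum should develop at low temperature.
for t in (1.0, 0.6, 0.3, 0.1):
    xi_min, _ = global_minimum(g1=30.0, g2=0.0, t=t)
    print(f"T/Lambda = {t:.2f}  ->  xi at global minimum = {xi_min:.2f}")
```

Tracking whether the minimum jumps discontinuously or moves smoothly away from the origin while cooling is the kind of diagnostic used to distinguish a first-order from a second-order transition in such a scan.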
2305.18304
Semantic-aware Digital Twin for Metaverse: A Comprehensive Review
To facilitate the deployment of digital twins in the Metaverse, the paradigm with semantic awareness has been proposed as a means for enabling accurate and task-oriented information extraction with inherent intelligence. However, this framework requires all devices in the Metaverse environment to be directly linked with the semantic model to enable faithful interpretation of messages. In contrast, this article introduces the digital twin framework, considering a smart industrial application, which enables semantic communication in conjunction with the Metaverse enabling technologies. The fundamentals of this framework are demonstrated on an industrial shopfloor management use case with a digital twin so as to improve its performance through semantic communication. An overview of semantic communication, Metaverse, and digital twins is presented. Integration of these technologies with the basic architecture as well as the impact on future industrial applications is presented. In a nutshell, this article showcases how semantic awareness can be an effective candidate in the implementation of digital twins for Metaverse applications.
Senthil Kumar Jagatheesaperumal, Zhaohui Yang, Qianqian Yang, Chongwen Huang, Wei Xu, Mohammad Shikh-Bahaei, Zhaoyang Zhang
2023-05-12T09:19:30Z
http://arxiv.org/abs/2305.18304v1
# Semantic-aware Digital Twin for Metaverse: A Comprehensive Review ###### Abstract To facilitate the deployment of digital twins in the Metaverse, the paradigm with semantic awareness has been proposed as a means for enabling accurate and task-oriented information extraction with inherent intelligence. However, this framework requires all devices in the Metaverse environment to be directly linked with the semantic model to enable faithful interpretation of messages. In contrast, this article introduces the digital twin framework, considering a smart industrial application, which enables semantic communication in conjunction with the Metaverse enabling technologies. The fundamentals of this framework are demonstrated on an industrial shopfloor management use case with a digital twin so as to improve its performance through semantic communication. An overview of semantic communication, Metaverse, and digital twins is presented. Integration of these technologies with the basic architecture as well as the impact on future industrial applications is presented. In a nutshell, this article showcases how semantic awareness can be an effective candidate in the implementation of digital twins for Metaverse applications. Metaverse, Digital Twin, Semantic Communication, Internet of Everything, Extended Reality. ## I Introduction Replicating the physical world with the digital world has become feasible through digital twins, where users gain a real-time immersive experience in extended reality (XR). Digital twins are highly assistive in performing collaborative tasks, with the aid of three-dimensional (3D) simulations and the widespread application of artificial intelligence (AI) to learn from the environment, predict probable consequences, and enable actions in a smart way [1]. Furthermore, in order to ensure green energy solutions, preserve natural resources, address safety concerns, and support immersive remote communication, digital twin technology is playing an important role in the Metaverse. Across the globe, smart industries are taking advantage of this new trend of interconnectedness that enables the Metaverse platform. Almost every modeled physical object in the Metaverse can approach the status of its physical twin, including the interactions and relations between the physical and virtual objects. Through utilizing the potential of cloud services, digital twins and the Metaverse could eliminate the boundaries on the reach of their core capabilities. With limitless potential, it is feasible to track and analyze data from the connected environment to identify anomalies, patterns, and trends. As there is a paradigm shift in the way humans currently interact with the web, the emergence of Web 3.0 offers the potential for advancements. It includes the most recent iteration of the internet, which is more immersive, connected, and decentralized. With the new technologies currently available, the Internet will offer an experience closer to that of the actual world, where the virtual world's activities can affect the real world and therefore allow events to be replicated. To realize the Metaverse, key capabilities such as user collaboration, persistence, and interoperability are required. Owing to COVID-19, telework has introduced a new perspective on work, with several professions altering their operating procedures to fit a new reality.
For instance, medical consultations, remote surgeries, psychological evaluations, online classes, and the growing home-office work format are examples of applications where the Metaverse can be used to establish virtualized spaces that bring people together. In the virtual environments provided by the Metaverse, users can interact with digital twins and realize immersive 3D modeling, simulation, and data analytics. Additionally, the digital avatars in the Metaverse can interact with data about physical objects in the digital twins, which greatly helps to optimize the performance of the physical object or system in real-time. There are some recent works on digital twin-enabled systems [2, 3]. These works are mainly focused on wireless communication networks, for example, key design aspects for digital twin based beyond fifth generation (B5G) / sixth generation (6G) networks along with their architectural components [2] and digital twin networks for typical application scenarios [3]. In the Metaverse, B5G/6G networks can support real-time interactions between users and digital entities, such as avatars and virtual objects. When integrated seamlessly with digital twins, B5G/6G networks can provide the connectivity and processing power required to monitor machine performance in real-time and make adjustments on-the-fly to optimize its performance. To support seamless communication with digital twins, emerging technologies including integrated sensing and communication, federated learning, and massive multiple-input multiple-output can provide the connectivity and processing power required to monitor machine performance and make adjustments in real-time. Metaverse frameworks generalize this concept to more intelligent communication services that encompass digital twins as one of their core enabling technologies. Moreover, imparting semantic awareness to such systems introduces collaborative and cooperative frameworks at different layers of the network [4]. Fig. 1 shows an abstract representation of deploying digital twin interfaces in an industry 5.0 scenario and beyond with their associated states and rewards. The major contributions in the article are as follows: * This paper provides an overview of semantic communication, Metaverse, and digital twins, along with their integration with the basic architecture. * An understanding of the potential impact of semantic-aware digital twins on Metaverse research is provided for communication engineers. * This paper showcases how semantic awareness can be an effective candidate in the implementation of digital twins for the Metaverse in future industrial applications. * The importance of investigating the potential challenges and demands associated with the proposed semantic-aware Metaverse framework is emphasized. ## II System Architecture and Requirements The Metaverse is evolving into a dominant technology that accelerates digital transformation, and the research community envisions immense value in the services it offers. For instance, significant performance improvements have been realized throughout the communication channel as a result of the extensive and effective training of machine learning (ML) / deep learning (DL) models. In particular, these strategies are extremely promising, since they offer a wide range of advantages in multimedia communication.
The interaction of the Metaverse with beyond fifth-generation (B5G) / sixth-generation (6G) services will be essential for it to realize its full potential. Further, among the communication service providers, the Metaverse drives innovations in the flawless digital recreation of real networks through distributed cloud architectures. Reliable means of multimedia communication are essential in the Metaverse. It depends on numerous enabling technologies to achieve a robust communication system, such as high-speed, low-latency, and secure data transmission. The Metaverse research community has investigated a wide range of architectural solutions inspired by earlier research outcomes to engineer multimedia communication. Trustworthy and reliable communication architectures, rather than traditional architectures, are in demand due to the increasing complexity of multimodal multimedia data. ### _The Metaverse Framework_ The Metaverse is a 3D virtual world with social connection, which is designed to provide a convenient interface to massive numbers of human users for an immersive experience. For the fusion of the virtual and real worlds in the Metaverse, it is crucial to establish an open-source, interoperable chain to transfer resources between multiple virtual worlds and seamlessly connect them through Web 3.0. Physical interactions between the real objects and the reconstructed 3D objects could be enabled through sensors, made feasible through active reconstruction, whereas passive reconstruction may not involve any interactions among the objects. Further, in order to evaluate the outcomes, the possible use cases driven through digital twins and the Metaverse could be simulated, which helps to estimate the impact of any changes or conditions. In order to perceive the physical world from the perspective of Metaverse platforms, the power of AI helps to enhance and automate crucial tasks and empowers the frontline workers engaged in a smart working environment. The workers could also be given provisions for building applications, configuring workflows, and engaging in collaborative operations through the virtual spaces, thereby sharing and receiving expertise right from their place. Digital twin-enabled Metaverse platforms help to navigate through the physical world and get relevant information regarding digital counterparts on demand. With the substantial computing power provided by cloud services and intelligent edge devices, the aforementioned capabilities could be realized. By incorporating such a framework for a machine or a building, with the impact supported through digital twins and the inherent immersive capabilities facilitated through XR and Metaverse solutions, a true blend of the physical and virtual worlds could be realized. For deploying robust multimedia communication architectures in Metaverse use cases, the spectrum resources, privacy, security, and Quality of Service (QoS) / Quality of Experience (QoE) demands need to be evaluated. Further, with the best set of solutions from each multimedia communication architecture, and by largely considering the dependability aspects of Metaverse enabling technologies, the choice of optimal and tailor-made architectures could be extended in the future. Fig. 1: Abstraction of digital twin interfaces in an industry 5.0 scenario and beyond with their states and rewards.
Since the primary role of the Metaverse is to ensure an immersive experience for the users, more active components may be needed, which demand more bandwidth, increased power, and payload space; this needs to be carefully dealt with through effective spectrum management. As most multimedia data processing is challenging to execute locally on smart gadgets and Internet of Everything (IoE) devices in the Metaverse environment, and due to several privacy and security concerns, realistic high-performance remote computing units or cloud services are needed. Hence, this demands high-speed data transmission, and accordingly, a new and flexible range of protocols would guarantee optimized spectrum utilization. This may be achieved by examining the effect of various crucial parameters that address the design requirements on speed, power, and data transfer limits in the wireless communication channel. The Metaverse, with its hybrid set of multimedia contents, requires a secure steganography methodology suitable for each category of digital data. However, due to higher computational complexity, handling big data in a dynamic manner is challenging. The Metaverse may require rich multimedia content from a diversified range of smart gadgets and IoE devices, which is challenging to share due to privacy concerns. In fact, the multimodal sensory data from the environment need to convey core information content and support the detection of abnormal signals, nonlinear interference, and other threats. However, these multimedia data are privacy sensitive, and the communication platforms may not gain confidence in the transmission of information not related to their operating range of frequencies. Using reversible data hiding, each piece of multimedia information could be embedded as secret data with low computational complexity and comfortably recovered with reduced multidimensional prediction error. Furthermore, considering certain multimedia features, attribute-based signature schemes could ensure robust cryptographic mechanisms. Deploying cryptographic techniques such as Advanced Encryption Standard (AES) based substitution ciphers could resist attacks on multimedia information. Fig. 2: Digital twin state status with respect to semantic observational data. On the other hand, the QoE performance metrics are assessed from the user's perspective in terms of either subjective or objective factors. In the case of objective factors, the QoE metrics include quantifiable network-based manageable parameters. Subjective QoE metrics involve experience evaluation by humans, which is challenging to measure. In this context, the communication architecture and specifications for handling large streams of multimedia data need to introduce mechanisms to distribute the data across the entire infrastructure for the deployment of optimal solutions. These include practical considerations on how the streaming of multimedia data could be handled and processed for provisioning better QoS/QoE for the users. First, the QoS provisioning approaches are made available to users with better spectrum sensing strategies [5], and decisions are made for optimized utilization of the resources in handling multimedia data streams. Second, the QoE optimization solutions must be validated for enhancing the user experience with rich multimedia content reception without latency in the communication.
Indeed, since audiovisual communication services require large resources, the Metaverse framework needs to optimize the platform during multimedia communication. In addition to the widespread use of XR and other reality-specific enabling technologies, beyond 5G and 6G might be the power that propels the Metaverse and accelerates the transition of our world to the social media of the future. More precisely, three distinct aspects, including unleashing personal creativity, investigating immersive experiences, and creating new virtual worlds, could have an impact on the future of the entertainment industry with the Metaverse. One can create virtual avatars in the Metaverse to facilitate real-time collaboration among users, thus enabling simulated social scenarios for social analytics. Moreover, digital twins that possess semantic awareness can enhance immersive and personalized interactions while providing valuable insights to improve social interactions. Utilizing advanced wireless communication standards such as B5G/6G can further enhance this capability. The revenue streams generated through the Metaverse are already well established. Non-fungible tokens (NFTs) are one of them; these are distinctive tokens that contain important information and have a value determined by the market and demand. NFTs can be utilized to validate ownership due to their specific data, and they can be transferred between owners just like any other physical thing. They can assist in confirming a person's ownership of a piece of Metaverse real estate or their eligibility to enter a virtual concert. NFTs will also be used as prizes in many Metaverse games. Prediction mechanisms in fifth-generation low-latency communication systems extract characteristics of data packets, such as the transmission period, by deploying AI algorithms within models trained on large amounts of historical data. Extraction of quality multimedia content is crucial for estimating the QoS/QoE requirements, which could be vital inputs for provisioning an enriched user experience. For instance, multimodal multimedia data streams from the source environment may overclaim their communication costs to receive higher bandwidth and resources from the service providers, which is unfair to the authenticated and genuine users in the network. Moreover, threats to the privacy of user data, such as location and crucial multimedia content, may dominate due to malicious inference attacks launched to steal the data. Further, it is also challenging to verify facial features, voice, and video streaming when authenticated users are represented by digital avatars. A potential breach of such vital multimedia data in the Metaverse communication platforms would result in identity loss for genuine users. In principle, the reliability demands for visual semantics depend on frameworks based on AI techniques, high-speed 6G services, and a matched knowledge base between the transmitter and the receivers. Proving that semantic communication systems, AI models, and 6G services have the necessary reliability and availability in Metaverse environments before considering them for task-oriented applications is a daunting task [4]. Indeed, reliability is one of the crucial performance metrics for Metaverse applications; in the end-to-end communication between a transmitter and a receiver it can be represented by the mean failure time or the misinterpretation of the semantic information [6].
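As a small, self-contained illustration of these two reliability measures, the following sketch (our own formulation for illustration, not the definition used in [6]) computes a misinterpretation rate and a mean time between semantic failures from a simple log of transmission events; the `Transmission` record and the toy log are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transmission:
    timestamp: float              # seconds since the session started
    interpreted_correctly: bool   # did the receiver recover the intended meaning?

def misinterpretation_rate(log: List[Transmission]) -> float:
    """Fraction of messages whose semantic content was not recovered."""
    if not log:
        return 0.0
    failures = sum(1 for t in log if not t.interpreted_correctly)
    return failures / len(log)

def mean_time_between_semantic_failures(log: List[Transmission]) -> float:
    """Average time separating consecutive misinterpreted messages."""
    failure_times = [t.timestamp for t in log if not t.interpreted_correctly]
    if len(failure_times) < 2:
        return float("inf")  # fewer than two failures observed
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Toy log: five digital-twin state updates, two of which were misinterpreted.
log = [Transmission(0.0, True), Transmission(1.5, False), Transmission(3.0, True),
       Transmission(4.2, False), Transmission(6.0, True)]
print(misinterpretation_rate(log))                 # 0.4
print(mean_time_between_semantic_failures(log))    # 2.7
```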
### _Digital Twins to Realize Metaverse_ Robust communication protocols are necessary to bridge the gap between the semantic-aware digital twin and the Metaverse. To ensure compatibility with both frameworks and accommodate their unique needs, such protocols should be standardized and flexible. Additionally, it is crucial to implement robust security measures and trustworthy protocols to ensure the accurate and uninterrupted transfer of data, preventing loss or compromise of information. In the future Internet of 3D worlds, an interoperable Metaverse driven by IoT-assisted digital twin synchronization can be leveraged by a large range of virtual service providers. For a group of IoT devices, a dynamic hierarchical framework is presented in [7], where an evolutionary game approach is adopted to select the virtual service provider. The users demanding task-oriented semantic services could consider interacting with the corresponding virtual service providers and ensure optimal synchronization with the Metaverse platforms. The relationship between low-latency communications and the digital twin-enabled Metaverse is implicitly utilized for providing better computation infrastructure [8]. Such platforms enable optimized communication and computation variables, which makes them a better candidate for semantic-aware communications. Since the QoE is often captured using latency and reliability, in Metaverse applications the QoE of digital twins could be enhanced through semantic communication. Subsequently, they could also be involved in the optimization of the computation resources of the IoE devices at the user end, better edge caching capabilities, as well as optimized transmission power and bandwidth allocation. Designing semantic-aware digital twins that are environment-based can negate a few concerns, as they need various modalities of perception to facilitate accurate and immersive interactions. For example, real-time interactive scenes and graphical models have to collaborate to generate implicit and explicit semantics for transmission, which could be derived from the multi-modal signals in the environment and play a crucial role in reducing polysemy. Designing a robot-environment interaction system that approximates the challenges in the form of consistent and interactive controls, and that assists in sorting objects in an environment using XR techniques, ensures better observability and interpretation of the scenes [9]. However, in most cases, this is not feasible as the systems lack spatio-temporal features, which could be effectively addressed through explicit semantics by obtaining low-level feature descriptors. ### _Semantic Awareness Prioritization_ The transformation from higher bit-level communications to semantics-aware techniques drives knowledge-oriented QoE with better user privacy. Analysis of semantic similarity and semantic noise is one of the vital components for establishing trustworthy semantic communication. Moreover, by combining the multimodal data streams from a cloud or edge server, XR frameworks can extend semantic awareness at the application layer by realizing the low-latency benefits in the communication between the semantic encoders and decoders. A reinforcement learning-based semantic communication paradigm in association with a confidence-based distillation mechanism could address the joint semantic noise coding challenges [10].
Its existence could drastically degrade the quality of digital twins in the Metaverse, where robust and resilient noise-free semantic communication is in demand for better QoE and user experience. Fig. 2 shows the variations of the state variables in the digital twin models with respect to semantic observational data. Semantic communication for the Metaverse is composed of numerous subsystems that may operate effectively in the multi-user scenario and motivate the convergence of the intelligence and infrastructure layers of the Metaverse. The subsystems are involved in the extraction of source semantic features and the compression of data, taking imperfect channel features into account with a joint source-channel coding design. However, for one-to-many semantic communication, which is the normally recommended model for digital twin applications in the Metaverse, semantic feature extraction through an appropriate recognizer is in demand. A deep neural network (DNN) based semantic communication system in [11] could be configured as a semantic recognizer to distinguish the users in the system with the pre-trained model. Here, a semantic importance score has been defined as a benchmark performance measure that fixes the semantic distortion, and the design of a nonlinear transform function might address the residual errors in the source-channel decoding phases. It is worthwhile to note that the integration of AI in semantic communication involves context-based encoding and decoding of data, which are jointly capable of minimizing interpretation errors and maximizing system capacity. Particularly, edge intelligence should be implemented for the Metaverse to enrich the user experience with digital twins. The data exchange through semantic communication with the Metaverse application layers requires cross-modal semantic encoders and decoders. Therefore, deep learning models, which will probably provide model-choice and design guidelines based on the environmental conditions, are conceived to be one of the practical solutions to establish context-based semantic communication [12]. Regarding reliability and effectiveness, only authorized digital clones in the Metaverse environment can interpret the semantic information. However, the reliability demands for multi-modal services are challenging to incorporate due to the polysemy and ambiguity in semantic communication. Nonetheless, the emergence of some advanced semantic communication paradigms demonstrates great potential to adapt to the Metaverse environment. When it comes to the incorporation of semantic intelligence in next-generation communication systems, the scientific community has started developing deep learning-based semantic models for 6G services [13]. The most recent challenges in providing higher-order intelligence, better reconstruction of signals, and, in particular, semantic sensing as well as information extraction, could be addressed through the ubiquitous intelligence of 6G services. This also allows the transformation towards semantic-aware networking with improved reliability for Metaverse applications. It provides a clear vision of addressing the networking challenges and threats over digital twins with sensory interactions among humans and digital twin models. As such, future semantic communication systems and the Metaverse will feature stringent goal-oriented and semantic-aware networking infrastructures.
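To make the semantic encoder/decoder pipeline sketched above concrete, the following toy example (our own illustrative construction, not the systems of [11] or [12]) compresses digital-twin state vectors into a low-dimensional semantic code with a linear autoencoder (principal components), transmits the code over an additive-noise channel, and measures the resulting semantic distortion. The state dimensions, noise level, and synthetic data model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy digital-twin state vectors (e.g., joint angles, temperatures, loads) with
# correlated components, so a low-dimensional "semantic" code captures most of them.
n_samples, state_dim, code_dim = 500, 12, 3
latent = rng.normal(size=(n_samples, code_dim))
mixing = rng.normal(size=(code_dim, state_dim))
states = latent @ mixing + 0.05 * rng.normal(size=(n_samples, state_dim))

# "Semantic encoder": project onto the top principal directions
# (the optimum of a linear autoencoder under squared error).
mean = states.mean(axis=0)
_, _, vt = np.linalg.svd(states - mean, full_matrices=False)
encoder = vt[:code_dim].T                 # state_dim x code_dim

def encode(x):
    return (x - mean) @ encoder           # compressed semantic features

def decode(z):
    return z @ encoder.T + mean           # "semantic decoder": reconstruct the state

# Transmit the compressed features over a noisy channel, then decode.
snr_db = 10.0
features = encode(states)
noise_power = features.var() / (10 ** (snr_db / 10))
received = features + rng.normal(scale=np.sqrt(noise_power), size=features.shape)
reconstructed = decode(received)

distortion = np.mean((states - reconstructed) ** 2)
baseline = np.mean((states - states.mean(axis=0)) ** 2)
print(f"semantic code: {code_dim} of {state_dim} dims, distortion = {distortion:.4f} "
      f"(vs. {baseline:.4f} if only the mean state were known)")
```

In a full semantic communication system the linear projection would be replaced by learned, task-aware encoders and decoders with joint source-channel coding, but the flow of state, compressed semantic features, noisy channel, and reconstructed twin state is the same.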
## III Use Case: Smart Industries Integrating semantic-aware digital twins for the Metaverse can revolutionize healthcare, industries, smart cities, and entertainment. It creates more realistic and intelligent virtual environments that enhance our ability to interact with the digital world. This integration also unlocks new opportunities for innovation and creativity in virtual reality. In this section, we identify the usage of digital twins in modern industrial use cases blended with Metaverse infrastructure, which demand semantic-aware features and capabilities. Obviously, the industrial Metaverse incorporates the network of digital twins, which integrates the physical machinery and the 3D digital virtual space. It enables the industry managers and the shop floor domain-specific workers to connect the available digital twins with suppliers as well as customers to work together and gather valuable insights on real-time demands and requirements. Fig. 3 shows the semantic transmission system that communicates among the physical assets and digital twin models. To achieve reliable transmission of semantic information in increasingly harsh environmental conditions in industries, 6G connectivity could ensure a hassle-free and immersive experience for Metaverse users. The demands of establishing reliable digital twin system paradigms could be met by implementing fault-tolerant architectures that can keep critical functions operating even if some components fail. Another possible solution is to use advanced analytics to predict potential reliability issues through monitoring, allowing proactive diagnosis and maintenance before any failure occurs. The data mining or analytics layer in 6G services handles a massive volume of raw data from many devices, where the incorporation of knowledge discovery through semantic derivations is recommended. The illustrative Metaverse framework with its demands of integrating 6G reveals distinctive characteristics by including intelligent sensing, edge computing, digital twins, and powerful means of handling security issues [14]. Beyond semantic encoding and decoding of the multimodal stream of text, audio, images, and videos among Metaverse environments, effective rendering of the data could be provisioned by 6G services to ensure the persistence of the online 3D experience for users in the virtual environment. Moreover, such QoE in the rendering strategies also benefits the edge servers, blockchain miners, digital twin operators, and other stakeholders associated with the Metaverse environment. Enabled by robots interacting with an environment, a novel semantic-aware digital twin system framework was proposed in [9], which aims to sort objects with the aid of 3D graphic models embedded with the dynamics of the robotic system. This work considers XR techniques and promises an enhanced semantic level jointly with a higher degree of observability in the deployed environment. In addition, based on interaction-triggered control, the consistency of digital twins was ensured in response to the real-time interaction, which directly assists in the evaluation of semantic reasoning. Considering a dynamic environment, semantic communication with a different knowledge base is often required if more users are connected to the Metaverse. Fig. 4 shows the evolution of the digital twin of an industrial robot updated from their respective state spaces.
However, even though Metaverse technologies assist in the interaction of virtual objects in the 3D world, they require 3D models, simulators, and legacy manuals to interact with the machinery in the standard industrial setup. Moreover, it is considered challenging for managers and workers to remotely manage and operate the physical machinery with the recommended loads. As the integration and deployment of digital twins in such scenarios could provide synthetic data for training, semantic-aware configurations could be combined with traditional reasoning techniques to deal with the virtual world of digital twins in industries. Also, training and education services could be enhanced mainly with semantic-aware digital twin models, which could guarantee sustainable results. One such instance was reported in [15], where an aircraft maintenance Metaverse is used in the aviation industry for effective maintenance through interactive training and education. This constitutes an effective deployment of digital twin models of aircraft and has motivated the efforts to migrate towards 3D virtual training. As a result, the incorporation of semantic awareness could discard redundant data transmission and provide an immersive learning platform for the users operating the real-world machines of the twin model. For an industrial shopfloor management use case, the semantic-aware Metaverse interface is shown in Fig. 5. The semantic features that affect the performance of the Metaverse and the corresponding digital twin solutions are summarized in Table I. ## IV Challenges and Open Research Issues Digital identities are one of the promising features of the Metaverse, where the user's identity could be categorized into their personal space and workplace, which are built through digital assets and avatars. Although the outcomes and recommendations from the Metaverse frameworks can be incorporated to uniquely identify individuals, the perception of the real world through this approach is challenging. Furthermore, the rich streaming multimedia content coupled with the digital identities, while adhering to the user demands, ensures that the multimedia data is effectively integrated and tested for URLLC-driven Metaverse frameworks. Furthermore, deepfake videos are another threat that requires considerable attention, as they could camouflage the real-time multimedia information streaming in the Metaverse platforms. Given the large amount of multimedia information, deepfake videos hinder the intelligence level of the Metaverse systems and make it challenging to incorporate URLLC-driven frameworks for making intelligent decisions. With the primary focus on enhancing the spectrum utilization and energy efficiency of future wireless networks, reconfigurable intelligent surfaces (RIS) could be used as one of the promising candidates for digital twin models and semantic communications. Since the knowledge bases at the transmitter and receiver in semantic communications are mostly inconsistent, they could be made homogeneous with optimized time and resource consumption using RIS. By altering the properties of the tiny reflecting surfaces, the multimedia data and the visual semantics could be processed effectively [5]. Fig. 3: Schematic diagram of a semantic transmission system between physical assets and digital twin models. Periodic updating of the knowledge base is often recommended for the visual semantics, particularly while they are addressing the Metaverse scenario.
This results in longer sharing times and makes periodic updates more challenging. By installing RIS at frequent intervals in public places, energy-efficient and smart propagation of multimedia content could be guaranteed in the wireless network. Thus, the periodic updating of the knowledge base and task-oriented activation of RIS for consistent data transfer is a wide-open issue in semantic communications. ## V Conclusion Although the full-fledged incorporation of semantic-aware digital twins is a few years away, it is extremely timely to understand their potential demands and challenges for communication engineers, particularly those involved in Metaverse research. We have provided the core semantic architecture for digital twins in the Metaverse with a couple of use cases in industries and healthcare applications. Subsequently, we have summarized the potential challenges whose investigation will leverage the expertise in semantic communication, digital twins, the Metaverse, and their physical implementation. Semantic-aware digital twins can support the 3D virtual world with real-time social connection in the Metaverse through efficient semantic transmission with background reasoning and immersive interaction with digital twins. Fig. 4: Evolution of the digital twin for an industrial robot updated from their respective state spaces. Fig. 5: Semantic-aware Metaverse in an industrial shopfloor management use case.
2309.00846
pSTarC: Pseudo Source Guided Target Clustering for Fully Test-Time Adaptation
Test Time Adaptation (TTA) is a pivotal concept in machine learning, enabling models to perform well in real-world scenarios, where test data distribution differs from training. In this work, we propose a novel approach called pseudo Source guided Target Clustering (pSTarC) addressing the relatively unexplored area of TTA under real-world domain shifts. This method draws inspiration from target clustering techniques and exploits the source classifier for generating pseudo-source samples. The test samples are strategically aligned with these pseudo-source samples, facilitating their clustering and thereby enhancing TTA performance. pSTarC operates solely within the fully test-time adaptation protocol, removing the need for actual source data. Experimental validation on a variety of domain shift datasets, namely VisDA, Office-Home, DomainNet-126, CIFAR-100C verifies pSTarC's effectiveness. This method exhibits significant improvements in prediction accuracy along with efficient computational requirements. Furthermore, we also demonstrate the universality of the pSTarC framework by showing its effectiveness for the continuous TTA framework. The source code for our method is available at https://manogna-s.github.io/pstarc
Manogna Sreenivas, Goirik Chakrabarty, Soma Biswas
2023-09-02T07:13:47Z
http://arxiv.org/abs/2309.00846v2
# pSTarC: Pseudo Source Guided Target Clustering ###### Abstract Test Time Adaptation (TTA) is a pivotal concept in machine learning, enabling models to perform well in real-world scenarios, where test data distribution differs from training. In this work, we propose a novel approach called **p**seudo **S**ource guided **T**arget **C**lustering (pSTarC) addressing the relatively unexplored area of TTA under real-world domain shifts. This method draws inspiration from target clustering techniques and exploits the source classifier for generating pseudo-source samples. The test samples are strategically aligned with these pseudo-source samples, facilitating their clustering and thereby enhancing TTA performance. pSTarC operates solely within the fully test-time adaptation protocol, removing the need for actual source data. Experimental validation on a variety of domain shift datasets, namely VisDA, Office-Home, DomainNet-126, CIFAR-100C verifies pSTarC's effectiveness. This method exhibits significant improvements in prediction accuracy along with efficient computational requirements. Furthermore, we also demonstrate the universality of the pSTarC framework by showing its effectiveness for the continuous TTA framework. The source code for our method is available at [https://manogna-s.github.io/pstarC](https://manogna-s.github.io/pstarC) ## 1 Introduction Over the past decade, deep networks have shown a continuous upward trend due to the availability of large datasets [4, 6, 19], significant improvements in computing power, and advancements in algorithms [8, 23] and architectures [9, 26]. But while humans can adapt seamlessly to new domains, the performance of deep networks deteriorates significantly when the test and training distributions differ. In practical scenarios, a trained model is often deployed in an unseen test environment, so equipping it with good adaptation capabilities to mitigate the adverse effects of any domain shift is crucial. Additionally, since access to the source data may be difficult because of privacy concerns or storage limitations, there is significant interest in the following research directions: (i) Source-free Domain Adaptation (SFDA) [33, 17, 34], which assumes access to the source model and a large amount of unlabeled test data, and (ii) Test-Time Adaptation (TTA) [29, 1, 2], where test data arrives in an online manner, one batch at a time, allowing for one-step model adaptation followed by prediction. SFDA and TTA methods have been developed independently, resulting in fundamentally different approaches. Here, we address the challenging and more practical task of swiftly adapting models without the need for extensive accumulation of test data, i.e., the TTA setting. Unlike SFDA methods, which have been evaluated on real-world domain shift datasets like VisDA [22], DomainNet [21] and Office-Home [28], TTA methods have primarily been evaluated within the confines of artificially corrupted data. It is only recently that researchers have started to address the TTA task for such real-world domain shifts [13, 2, 14]. In this work, we propose a simple yet effective TTA strategy termed **p**seudo **S**ource guided **T**arget **C**lustering (pSTarC). It is inspired by the exceptional performance of SFDA techniques like SHOT [17], NRC [33], and AaD [34] in the context of real-world domain shift benchmarks.
Notably, contemporary SFDA methods, including NRC and AaD, concentrate on refining target sample clustering, leveraging the luxury of abundant unlabeled target data. To extend this SFDA principle to TTA, one compelling avenue is the maintenance of a feature bank, which dynamically populates as new target data becomes available, enriching the adaptation process. While approaches like AdaContrast [2] have successfully harnessed this concept for TTA, they need to store auxiliary components like the momentum encoder and key features, a constraint that might not align well with an online TTA framework. Our proposed pSTarC approach aims to leverage the power of SFDA objectives while adhering to the principle of minimizing memory and storage requirements for the TTA task. Generally, the source-trained classifier remains unchanged during TTA to preserve the valuable class-discriminative insights gained from the source. Building on this insight, we introduce a novel strategy: utilizing the classifier to generate a diverse array of pseudo-source samples, thereby steering the target clustering process. Impressively, our findings reveal that generating as few as 20 pseudo-source samples per class is adequate to achieve state-of-the-art TTA performance, without imposing a significant burden on storage demands. Thus, the main contributions of this work can be summarized as follows: 1. We propose the pSTarC framework, which generates pseudo-source samples to guide the target clustering during test time adaptation. 2. We strive to achieve TTA using SFDA objectives, which not only improves the TTA performance significantly for real domain-shifts, but also helps to unify the seemingly disparate research directions. 3. pSTarC outperforms the state-of-the-art TTA techniques on Office-Home and DomainNet, and is at par on VisDA, while requiring much less memory. 4. pSTarC also seamlessly works in the Continual Test-Time Adaptation (CTTA) [31] scenario, where the test distribution changes with time. Here, its performance is at par with current state-of-the-art approaches on the large-scale DomainNet-126 benchmark. In a nutshell, pSTarC aligns seamlessly with our objective to pave the way for swift, efficient TTA in the face of real-world domain shifts, building upon the insights garnered from the relationship between SFDA techniques and such demanding benchmarks. ## 2 Related Works Here, we discuss the related work in Source-Free Domain Adaptation, Test-Time Adaptation, Continual Test-Time Adaptation and Model Inversion. **Source-free domain adaptation** (SFDA) aims to adapt a source domain trained model to a target domain without access to any labeled data from either the source or target domain. SFDA methods typically assume access to abundant unlabeled data from the target domain and leverage the structure of the data to refine the target predictions. [17] proposes to cluster target features by mutual entropy maximization along with pseudo labeling, while keeping the classifier fixed. [2] extends the idea in [17], proposing to refine the pseudo labels using a feature bank, alongside doing self-supervised contrastive learning [3]. Another line of work includes [33, 34], where they exploit the inherent semantic structure of the target features extracted from the source model. They reinforce consistency between the predictions of a sample and its local neighbors while also ensuring diversity to avoid degenerate predictions.
**Test Time Adaptation** (TTA) further relaxes the assumptions on data availability compared to SFDA. TENT [29] proposed the more practical fully test-time adaptation setting, where source data cannot be accessed at all, and the model can only utilize the test samples in each batch encountered in an online manner for adaptation. They propose minimizing the entropy of the model predictions on the test data. More recently, LAME [1] uses the Concave-Convex Procedure (CCCP) to modify the feature vectors to obtain better classification, while AdaContrast [2] addresses the SFDA and TTA settings by using contrastive learning with nearest neighbour soft voting for online pseudo label refinement. C-SFDA [14] uses curriculum learning in a Teacher-Student framework. Other works like EATA [20] use a small buffer from the source distribution. TTN [18] trains a modified BN layer to leverage source data for improved TTA. In [13], they synthesize source proxy images by condensing the source dataset, which is then used during TTA after stylizing them to match the test distribution. Our work falls in the category of fully test-time adaptation [2, 14, 29, 31]. **Continual Test Time Adaptation** (CTTA) As a further extension of TTA, the concept of continual test-time adaptation (CTTA) has been recently introduced [31]. This protocol recognizes the dynamic nature of the testing environment, where the test domain evolves over time. CoTTA [31] adopts strategies like weight-averaged and augmentation-averaged predictions in a teacher-student framework to mitigate error accumulation. Additionally, it retains a fraction of neurons with source pre-trained weights during each iteration to prevent catastrophic forgetting, thus enabling model adaptation while preserving source knowledge. RMT [5] is a recent CTTA method that uses a symmetric cross-entropy loss and a contrastive loss in a teacher-student framework. **Model inversion** is a recent research direction explored in [15, 24, 30] for image generation, where the input space is optimized to generate an image \(\hat{x}\) using a pre-trained deep network. To do this, given an arbitrary target \(y\), which can be a label or a reference image, a trainable input \(\hat{x}\) in the image space is initialized with random noise. This input is then optimized by minimizing a loss function \(\mathcal{L}(\hat{x},y)\), which is usually a cross-entropy loss, and a regularizer \(\mathcal{R}(\hat{x})\) to induce a natural image prior. The training is done in an adversarial manner by alternating between the optimization of the synthesized image and that of the discriminator weights. Inspired by the effectiveness of these methods, here we propose a classifier guided _feature generation_ approach, which is used for generating pseudo-source samples for guiding the clustering of the target data. ## 3 Problem Setting & Motivation Firstly, the source model is trained using labeled source data. Then, in the Test Time Adaptation (TTA) stage, this model is adapted using the test batches in an online manner. **Source training:** The model is first trained using labeled source data \(\mathcal{D}_{s}=\{x_{i}^{s},y_{i}^{s}\}_{i=1}^{n_{s}}\) comprising \(C\) classes. Here, \(x_{i}^{s}\in\mathcal{X}_{s}\) and \(y_{i}^{s}\in\mathcal{Y}_{s}\) denote the source sample and its class label, and \(n_{s}\) is the number of training samples. We denote the source model as \(\mathbf{F_{s}}=\mathbf{H_{s}}\circ\mathbf{G_{s}}\), where \(\mathbf{G_{s}}\) is the feature extractor and \(\mathbf{H_{s}}\) is the classifier.
Following [2, 17, 34], the source model \(\mathbf{F_{s}}:\mathcal{X}_{s}\rightarrow\mathcal{Y}_{s}\) is trained by minimizing the label-smoothing cross entropy loss as \[\mathcal{L}_{src}\left(f_{s};\mathcal{D}_{s}\right)=-\mathbb{E}_{\left(x^{s},y^{s}\right)\in\mathcal{D}_{s}}\sum_{c=1}^{C}\tilde{y}_{c}^{s}\log p_{c},\] where \(p_{c}=\delta_{c}\left(f_{s}\left(x^{s}\right)\right)\) is the softmax score for class \(c\), \(\delta\) being the softmax function. The smooth label is computed as \(\tilde{y}_{c}^{s}=(1-\alpha)y_{c}^{s}+\alpha/C\), where the smoothness coefficient \(\alpha\) is set to \(0.1\). **Test Time Adaptation:** Given the source model \(\mathbf{F_{s}}\), during TTA, the target model \(\mathbf{F_{t}}\) is initialized with the source model \(\mathbf{F_{s}}\). We only have access to the unlabeled test samples \(x_{t}\) coming in batches from an unseen test distribution \(\mathcal{D}_{t}\). Here, we address the closed setting where the source and target samples come from the same \(C\) classes. The goal is to continuously adapt \(\mathbf{F_{t}}:\mathcal{X}_{t}\rightarrow\mathcal{Y}_{t}\) using the unlabeled samples \(x_{t}\in\mathcal{X}_{t}\) (in batches) in an online manner. **Continual Test Time Adaptation:** In addition to the above setup, the test data can come from multiple domains which change over time, such that \(D_{t}^{(1)}\neq D_{t}^{(2)}\neq\ldots\neq D_{s}\), leading to the continual test-time adaptation scenario. ## 4 Proposed Framework The proposed pSTarC framework is based on effectively clustering the target samples which are available during test time. Our formulation is inspired by the clustering framework proposed in the state-of-the-art SFDA technique, AaD [34], which we briefly describe below. **Attracting and Dispersing (AaD):** AaD [34] treats SFDA as an unsupervised clustering problem, where consistency is enforced between the predictions of local neighbourhood features, while also ensuring diversity in the feature space. The test objective for a sample \(x_{i}\) from a test batch \(\mathbf{x}_{t}\) is \[\mathcal{L}(x_{i})=-\sum_{x_{j}\in\mathcal{N}_{i}}p_{i}^{T}p_{j}+\lambda\sum_{x_{m}\in\mathbf{x}_{t}}p_{i}^{T}p_{m} \tag{1}\] where \(p_{i}\) refers to the softmax prediction vector of the sample \(x_{i}\in\mathbf{x}_{t}\), \(p_{j}\) in the first term corresponds to the prediction vectors in its neighborhood \(\mathcal{N}_{i}\), and \(p_{m}\) in the second term corresponds to the prediction vectors of the samples \(x_{m}\) in the current batch \(\mathbf{x}_{t}\). Now, we describe the proposed pSTarC framework for the fully test-time adaptation task, which we also illustrate in Fig. 1. In a TTA setting, as mentioned before, the labeled source samples are unavailable, and only the source model is available for adaptation. In addition, since the number of samples in a batch is usually quite low, it is a common practice to freeze the source-trained classifier and update only the feature extractor to align target features with those of the source. Hence, we set \(\mathbf{H_{t}}=\mathbf{H_{s}}=\mathbf{H}\) and only update the feature extractor \(\mathbf{G_{t}}\) using the test data in an online manner. The goal is to adapt the test features such that they align with the source features so that the classifier \(\mathbf{H}\) is transferable to the test data. The classifier, being trained in a supervised manner using abundant source data, defines the decision boundaries for which the source data is perfectly classified.
We leverage this fact to synthesize pseudo-source features, which are used to guide the target clustering. Given the source model, this process is only done once to store a few features and their corresponding prediction scores, which can then be utilized throughout the TTA process. We describe the feature generation and clustering in detail below. ### Pseudo Source Feature Generation Since the decision boundaries in the feature space remain fixed (due to the classifier remaining unchanged), it is important to align the target features with the original source features, which will inherently lead to better clustering and hence better classification of the target samples. First, we utilize the fixed source classifier to synthesize pseudo-source features. By aligning the target to these generated features, we hope to improve the adaptation performance of the model and make it more robust to the domain shift between the source and target domains. Here, we aim to generate \(N\) pseudo-source features, where \(N=C\times n_{c}\), \(C\) being the number of classes and \(n_{c}\) the number of samples per class. We first randomly initialize a feature bank \(\mathbf{f}\in\mathbb{R}^{N\times d}\), where \(d\) is the feature dimension. To compute the pseudo-source features, we use the information maximization loss, which is a combination of entropy minimization and diversity maximization. These losses have been widely used in unsupervised clustering methods [17] to optimize a feature extractor to make the predictions of unlabeled samples diverse and confident. However, our objectives are very different. While they aim to learn a good feature extractor, our goal is to synthesize pseudo-source features given the source-trained classifier \(\mathbf{H}\). We want to generate features which are likely to be correctly classified by the source classifier. This is achieved by minimizing the following entropy loss: \[\mathcal{L}_{ent}\left(\mathbf{f};\mathbf{H}\right)=-\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{C}\delta_{k}\left(\mathbf{H}\left(f_{i}\right)\right)\log\delta_{k}\left(\mathbf{H}\left(f_{i}\right)\right) \tag{2}\] where \(\delta_{k}\left(\mathbf{H}\left(f_{i}\right)\right)\) is the softmax score of class \(k\) for the pseudo-source feature \(f_{i}\in\mathbf{f}\). Along with this, we use a diversity maximization loss to avoid the trivial solution where all feature vectors collapse to the same class. This ensures there is an adequate number of feature vectors from each class in \(\mathbf{f}\). \[\begin{split}\mathcal{L}_{div}\left(\mathbf{f};\mathbf{H}\right)&=\sum_{k=1}^{C}\hat{p}_{k}\log\hat{p}_{k}\\ &=D_{KL}\left(\hat{p},\frac{1}{C}\mathbf{1}_{C}\right)-\log C\end{split} \tag{3}\] The loss is computed from the mean softmax score over the feature bank, \(\hat{p}=\mathbb{E}_{f\in\mathbf{f}}\left[\delta\left(\mathbf{H}\left(f\right)\right)\right]\). The first term in the equation is the Kullback-Leibler (KL) divergence between the mean prediction vector \(\hat{p}\) and the uniform distribution \(\frac{1}{C}\mathbf{1}_{C}\). Here, \(\hat{p}\) represents the marginal class distribution of the pseudo-source feature bank as estimated by the classifier \(\mathbf{H}\), \(C\) is the number of classes and \(\mathbf{1}_{C}\) is a vector of ones of length \(C\). The KL divergence measures the dissimilarity between two probability distributions, and in this context, it measures the discrepancy between the class distribution in the feature bank and the ideal case where all classes are equally represented. 
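To make the feature-synthesis step concrete, below is a minimal PyTorch-style sketch of optimizing a random feature bank against the frozen classifier with the entropy and diversity terms of eqns (2) and (3), combined with the weight \(\beta\) of the objective given next. The values \(n_{c}=20\) and \(\beta=5\) follow the implementation details reported later in the paper, but the optimizer, learning rate, iteration count and function names are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def generate_pseudo_source_features(classifier, num_classes, n_per_class=20,
                                    feat_dim=256, beta=5.0, steps=200, lr=0.01):
    """Optimize a random feature bank so that the frozen classifier assigns it
    confident (low-entropy) and class-balanced (diverse) predictions."""
    for p in classifier.parameters():          # the classifier H stays fixed
        p.requires_grad_(False)
    feats = torch.randn(num_classes * n_per_class, feat_dim, requires_grad=True)
    opt = torch.optim.SGD([feats], lr=lr)      # only the feature bank is trainable

    for _ in range(steps):
        probs = classifier(feats).softmax(dim=1)
        # entropy term (eqn 2): each pseudo-source feature should be confidently classified
        l_ent = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
        # diversity term (eqn 3): keep the mean prediction close to uniform
        p_mean = probs.mean(dim=0)
        l_div = (p_mean * torch.log(p_mean + 1e-8)).sum()
        loss = l_ent + beta * l_div            # combined objective (cf. eqn 4)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return feats.detach()
```

Because the classifier is frozen, this optimization touches only the synthetic feature bank, which is why it needs to be run just once before adaptation starts.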
Overall, the diversity maximization loss encourages the feature bank to have a balanced representation of features across all classes, which is important for improving the clustering performance of the TTA algorithm. To summarize, we optimize the following \[\mathbf{f}^{*}=\operatorname*{arg\,min}_{\mathbf{f}}\mathcal{L}_{ent}(\mathbf{f};\mathbf{H})+\beta\mathcal{L}_{div}(\mathbf{f};\mathbf{H}) \tag{4}\] In Fig. 2, we visualize the generated features when using 20 samples per class for the VisDA dataset. ### Pseudo Source Guided Target Clustering The use of a feature bank has proven to be effective in contrastive learning [7] and SFDA methods like AaD [34] and AdaContrast [2]. The proposed feature bank consists of pseudo-source features, which are very different from the target feature bank used in [2, 34]. Unlike target features, whose pseudo labels can be noisy, we can obtain clean labels for the generated pseudo-source features. We explain below how the generated features and their label information can be leveraged to better cluster and align the target features. We visually demonstrate the entire pSTarC framework in Fig. 1.

Figure 1: pSTarC Framework: (1) Feature Generation: We randomly initialize a feature bank \(\mathbf{f}\) which is iteratively optimized, keeping the classifier \(\mathbf{H}\) fixed, to minimize the entropy of the features while maximizing the diversity across classes using the loss in eqn (4). (2) Given the learnt features, we aim to bring the low entropy samples towards the corresponding pseudo-source features. We anchor the high entropy target samples to their own predictions. We also enforce consistency between the predictions of the test sample and its strong augmentation.

Pseudo-labeling based on confidence thresholding has been used very effectively in several applications [27]. Here, we propose a soft pseudo-labeling approach to cluster the target samples. Specifically, we identify the low entropy test samples based on a threshold \(\tau_{t}\), which we define as the mean entropy of the batch. We aim to align these selected test samples to the nearest pseudo-source samples which belong to the same class as the sample. Formally, given the generated feature bank \(\mathbf{f}^{*}\), we first obtain the softmax score vectors and pseudo labels of its features. We denote \(p_{i}=\delta(\mathbf{H}(f_{i}))\) as the softmax score vector and \(\hat{y}_{i}=\operatorname*{arg\,max}_{c}p_{i,c}\) as the pseudo label for feature \(f_{i}\), where \(p_{i,c}\) is the score of feature \(i\) for class \(c\). We partition the features into sets \(S_{c}\) based on their pseudo labels as follows: \[S_{c}=\{f_{i};\quad\hat{y}_{i}=c,f_{i}\in\mathbf{f}\};\quad c\in\{1\dots C\} \tag{5}\] These sets are obtained once for the generated pseudo-source features and kept fixed throughout the adaptation process. Given a test batch \(\mathbf{x}_{t}\), we first obtain the confidence scores and pseudo labels of its samples and set the threshold \(\tau_{t}=\mathbb{E}_{x_{k}\in\mathbf{x}_{t}}[e_{k}]\), the mean entropy of the batch. For a test sample \(x_{k}\in\mathbf{x}_{t}\), we denote its pseudo label as \(\hat{y}_{k}\) and compute its sample entropy as \(e_{k}\). For this sample, we define its positive set \(\mathbf{p}^{+}\) based on its entropy \(e_{k}\) as follows: (1) When \(e_{k}<\tau_{t}\), we define the positives to be the \(K\) nearest pseudo-source samples from set \(S_{\hat{y}_{k}}\). 
(2) For samples which have high entropy, i.e. with \(e_{k}>\tau_{t}\), the pseudo labels can be highly noisy, so it is not desirable to force them to align towards any pseudo-source samples. Instead, we anchor each such sample to its own prediction vector by setting \(\mathbf{p}^{+}=\{p_{k}\}\). In addition, we use its strong image augmentation \(\tilde{x}_{k}\) to enforce prediction consistency between \(p_{k}\) and \(\tilde{p}_{k}\), the prediction vector of \(\tilde{x}_{k}\). This helps the model to be invariant to image transformations and improves its generalization ability. We also use a dispersion loss that makes a sample dissimilar to the other samples in the batch, which is representative of the test data as a whole. This dispersion loss prevents the model from the trivial solution of all test samples collapsing to the same class. Our objective now is to make the predictions of the target embeddings similar to their positives without facing mode collapse, which we achieve by optimizing the following loss: \[\mathcal{L}_{\text{pSTarC}}(x_{k})=\underbrace{-\,p_{k}^{T}\tilde{p}_{k}}_{L_{aug}}-\underbrace{\sum_{p_{j}^{+}\in\mathbf{p}^{+}}p_{k}^{T}p_{j}^{+}}_{L_{attr}}+\underbrace{\lambda\sum_{x_{j}\in\mathbf{x}_{t}}p_{k}^{T}p_{j}}_{L_{disp}} \tag{6}\] We perform one-step optimization on the test batch \(\mathbf{x}_{t}\) using this loss and then predict their labels. This process is repeated for each batch in the TTA setting. **What makes pSTarC an effective framework?** 1. We operate in the _fully test-time scenario_, i.e., we do not assume access to source data in any form, unlike some prior methods [11, 12, 13] which use the source data to equip the model for future TTA. In pSTarC, we leverage the classifier which is a part of the given source model to synthesize pseudo-source features to enable clustering during test time. 2. Feature banks have been effectively used in AdaContrast [2] to cluster the test data. However, it is expensive to have multiple large memory buffers which have to be continuously updated. We propose a _simple one-step pseudo source generation_ framework. These generated features can be reused throughout TTA, as the final goal indeed is to align the test distribution to the source distribution. 3. pSTarC is a _memory efficient framework_ as we only store the online updating model, in contrast to AdaContrast [2] and C-SFDA [14], which need to store both the student and teacher models. Our framework is also _more efficient in runtime_ as we only forward-pass the image and its strong augmentation, while the state-of-the-art method C-SFDA [14] uses 12 augmentations. 
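For concreteness, the per-batch objective of eqn (6) can be sketched as below. This is a minimal, illustrative PyTorch-style step written under stated assumptions: the function and variable names, the value of \(K\), and the exclusion of the self term from the dispersion sum are our own choices rather than the authors' released code. The optimizer is assumed to hold only the feature-extractor parameters, with the classifier frozen, as described above.

```python
import torch

def pstarc_step(feat_extractor, classifier, optimizer, x, x_aug,
                src_feats, src_probs, src_labels, K=5, lam=1.0):
    """One online adaptation step on a test batch (sketch of eqn 6)."""
    z = feat_extractor(x)
    p = classifier(z).softmax(dim=1)                      # predictions of the batch
    p_aug = classifier(feat_extractor(x_aug)).softmax(dim=1)

    entropy = -(p * torch.log(p + 1e-8)).sum(dim=1)
    tau = entropy.mean()                                  # batch-mean entropy threshold
    pseudo = p.argmax(dim=1)

    l_attr = torch.zeros((), device=p.device)
    for k in range(p.shape[0]):
        if entropy[k] < tau:
            # attract confident samples towards the K nearest pseudo-source
            # features sharing their pseudo label
            same = (src_labels == pseudo[k]).nonzero(as_tuple=True)[0]
            dist = torch.cdist(z[k:k + 1], src_feats[same]).squeeze(0)
            nn_idx = same[dist.topk(min(K, same.numel()), largest=False).indices]
            l_attr = l_attr - (p[k] * src_probs[nn_idx]).sum()
        else:
            # anchor noisy (high-entropy) samples to their own prediction
            l_attr = l_attr - (p[k] * p[k].detach()).sum()

    l_aug = -(p * p_aug).sum(dim=1).mean()                # augmentation consistency
    gram = p @ p.t()                                      # dispersion over the batch
    l_disp = lam * (gram.sum() - gram.diagonal().sum()) / p.shape[0]

    loss = l_aug + l_attr / p.shape[0] + l_disp
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return p.detach()
```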
\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Method & plane & bycyl & bus & car & horse & knife & mcycl & person & plant & sktbrd & train & truck & Average \\ \hline Source & 57.2 & 11.1 & 42.4 & 66.9 & 55.0 & 4.4 & 81.1 & 27.3 & 57.9 & 29.4 & 86.7 & 5.8 & 43.8 \\ CAN\({}^{*}\)[12] & 95.7 & 88.8 & 6.9 & 68.6 & 94.5 & 94.8 & 79.2 & 70.3 & 88.7 & 80.6 & 83.2 & 51.7 & 75.2 \\ MCC\({}^{*}\)[11] & 93.9 & 78.4 & 70.4 & 74.3 & 92.5 & 84.2 & 84.5 & 58.2 & 86.6 & 36.0 & 86.1 & 20.6 & 72.2 \\ Source-Proxy TTA\({}^{*}\)[13] & 92.5 & 82.4 & 85.8 & 74.2 & 92.7 & 88.5 & 83.9 & 85.8 & 92.8 & 62.5 & 75.2 & 32.5 & 79.1 \\ BN-Adapt [25] & 87.3 & 52.1 & 83.7 & 52.8 & 83.7 & 57.0 & 83.6 & 59.2 & 69.1 & 54.7 & 80.0 & 28.1 & 66.0 \\ TENT [29] & 91.1 & 45.6 & 86.4 & 66.4 & 88.7 & 75.1 & 90.3 & 76.4 & 84.4 & 47.1 & 83.6 & 13.7 & 70.7 \\ AdaContrast [2] & 95.0 & 68.0 & 82.7 & 69.6 & 94.3 & 80.8 & 90.3 & 79.6 & 90.6 & 69.7 & 87.6 & 36.0 & 78.7 \\ C-SFDA [14] & 95.9 & 75.6 & 88.4 & 68.1 & 95.4 & 86.1 & 94.5 & 82.0 & 89.2 & 80.2 & 87.3 & 43.8 & **82.1** \\ \hline **pSTarC** & 95.1 & 82.1 & 83.6 & 61.2 & 93.8 & 89.9 & 87.9 & 80.7 & 90.9 & 81.9 & 87.6 & 48.1 & 81.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Average class accuracy (%) of pSTarC and other TTA methods on VisDA. \({}^{*}\) refers to methods utilizing source data to enable TTA. Figure 2: t-SNE plot of 240 generated pseudo-source features for TTA on VisDA dataset comprising of 12 classes. ## 5 Experimental Evaluation We evaluate the proposed framework extensively on three real-world domain shift datasets, namely VisDA [22], DomainNet-126 [21] and Office-Home [28] and also on a corruption benchmark dataset, namely CIFAR100C [10]. **Datasets:** VisDA is a challenging dataset for object recognition tasks with synthetic to real domain shift. The target domain consists of \(55,388\) real object images from \(12\) classes. **Office-Home** contains four domains - Real, Clipart, Art, Product and \(65\) classes with a total of \(15,500\) images. **DomainNet-126** is a subset of DomainNet consisting of \(126\) classes from four domains, namely Real, Sketch, Clipart and Painting. **CIFAR-100C** is a corruption benchmark with domain shifts like gaussian noise, blur, weather changes, etc. Following [31], we use severity level 5 corruptions. For VisDA-C, we compare the average of per-class accuracies while for the other datasets, we compare the average of total accuracy across domain shifts. **Model Architecture:** For TTA experiments, we use ResNet-50 [9] as the backbone for Office-Home and DomainNet-126 datasets and ResNet-101 [9] for the VisDA dataset. We use the same network architecture as in [2], in which the final part of the network is modified to include fully connected layer and Batch Normalization, and then followed by a classifier, which is a fully connected layer with weight normalization. For CIFAR-100C, we use ResNeXt [32] as used in [5, 31]. **Implementation details:** We use Pytorch framework and run all experiments on a single NVIDIA A-5000 GPU. For source training, following [2, 14] the model is initialized with ImageNet pre-trained weights and trained for \(10\), \(60\) and \(50\) epochs for VisDA, DomainNet-126 and Office-Home respectively. During test time adaptation, we only update the backbone parameters, keeping the classifier fixed for all experiments. Following [2, 13, 14], we set the batch size to \(128\) in all experiments for VisDA, DomainNet-126 and Office-Home. 
We use SGD as the optimizer with learning rate of 5e-4 and momentum \(0.9\). Following [5, 31], for CIFAR-100C, the batch size is set to \(200\) and we use Adam [16] optimizer with learning rate of 1e-3. We set \(\beta\) to \(5\) in eqn.(4) and the number of features per class \(n_{c}\) to 20 in all experiments. We report the results of prior methods from the respective papers. We use the official code provided by AdaContrast [2] to perform experiments on Office-Home and also adapt it to CTTA setting. In the Supplementary material, we describe the image augmentations used, analysis on parameter \(n_{c}\) and provide the pseudo code for pSTarC. ### Evaluation for TTA setting We compare the performance of our proposed pSTarC framework with the prior TTA approaches [2, 13, 14, 25, 29]. For VisDA dataset, from Table 1, we observe that pSTarC performs at par with the state-of-the-art method C-SFDA [14], while being computationally much more efficient (Table 9). Interestingly, it also outperforms the approaches which assume access to the source data before performing TTA. On Office-Home, we get a significant improvement of 3.5% compared to the prior TTA method AdaContrast [2] as shown in Table 2. On DomainNet-126, from \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline Method & R\(\rightarrow\)C & R\(\rightarrow\)P & P\(\rightarrow\)C & C\(\rightarrow\)S & S\(\rightarrow\)P & R\(\rightarrow\)S & P\(\rightarrow\)R & Average \\ \hline Source & 55.5 & 62.7 & 53 & 46.9 & 47.3 & 46.3 & 75.0 & 55.2 & & & & & \\ BN-Adapt [25] & 54.1 & 62.8 & 54.3 & 49.4 & 59.1 & 47.6 & 75.0 & 57.5 & & & & & \\ TENT [29] & 55.6 & 64.5 & 55.5 & 50.8 & 59.9 & 49.9 & 75.9 & 58.9 & & & & & \\ AdaContrast [2] & 61.1 & 66.9 & 60.8 & 53.4 & 62.7 & 54.5 & 78.9 & & & & & & \\ C-SFDA [14] & 61.6 & 67.4 & 61.3 & 55.1 & 63.2 & 54.8 & 78.5 & & & & & & \\ \hline **pSTarC** & 60.8 & 67.7 & 60.3 & 55.6 & 65.3 & 55.8 & 80.2 & **63.7** & & & & & \\ \hline \end{tabular} \end{table} Table 4: Total accuracy (%) of TTA methods on DomainNet-126. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline Method & A \(\rightarrow\) C & A \(\rightarrow\) P & A \(\rightarrow\) R & C \(\rightarrow\) A & C \(\rightarrow\) P & C \(\rightarrow\) R & P \(\rightarrow\) A & P \(\rightarrow\) C & P \(\rightarrow\) R & R \(\rightarrow\) A & R \(\rightarrow\) C & R \(\rightarrow\) P & Average \\ \hline Source & 44.6 & 66.5 & 73.5 & 51.0 & 61.9 & 63.2 & 51.1 & 40.5 & 71.9 & 64.4 & 47.1 & 77.3 & 59.4 \\ BN-Adapt [25] & 38.9 & 59.9 & 71.5 & 55.0 & 62.0 & 65.2 & 54.4 & 37.3 & 71.6 & 65.2 & 41.3 & 73.8 & 58.0 \\ TENT [29] & 39.1 & 60.2 & 71.6 & 55.2 & 62.2 & 65.5 & 54.6 & 37.6 & 71.8 & 65.3 & 41.6 & 73.9 & 58.2 \\ AdaContrast [2] & 42.2 & 64.5 & 73.2 & 56.2 & 64.1 & 66.4 & 54.7 & 40.4 & 73.0 & 66.7 & 45.1 & 75.6 & 60.2 \\ \hline **pSTarC** & 47.7 & 68.7 & 75.4 & 58.6 & 68.4 & 68.9 & 55.1 & 45.8 & 75.6 & 67.5 & 51.8 & 78.7 & **63.5** \\ \hline \end{tabular} \end{table} Table 2: Total accuracy (%) of pSTarC and other TTA methods on Office-Home dataset. 
\begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c} \hline Method & gaussian & shot & impulse & defocus & glass & motion & zoom & snow & frost & fog & brightness & contrast & elastic & pixelate & jpeg & Average \\ \hline Source & 27 & 32 & 60.6 & 70.7 & 45.9 & 69.2 & 71.2 & 60.5 & 54.2 & 49.7 & 70.5 & 44.9 & 62.8 & 25.3 & 58.8 & 53.6 \\ BN Adapt [25] & 57.9 & 59.3 & 57.3 & 72.4 & 58.1 & 70.3 & 72.1 & 65.1 & 65 & 58.5 & 73.5 & 69.7 & 64.3 & 67.1 & 58.8 & 64.6 \\ TENT [29] & 62.7 & 65.1 & 65.5 & 75.0 & 62.6 & 72.5 & 75.0 & 69.6 & 68.1 & 66.2 & 76.0 & 71.8 & 67.1 & 71.6 & 63.1 & 68.8 \\ AdaContrast [2] & 57.3 & 59.4 & 61.1 & 73.4 & 58.8 & 71.1 & 73.4 & 66.6 & 67.3 & 60.7 & 75.2 & 71.8 & 65.4 & 65.8 & 60.5 & 65.9 \\ \hline **pSTarC** & 63.4 & 65.4 & 66.5 & 75 & 63 & 73.2 & 74.9 & 70.3 & 69.8 & 66.5 & 76.6 & 73.2 & 68.0 & 72.2 & 63.8 & **69.5** \\ \hline \end{tabular} \end{table} Table 3: Accuracy (%) of different TTA methods on 15 corruptions from CIFAR-100C dataset in TTA setting. Table 4, we observe that pSTarC achieves an average accuracy of 63.7% across 7 domain shifts, outperforming all the existing approaches including [14]. On CIFAR-100C [10], our method performs 1.1% better than TENT [29] and 3.6% better than AdaContrast [2], suggesting its effectiveness even on corruption domain shifts (Table 3). ### Evaluation for CTTA setting We also study the effectiveness of pSTarC in the CTTA setting where test domains change with time. To do this, we perform experiments on CIFAR-100C and the following four domain sequences from DomainNet-126: (1) _Real_-World\(\rightarrow\)Clipart\(\rightarrow\)Painting\(\rightarrow\)Sketch; (2) _Clipart\(\rightarrow\)Sketch\(\rightarrow\)Real-World\(\rightarrow\)Painting;_ (3) _Painting\(\rightarrow\)_Real-World\(\rightarrow\)Sketch\(\rightarrow\)Clipart (4) _Sketch\(\rightarrow\)_Painting\(\rightarrow\)Clipart\(\rightarrow\)Real-World. The _first domain_ indicates the source domain, which is then adapted to the other three test domains in the above sequence. From Table 6, we observe that pSTarC outperforms all the state-of-the-art approaches in this challenging setting. Specifically, it outperforms CoTTA by a significant margin of 5.3% and also performs favourably compared to the state-of-the-art method RMT [5]. In addition, we also evaluate pSTarC on CIFAR-100C continual setting and report the results in Table 5. It performs favourably compared to AdaContrast [2] and CoTTA [31], while RMT [5] performs the best in this case. But, CoTTA [31] and RMT [5] are computationally more expensive as they need to store teacher and student models, while pSTarC is more light-weight as it only stores one model. In Figure 3, we summarize the performance of pSTarC with the source model, TENT [29] and AdaContrast [2]. In this plot, the lines farther from the center indicates better performance. We observe that pSTarC outperforms these methods across all domain shifts for both TTA and CTTA. ### Additional Analysis Here, we report the results of additional analysis to better understand the proposed framework. **Ablation Study:** The proposed pSTarC framework consists of three loss components. The first component is \(L_{aug}\) which enforces consistency between an image and its augmentation. From Table 7, we observe that using strong augmentations can indeed help improve the feature representations, as we get 1.9% and 0.7% improvement on VisDA and DomainNet-126 respectively. 
The second component \(L_{attr}\) aims to align the test features with the pseudo-source features. On removing the attraction loss component from \(L_{pSTarC}\), the loss becomes similar to contrastive learning. While this performs reasonably, achieving 78.2% and 59.7% on VisDA and DomainNet-126 respectively, incorporating the pseudo-source features improves the results significantly by 3.7% and 4%, proving that they indeed help model adaptation by correctly aligning the test features so that the source-trained classifier can classify the test data well. The third component, \(L_{disp}\), is the dispersion term, which prevents the trivial solution of all the test features collapsing to one cluster when optimizing only the attraction loss \(L_{attr}\). This term plays a role similar to the diversity term and is crucial in unsupervised adaptation protocols [17, 34] to avoid model collapse, the effect of which we observe in Table 7. The accuracies on VisDA and DomainNet-126 drop to 68.8% and 58.8% \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Sequence (1) & Sequence (2) & Sequence (3) & Sequence (4) & Average \\ \hline Source only & 54.7 & 50.7 & 58.3 & 55.2 & 54.7 \\ BN Adapt [25] & 54.9 & 54.8 & 60.5 & 62.2 & 58.1 \\ TENT [29] & 57.6 & 55.8 & 62.8 & 62.5 & 59.7 \\ CoTTA [31] & 56.6 & 57.0 & 63.6 & 63.7 & 60.2 \\ AdaContrast [2] & 62.2 & 62.4 & 67.7 & 68.1 & 65.1 \\ RMT [5] & 63.0 & 62.1 & 68.3 & 67.9 & 65.3 \\ \hline **pSTarC** & 62.7 & 63.6 & 67.6 & 68.1 & **65.5** \\ \hline \hline \end{tabular} \end{table} Table 6: Accuracy (%) of different TTA methods on the four domain shift sequences from DomainNet-126 in the CTTA setting. \begin{table} \begin{tabular}{l c c c c} \hline \hline \(L_{aug}\) & \(L_{attr}\) & \(L_{disp}\) & VisDA & DomainNet-126 \\ \hline ✓ & ✓ & & 68.8 & 58.8 \\ ✓ & & ✓ & 78.2 & 59.7 \\ & ✓ & ✓ & 80.0 & 63.0 \\ ✓ & ✓ & ✓ & **81.9** & **63.7** \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation study: Importance of each loss term. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline Method & gaussian & shot & impulse & defocus & glass & motion & zoom & snow & frost & fog & brightness & contrast & elastic & pixelate & jpeg & Average \\ \hline Source & 27 & 32 & 60.6 & 70.7 & 45.9 & 69.2 & 71.2 & 60.5 & 54.2 & 49.7 & 70.5 & 44.9 & 62.8 & 25.3 & 58.8 & 53.6 \\ BN Adapt [25] & 57.9 & 59.3 & 57.3 & 72.4 & 58.1 & 70.3 & 72.1 & 65.1 & 65 & 58.5 & 73.5 & 69.7 & 64.3 & 67.1 & 58.8 & 64.6 \\ TENT [29] & 62.8 & 64.2 & 58.3 & 62.1 & 48.8 & 51.7 & 51.5 & 41.6 & 36.3 & 28.9 & 29.6 & 17.7 & 12.0 & 11.5 & 9.6 & 39.1 \\ CoTTA [31] & 59.9 & 62.3 & 60.3 & 73.1 & 62.0 & 72.1 & 73.6 & 67.2 & 68.2 & 59.7 & 75.3 & 73.1 & 67.5 & 71.7 & 66.5 & 67.5 \\ AdaContrast [2] & 57.7 & 63.2 & 61.4 & 72.3 & 59.9 & 70.9 & 72.5 & 67.1 & 69.3 & 61.8 & 74.1 & 71.7 & 66.1 & 66.7 & 63.8 & 66.6 \\ RMT [5] & 59.5 & 63.9 & 63.7 & 72.3 & 66.1 & 71.5 & 73.6 & 71.0 & 71.0 & 67.5 & 74.9 & 72.6 & 71.8 & 73.7 & 70.7 & **69.6** \\ \hline **pSTarC** & 63.4 & 67.0 & 64.0 & 71.1 & 62.9 & 69.3 & 72.4 & 67.3 & 68.7 & 64.1 & 72.9 & 71.9 & 66.7 & 70.5 & 62.9 & 67.7 \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy (%) of different methods on 15 corruptions from CIFAR-100C dataset in CTTA setting. 
Performance on varying batch sizes:In TTA, it is crucial for the method to be able to continuously adapt even with very few samples. In this analysis, we vary the batch size from \(8\) to \(128\) and perform experiments on the DomainNet-126 dataset. Table8 reports the average accuracy across \(7\) domain shifts for each batch size. We observe that the proposed pSTarC consistently outperforms both TENT [29] and AdaContrast [2] for all batch sizes. The effect is more pronounced for the smallest batch size \(8\), where pSTarC outperforms TENT by a huge margin of 15.3% and AdaContrast by 4%. On average, pSTarC does better than TENT by 6.2% and AdaContrast by 1.7%. Complexity Analysis:Here, we analyse the complexity of pSTarC and three other recent TTA methods: AdaContrast [2], Source-Proxy-TTA [13] and C-SFDA [14] on VisDA dataset. In the TTA setting, it is desirable to have methods that requires storing less additional information due to memory limitations and privacy concerns. The prior methods AdaContrast [2] and C-SFDA [14] are based on the teacher student framework. Hence, it needs to store twice the number of model parameters, while we only store the updating model parameters in pSTarC, as we report in Table 9. AdaContrast stores a memory queue of size \(16384\) to collect key features (of dimension 256), and its pseudo labels, which is used to retrieve positives for contrastive learning. Alongside, they store another feature bank (of size 1024) and their corresponding scores which is used to retrieve neighbours for soft pseudo-labeling the target samples. Thus, the total memory buffer required for AdaContrast is 16384x(256+1)+1024x(256+12). [13] condenses the source data to save 25 images per class of size 112x112 for VisDA dataset. This accounts to a memory requirement of 37.6M (12x25x112x112). On the other hand, in the pSTarC framework, we only store \(20\) features per class and the corresponding scores resulting in a memory buffer of \(240\)x\((256+12)\). C-SFDA does not store any features or images. However, they need \(13\) forward passes (12 augmentations in addition to the actual test sample), while AdaContrast [2] and Source-Proxy-TTA [13] uses 3 augmentations, and pSTarC uses only two augmentations. We summarize this in Table 9, which shows that pSTarC is very efficient, in addition to achieving better or performance comparable to the state-of-the-art across several challenging settings. ## 6 Conclusion In this paper, we have proposed a novel framework termed pSTarC for Test Time Adaptation (TTA) of deep neural networks. pSTarC leverages the fixed source classifier to generate pseudo-source samples, which is then used to align the test samples, which enables the source trained classifier to classify test data from different distributions. Extensive experiments on several real-world domain shift datasets justify the effectiveness of our proposed framework. Additionally, we also show that the method can seamlessly be used in continual test time adaptation scenario, though there is still scope for improvement in the corruption datasets. Overall, our findings highlight the importance of target clustering techniques and leveraging the source classifier for improving test-time adaptation performance in several real-world challenging scenarios. AcknowledgementsThis work is partly supported through a research grant from SERB (SPF/2021/000118), Govt. of India. The first author is supported by Prime Minister's Research Fellowship awarded by Govt. of India. 
Figure 3: Overall comparison of pSTarC with TTA methods. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & \multicolumn{5}{c}{Batch size} & Average \\ & 8 & 16 & 32 & 64 & 128 & \\ \hline TENT & 38.8 & 55.4 & 58.6 & 59.1 & 58.9 & 54.2 \\ AdaContrast & 50.1 & 57.9 & 60.8 & 62.4 & 62.4 & 58.7 \\ \hline **pSTarC** & 54.1 & 59.2 & 61.3 & 63.8 & 63.7 & **60.4** \\ \hline \hline \end{tabular} \end{table} Table 8: Ablation on batch size using DomainNet-126.
2304.10471
WISDOM Project -- XV. Giant Molecular Clouds in the Central Region of the Barred Spiral Galaxy NGC 5806
We present high spatial resolution ($\approx24$ pc) Atacama Large Millimeter/sub-millimeter Array $^{12}$CO(2-1) observations of the central region of the nearby barred spiral galaxy NGC 5806. NGC 5806 has a highly structured molecular gas distribution with a clear nucleus, a nuclear ring and offset dust lanes. We identify $170$ spatially- and spectrally-resolved giant molecular clouds (GMCs). These clouds have comparable sizes ($R_{\mathrm{c}}$) and larger gas masses, observed linewidths ($\sigma_{\mathrm{obs,los}}$) and gas mass surface densities than those of clouds in the Milky Way disc. The size -- linewidth relation of the clouds is one of the steepest reported so far ($\sigma_{\mathrm{obs,los}}\propto R_{\mathrm{c}}^{1.20}$), the clouds are on average only marginally bound (with a mean virial parameter $\langle\alpha_{\mathrm{vir}}\rangle\approx2$), and high velocity dispersions are observed in the nuclear ring. These behaviours are likely due to bar-driven gas shocks and inflows along the offset dust lanes, and we infer an inflow velocity of $\approx120$ kms$^{-1}$ and a total molecular gas mass inflow rate of $\approx5$ M$_\odot$ yr$^{-1}$ into the nuclear ring. The observed internal velocity gradients of the clouds are consistent with internal turbulence. The number of clouds in the nuclear ring decreases with azimuthal angle downstream from the dust lanes without clear variation of cloud properties. This is likely due to the estimated short lifetime of the clouds ($\approx6$ Myr), which appears to be mainly regulated by cloud-cloud collision and/or shear processes. Overall, it thus seems that the presence of the large-scale bar and gas inflows to the centre of NGC 5806 affect cloud properties.
Woorak Choi, Lijie Liu, Martin Bureau, Michele Cappellari, Timothy A. Davis, Jindra Gensior, Fu-Heng Liang, Anan Lu, Thomas G. Williams, Aeree Chung
2023-04-20T17:19:02Z
http://arxiv.org/abs/2304.10471v2
WISDOM Project - XV. Giant Molecular Clouds in the Central Region of the Barred Spiral Galaxy NGC 5806 ###### Abstract We present high spatial resolution (\(\approx 24\) pc) Atacama Large Millimeter/sub-millimeter Array \({}^{12}\)CO(2-1) observations of the central region of the nearby barred spiral galaxy NGC 5806. NGC 5806 has a highly structured molecular gas distribution with a clear nucleus, a nuclear ring and offset dust lanes. We identify 170 spatially- and spectrally-resolved giant molecular clouds (GMCs). These clouds have comparable sizes (\(R_{\rm c}\)) and larger gas masses, observed linewidths (\(\sigma_{\rm obs,los}\)) and gas mass surface densities than those of clouds in the Milky Way disc. The size - linewidth relation of the clouds is one of the steepest reported so far (\(\sigma_{\rm obs,los}\propto R_{\rm c}^{1.20}\)), the clouds are on average only marginally bound (with a mean virial parameter \(\langle\alpha_{\rm vir}\rangle\approx 2\)), and high velocity dispersions are observed in the nuclear ring. These behaviours are likely due to bar-driven gas shocks and inflows along the offset dust lanes, and we infer an inflow velocity of \(\approx 120\) km s\({}^{-1}\) and a total molecular gas mass inflow rate of \(\approx 5\) M\({}_{\odot}\) yr\({}^{-1}\) into the nuclear ring. The observed internal velocity gradients of the clouds are consistent with internal turbulence. The number of clouds in the nuclear ring decreases with azimuthal angle downstream from the dust lanes without clear variation of cloud properties. This is likely due to the estimated short lifetime of the clouds (\(\approx 6\) Myr), which appears to be mainly regulated by cloud-cloud collision and/or shear processes. Overall, it thus seems that the presence of the large-scale bar and gas inflows to the centre of NGC 5806 affect cloud properties. keywords: galaxies: spiral and bar - galaxies:individual: NGC 5806 - galaxies: nuclei - galaxies: ISM - radio lines: ISM - ISM: clouds ## 1 Introduction As giant molecular clouds (GMCs) are the gas reservoirs where all star formation occurs, elucidating their life cycles is crucial to understand the formation and evolution of galaxies. Early GMC studies were conducted only in our own Milky Way (MW) and Local Group galaxies such as the Large Magellanic Cloud (LMC; e.g. Fukui et al., 2008), Small Magellanic Cloud (SMC; e.g. Muller et al., 2010), M 31 (e.g. Rosolowsky, 2007) and M 33 (e.g. Rosolowsky et al., 2003, 2007), showing that GMCs in those galaxies have properties similar to each other and follow the same size - linewidth relation (e.g. Larson, 1981; Bolatto et al., 2008). As the resolution and sensitivity of molecular line observations improved, GMC studies were extended to extragalactic objects, revealing deviations from the properties of Local Group galaxy GMCs (e.g. Bolatto et al., 2008; Rosolowsky et al., 2021). For instance, the cloud properties in some late-type galaxies (LTGs) vary depending on galactic environments and do not universally obey the usual scaling relations (e.g. M 51, Hughes et al., 2013; Colombo et al., 2014; NGC253, Leroy et al., 2015). The first study of GMCs in an early-type galaxy (ETG; NGC 4526, Utomo et al., 2015) revealed that the GMCs in that galaxy do not have a clear correlation between size and linewidth but are brighter, denser and have higher velocity dispersions than GMCs in the MW disc (MWd) and Local Group galaxies. On the other hand, Liu et al. 
(2021) recently reported that the GMCs in the ETG NGC 4429 have an unusually steep size - linewidth relation. These results indicate that galactic environment affects GMC properties, so more GMCs studies in galaxies with different morphologies and substructures are required to quantify these variations and understand the physics behind them. Barred disc galaxies are known to have gas streaming to their centres due to their non-axisymmetric gravitational potentials (e.g. Sormani et al., 2015). Several CO surveys have reported higher central molecular gas mass concentrations in barred than non-barred disc galaxies (e.g. Sakamoto et al., 1999; Sun et al., 2020). Recent high spatial resolution CO observations of barred disc galaxies have also shown that these objects possess several distinct structures mimicking those present at optical wavelengths (e.g. nuclear rings, bars and spiral arms), with non-circular motions (e.g. Salak et al., 2016; Bewkett, Beletic et al., 2021; Sato et al., 2021). Thus, barred disc galaxies allow to investigate the properties of GMCs (e.g. scaling relations) in different environments, particularly the bars themselves. Despite this, however, very few studies investigating GMCs in barred galaxies exist (e.g. Hirota et al., 2018; Maeda et al., 2020; Sato et al., 2021). As part of the mm-Wave Interferometric Survey of Dark Object Masses (WISDOM) project, we analyse here the properties and dynamics of individual GMCs in the centre of the barred spiral galaxy NGC 5806 located in the field. WISDOM aims to use the high angular resolution of Atacama Large Millimeter/sub-millimeter Array (ALMA) to study (1) the physical properties and dynamics of GMCs in the centres of galaxies and how these link to star formation (e.g. Liu et al., 2021, 2022; Lu et al., 2022) and (2) the masses of the supermassive black holes lurking at the centres of the same galaxies. This paper is structured as follows. In Section 2, we describe the data and methodology used to identify GMCs in NGC 5806. The cloud properties, their probability distribution functions and their mass distribution functions are discussed in Section 3. In Section 4, we investigate the kinematics of the clouds and their origins. In Section 5, we assess the dynamical states and degrees of virialisation of the clouds. We further discuss the morphology and velocity dispersion of the molecular gas, the formation, destruction, scaling relations and virialisation of the GMCs, the clouds in the nuclear ring and the CO-to-H\({}_{2}\) conversion factor in Section 6. We summarise our findings in Section 7. ## 2 Data and Cloud Identification ### Target NGC 5806 is a nearby barred spiral galaxy (SAB(s)b) located at R.A.=15\({}^{\rm h}\)00\({}^{\rm m}\)00\({}^{\rm s}\)5, Dec.= 1\({}^{\circ}\)53\({}^{\prime}\)30\({}^{\prime\prime}\) (J2000). Throughout this paper, we adopt a distance \(D=21.4\) Mpc for NGC 5806 (Cappellari et al., 2011), whereby 1\({}^{\prime\prime}\) corresponds to \(\approx 103\) pc. NGC 5806 has a total stellar mass of \(3.89\times 10^{10}\) M\({}_{\odot}\)(Salo & Laurikainen, 2017; Morales et al., 2018), a luminosity-weighted stellar velocity dispersion \(\sigma_{\star}=120\) km s\({}^{-1}\) within the central 10\({}^{\prime\prime}\)(Dumas et al., 2007), an inclination \(i=58^{\circ}\) and a position angle \(PA=166^{\circ}\). 
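For reference, these adopted geometric parameters (distance, inclination and position angle) are what convert sky offsets into the deprojected galactocentric radii used later for the clouds (\(R_{\rm gal}\)). A small illustrative Python sketch of that conversion is given below, assuming the standard thin-disc deprojection; the example offsets and function name are hypothetical.

```python
import numpy as np

D_MPC, INC_DEG, PA_DEG = 21.4, 58.0, 166.0
PC_PER_ARCSEC = D_MPC * 1e6 * np.pi / (180.0 * 3600.0)   # ~103.7 pc per arcsec

def deprojected_radius(d_ra_arcsec, d_dec_arcsec, pa_deg=PA_DEG, inc_deg=INC_DEG):
    """Deprojected galactocentric radius (pc) from sky offsets (arcsec) relative
    to the galaxy centre; d_ra is the (cos Dec-corrected) RA offset to the east."""
    pa, inc = np.radians(pa_deg), np.radians(inc_deg)
    # rotate so that one axis lies along the galaxy major axis (PA measured E of N)
    x_min = d_ra_arcsec * np.cos(pa) - d_dec_arcsec * np.sin(pa)   # minor-axis direction
    x_maj = d_ra_arcsec * np.sin(pa) + d_dec_arcsec * np.cos(pa)   # major-axis direction
    r_arcsec = np.hypot(x_maj, x_min / np.cos(inc))                # stretch minor axis by 1/cos i
    return r_arcsec * PC_PER_ARCSEC

# example: a point 1.5" east and 0.5" north of the nucleus (hypothetical offsets)
print(deprojected_radius(1.5, 0.5))
```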
The mass of molecular gas in the centre of NGC 5806 (27.4\({}^{\prime\prime}\) diameter) is \(\approx 10^{9}\) M\({}_{\odot}\) (Davis et al., 2022), and the total mass of atomic hydrogen has been measured by Haynes et al. (2018). The H i distribution traces the optical disc well (Mundell et al., 2007). Figure 1 shows the Sloan Digital Sky Survey (SDSS) three-colour image of NGC 5806 (left), a _Hubble Space Telescope_ (_HST_) Wide-Field and Planetary Camera 3 (WFPC3) F555W image (top-right) and the \({}^{12}\)CO(2-1) integrated intensity contours derived in Section 2.2 overlaid on the same _HST_ image (bottom-right). On large scales, NGC 5806 has a large-scale bar, an inner star-forming ring encircling the bar and weak spiral arms protruding from the bar. In the central region (i.e. well within the bar), NGC 5806 has a bright core and a star-forming nuclear ring that are prominent in both optical continuum and molecular gas emission. NGC 5806 has been classified as a Seyfert 2 galaxy (Dumas et al., 2007), while more recent integral-field spectroscopic observations reveal ionised gas with mixed ionisation mechanisms (Westoby et al., 2007, 2012; Erroz-Ferrer et al., 2019). Star formation is present only in the inner and nuclear rings, with a total star-formation rate (SFR) of 3.6 M\({}_{\odot}\) yr\({}^{-1}\) derived using a spectral energy distribution fitting code (Erroz-Ferrer et al., 2019). Dumas et al. (2007) estimated the mass of the central supermassive black hole to be \(\approx 1.2\times 10^{7}\) M\({}_{\odot}\) using the \(M_{\rm BH}-\sigma_{\star}\) relation of Tremaine et al. (2002). ### Data NGC 5806 was observed in the \({}^{12}\)CO(2-1) line (rest frequency 230.586 GHz) using ALMA as part of the WISDOM project. The observations were carried out using two different 12-m array configurations in October and December 2016 (programme 2016.1.00437.S, configurations C40-3 and C40-6, PI Davis) and the 7-m Atacama Compact Array (ACA) in July 2017 (programme 2016.2.00053.S, PI Liu), to achieve both high angular resolution and good flux recovery. The C40-3 configuration observations had 242 s on-source using 44 antennae and baselines of 15 - 600 m, leading to a maximum recoverable scale of 6.0\({}^{\prime\prime}\). The C40-6 configuration observations had 272 s on-source using 41 antennae and baselines of 15 - 1800 m, leading to a maximum recoverable scale of 1.3\({}^{\prime\prime}\). Both configurations have a primary beam of 27.4\({}^{\prime\prime}\) (full-width at half-maximum; FWHM). The correlator was set up with one spectral window of 1.875 GHz bandwidth (\(\approx 2400\) km s\({}^{-1}\)) and 3840 channels each of 488 kHz (\(\approx 0.6\) km s\({}^{-1}\)) used for the \({}^{12}\)CO(2-1) line observations, and the three remaining spectral windows of 2 GHz bandwidth used solely for continuum observations. The ACA observations had 1088 s on-source using 10 antennae and baselines of 8 - 43 m, leading to a maximum recoverable scale of 29.0\({}^{\prime\prime}\). The ACA observations have a primary beam of 45.7\({}^{\prime\prime}\). The correlator was set up with one spectral window of 2 GHz bandwidth (\(\approx 2600\) km s\({}^{-1}\)) and 2048 channels each of 977 kHz (\(\approx 1.3\) km s\({}^{-1}\)) used for the \({}^{12}\)CO(2-1) line observations, and the three remaining spectral windows of 2 GHz bandwidth used solely for continuum observations. 
#### 2.2.1 Data reduction The raw data of each configuration were calibrated using the standard ALMA pipeline provided by ALMA regional centre staff, using Common Astronomy Software Applications (casa; McMullin et al., 2007) version 4.7.0. To combine the different configurations and obtain optimal sensitivity and spatial resolution for our science goals, we manually applied a low weighting (0.2) to the shorter baseline 12-m data (C40-3) and a higher weighting (1.0) to the longer baseline 12-m data (C40-6). Using casa version 5.6.1, we then combined these with the ACA data using the casa task concat with default weighting. Although continuum emission is not detected (see below), we subtracted any continuum that may be present using the casa task uvcontsub. We then cleaned the data using the tclean task interactively, to a depth equal to the root-mean-square (RMS) noise of the dirty cube, and imaged the cleaned components using Briggs weighting with a robust parameter of 0.5. Finally, we achieved a synthesised beam of \(\theta_{\rm maj}\times\theta_{\rm min}=0.25^{\prime\prime}\times 0.22^{\prime\prime}\) (\(25.7\times 22.6\) pc\({}^{2}\)) at a position angle of \(48^{\circ}\). Pixels of 0.05\({}^{\prime\prime}\) were chosen as a compromise between spatial sampling and image size, resulting in approximately \(5\times 4.5\) pixels across the synthesised beam. We thus created a fully calibrated and cleaned cube encompassing most of the primary beam spatially, with \(2\,{\rm km\,s^{-1}}\) (binned) channels spectrally. The RMS noise of this cube is \(\sigma_{\rm rms}=0.86\) mJy beam\({}^{-1}\) (\(0.85\) K) per channel. As mentioned above, no continuum emission is detected in NGC 5806. To establish an upper limit, we created a continuum image using the tclean task in casa and Briggs weighting with a robust parameter of 0.5, resulting in a synthesised beam of \(0.20^{\prime\prime}\times 0.18^{\prime\prime}\). Averaging over the entire line-free bandwidth (\(\approx 6.3\) GHz), the resulting RMS noise is \(25\)\(\mu\)Jy beam\({}^{-1}\) at a central frequency of 238.351 GHz. #### 2.2.2 Moment maps Figure 2 shows the zeroth-moment (total intensity) map (top-left), first-moment (intensity-weighted mean velocity) map (top-middle) and second-moment (intensity-weighted velocity dispersion) map (top-right) of the \({}^{12}\)CO(2-1) line of NGC 5806. To generate these maps, we utilised a smooth-moment masking method (e.g. Dame, 2011). In brief, we convolved the data cube spatially with a Gaussian of width equal to that of the synthesised beam and Hanning-smoothed the cube spectrally. We then only selected pixels with an intensity above 1.5 times the RMS noise of the smoothed cube to create a mask, and applied this mask to the original data cube to create the moment maps. The integrated intensity map reveals a highly structured molecular gas distribution. In particular, molecular gas is associated with the nucleus at the very centre of the galaxy, the particularly dusty part of the bright optical nuclear ring and the bi-symmetric offset dust lanes of the large-scale bar (stretching to the north and south; see Figure 1). In addition, the integrated intensity is high at the interfaces between the offset dust lanes and the nuclear ring, and it decreases gradually as a function of the azimuthal angle in a counter-clockwise direction. 
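The smooth-mask moment procedure described above can be sketched with standard numpy/scipy operations. The synthetic cube, smoothing widths and noise level below are illustrative stand-ins for the actual ALMA cube and synthesised beam, not the reduction scripts used for this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d

rng = np.random.default_rng(0)
nv, ny, nx = 60, 64, 64
vel = np.arange(nv) * 2.0                        # 2 km/s channels
cube = rng.normal(0.0, 0.85, (nv, ny, nx))       # noise at ~0.85 K rms
cube[25:35, 28:36, 28:36] += 5.0                 # a fake emission blob

# 1) smooth spatially with a beam-sized Gaussian and Hanning-smooth spectrally
smooth = gaussian_filter(cube, sigma=(0.0, 2.0, 2.0))          # beam width assumed
smooth = convolve1d(smooth, np.array([0.25, 0.5, 0.25]), axis=0)

# 2) mask pixels below 1.5 x the rms of the smoothed cube
rms = np.std(smooth[:10])                        # noise from line-free channels
masked = np.where(smooth > 1.5 * rms, cube, 0.0)

# 3) moments computed from the masked, original-resolution cube
mom0 = masked.sum(axis=0) * 2.0                  # integrated intensity (K km/s)
with np.errstate(invalid="ignore", divide="ignore"):
    mom1 = (masked * vel[:, None, None]).sum(axis=0) / masked.sum(axis=0)
    mom2 = np.sqrt((masked * (vel[:, None, None] - mom1) ** 2).sum(axis=0)
                   / masked.sum(axis=0))
```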
Figure 1: Left: SDSS three-colour (\(gri\)) image of NGC 5806, \(2.6^{\prime}\times 2.6^{\prime}\) (\(16.4\times 16.4\) kpc\({}^{2}\)). Top-right: unsharp-masked _HST_ WFPC3 F555W image of a \(2\times 2\) kpc\({}^{2}\) region around the nucleus. Bottom-right: as above, but overlaid with cyan \({}^{12}\)CO(2-1) integrated intensity contours from our ALMA observations. The molecular gas is co-spatial with the bright nucleus, nuclear ring and offset dust lanes. The mean velocity map clearly shows that the northern side of the ring is blue-shifted while the southern side is red-shifted with respect to the systemic velocity \(V_{\rm sys}=1360\) km s\({}^{-1}\) (as determined from H i line emission; Springob et al., 2005). The eastern and western sides of the ring also show blue- and red-shifted velocities along the spiral arms, indicating deviations from circular motions, leading to a complex velocity field. The velocity dispersion of the molecular gas is generally higher (\(0-60\) km s\({}^{-1}\)) than that of nearby galaxies (e.g. Wilson et al., 2011; Mogotsi et al., 2016; Sun et al., 2018). In particular, the velocity dispersions at the interfaces between the offset dust lanes and the nuclear ring are higher (\(30-50\) km s\({}^{-1}\)) than those in other parts of the nuclear ring (\(0-20\) km s\({}^{-1}\)), indicating that these environments are likely to be different from each other. The nucleus also shows high velocity dispersions (\(30-60\) km s\({}^{-1}\)). The complex velocity field and high velocity dispersions of NGC 5806 are further discussed in Section 6.1. The bottom-left panel of Figure 2 shows the integrated CO spectrum of a \(9^{\prime\prime}\times 9^{\prime\prime}\) central region, revealing multiple peaks and thus again suggesting a complex molecular gas distribution and kinematics. The total \({}^{12}\)CO(2-1) flux in that region is \(\approx 300\) Jy km s\({}^{-1}\). #### 2.2.3 Region definitions Based on the moment maps, we divide the galaxy into four distinct regions, referred to as follows (see Figure 3): nucleus (blue), arcs (green), nodes (red) and dust lanes (yellow). The nucleus encompasses only the inner 125 pc in radius, the arcs refer to the parts of the nuclear ring where the velocity dispersions are relatively low, the nodes refer to the parts of the nuclear ring that are at the interfaces between the nuclear ring and the offset dust lanes and where the velocity dispersions are relatively high, and the dust lanes indicate the offset dust lanes in the optical image that are characteristic of barred disc galaxies (e.g. Athanassoula, 1992). We note that we will use the term nuclear ring only to refer to the ring in its entirety, encompassing both the arcs and the nodes. Figure 2: Moment maps of the \({}^{12}\)CO(2-1) emission of NGC 5806. Top-left: zeroth-moment (integrated intensity) map. Top-middle: first-moment (intensity-weighted mean velocity) map. Top-right: second-moment (intensity-weighted velocity dispersion) map. Bottom: Integrated \({}^{12}\)CO(2-1) spectrum, extracted from a \(9^{\prime\prime}\times 9^{\prime\prime}\) region around the galaxy centre. The synthesised beam of \(0^{\prime\prime}25\times 0^{\prime\prime}22\) (\(25.7\times 22.6\) pc\({}^{2}\)) is shown in the bottom-left corner of each moment map. 
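In practice, cloud membership in these regions can be assigned with a radius cut for the nucleus and simple point-in-polygon tests for the other regions. The sketch below illustrates this; the polygon vertices are placeholders, not the actual boundaries drawn in Figure 3.

```python
import numpy as np
from matplotlib.path import Path

ARCSEC_TO_PC = 103.0          # 1 arcsec ~ 103 pc at the adopted distance

# placeholder polygons (arcsec offsets from the nucleus); the real boundaries
# are the coloured polygons of Figure 3
region_polygons = {
    "node_N":    Path([(-1.0, 1.5), (1.5, 1.5), (1.5, 3.5), (-1.0, 3.5)]),
    "arc_E":     Path([(1.5, -2.0), (3.5, -2.0), (3.5, 1.5), (1.5, 1.5)]),
    "dust_lane": Path([(-0.5, 3.5), (1.0, 3.5), (1.0, 8.0), (-0.5, 8.0)]),
}

def assign_region(dx_arcsec, dy_arcsec):
    """Assign a cloud centre to a region: the nucleus is the inner 125 pc in
    radius, the other regions are point-in-polygon tests."""
    if np.hypot(dx_arcsec, dy_arcsec) * ARCSEC_TO_PC < 125.0:
        return "nucleus"
    for name, poly in region_polygons.items():
        if poly.contains_point((dx_arcsec, dy_arcsec)):
            return name
    return "unassigned"

print(assign_region(0.4, -0.6))   # -> "nucleus" for this placeholder geometry
```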
### Cloud identification We utilise our own modified version of the cpropstoo algorithm (Liu et al., 2021), itself an updated version of CPROPS (Rosolowsky and Leroy, 2006; Leroy et al., 2015), to identify the clouds of NGC 5806. Our version of cpropstoo has fewer free parameters, leading to a more efficient and robust cloud identification in complex and crowded environments. We refer the reader to Liu et al. (2021) for full details of our version of cpropstoo. We introduce here the main steps and parameters of the algorithm. First, the algorithm calculates the spatially-varying noise in the cube and generates a three-dimensional (3D) mask of bright emission. The mask initially includes only pixels for which two adjacent channels are above 2.5 \(\sigma_{\rm rms}\). The mask is then expanded to include all neighbouring pixels for which two adjacent channels are above 1.5 \(\sigma_{\rm rms}\). The individual regions identified are referred to as "islands". To remove noise peaks, we exclude all islands with projected areas less than two synthesised beams. We also apply the same criteria to the inverted data cube to verify the reliability of our island identification.

Figure 3: \({}^{12}\)CO(2-1) integrated intensity map of NGC 5806 with identified GMCs overlaid. Dark blue (cyan) ellipses indicate resolved (unresolved) clouds. Blue (nucleus), green (arcs), red (nodes) and yellow (dust lanes) polygons indicate the four regions defined in Section 2.2.3.

Second, the islands identified are decomposed into individual structures, which we refer to as clouds. Local maxima (i.e. cloud candidates) are identified within running \(3\times 3\times 3\) pix\({}^{3}\) subsets of the cube (i.e. \(0.15^{\prime\prime}\times 0.15^{\prime\prime}\times 6\) km s\({}^{-1}\) sub-cubes). To eliminate noise peaks and outliers, we also require the total emission in each \(3\times 3\times 3\) pix\({}^{3}\) sub-cube to be greater than that in the eight spatially-neighbouring sub-cubes. We then run cpropstoo, setting the minimum number of channels spanned by each cloud (\(minvchan=2\)) and the minimum contrast between a cloud's peak and its boundary (\(\Delta T_{\rm max}=2\,\sigma_{\rm rms}=1.7\) K). Individual cloud candidates have to occupy a minimum area within which all emission is uniquely associated, as dictated by two parameters: \(minarea\) (minimum cloud area) and \(minpix\) (minimum number of pixels). However, biases can occur depending on \(minarea\) and \(minpix\), e.g. small structures may be missed when these two parameters are set high, whereas large structures may be missed when they are set low. To minimise this potential bias, rather than using a single value we assign both parameters a range of 96 to 24 spaxels (the latter being approximately the synthesised beam area). The code searches for clouds from the largest \(minarea\) (96 spaxels) and \(minpix\) (96 pixels) to the smallest \(minarea\) (24 spaxels) and \(minpix\) (24 pixels) with a step size of 24 spaxels (or pixels). This modification allows us to reduce the arbitrariness of the search area. To counteract the weakness of the algorithm, which is likely to ignore significant sub-structures of large clouds, Liu et al. (2021) introduced an additional parameter, \(convexity\), defined as the ratio of the volume of a cloud's 3D intensity distribution to that of its convex envelope. When \(convexity\approx 1\), the cloud has only one intensity peak, while the smaller the \(convexity\) the more significant the sub-structures. 
In this work, we set \(convexity=0.5\) after testing a range of \(0.4-0.8\). Values in the range \(0.5-0.7\) are typical (Liu et al., 2021). This parameter allows us to identify structures over multiple scales with less arbitrariness. As a result, we identify 366 GMCs, 170 of which are both spatially and spectrally resolved, as shown in Figure 3. We note that two resolved clouds do not belong to any of the four regions defined. ## 3 Cloud Properties ### GMC properties Following the standard cpropstoo/cprops definitions (Rosolowsky and Leroy, 2006), we calculate the physical properties of the clouds identified. We list the (intensity-weighted) properties of each cloud in Table 1, including each cloud's central position (R.A. and Dec.), mean local standard of rest velocity (\(V_{\rm LSR}\)), size (radius \(R_{\rm c}\)), observed velocity dispersion (\(\sigma_{\rm obs,los}\)), gradient-subtracted velocity dispersion (\(\sigma_{\rm gs,los}\); see Liu et al., 2021), \({}^{12}\)CO(2-1) luminosity (\(L_{\rm CO(2-1)}\)), molecular gas mass (\(M_{\rm gas}\)), peak intensity (\(T_{\rm max}\)), projected angular velocity (\(\omega_{\rm obs}\)), position angle of the rotation axis (\(\phi_{\rm rot}\); see Section 4.1) and deprojected distance from the galaxy centre (\(R_{\rm gal}\)). Some quantities are discussed below, but see also Liu et al. (2021). The cloud size (\(R_{\rm c}\)) is defined as \[R_{\rm c}\equiv\eta\sqrt{\sigma_{\rm maj,dc}\,\sigma_{\rm min,dc}}\ \, \tag{1}\] where \(\eta\) is a geometric parameter, \(\sigma_{\rm maj,dc}\) and \(\sigma_{\rm min,dc}\) are the de-convolved RMS spatial extents along the major and the minor axis of the cloud, respectively, and we adopt \(\eta=1.91\) for consistency with earlier studies (e.g. Solomon et al., 1987; Utomo et al., 2015; Liu et al., 2021). The observed velocity dispersion (\(\sigma_{\rm obs,los}\)) is calculated as \[\sigma_{\rm obs,los}\equiv\sqrt{\left(\sigma_{\rm v}^{2}-(\Delta V_{\rm chan}^{2}/2\pi)\right)}\, \tag{2}\] where \(\sigma_{\rm v}\) is the second velocity moment and \(\Delta V_{\rm chan}\) the channel width of the data cube. The molecular gas mass (\(M_{\rm gas}\)) is calculated from the \({}^{12}\)CO(2-1) luminosity (\(L_{\rm CO(2-1)}\)), itself obtained from the \({}^{12}\)CO(2-1) flux (\(F_{\rm CO(2-1)}\)) by \[\left(\frac{L_{\rm CO(2-1)}}{\rm K\ km\ s^{-1}\ pc^{2}}\right)=\left(\frac{3.25\times 10^{7}}{(1+z)^{3}}\right)\left(\frac{F_{\rm CO(2-1)}}{\rm Jy\ km\ s^{-1}}\right)\left(\frac{\nu_{\rm obs}}{\rm GHz}\right)^{-2}\left(\frac{D}{\rm Mpc}\right)^{2}\, \tag{3}\] where \(z\) is the galaxy redshift and \(\nu_{\rm obs}\) the observed line frequency. To convert the CO luminosity to a molecular gas mass, we adopt a \({}^{12}\)CO(2-1)/\({}^{12}\)CO(1-0) ratio \(R_{21}=1\) in temperature units, within the range typically found in the central regions of (barred) spiral galaxies (0.8 - 1.2; e.g. Crosthwaite et al., 2002), and a CO-to-H\({}_{2}\) conversion factor \(X_{\rm CO}=2\times 10^{20}\,\rm cm^{-2}\,(\rm K\,km\,s^{-1})^{-1}\), equivalent to a \({}^{12}\)CO(2-1) conversion factor \(\alpha_{\rm CO(2-1)}\approx 4.4\,\rm M_{\odot}\,(\rm K\,km\,s^{-1}\ pc^{2})^{-1}\). 
This yields \[\left(\frac{M_{\rm gas}}{\rm M_{\odot}}\right) =\frac{X_{\rm CO}}{R_{21}}\,\left(\frac{L_{\rm CO(2-1)}}{\rm K\ km \ s^{-1}\ pc^{2}}\right) \tag{4}\] \[\approx\ 4.4\,\left(\frac{L_{\rm CO(2-1)}}{\rm K\ km\ s^{-1}\ pc^{2}} \right)\ \.\] In this work, we also use a second measure of the velocity dispersion, the gradient-subtracted velocity dispersion \(\sigma_{\rm gs,los}\) introduced in previous GMC studies (Utomo et al., 2015; Liu et al., 2021). This quantity is calculated as follows. First, we calculate the intensity-weighted mean velocity at each spaxel of a cloud, and measure its offset with regards to the mean velocity at the cloud centre. Second, we shift the spectrum at each spaxel to match its mean velocity to that of the cloud centre. Finally, we calculate the second moment of the shifted emission summed over the whole cloud, and extrapolate it to \(T_{\rm edge}=0\) K. This new \(\sigma_{\rm gs,los}\) measure thus quantifies the turbulent motions within the cloud, with any bulk motion removed. The uncertainties of all cloud properties are estimated via a bootstrapping technique as in Liu et al. (2021), with 500 samples. The uncertainty of the galaxy distance \(D\) is not considered, as an error of the distance translates to a systematic scaling of some quantities, i.e. \(R_{\rm c}\propto D\), \(L_{\rm CO(2-1)}\propto D^{2}\), \(M_{\rm gas}\propto D^{2}\), \(\omega_{\rm obs}\propto D^{-1}\), \(M_{\rm vir}\propto D\) (Section 5) and \(R_{\rm gal}\propto D\). ### Distributions of GMC properties Figure 4 shows the number distributions of cloud size (\(R_{\rm c}\)), gas mass (\(M_{\rm gas}\)), observed velocity dispersion (\(\sigma_{\rm obs,los}\)) and gas mass surface density (\(\Sigma_{\rm gas}\equiv M_{\rm gas}/\pi R_{\rm c}^{2}\)) for the resolved clouds of NGC 5806. As described above, we divide the clouds into four groups, one for each spatial region within the galaxy. In each panel, the black histogram (data) and curve (Gaussian fit) show the full sample, while the blue, green, red and yellow colours show those of the clouds in the nucleus, arcs, nodes and offset dust lanes only, respectively. The sizes (\(R_{\rm c}\)) of the resolved clouds of NGC 5806 range from 15 to 85 pc (top-left panel of Figure 4). The mean of the Gaussian fit is \(27.8\pm 0.7\) pc and the standard deviation 9.4 pc, while the median radius is 30.2 pc. The resolved clouds have gas masses \(M_{\rm gas}\) ranging from \(1.2\times 10^{5}\) to \(3.6\times 10^{7}\) M\({}_{\odot}\) (top-right panel of Figure 4). The mean of the Gaussian fit to the \(\log(M_{\rm gas}/\rm M_{\odot})\) distribution is \(5.66\pm 0.04\) (\(\approx 4.6\times 10^{5}\) M\({}_{\odot}\)) and the standard deviation 0.4, while the median gas mass is \(5.5\times 10^{5}\) M\({}_{\odot}\). About one third (49/170) of the resolved clouds are massive (\(M_{\rm gas}\geq 10^{6}\) M\({}_{\odot}\)). The observed velocity dispersions of the resolved clouds range from 1.6 to 30 km s\({}^{-1}\) (bottom-left panel of Figure 4). The mean of the Gaussian fit is \(5.2\pm 0.2\) km s\({}^{-1}\) and the standard deviation 2.7 km s\({}^{-1}\), while the median observed velocity dispersion is 5.6 km s\({}^{-1}\). The gas mass surface densities of the resolved clouds range from 80 to 1000 M\({}_{\odot}\) pc\({}^{-2}\) (bottom-right panel of Figure 4). 
The mean of the Gaussian fit to the \(\log(\Sigma_{\rm gas}/\rm M_{\odot}\) pc\({}^{-2}\)) distribution is \(2.29\pm 0.02\) (\(\approx 195\) M\({}_{\odot}\)) and the standard deviation 0.2, while the median gas mass surface density is 2.3 (\(\approx 200\) M\({}_{\odot}\)). There are slight variations of all four quantities across the four regions. The clouds in the arcs and nodes tend to be larger than the clouds in the nucleus and offset dust lanes (median radius \(\approx 38\) and 36 pc vs. \(\approx 27\) and 27 pc), more massive (median gas mass \(\approx 10^{6.1}\) and \(10^{5.9}\) M\({}_{\odot}\) vs. \(\approx 10^{5.5}\) and \(10^{5.6}\) M\({}_{\odot}\)) and more turbulent (median observed velocity dispersion \(\approx 7.5\) and 6.1 km s\({}^{-1}\) vs. \(\approx 3.6\) and 5.1 km s\({}^{-1}\)). We also identified two clouds that have exceptionally large velocity dispersions (\(\approx 12\) and 29 km s\({}^{-1}\)) in the nucleus, despite not being the largest and/or most massive clouds, indicating that those clouds are likely to be affected by their surrounding environment, e.g. the active galactic nucleus (AGN) and/or strong galactic shear. The median gas mass surface density of the clouds in the arcs (\(\langle\Sigma_{\rm gas}\rangle\approx 280\) M\({}_{\odot}\) pc\({}^{-2}\)) is larger than that of the clouds in the other three regions (\(\langle\Sigma_{\rm gas}\rangle\approx 190\) M\({}_{\odot}\) pc\({}^{-2}\)). The resolved clouds of NGC 5806 have sizes comparable to and masses slightly larger than those of the clouds in the MWd (\(R_{\rm c}=30-50\) pc and \(M_{\rm gas}=10^{4.5}-10^{7.5}\) M\({}_{\odot}\), with \(\leq 20\) pc spatial resolution; Rice et al., 2016; Miville-Deschenes et al., 2017), but they have sizes and masses larger than those of the clouds in the central molecular zone (CMZ; \(R_{\rm c}=5-15\) pc and \(M_{\rm gas}=10^{3.3}-10^{6}\) M\({}_{\odot}\), with \(\leq 1.5\) pc resolution; Oka et al., 2001; Kauffmann et al., 2017). On the contrary, the velocity dispersions of the NGC 5806 clouds are slightly larger and smaller than those of the clouds in the MWd (\(1-6\) km s\({}^{-1}\); Heyer et al., 2009) and the CMZ (12 - 50 km s\({}^{-1}\); Oka et al., 1998), respectively. Most clouds in late-type galaxies have comparable sizes (\(20-200\) pc), masses (\(10^{4.5}\) - \(10^{7.5}\) M\({}_{\odot}\)) and observed velocity dispersions (\(2-10\) km s\({}^{-1}\); \(10\) - \(60\) pc resolution; e.g. Donovan Meyer et al., 2012; Hughes et al., 2013; Rebolledo et al., 2015; Liu et al., 2023), while clouds in ETGs have slightly smaller sizes (\(5-50\) pc) and masses (\(10^{4.4}-10^{6.6}\) M\({}_{\odot}\)) but comparable observed velocity dispersions (\(2-20\) km s\({}^{-1}\)) to those of the clouds in NGC 5806 (\(\lesssim 20\) pc resolution; Utomo et al., 2015; Liu et al., 2021). Overall, the clouds in the nucleus generally are the smallest, least massive, least turbulent and have the smallest surface densities. On the other hand, the clouds in the arcs and nodes are the largest, most massive, most turbulent and have the largest surface densities. The clouds in the offset dust lanes have intermediate properties. ### GMC cumulative mass functions The mass function of GMCs is a tool to diagnose GMC populations and provides constraints on GMC formation and destruction (e.g. Rosolowsky and Blitz, 2005; Colombo et al., 2014). 
Here we use the gas mass rather than the virial mass to calculate the mass function, as the former is well defined even for spatially-unresolved clouds, and no assumption on the dynamical state of the clouds is required. The cumulative mass functions are fit with both a power-law function \[N(M^{\prime}>M)=\left(\frac{M}{M_{0}}\right)^{\gamma+1}\ \, \tag{4}\] \[\] where \(N(M^{\prime}>M)\) is the number of clouds with a mass greater than \(M\), \(M_{0}\) sets the normalisation and \(\gamma\) is the power-law index, and a truncated power-law function \[N(M^{\prime}>M)=N_{0}\left[\left(\frac{M}{M_{0}}\right)^{\gamma+1}-1\right]\ \, \tag{6}\] where \(M_{0}\) is now the cut-off mass and \(N_{0}\) is the number of clouds with a mass \(M>2^{1/(\gamma+1)}\,M_{0}\). To fit each cumulative mass function, we apply the "error-in-variables" method of Rosolowsky and Blitz (2005), and the fitting parameters and their uncertainties are estimated via bootstrapping. Fits are only performed above the mass completeness limit of \(M_{\rm Comp}=2.4\times 10^{5}\) M\({}_{\odot}\). We calculate the mass completeness limit using the minimum mass (\(M_{\rm min}\)) of the resolved clouds and the observational sensitivity, i.e. \(M_{\rm Comp}\equiv M_{\rm min}+10\delta_{\rm M}\) (e.g. Colombo et al., 2014; Liu et al., 2021), where the contribution to the mass due to noise, \(\delta_{\rm M}=1.03\times 10^{4}\) M\({}_{\odot}\), is estimated by multiplying our RMS gas mass surface density sensitivity of \(17.8\) M\({}_{\odot}\) pc\({}^{-2}\) by the synthesised beam area of 565 pc\({}^{2}\).

Table 1: Properties of the identified clouds of NGC 5806, listing for each cloud its ID, position (RA and Dec., J2000), local standard of rest velocity \(V_{\rm LSR}\), radius \(R_{\rm c}\), observed and gradient-subtracted velocity dispersions \(\sigma_{\rm obs,los}\) and \(\sigma_{\rm gs,los}\), CO(2-1) luminosity \(L_{\rm CO(2-1)}\), gas mass \(M_{\rm gas}\), peak brightness temperature \(T_{\rm max}\), projected angular velocity \(\omega_{\rm obs}\), rotation axis position angle \(\phi_{\rm rot}\) and galactocentric radius \(R_{\rm gal}\).

Figure 5 shows the cumulative mass function of all identified clouds (black data points), with the best-fitting truncated power-law (black solid line) and non-truncated power-law (black dashed line) overlaid. The mass functions of the clouds in each region are also shown in colours. The best-fitting slopes of the truncated and non-truncated power laws are \(\gamma=-1.72\pm 0.12\) and \(\gamma=-1.86\pm 0.06\), respectively.
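The paper fits Equations 5 and 6 with the "error-in-variables" method of Rosolowsky and Blitz (2005); the sketch below is a simplified stand-in that fits the truncated power law by nonlinear least squares and bootstraps the cloud sample for the uncertainties (the cloud mass array `m_gas` and the completeness limit are assumed inputs).

```python
import numpy as np
from scipy.optimize import curve_fit

def trunc_power_law(m, gamma, m0, n0):
    # N(M' > M) = N0 [ (M / M0)^(gamma + 1) - 1 ]   (Equation 6)
    return n0 * ((m / m0) ** (gamma + 1.0) - 1.0)

def fit_mass_function(m_gas, m_comp=2.4e5, n_boot=500, seed=0):
    """Bootstrap fit of the cumulative mass function above the completeness limit."""
    rng = np.random.default_rng(seed)
    fits = []
    for _ in range(n_boot):
        sample = rng.choice(m_gas, size=m_gas.size, replace=True)
        m = np.sort(sample[sample >= m_comp])[::-1]        # masses, descending
        n_gt = np.arange(1, m.size + 1, dtype=float)       # empirical N(M' > M)
        try:
            p, _ = curve_fit(trunc_power_law, m, n_gt,
                             p0=(-1.7, m.max(), 5.0), maxfev=10000)
            fits.append(p)
        except RuntimeError:
            continue
    fits = np.array(fits)
    return fits.mean(axis=0), fits.std(axis=0)   # (gamma, M0, N0) and uncertainties
```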
Although neither the truncated nor the non-truncated power law fits well at large masses, due to the bump around \(10^{7}\) M\({}_{\odot}\), both slopes are shallower than that of the mass function of the clouds in the MWd (\(-2.20\pm 0.1\); Rice et al., 2016), M 51 (\(-2.30\pm 0.1\); Colombo et al., 2014), the outer regions of M 33 (\(-2.10\pm 0.1\); Rosolowsky et al., 2007) and the ETGs NGC 4526 (\(-2.39\pm 0.03\); Utomo et al., 2015) and NGC 4429 (\(-2.18\pm 0.21\); Liu et al., 2021), but they are similar to those of the clouds in the MW centre (\(-1.60\pm 0.1\); Rice et al., 2016), the spiral arms of M 51 (\(-1.79\pm 0.09\); Colombo et al., 2014), the inner regions of M 33 (\(-1.80\pm 0.1\); Rosolowsky et al., 2007), NGC 300 (\(-1.80\pm 0.07\); Faesi et al., 2016) and Local Group galaxies (\(\approx-1.7\); Blitz et al., 2007). The GMC mass functions of the different regions are somewhat different from each other. We note that, to avoid a failure of some of the fits, the five most massive clouds from each of the nucleus and the nodes were excluded from the fits (although the nucleus truncated power-law fit still fails). The GMCs of the nucleus, arcs, nodes and dust lanes have best-fitting non-truncated power-law slopes \(\gamma\) of \(-2.22\pm 0.39\), \(-1.63\pm 0.06\), \(-2.04\pm 0.27\) and \(-2.27\pm 0.14\), respectively.

Figure 4: Number distributions of \(R_{\rm c}\), \(\log(M_{\rm gas}/{\rm M}_{\odot})\), \(\sigma_{\rm obs,los}\) and \(\log(\Sigma_{\rm gas}/{\rm M}_{\odot}\ {\rm pc}^{-2})\) with their Gaussian fits overlaid for the 170 resolved clouds of NGC 5806 (black lines and histograms), and for the clouds in the nucleus (blue), arcs (green), nodes (red) and dust lanes (yellow) only. The black arrows in the top-left and bottom-left panels indicate our ability to resolve clouds spatially (\(\eta\sqrt{\sigma_{\rm maj}\,\sigma_{\rm min}}\), where \(\sigma_{\rm maj(min)}\equiv\theta_{\rm maj(min)}/2.35\)) and spectrally (channel width of 2 km s\({}^{-1}\)), respectively.

Due to the limited number of clouds (i.e. the small sample size) in each region, and the fact that our modified cpropstoo code identifies GMCs with a fixed convexity over multiple scales (leading to bumpy mass functions), our best-fitting slopes for each region have large uncertainties and the fits do not always seem to represent the mass functions well, especially in the nucleus and nodes. Despite these limitations, however, the cloud mass functions of the nodes and arcs (i.e. nuclear ring) are significantly shallower than those of the nucleus and dust lanes. The former have a slope shallower than or close to \(-2\), while the latter have a slope steeper than \(-2\). This implies that massive GMCs preferentially reside along the nuclear ring, whereas the mass budgets in the nucleus and dust lanes are dominated by less-massive GMCs. It also suggests that either the dust lanes lack an efficient cloud growth mechanism or they have an efficient cloud destruction mechanism. It seems that the evolution and formation of GMCs are influenced by the different galactic environments, and thus different GMC populations may exist in the galaxy. In M 51, Colombo et al. (2014) also reported that the galactic environment can affect not only the physical properties of the clouds but also their cumulative mass function, reporting a sharp truncation of the mass function at high masses (\(\approx 10^{6.5}\) M\({}_{\odot}\)) in the nuclear bar (\(\approx 1\) kpc diameter) compared to other regions (e.g. spiral arms).
They suggested that galactic shear is likely to be a main driver of cloud destruction in the nuclear bar. In any case, both these results and ours imply that the galactic environment can influence the evolution and formation of GMCs.

## 4 Cloud kinematics

### Velocity gradients of individual clouds

Previous GMC studies have shown that the velocity gradients of GMCs can reveal internal cloud rotation (e.g. Blitz, 1993; Phillips, 1999; Rosolowsky et al., 2003; Rosolowsky, 2007; Utomo et al., 2015; Liu et al., 2021). Because clouds in external galaxies are usually poorly spatially resolved, solid-body rotation provides an adequate description of the observations. As in previous studies, we thus quantify the observed velocity gradient by fitting a plane to the intensity-weighted first-moment map of each cloud. Although the rotation is not necessarily intrinsically solid body (i.e. the angular velocity may vary with radius within each cloud), the parameter \(\omega_{\rm obs}\) defined below nevertheless provides a useful single quantity to quantify the bulk rotation of individual clouds: \[\tilde{v}(x,y)=ax+by+c\;\;, \tag{7}\] where \(a\) and \(b\) are the projected velocity gradients along the \(x\)- and \(y\)-axis on the sky, respectively, and \(c\) is a zero point, that we determine using the Interactive Data Language code mpfit2dfun (Markwardt, 2009). We can thus calculate the projected angular velocity \(\omega_{\rm obs}\) and position angle of the rotation axis \(\phi_{\rm rot}\): \[\omega_{\rm obs}=\sqrt{a^{2}+b^{2}} \tag{8}\] and \[\phi_{\rm rot}=\tan^{-1}(b/a)\;\;, \tag{9}\] where the uncertainties of \(\omega_{\rm obs}\) and \(\phi_{\rm rot}\) are estimated from the uncertainties of the parameters \(a\) and \(b\) using standard error propagation rules. Table 1 lists the best-fitting \(\omega_{\rm obs}\) and \(\phi_{\rm rot}\). The projected velocity gradients \(\omega_{\rm obs}\) of the 170 resolved clouds of NGC 5806 range from 0.01 to 0.67 km s\({}^{-1}\) pc\({}^{-1}\), with an average and median gradient of 0.10 and 0.08 km s\({}^{-1}\) pc\({}^{-1}\), respectively. These gradients are similar to those of the clouds in the MW (\(\approx 0.1\) km s\({}^{-1}\) pc\({}^{-1}\); Blitz, 1993; Phillips, 1999; Imara & Blitz, 2011), M 33 (\(\leq 0.15\) km s\({}^{-1}\) pc\({}^{-1}\); Rosolowsky et al., 2003; Imara et al., 2011; Braine et al., 2018), M 31 (\(0-0.2\) km s\({}^{-1}\) pc\({}^{-1}\); Rosolowsky, 2007) and M 51 (\(\leq 0.2\) km s\({}^{-1}\) pc\({}^{-1}\); Braine et al., 2020), but they are smaller than those of the clouds in the ETGs NGC 4526 (0.02 - 1.1 km s\({}^{-1}\) pc\({}^{-1}\); Utomo et al., 2015) and NGC 4429 (0.05 - 0.91 km s\({}^{-1}\) pc\({}^{-1}\); Liu et al., 2021).

### Origin of velocity gradients

To investigate the origin of the velocity gradients of the GMCs of NGC 5806, we first compare the velocity map of NGC 5806 to the projected rotation axes of its clouds. In Figure 6, the projected rotation axes of the clouds (black arrows) are overlaid on the \({}^{12}\)CO(2-1) mean velocity map and the isovelocity contours (green contours). The arrow length is proportional to the projected angular velocity of each cloud. If the rotation axes of the clouds are aligned with the galaxy isovelocity contours, the bulk rotation of the clouds is likely governed by the large-scale galaxy rotation.
Conversely, if the rotation axes of the clouds are randomly distributed, the bulk rotation of the clouds likely originates from other mechanisms, such as turbulence and/or cloud-cloud collisions (e.g. Burkert & Bodenheimer, 2000; Wu et al., 2018; Li et al., 2018), that perturb angular momentum conservation. As shown in Figure 6, the rotation axes of the NGC 5806 clouds are not well aligned with the isovelocity contours, suggesting that the galaxy rotation does not affect the internal rotation of the clouds. This is similar to the case of the MW (Koda et al., 2006), M 31 (Rosolowsky, 2007) and NGC 5064 (Liu et al., 2023), but different from that of the ETGs NGC 4526 (Utomo et al., 2015) and NGC 4429 (Liu et al., 2021), where the rotation axes are well aligned with the isovelocity contours.

Figure 5: Cumulative gas mass distribution of all the clouds of NGC 5806 (black data points) and of the clouds in the nucleus (blue), arcs (green), nodes (red) and dust lanes (yellow) only. Truncated (solid lines) and non-truncated (dashed lines) power-law fits are overlaid. The mass completeness limit is shown as a black vertical dashed line.

To further investigate this, we compare in Figure 7 the measured angular velocities (\(\omega_{\rm obs}\)) and rotation axes (\(\phi_{\rm rot}\)) of the clouds with those expected (\(\omega_{\rm model}\) and \(\phi_{\rm model}\)), as calculated from a low-resolution (i.e. coarse-grained) \({}^{12}\)CO(2-1) velocity map over the same position and area as each resolved cloud and using the same method as in Section 4.1. The modelled angular velocities (\(\omega_{\rm model}\)) are on average \(\approx 3.5\) times larger than the observed ones. Furthermore, there is no clear correlation between the modelled and observed orientation of the cloud rotation axes. Although not easily visible in Figure 7, there are different trends across the different regions of NGC 5806. About half of the clouds (\(13/25\)) in the arcs have a small difference between the modelled and observed rotation axis orientations (\(|\phi_{\rm rot}-\phi_{\rm model}|\leq 50^{\circ}\)), while only about one third of the clouds (\(50/145\)) in the other regions have such a small difference. Consequently, the velocity gradients of the clouds in the arcs are more likely to be governed by the large-scale galaxy rotation, presumably because the molecular gas there is less affected by the surrounding environment (e.g. AGN feedback and shocks) than the gas in other regions. Conversely, the velocity gradients of the clouds in the nucleus, nodes and dust lanes are more likely to be due to other origins (e.g. random turbulent motions and/or cloud-cloud collisions). Burkert & Bodenheimer (2000) showed that the apparent rotation of clouds can arise from the clouds' turbulence. They claimed a relation of the form \(\left(\frac{\omega_{\rm obs}}{\rm km~{}s^{-1}~{}pc^{-1}}\right)=1.6\left(\frac{R_{\rm c}}{\rm 0.1~{}pc}\right)^{-1/2}\). This formulation yields \(\omega=0.092\) (0.086) \(\rm km~{}s^{-1}~{}pc^{-1}\) for the median (mean) cloud radius of 30.6 (34.5) pc in NGC 5806. These are comparable to the median (mean) of our measured angular velocities, 0.10 (0.12) \(\rm km~{}s^{-1}~{}pc^{-1}\). It is thus most likely that the observed velocity gradients of the clouds of NGC 5806 are due to turbulent motions. However, we find that not all clouds have the same trend.
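The Burkert & Bodenheimer (2000) scaling quoted above is easy to verify numerically; a minimal sketch reproducing the values used in this section:

```python
import numpy as np

def omega_turb(r_pc):
    """Expected apparent angular velocity (km/s/pc) from turbulence,
    omega = 1.6 (R_c / 0.1 pc)^(-1/2) (Burkert & Bodenheimer 2000)."""
    return 1.6 * (np.asarray(r_pc) / 0.1) ** -0.5

print(omega_turb([30.6, 34.5]))   # ~[0.092, 0.086] km/s/pc, as quoted above
```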
The median cloud radii of the clouds in the nucleus, arcs, nodes and dust lanes are 27.4, 37.8, 35.8 and 26.9 pc, respectively, yielding expected angular velocities of 0.097, 0.082, 0.085 and 0.098 \(\rm km~{}s^{-1}~{}pc^{-1}\), compared to median measured angular velocities of 0.095, 0.13, 0.07 and 0.11 \(\rm km~{}s^{-1}~{}pc^{-1}\). The clouds in the arcs thus show a much larger deviation (\(\approx 45\%\)) than those in the other regions. This implies that additional mechanisms supporting and/or generating cloud rotation are required in the arcs. Another way to assess whether bulk motions due to galaxy rotation contribute significantly to the observed velocity dispersions and velocity gradients of the NGC 5806 clouds is to compare the observed velocity dispersions (\(\sigma_{\rm obs,los}\)) with the gradient-subtracted velocity dispersions (\(\sigma_{\rm gs,los}\)), as shown in Figure 8. If the gradient-subtracted velocity dispersions are much smaller than the observed velocity dispersions, bulk motions are dominant in the clouds. More than half of the clouds (\(107/170\)) of NGC 5806 have a small difference between the two velocity dispersions (i.e. a ratio between the two velocity dispersions \(\sigma_{\rm obs,gs}/\sigma_{\rm obs,los}>0.7\)). Some clouds have somewhat larger deviations, but only four clouds have a difference of more than 5 \(\rm km~{}s^{-1}\). This further suggests that bulk motions due to galaxy rotation are not important to the NGC 5806 clouds. The observed velocity dispersions and velocity gradients are thus likely dominated by turbulence. In turn, we will use the observed velocity dispersions (\(\sigma_{\rm obs,los}\)) rather than the gradient-subtracted velocity dispersions (\(\sigma_{\rm gs,los}\)) for the rest of our analyses, also consistent with previous GMC studies. ## 5 Dynamical states of clouds Scaling relations (e.g. relations between the sizes, linewidths and luminosities of GMCs) have been used as a standard tool to investigate the dynamical states of clouds (e.g. Larson, 1981; Blitz et al., 2007). Among them, the relation between the size and the linewidth (a.k.a. Larson's first relation) is generally considered the most fundamental. The size - linewidth relation is known to have the form of a power law and is generally interpreted as the consequence of turbulent motions within clouds (Falgarone et al., 1991; Elmegreen & Falgarone, 1996; Lequeux, 2005). The left panel of Figure 9 shows the size - linewidth relation of all resolved clouds of NGC 5806, with the best-fitting power-law relation overlaid (black solid line), as well as that of the MW disc (black dashed line; Solomon et al., 1987) and CMZ (black dotted line; e.g. Kauffmann et al., 2017). There is a strong correlation between size and linewidth, with a Spearman rank correlation coefficient of 0.70 and \(p\)-value of \(10^{-35}\). Our best-fitting power law has a steep slope, \[\log\left(\frac{\sigma_{\rm obs,los}}{\rm km~{}s^{-1}}\right)=(1.20\pm 0.10) \log\left(\frac{R_{\rm c}}{\rm pc}\right)-1.07\pm 0.16\enspace, \tag{10}\] with no clear difference between different regions. 
To obtain the best-fitting relation accounting for the uncertainties on both \(R_{\rm c}\) and \(\sigma_{\rm obs,los}\), we use the hierarchical Bayesian model linmix (Kelly, 2007). This slope is steeper than that of the clouds in the MW disc (\(0.5\pm 0.05\); Solomon et al., 1987) and the CMZ (\(0.66\pm 0.18\); Kauffmann et al., 2017), but the zero-point (0.09 km s\({}^{-1}\)) is much smaller than that of the CMZ (\(5.5\pm 1.0\) km s\({}^{-1}\); Kauffmann et al., 2017). This slope is also much steeper than that of the clouds in M 31 (\(0.7\pm 0.2\); Rosolowsky, 2007), M 33 (\(0.45\pm 0.02\); Rosolowsky et al., 2003), NGC 4429 (\(0.82\pm 0.13\); Liu et al., 2021) and local dwarf galaxies (\(0.60\pm 0.10\); Bolatto et al., 2008). Although the GMCs of barred spiral galaxies have been investigated (e.g. M 83 and NGC 1300; Hirota et al., 2018; Maeda et al., 2020), only the LMC shows a clear size - linewidth relation, with a slope of 0.8 (Wong et al., 2011).

Figure 6: Projected angular momentum vectors of individual resolved GMCs in NGC 5806 (black arrows), overlaid on the \({}^{12}\)CO(2-1) velocity map and isovelocity contours (green contours). The arrow length is proportional to the angular velocity \(\omega_{\rm obs}\) of each cloud.

Another scaling relation used to assess the dynamical states of clouds is the correlation between virial (\(M_{\rm vir}\)) and gas (\(M_{\rm gas}\)) mass. In the absence of non-gravitational forces, this quantifies the dynamical state of clouds based on the virial theorem. The virial parameter \[\alpha_{\rm vir}\equiv\frac{M_{\rm vir}}{M_{\rm gas}}=\frac{\sigma_{\rm obs,los}^{2}R_{\rm c}/(b_{\rm s}G)}{M_{\rm gas}}=\frac{3M_{\rm gas}\sigma_{\rm obs,los}^{2}}{3b_{\rm s}GM_{\rm gas}^{2}/R_{\rm c}}=\frac{2K}{|U|}\;\;, \tag{11}\] where \(b_{\rm s}\) is a geometrical factor that quantifies the effects of inhomogeneities and/or non-sphericity of a cloud on its self-gravitational energy (\(U\)) and \(K\) is the kinetic energy of the random motions of the cloud. Here we adopt \(b_{\rm s}=1/5\), assuming the clouds have homogeneous spherical shapes. If \(\alpha_{\rm vir}\approx 1\), a cloud is considered to be in virial equilibrium and is gravitationally bound, while if \(\alpha_{\rm vir}\approx 2\), the cloud is only marginally gravitationally bound. If \(\alpha_{\rm vir}<1\), the cloud is likely to collapse gravitationally, while if \(\alpha_{\rm vir}\gg 1\), the cloud is either confined by non-gravitational forces (e.g. external pressure and/or magnetic fields) or it is short-lived (i.e. transient). The middle panel of Figure 9 shows the virial masses of the resolved clouds of NGC 5806 (calculated using the observed velocity dispersion \(\sigma_{\rm obs,los}\)) as a function of their gas masses, overlaid with the best-fitting power law (black solid line). The black dashed and dotted lines indicate the \(\alpha_{\rm vir}=1\) and \(\alpha_{\rm vir}=2\) relations, respectively. The best-fitting power law estimated from the linmix algorithm is \[\log\left(\frac{M_{\rm obs,vir}}{\rm M_{\odot}}\right)=(0.99\pm 0.03)\,\log\left(\frac{M_{\rm gas}}{\rm M_{\odot}}\right)+0.38\pm 0.20\;\;, \tag{12}\] implying that the resolved clouds of NGC 5806 are virialised on average. Similarly to Larson's first relation in the left panel, the resolved clouds in the different regions tend to have the same best-fitting slope, but the clouds in the arcs are slightly more massive than those in the other regions.
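Equation 11 can be evaluated directly with astropy units; a minimal sketch (the input values are illustrative only, chosen close to the median resolved cloud):

```python
import astropy.units as u
from astropy.constants import G

def alpha_vir(sigma, r_c, m_gas, b_s=1.0 / 5.0):
    """Virial parameter alpha_vir = sigma^2 R_c / (b_s G M_gas) (Equation 11)."""
    return (sigma ** 2 * r_c / (b_s * G * m_gas)).to(u.dimensionless_unscaled)

# Illustrative values close to the median resolved cloud of NGC 5806
print(alpha_vir(5.2 * u.km / u.s, 30 * u.pc, 5.5e5 * u.Msun))   # ~1.7
```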
To investigate the virialisation of the resolved clouds of NGC 5806 further, we also explore the distribution of \(\alpha_{\rm vir}\) for the whole galaxy and each region individually, as shown in Figure 10. The mean (median) of \(\alpha_{\rm vir}\) is 2.02 (1.72), indicating that on average the clouds are marginally bound. However, \(\alpha_{\rm vir}\) has a broad distribution and only about half of the clouds (\(89/170\)) lie between \(\alpha_{\rm vir}=1\) and \(\alpha_{\rm vir}=2\). About 36% of the clouds (\(62/170\)) have \(\alpha_{\rm vir}>2\), while only a few clouds (\(19/170\)) have \(\alpha_{\rm vir}<1\). Unlike other physical quantities such as size, gas mass, velocity dispersion and gas mass surface density (see Section 3), there is no significant difference across the different regions.

Figure 8: Comparison of our observed (\(\sigma_{\rm obs,los}\)) and gradient-subtracted (\(\sigma_{\rm gs,los}\)) velocity dispersion measures for the 170 resolved clouds of NGC 5806. The four black diagonal lines represent the \(1:1\), \(1:0.9\), \(1:0.8\) and \(1:0.7\) ratios, respectively.

Figure 7: Correlations between the modelled and observed projected angular velocities \(\omega_{\rm obs}\) (left) and position angles of the rotation axes \(\phi_{\rm rot}\) (right) for the 170 resolved clouds of NGC 5806. The data points are colour-coded by region and the black diagonal lines indicate the \(1:1\) relations.

Lastly, we consider the correlation between gas mass surface density (\(\Sigma_{\rm gas}\)) and \(\sigma_{\rm obs,los}R_{\rm c}^{-1/2}\), providing yet another constraint on the physics of clouds (Field et al., 2011). Regardless of how well clouds obey Larson's first relation, if the clouds are virialised, they should be clustered around \(\sigma_{\rm obs,los}R_{\rm c}^{-1/2}=\sqrt{\pi\alpha_{\rm vir}Gb_{\rm s}\Sigma_{\rm gas}}\), as shown by the black dashed (\(\alpha_{\rm vir}=1\)) and dotted (\(\alpha_{\rm vir}=2\)) diagonal lines in the right panel of Figure 9. If clouds are not virialised (\(\alpha_{\rm vir}\gg 1\)), external pressure (\(P_{\rm ext}\)) should play an important role in confining the clouds (or the clouds are likely transient structures). In this case, clouds will be distributed around the black solid V-shaped curves in the right panel of Figure 9: \[\sigma_{\rm obs,los}R_{\rm c}^{-1/2}=\sqrt{\frac{\pi\alpha_{\rm vir}G\Sigma_{\rm gas}}{5}+\frac{4}{3}\frac{P_{\rm ext}}{\Sigma_{\rm gas}}} \tag{13}\] (Field et al., 2011). The right panel of Figure 9 shows the relation between \(\sigma_{\rm obs,los}R_{\rm c}^{-1/2}\) and \(\Sigma_{\rm gas}\) for all the resolved clouds of NGC 5806, showing that they are broadly distributed. The gas mass surface densities vary by 1.5 orders of magnitude and reveal a positive correlation with \(\sigma_{\rm obs,los}R_{\rm c}^{-1/2}\). Given the uncertainties, some clouds with \(\alpha_{\rm vir}>2\) distributed across the V-shaped curves do seem to be bound by high external pressures (\(P_{\rm ext}/k_{\rm B}\gtrsim 10^{5}\) K cm\({}^{-3}\), if indeed they are bound). In particular, two clouds in the nucleus at very high pressures (\(P_{\rm ext}/k_{\rm B}\gtrsim 10^{7}\) K cm\({}^{-3}\)) might be affected by nuclear activity.
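The V-shaped loci of Equation 13 are straightforward to evaluate; a minimal sketch with astropy units, using external pressures matching the curves shown in Figure 9:

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, k_B

def sigma_over_sqrt_r(sigma_gas, p_ext_over_kb, alpha_vir=1.0):
    """sigma_obs,los / R_c^(1/2) for a pressure-confined cloud (Equation 13)."""
    term_grav = np.pi * alpha_vir * G * sigma_gas / 5.0
    term_pres = (4.0 / 3.0) * (p_ext_over_kb * k_B) / sigma_gas
    return np.sqrt(term_grav + term_pres).to(u.km / u.s / u.pc ** 0.5)

sig = 200 * u.Msun / u.pc ** 2
for p in [1e5, 1e6, 1e7, 1e8] * u.K / u.cm ** 3:
    print(p, sigma_over_sqrt_r(sig, p))
```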
In addition, as expected from the right panel of Figure 9 (but not explicitly shown in the figure), there is a strong correlation between \(\sigma_{\rm obs,los}\) and \(\Sigma_{\rm gas}\) (Spearman rank correlation coefficient of 0.73 and \(p\)-value of \(2\times 10^{-35}\)), while the correlation between \(R_{\rm c}\) and \(\Sigma_{\rm gas}\) is much weaker (Spearman rank correlation coefficient of 0.38 and \(p\)-value of \(7\times 10^{-12}\)). In summary, Figure 9 shows that the size - linewidth relation of the resolved clouds of NGC 5806 has a slope that is twice as steep as that of MW disc clouds, while most of the clouds are only marginally bound (\(\langle\alpha_{\rm vir}\rangle\approx 2\)).

Figure 10: Distribution of \(\log\left(\alpha_{\rm obs,vir}\right)\) of all the resolved clouds of NGC 5806 (grey histogram) and only the clouds in each of the four different regions (coloured histograms). The black solid and dot-dashed lines show the mean and median of the distribution, respectively, while the black dashed and dotted lines indicate \(\alpha_{\rm vir}=1\) and \(\alpha_{\rm vir}=2\), respectively.

Figure 9: Left: size – linewidth relation of the 170 resolved clouds of NGC 5806. The black solid line shows the best-fitting power-law relation, while the black dashed and dotted lines show the relations for the clouds of the MW disc (Solomon et al., 1987) and CMZ (Kauffmann et al., 2017), respectively. Middle: molecular gas mass – virial mass relation for the same clouds. The black solid line shows the best-fitting power-law relation, while the black dashed and dotted lines indicate the 1 : 1 and 2 : 1 relations, respectively. Right: gas mass surface density – \(\sigma_{\rm obs,los}R_{\rm c}^{-1/2}\) relation for the same clouds. The black dashed and dotted diagonal lines show the solutions for a simple (i.e. \(\alpha_{\rm vir}=1\)) and a marginal (i.e. \(\alpha_{\rm vir}=2\)) virial equilibrium, respectively. The V-shaped black solid curves show solutions for pressure-bound clouds at different pressures (\(P_{\rm ext}/k_{\rm B}=10^{5}\), \(10^{6}\), \(10^{7}\) and \(10^{8}\) K cm\({}^{-3}\)). Data points are colour-coded by region in all three panels. Typical uncertainties are shown as a black cross in the top-right or bottom-right corner of each panel.

## 6 Discussion

### Turbulence maintained by bar-driven gas inflows

High velocity dispersions are present in the central regions of NGC 5806, especially in the nodes (up to 60 km s\({}^{-1}\); see top-right panel of Figure 2). Individual clouds also have large velocity widths, and the clouds have a very steep size - linewidth relation and relatively large virial parameters (see Section 5). Could all these facts be due to the large-scale bar of NGC 5806? Recent observations of barred spiral galaxies have shown that bars can drive gas inflows and contribute to the high velocity dispersions observed in the central regions of many galaxies. For example, Salak et al. (2016) reported high molecular gas velocity dispersions (\(\gtrsim 40\) km s\({}^{-1}\)) in the node regions of NGC 1808 ((R)SAB(s)a), that are due to gas streaming along the bar toward the nuclear ring. Sato et al. (2021) also reported high gas velocity dispersions (\(\gtrsim 40\) km s\({}^{-1}\)) in the nuclear ring of NGC 613, especially at the interface between the nuclear ring and the large-scale bar (i.e. the node regions) where gas inflows are observed.
To investigate whether the bar in NGC 5806 also drives gas inflows, we probe the shapes of the \({}^{12}\)CO(2-1) line-of-sight (LOS) velocity distributions (LOSVDs) across the different regions, and illustrate specific trends in Figure 11. High velocity dispersions are present in the nuclear ring, especially in the nodes (up to 60 km s\({}^{-1}\)). The LOSVDs in the nodes also have shapes that are totally different from those in the other regions: the LOSVDs in the nodes are often double peaked within a single synthesised beam (see the red and blue circles in Figure 11), while the LOSVDs in the rest of the nuclear ring (i.e. the arcs) have only narrower single peaks (see the purple and grey circles in Figure 11). The LOSVDs in the nucleus have broad and skewed shapes with single peaks (see the yellow and green circles in Figure 11), likely due to strong shear and/or AGN feedback (and beam smearing), that can render the molecular gas more turbulent (e.g. Wada et al., 2009). The double peaks of the node LOSVDs imply that there are multiple clouds (at least two) along each LOS. Furthermore, systematically, for each double peak in the nodes, one peak smoothly connects to the LOSVD of the nearest dust lane, the other to the LOSVD of the nuclear ring, suggesting that the molecular gas in the dust lanes flows toward the nuclear ring. Molecular gas thus appears to be streaming along the bi-symmetric offset dust lanes, causing collisions in the molecular gas in the nodes. This is analogous to the situation in the MW, where Kruijssen et al. (2014) suggested that gas inflows along the bar may be responsible for driving the turbulence in the CMZ. More generally, collisions and shocks resulting from gas streaming into a nuclear ring (from the large-scale bar) can cause significant random motions in the gas (e.g. Kruijssen et al., 2014; Federrath et al., 2016; Sormani et al., 2019; Salas et al., 2021; Wallace et al., 2022). To measure the relative velocity between the gas inflowing along the offset dust lanes and the nuclear ring, we consider individual LOSVDs in the nodes and estimate the velocity difference between the two dominant peaks. The measured average velocity difference is \(V_{\rm in,obs}\approx 100\) km s\({}^{-1}\) in both the eastern and the western node. The gas inflow velocity is then \(V_{\rm in}=V_{\rm in,obs}/\sin i\approx 120\) km s\({}^{-1}\)(e.g. Sato et al., 2021). Adopting this relative velocity, the total mass inflow rate along the two dust lanes can be estimated as \[\dot{M}_{\rm in} =2(\Sigma_{\rm gas})W_{\rm in}V_{\rm in} \tag{14}\] \[\approx 5\ {\rm M}_{\odot}\ {\rm yr}^{-1}\ \,\] where the width of the gas inflow \(W_{\rm in}\) to each node is taken to be \(\approx 100\) pc and the mean molecular gas mass surface density in the dust lanes is \((\Sigma_{\rm gas})\approx 200\ {\rm M}_{\odot}\ {\rm pc}^{-2}\). Similarly, to estimate the contribution of the gas inflows in driving the turbulence, we can estimate the total kinetic energy per unit time provided by the gas inflows as \[\dot{E}_{\rm in} \approx\frac{1}{2}\dot{M}_{\rm in}V_{\rm in}^{2} \tag{15}\] \[\approx 3.5\times 10^{4}\ {\rm M}_{\odot}\ {\rm km}^{2}\ {\rm s}^{-2}\ {\rm yr}^{-1}\ \.\] If turbulence in the nuclear ring is indeed maintained by the bar-driven gas inflows, the turbulence energy dissipation per unit time \(\dot{E}_{\rm diss}\) should be balanced by the input kinetic energy per unit time, i.e. \(\dot{E}_{\rm diss}\approx\dot{E}_{\rm in}\). 
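Equations 14 and 15 amount to straightforward unit bookkeeping; a minimal sketch with astropy units, using the rounded input values adopted in the text:

```python
import astropy.units as u

sigma_dl = 200 * u.Msun / u.pc ** 2   # mean gas surface density in the dust lanes
w_in = 100 * u.pc                     # width of each inflow stream
v_in = 120 * u.km / u.s               # deprojected inflow velocity

mdot_in = (2 * sigma_dl * w_in * v_in).to(u.Msun / u.yr)                        # Equation 14
edot_in = (0.5 * mdot_in * v_in ** 2).to(u.Msun * u.km ** 2 / u.s ** 2 / u.yr)  # Equation 15

print(mdot_in)   # ~5 Msun / yr
print(edot_in)   # ~3.5e4 Msun km2 / (s2 yr)
```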
The energy per unit time dissipated by the observed turbulence can be estimated as \[\dot{E}_{\rm diss} \approx M_{\rm NR}\langle\sigma_{\rm NR}\rangle^{3}/(2\,h_{\rm NR}) \tag{16}\] \[\approx 2.8\times 10^{4}\ {\rm M}_{\odot}\ {\rm km}^{2}\ {\rm s}^{-2}\ {\rm yr}^{-1}\ \.\] (e.g. Mac Low & Klessen, 2004), where \(M_{\rm NR}\approx 2.2\times 10^{8}\) M\({}_{\odot}\), \(\langle\sigma_{\rm NR}\rangle\approx 16\) km s\({}^{-1}\) and \(h_{\rm NR}\approx 16\) pc are the total mass, mean velocity dispersion and scale height of the molecular gas in the nuclear ring, respectively. This is indeed approximately equal to our estimated \(\dot{E}_{\rm in}\), so bar-driven molecular gas inflows are a viable mechanism to explain the high velocity dispersions present in the nuclear ring. The aforementioned scale height was estimated as \(h_{\rm NR}=\langle\sigma_{\rm NR}\rangle/\kappa_{\rm NR}\) (Lin & Pringle, 1987), where \(\kappa_{\rm NR}\) is the epicyclic frequency at the nuclear ring radius, that can be calculated as \(\kappa_{\rm NR}^{2}\equiv\left.\left(R\frac{d\Omega^{2}(R)}{dR}+4\Omega^{2}(R)\right)\right|_{R=R_{\rm NR}}\), where \(\Omega(R)\equiv V_{\rm c}(R)/R\), \(V_{\rm c}(R)\) is the circular velocity of NGC 5806 and \(R_{\rm NR}\) is the radius (at the centre) of the nuclear ring (\(R_{\rm NR}\approx 330\) pc). As the molecular gas in NGC 5806 is so dynamically cold, we took \(V_{\rm c}(R)\) to be the observed rotation curve, derived from our data cube using 3dbarolo (Di Teodoro & Fraternali, 2015) and from our first-moment map using 2dbat (Oh et al., 2018). Both approaches yield consistent results, leading to \(\kappa_{\rm NR}\approx 1\) km s\({}^{-1}\) pc\({}^{-1}\) and thus the adopted scale height \(h_{\rm NR}\approx 16\) pc.

### Nuclear ring GMC lifetimes

As argued above (Section 6.1), bar-driven gas inflows should strongly influence the cloud properties in the nuclear ring (see also Salak et al., 2016; Sato et al., 2021). It is thus important to probe whether cloud properties vary azimuthally along the ring. Figure 12 shows both the number of clouds and a number of cloud properties (virial parameter, gas mass, velocity dispersion, size and gas mass surface density) as a function of azimuthal angle (measured counter-clockwise from the western node). Interestingly, while none of the cloud properties varies significantly with azimuthal angle (see panels (b) - (f) of Figure 12), the number of clouds (as well as the CO surface brightness) strongly decreases from one node to the other (see panel (a) of Figure 12). Which mechanisms can cause this steep decrease of the cloud number along the nuclear ring downstream from the nodes? Most likely, when molecular gas from the large-scale bar enters the nuclear ring at the nodes, the ensuing violent collisions will lead to the formation of many clouds. For their number to decrease, these clouds formed at the nodes must then be gradually destroyed while moving along the nuclear ring (see Figure 13). This may be due to a number of mechanisms such as further cloud-cloud collisions, shear, stellar feedback, AGN feedback and/or violent turbulence. Irrespective of the exact cloud disruption mechanism, the observed azimuthal variation of the cloud number embodies the resulting cloud lifetimes. Indeed, the characteristic cloud lifetime can be estimated from the travel time \(t_{\rm travel}\) between the two nodes and the fraction of clouds lost as they move, \(F_{\rm lost}\) (i.e.
the decline of the number of clouds as they travel between the two nodes): \[t_{\rm lifetime}=\frac{t_{\rm travel}}{2}\frac{1}{F_{\rm lost}}\ . \tag{17}\] We note that this method to estimate the cloud lifetimes in the nuclear ring of a barred galaxy is similar to that introduced by Meidt et al. (2015) to estimate the cloud lifetimes in the inter-arm region of a spiral galaxy.

Figure 11: Velocity dispersion map of NGC 5806. The red and blue circles indicate regions where velocity dispersions are above 50 km s\({}^{-1}\) in the nodes, the yellow and green circles regions where velocity dispersions are above 50 km s\({}^{-1}\) in the nucleus, and the purple and grey circles regions where the velocity dispersions are below 20 km s\({}^{-1}\) in the arcs. Inset panels show the corresponding spectra and the black vertical dashed line in each panel indicates the systemic velocity of NGC 5806. For double peaked spectra, the black arrows indicate the velocity differences \(V_{\rm in,obs}\) discussed in the text.

Figure 12: GMC properties in the nuclear ring (both arcs and nodes), as a function of the azimuthal angle (measured counter-clockwise from the western node). From left to right, top to bottom: number of resolved clouds, virial parameter, gas mass, velocity dispersion, size and gas mass surface density. The red data points are averages in azimuthal bins of width 30\({}^{\circ}\), while the red error bars indicate the 1\(\sigma\) scatter within each bin. Black vertical dashed lines indicate the positions of the two nodes.

However, as noted below, we measure the travel time \(t_{\rm travel}\) with respect to the large-scale bar rotating pattern, as inspired by the work of Koda (2021). We can estimate \(t_{\rm travel}\) as \[t_{\rm travel}=\pi R_{\rm NR}/(V_{\rm c,NR}-\Omega_{\rm p}R_{\rm NR}) \tag{18}\] (see also Koda, 2021), where \(V_{\rm c,NR}\) is the circular velocity at the (radius of the) centre of the nuclear ring (\(V_{\rm c,NR}\approx 150\) km s\({}^{-1}\)) and \(\Omega_{\rm p}\) is the pattern speed of the large-scale bar. The pattern speed of the bar of NGC 5806 has never been measured. We could obtain a firm lower limit to the travel time by adopting \(\Omega_{\rm p}=0\) in Equation 18 (yielding \(t_{\rm travel}\gtrsim 6.9\) Myr), but instead we estimate the bar pattern speed by assuming its corotation radius is located at \(1.2\pm 0.2\) times the half-length of the bar, as is the case for most barred disc galaxies (e.g. Athanassoula, 1992; Aguerri et al., 1998). Erwin (2005) measured the deprojected half-length of the bar of NGC 5806 to be \(38^{\prime\prime}\) or \(3.9\) kpc at our adopted distance, leading to a pattern speed \(\Omega_{\rm p}=V_{\rm c}[(1.2\pm 0.2)\ R_{\rm bar}]/[(1.2\pm 0.2)\ R_{\rm bar}]=45^{+11}_{-7}\) km s\({}^{-1}\) kpc\({}^{-1}\). In turn, using Equation 18, this leads to a travel time \(t_{\rm travel}=7.4^{+0.3}_{-0.1}\) Myr. To estimate \(F_{\rm lost}\), we first measure the number of clouds \(N_{\rm I}\) and \(N_{\rm II}\) in two adjacent zones (each mirrored on both halves of the nuclear ring) that span equal ranges of azimuth, as shown in Figure 13 (see also Figure 1 of Meidt et al., 2015). The fraction of clouds lost between the two nodes is \[F_{\rm lost}=\frac{N_{\rm I}-N_{\rm II}}{N_{\rm I}}\ .
\tag{19}\] Counting the numbers of clouds in the first half of the nuclear ring (from the western node to the eastern node) yields \(N_{\rm I}=23\) and \(N_{\rm II}=9\) (and thus \(F_{\rm lost}=0.61\)), while in the second half (from the eastern node to the western node) this yields \(N_{\rm I}=40\) and \(N_{\rm II}=11\) (and thus \(F_{\rm lost}=0.72\)). This apparent loss of clouds along the nuclear ring is probably tightly related to the decrease of the CO intensity downstream from the nodes (see the top-left panel of Figure 2). Combined with our estimated travel time, these two fractions of lost clouds yield two cloud lifetime estimates, that we take as a range \(t_{\rm lifetime}=5.1-6.3\) Myr. This cloud lifetime is smaller than that of clouds in the central \(3.5\) kpc radius of M 51 (\(20-50\) Myr; Meidt et al., 2015), nearby galaxies (\(10-100\) Myr; Jeffreson & Kruijssen, 2018; Chevance et al., 2020), the LMC (\(\approx 11\) Myr; Ward et al., 2022) and between spiral arms in disc galaxies (\(\approx 100\) Myr; e.g. Koda et al., 2009), but it is larger than that of clouds in the CMZ (\(1-4\) Myr; e.g. Kruijssen et al., 2015; Jeffreson et al., 2018). ### Nuclear ring GMC destruction mechanisms Having estimated the lifetimes of the clouds in the nuclear ring of NGC 5806, we now briefly discuss the possible mechanisms regulating those lifetimes. A cloud's lifetime is mainly set by cloud-cloud collisions, shear, stellar feedback, AGN feedback and/or violent turbulence (e.g. Meidt et al., 2015; Jeffreson & Kruijssen, 2018; Chevance et al., 2020; Kim et al., 2022). We therefore now derive the relevant time scales of these processes, and compare them with our derived cloud lifetime. Processes that take longer than the estimated cloud lifetime are likely to play a less important role setting the cloud lifetime than processes with shorter timescales. **Cloud-cloud collisions.** Cloud-cloud collisions can be an important mechanism limiting cloud lifetimes, as clouds can be destroyed when merging with other clouds. The cloud-cloud collision timescale in the nuclear ring can be estimated as \(t_{\rm coll}=1/N_{\rm mc}D_{\rm c}\sigma_{\rm cc}\)(Koda et al., 2006), where \(N_{\rm mc}\) is the cloud number surface density in the nuclear ring, \(D_{\rm c}\) is the mean cloud diameter in the nuclear ring (\(2\left<R_{\rm c}\right>\approx 78\) pc; see panel (e) of Figure 12) and \(\sigma_{\rm cc}\) is the cloud-cloud velocity dispersion, generally assumed to be \(\approx 10\) km s\({}^{-1}\)(e.g. Koda et al., 2006; Inutsuka et al., 2015). To estimate \(N_{\rm mc}\), we consider the 85 nuclear ring clouds contained within an elliptical annulus of inner semi-major axis length 230 pc, outer semi-major axis length 370 pc and ellipticity 0.3, that nicely encloses the nuclear ring, yielding \(N_{\rm mc}\approx 450\) kpc\({}^{-2}\) and in turn \(t_{\rm coll}\approx 3.1\) Myr. Our derived collision timescale is approximately half the estimated cloud travel time between the nodes \(t_{\rm travel}\) and is smaller than the estimated cloud lifetime \(t_{\rm lifetime}\). **Shear.** Shear generally appears to be an important mechanism regulating cloud lifetimes in galaxy centres, where strong shear can lead to mass loss and/or complete cloud dispersal (e.g. Meidt et al., 2015; Jeffreson & Kruijssen, 2018). 
We estimate the shear timescale as \(t_{\rm shear}=1/(2A)\) (Liu et al., 2021), where \(A\equiv\frac{1}{2}\left(\frac{V_{\rm c}(R_{\rm NR})}{R_{\rm NR}}-\left.\frac{dV_{\rm c}}{dR}\right|_{R=R_{\rm NR}}\right)\approx 0.15\) km s\({}^{-1}\) pc\({}^{-1}\) is Oort's constant evaluated at the centre of the nuclear ring using the aforementioned rotation curve, yielding \(t_{\rm shear}\approx 3.2\) Myr. This is again approximately half the cloud travel time between the nodes and is smaller than the estimated cloud lifetime. **Stellar feedback.** The destruction of molecular clouds by stellar feedback occurs on a feedback timescale \(t_{\rm feedback}\), i.e. the timescale of coexistence of molecular gas and stars within a cloud (e.g. Chevance et al., 2020). This can be estimated by measuring the spatial offset between cold gas (the fuel for star formation, traced by e.g. CO) and star formation (traced by e.g. H\(\alpha\)) through the now widely-used "tuning fork" diagram (Kruijssen & Longmore, 2014; Kruijssen et al., 2018). However, the absence of a map of a star-formation tracer at both high angular resolution and free of dust extinction prohibits a direct measurement of the feedback timescale in NGC 5806. Chevance et al.'s (2020) measurements of this timescale in nine nearby star-forming disc galaxies range from 1 to 5 Myr, with a typical timescale \(t_{\rm feedback}\approx 3.5\) Myr that we adopt for the clouds in our galaxy.

Figure 13: Schematic diagram of the scenario envisaged for the nuclear ring of NGC 5806. The nuclear ring is shown as a large pale grey annulus, clouds as small blue filled ellipses within the nuclear ring, and the two offset dust lanes as thick yellow arrows. The two solid vertical black lines indicate the midpoints that divide each half of the nuclear ring into two zones. \(N_{\rm I}\) and \(N_{\rm II}\) are the number of clouds in each of those two zones.

**AGN feedback.** Nuclear activity is a powerful mechanism that can severely affect the medium surrounding a nucleus. Several observational studies have reported that AGN feedback is the most likely mechanism to explain the high velocity dispersions of molecular gas (and even molecular gas disruption) in galaxy centres (e.g. Schawinski et al., 2009; Simionescu et al., 2018; Nesvadba et al., 2021). Several simulations also support AGN having a significant impact on the molecular gas in galaxy centres (e.g. Wada et al., 2009; Mukherjee et al., 2018). However, these studies have also shown that this impact on the surrounding media is limited to several hundred parsecs in radius in the galactic discs (while extending beyond 1 kpc perpendicularly to the discs). Furthermore, in NGC 5806, not only is the mm continuum not detected in our observations (Section 2.2), but only the nucleus (inner 100 pc radius) was classified as AGN/shocks by Erroz-Ferrer et al. (2019) using Baldwin et al.'s (1981) diagnostics and optical integral-field spectroscopic observations. All these results thus suggest that the AGN of NGC 5806 is unlikely to directly affect the molecular gas in the nuclear ring. **Turbulence.** Strong turbulence could be another important process dispersing clouds in the nuclear ring (e.g. Dobbs & Pettitt, 2015; Kim et al., 2022). Its effect can be characterised by a cloud's turbulent crossing timescale, \(t_{\rm cross}\approx 2R_{\rm c}/\sigma_{\rm obs,los}\) (e.g. Kruijssen et al., 2019).
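The collision, shear and crossing timescales used in this section are simple combinations of quantities already derived; a minimal sketch with astropy units (the inputs approximate the rounded values quoted in the text, so the outputs match the quoted timescales only to within that rounding):

```python
import astropy.units as u

def t_coll(n_mc, d_c, sigma_cc):
    """Cloud-cloud collision timescale, t = 1 / (N_mc D_c sigma_cc) (Koda et al. 2006)."""
    return (1.0 / (n_mc * d_c * sigma_cc)).to(u.Myr)

def t_shear(a_oort):
    """Shear timescale, t = 1 / (2 A)."""
    return (1.0 / (2.0 * a_oort)).to(u.Myr)

def t_cross(r_c, sigma):
    """Turbulent crossing timescale, t ~ 2 R_c / sigma."""
    return (2.0 * r_c / sigma).to(u.Myr)

print(t_coll(450 / u.kpc ** 2, 78 * u.pc, 10 * u.km / u.s))   # ~3 Myr
print(t_shear(0.15 * u.km / u.s / u.pc))                      # ~3.3 Myr
print(t_cross(39 * u.pc, 7 * u.km / u.s))                     # ~11 Myr
```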
We can thus estimate the turbulent crossing timescales of the clouds in the nuclear ring, yielding timescales from 5 to 20 Myr, with a mean turbulent crossing timescale \(\langle t_{\rm cross}\rangle\approx 11\) Myr. This is much larger than our estimated cloud travel time between the nodes and our estimated cloud lifetime. Overall, we can rule out turbulence as an important factor limiting the cloud lifetimes in the nuclear ring, as it acts on timescales much longer than the characteristic lifetime of the clouds (\(t_{\rm cross}\approx 11\) Myr while \(t_{\rm lifetime}\approx 5.1-6.3\) Myr). This is consistent with the fact that the nuclear ring clouds have a mean virial parameter of 2.02 and only \(\approx 30\%\) (26/85) of the clouds have \(\alpha_{\rm vir}>2\) (see Section 5 and Figure 10). On the other hand, the timescales estimated for cloud-cloud collisions, shear and stellar feedback are comparable to each other, relatively short (\(\approx 3-3.5\) Myr) and all smaller than the estimated cloud lifetime, implying they could all play an important role setting cloud lifetimes. ### Steep size - linewidth relation As observations with different spatial resolutions are likely to trace different cloud sizes, we compare in Figure 14 the size - linewidth relations of NGC 5806 (spatial resolution \(\approx 24\) pc and sensitivity \(\sigma_{\rm rms}\approx 0.8\) K), the LMC (\(\approx 11\) pc and \(\approx 0.3\) K) and the two ETGs NGC 4526 (\(\approx 20\) pc and \(\approx 0.7\) K) and NGC 4429 (\(\approx 13\) pc and \(\approx 0.5\) K), whose observations have spatial resolutions and sensitivities similar to each other. While the observations of the MWd (Heyer et al., 2009), M 51 (Colombo et al., 2014) and M 33 (Gratier et al., 2012) are more different, we also include them in Figure 14 as those galaxies have a morphological type more similar to that of NGC 5806. As discussed in Section 5, the GMCs of NGC 5806 have a size - linewidth relation with a power-law slope of \(1.20\pm 0.10\), much steeper than those found in other galaxies (e.g. \(0.5-0.7\) in the MW, Solomon et al., 1987; Kauffmann et al., 2017; \(0.4-0.8\) in nearby galaxies, Rosolowsky et al., 2003; Rosolowsky, 2007; Bolatto et al., 2008; Wong et al., 2011). The steep slope of the size - linewidth relation of NGC 5806 is unlikely to be due primarily to the fact that it is measured in the central region of the galaxy (as opposed to the galaxy disc), as the size - linewidth relations measured in the centres of other WISDOM galaxies appear to be much shallower and more similar to that of the MWd (e.g. no correlation in NGC 4526, Utomo et al., 2015; slope of \(0.6\pm 0.1\) in NGC 5064, Liu et al., 2023; slope of \(0.3\pm 0.07\) in NGC 1387, Liang et al., 2023). A possible exception in the current WISDOM sample is the ETG NGC 4429, for which the GMCs in the central kpc-size disc have a size - linewidth slope of \(0.82\pm 0.13\). However, once contamination of the cloud velocity dispersions by large-scale galaxy rotation (inducing bulk rotation within the clouds) is removed, the NGC 4429 clouds have a shallower slope than that of MWd clouds (\(\approx 0.24\); Liu et al., 2021). A steep cloud size - linewidth relation is also present in the central region of the WISDOM dwarf lenticular galaxy NGC 404 (Liu et al., 2022), although that study focused on much smaller structures, i.e clumps with sizes of \(\approx 3\) pc, so the comparison is arguably inappropriate. 
Overall, the distinct environment of galaxy centres is thus unlikely to be the only driver of the observed steep cloud size - linewidth relation of NGC 5806. The steep slope of the size - linewidth relation is more likely to be due to gas inflows and shocks induced by the large-scale bar of NGC 5806. That bar-driven gas inflows contribute to the high velocity dispersions in the nuclear ring of NGC 5806 has already been discussed in Section 6.1. However, the bar also drives strong shocks in the central region, as illustrated by the offset dust lanes and associated molecular gas (a generic prediction of bar-driven shocks; e.g. Athanassoula, 1992).

Figure 14: Size – linewidth relation of extragalactic GMCs. Coloured circles show the resolved clouds of NGC 5806, while coloured contours encompass 68% of the distribution of the data points for each galaxy (NGC 4526, Utomo et al., 2015; NGC 4429, Liu et al., 2021; M 51, Colombo et al., 2014; M 33, Gratier et al., 2012; MWd, Heyer et al., 2009; LMC, Wong et al., 2011). Contours are colour-coded by galaxy morphological type. The black solid line shows the best-fitting power-law relation of all NGC 5806 resolved clouds, the black dashed line that of the MWd clouds (Solomon et al., 1987) and the black dotted line that of the CMZ clouds (Kauffmann et al., 2017).

In NGC 5806, turbulence cannot dissipate all the energy through increasingly small spatial scales (i.e. through the usual turbulent "cascade"), as kinetic energy is also being spent on shocks and/or gas compression (e.g. Mac Low, 1999; Mac Low and Klessen, 2004; Cen, 2021). Since the energy transmission is no longer conservative (\(\sigma_{\rm obs,los}\propto R_{\rm c}^{1/3}\) for a constant mass energy density transfer rate, Kolmogorov, 1941; \(\sigma_{\rm obs,los}\propto R_{\rm c}^{3/5}\) for a constant volumetric energy density transfer rate, Cen, 2021), the size - linewidth relation slope is expected to be steeper than \(1/3-3/5\), as is indeed the case. An analogous example is probably that of the CMZ. Indeed, the MW also has a large-scale bar and the CMZ is most likely the equivalent of a nuclear ring in a barred galaxy, and the CMZ cloud size - linewidth relation is rather steep (slope of \(0.66\pm 0.18\); Kauffmann et al., 2017), if not as steep as that of NGC 5806. The LMC and NGC 4526 also each have a large-scale bar, but comparisons with those galaxies are not justified as the LMC has a (poorly understood) off-centred bar strongly affected by a tidal interaction (e.g. de Vaucouleurs and Freeman, 1972; van der Marel, 2001) while NGC 4526 has a relatively weak bar (Buta et al., 2007). Further studies of the impact of bars on the size - linewidth relation would be highly valuable.

### Dependence of the virial parameter on cloud properties

By definition (\(\alpha_{\rm vir}\equiv\frac{\sigma_{\rm obs,los}^{2}R_{\rm c}}{b_{\rm s}GM_{\rm gas}}\); see Equation 11), assuming all quantities are independent, the virial parameter \(\alpha_{\rm vir}\) is expected to have clear dependences on the velocity dispersion (\(\sigma_{\rm obs,los}\)), size (\(R_{\rm c}\)) and gas mass (\(M_{\rm gas}\)). However, for virialised clouds, these are expected to be correlated (see e.g. Shetty et al., 2010). To reveal which physical quantity primarily affects the virialisation of the clouds, we therefore probe the dependence of \(\alpha_{\rm vir}\) on these quantities in Figure 15.
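The correlations discussed below are rank statistics; a minimal sketch of how they can be computed, assuming per-cloud arrays `alpha_vir`, `sigma_obs`, `r_c` and `m_gas` plus a matching `region` label array (all hypothetical names):

```python
import numpy as np
from scipy.stats import spearmanr

def alpha_vir_correlations(alpha_vir, sigma_obs, r_c, m_gas, region):
    """Spearman rank correlations of alpha_vir with cloud properties,
    for the full sample and for each region separately."""
    out = {}
    for name, x in [("sigma_obs,los", sigma_obs), ("R_c", r_c), ("M_gas", m_gas)]:
        out[("all", name)] = spearmanr(x, alpha_vir)
        for reg in np.unique(region):
            sel = region == reg
            out[(reg, name)] = spearmanr(x[sel], alpha_vir[sel])
    return out   # dict of (Spearman rho, p-value)
```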
There is no clear dependence of \(\alpha_{\rm vir}\) on either \(R_{\rm c}\) or \(M_{\rm gas}\) for all the resolved clouds of NGC 5806 (second and third row of Figure 15). This is inconsistent with previous studies that showed a clear negative correlation between \(\alpha_{\rm vir}\) and \(M_{\rm gas}\) (e.g. Shetty et al., 2010; Miville-Deschenes et al., 2017; Veltchev et al., 2018). The clouds of the nucleus do show correlations, but these are weak and largely depend on two exceptionally massive clouds. We also note that there is no correlation between \(\alpha_{\rm vir}\) and \(\Sigma_{\rm gas}\) (Spearman rank correlation coefficient of 0.18 and \(p\)-value of 0.094). On the other hand, there is a clear positive correlation between \(\alpha_{\rm vir}\) and \(\sigma_{\rm obs,los}\) in the nucleus, nodes and dust lanes, while the arcs show a negative correlation (first row of Figure 15), with a Spearman rank correlation coefficient of 0.67 and \(p\)-value of \(5\times 10^{-20}\) for all resolved clouds. The best-fitting power law for all clouds estimated from the linmix algorithm is \(\alpha_{\rm vir}\propto\sigma_{\rm obs,los}^{0.27\pm 0.09}\) (black solid lines in the first row of Figure 15). The trends of the node and dust lane clouds are similar to that of all the clouds, while the nucleus clouds have a steeper slope and the arc clouds have a negative correlation. Although the strength of the correlation between \(\alpha_{\rm vir}\) and \(\sigma_{\rm obs,los}\) varies between regions (Spearman coefficient (\(p\)-value) of \(0.89\) (\(4\times 10^{-8}\)), \(0.37\) (\(0.003\)), \(0.77\) (\(3\times 10^{-6}\)) and \(0.78\) (\(2\times 10^{-13}\)) for the nucleus, arcs, nodes and dust lanes, respectively), this result clearly shows that the gravitational boundedness of GMCs in NGC 5806 primarily depends on how turbulent they are (i.e. \(\sigma_{\rm obs,los}\)) rather than on other physical properties. As expected, the positive power-law index implies that clouds with weaker turbulence are more gravitationally bound than clouds with stronger turbulence.

### CO conversion factor

By rejecting the assumption of a uniform CO-to-H\({}_{2}\) conversion factor \(X_{\rm CO}\) and assuming instead that all resolved clouds are virialised (\(\alpha_{\rm vir}=1\)), we can infer the variations of \(X_{\rm CO}\). We define \[X_{\rm CO,20}\equiv\frac{X_{\rm CO}}{1\times 10^{20}\ {\rm cm^{-2}}\ ({\rm K\ km\ s^{-1}})^{-1}}\ \, \tag{20}\] and show in Figure 16 the distribution of \(X_{\rm CO,20}\) of all resolved clouds of NGC 5806 (identical to Figure 10 within a scaling factor), with a logarithmic mean of \(0.61\) (\(X_{\rm CO,20}\approx 4.05\)). Considering that a typical \(X_{\rm CO,20}\) for Milky Way disc clouds is 2, the average conversion factor of NGC 5806 is about twice as large. It is also larger than that derived in the centres of many galaxies (i.e. \(X_{\rm CO,20}\approx 0.1\) - 1; Oka et al., 1998; Israel, 2009; Sandstrom et al., 2013). However, the mean conversion factor of NGC 5806 is comparable to those of 12 nearby galaxies (\(\approx 3.5\); Bolatto et al., 2008). If we applied the median \(X_{\rm CO,20}\) from the literature (\(\approx 0.5\)) to NGC 5806, the mean virial parameter of the clouds would be 4 times higher, which seems unrealistically high. Finally, Figure 16 shows that the distributions of \(X_{\rm CO,20}\) in the four regions are similar to each other, with similar means.
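Under this virial assumption, the conversion factor of each cloud follows directly from its virial mass and CO luminosity; a minimal sketch, where the factor of 2.2 M\({}_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\) per unit \(X_{\rm CO,20}\) follows from Equation 4 with \(R_{21}=1\):

```python
import numpy as np

def x_co_20(m_vir, l_co21, r21=1.0):
    """X_CO in units of 1e20 cm^-2 (K km/s)^-1, assuming alpha_vir = 1.

    m_vir  : virial masses (Msun)
    l_co21 : CO(2-1) luminosities (K km/s pc^2)
    Equation 4 adopts M_gas = 4.4 L_CO(2-1) for X_CO,20 = 2 and R_21 = 1,
    i.e. 2.2 Msun (K km/s pc^2)^-1 per unit X_CO,20.
    """
    return (m_vir * r21 / l_co21) / 2.2

# e.g. np.mean(np.log10(x_co_20(m_vir, l_co21)))  ->  ~0.61 for NGC 5806
```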
A spatially-varying conversion factor might therefore not be necessary in the central kiloparsec of NGC 5806. It should of course be noted that we used the \({}^{12}\)CO(2-1) transition (and assumed a constant \(R_{21}=1\)) instead of the \({}^{12}\)CO(1-0) transition used by the majority of previous studies. Thus, a spatially-varying \(R_{21}\) may be required to infer more plausible molecular gas masses. Observations of different \({}^{12}\)CO transitions would help to derive more accurate \(X_{\rm CO}\) and cloud properties.

## 7 Conclusions

We presented \({}^{12}\)CO(2-1) ALMA observations of the barred spiral galaxy NGC 5806 at \(25\times 22\) pc\({}^{2}\) spatial resolution, and identified 366 GMCs (170 of which are spatially and spectrally resolved) using our modified version of the cpropstoo code. The molecular gas of NGC 5806 has a highly structured distribution with a clear nucleus, nuclear ring (including nodes and arcs) and offset dust lanes. We studied the cloud properties and scaling relations in the different regions, and investigated how they are influenced by the large-scale bar. The main findings are as follows:

1. The GMCs of NGC 5806 have slightly larger molecular gas masses (\(10^{5}-10^{7.5}\) M\({}_{\odot}\)) and comparable sizes (\(15-85\) pc) but larger velocity dispersions (\(1.6\) - \(30\) km s\({}^{-1}\)) and gas mass surface densities (\(80\) - \(1000\) M\({}_{\odot}\) pc\({}^{-2}\)) than those of MW disc and Local Group galaxy clouds. On the other hand, they have larger sizes and gas masses but smaller velocity dispersions and gas mass surface densities than those of CMZ clouds (Figure 4). The GMCs in the nuclear ring are larger, brighter and more turbulent than the clouds in the nucleus, while the GMCs in the dust lanes are intermediate.
2. The cumulative gas mass function of the NGC 5806 clouds follows a truncated power law with a slope of \(-1.72\pm 0.12\). The nodes and arcs (i.e. the nuclear ring) have cloud mass functions that are significantly shallower than those of the nucleus and dust lanes, suggesting at least two different GMC populations, and massive GMCs are preferentially located in the nuclear ring (Figure 5).
3. The GMCs of NGC 5806 have a mean velocity gradient of \(0.1\) km s\({}^{-1}\) pc\({}^{-1}\), comparable to those of the clouds in the MW and Local Group galaxies, but smaller than those of the clouds in the ETGs studied so far (NGC 4429 and NGC 4526). These velocity gradients are likely induced by turbulence rather than large-scale galaxy rotation (Section 4).
4. The GMCs of NGC 5806 have an unusually steep size - linewidth relation (\(\sigma_{\rm obs,los}\propto R_{\rm c}^{1.20\pm 0.10}\); Figure 14), that may be due to gas inflows and shocks induced by the large-scale bar (Section 6.4).
5. The NGC 5806 GMCs are only marginally bound (\(\langle\alpha_{\rm vir}\rangle\approx 2\)), and the virial parameters do not significantly differ across the different regions (see Figure 10). The virial parameters are positively correlated with the linewidths (see Figure 15).
6. There are molecular gas inflows from the large-scale bar into the nuclear ring, with a velocity \(V_{\rm in}\approx 120\) km s\({}^{-1}\) and a total mass inflow rate \(\dot{M}_{\rm in}\approx 5\) M\({}_{\odot}\) yr\({}^{-1}\) (Section 6.1). These inflows could be at the origin of the observed high velocity dispersions in the nuclear ring and the clouds therein.
7.
7. The number of clouds decreases azimuthally from one node to the other within the nuclear ring, downstream from the nodes (Section 6.2). By tracking cloud disruption through GMC number statistics, we estimate the typical cloud lifetime to be \(\approx 6\) Myr. This is larger than the estimated timescales of cloud-cloud collisions, shear and/or stellar feedback (\(\approx 3\) Myr), suggesting that any of those could contribute to the destruction of clouds within the nuclear ring.

Overall, the large-scale bar seems to play an important role (via gas inflows and shocks) in shaping the cloud population in the central region of NGC 5806, including potentially creating an unusually steep cloud size - linewidth relation.

Figure 15: Dependence of the virial parameter (\(\alpha_{\rm vir}\)) on the cloud velocity dispersion (\(\sigma_{\rm obs,los}\); first row), size (\(R_{\rm c}\); second row) and gas mass (\(M_{\rm gas}\); third row) for all the resolved clouds of NGC 5806. From left to right, the panels focus on the clouds in the nucleus (blue data points), arcs (green data points), nodes (red data points) and dust lanes (yellow data points); grey circles show all other resolved clouds. Black dotted lines indicate \(\alpha_{\rm vir}=1\) and black dashed lines the mean virial parameter of all resolved clouds (\(\langle\alpha_{\rm vir}\rangle=2.02\)). In the first row, the black solid lines show the best-fitting power-law relation of all resolved clouds, while the coloured dashed line in each panel shows the best-fitting power-law relation of the resolved clouds in that region only.

## Acknowledgements

We thank the anonymous referee for helpful and constructive comments and Prof. Daniel Wang for useful discussions. WC and AC acknowledge support by the National Research Foundation of Korea (NRF), grant Nos. 2018R1D1A1B07048314, 2022R1A2C100298211 and 2022R1A6A1A03053472. LL was supported by a Hintze Fellowship, funded by the Hintze Family Charitable Foundation, and by a DAWN Fellowship, funded by the Danish National Research Foundation under grant No. 140. MB was supported by STFC consolidated grant "Astrophysics at Oxford" ST/H002456/1 and ST/K00106X/1. TAD acknowledges support from the UK Science and Technology Facilities Council through grants ST/S00033X/1 and ST/W000830/1. JG gratefully acknowledges financial support from the Swiss National Science Foundation (grant No. CRSIIS 193826). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2016.1.00437.S and ADS/JAO.ALMA#2016.2.00053.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This paper also makes use of observations made with the NASA/ESA _Hubble Space Telescope_, obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
## Data Availability The data underlying this article are available in the ALMA archive ([https://almascience.eso.org/asax/](https://almascience.eso.org/asax/)) under project codes (i) 2016.1.00437.S and (ii) 2016.2.00053.S. All analysed data will be shared upon request.
2305.10758
Extracting Low-/High- Frequency Knowledge from Graph Neural Networks and Injecting it into MLPs: An Effective GNN-to-MLP Distillation Framework
Recent years have witnessed the great success of Graph Neural Networks (GNNs) in handling graph-related tasks. However, MLPs remain the primary workhorse for practical industrial applications due to their desirable inference efficiency and scalability. To reduce their gaps, one can directly distill knowledge from a well-designed teacher GNN to a student MLP, which is termed as GNN-to-MLP distillation. However, the process of distillation usually entails a loss of information, and ``which knowledge patterns of GNNs are more likely to be left and distilled into MLPs?" becomes an important question. In this paper, we first factorize the knowledge learned by GNNs into low- and high-frequency components in the spectral domain and then derive their correspondence in the spatial domain. Furthermore, we identified a potential information drowning problem for existing GNN-to-MLP distillation, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation; we have described in detail what it represents, how it arises, what impact it has, and how to deal with it. In this paper, we propose an efficient Full-Frequency GNN-to-MLP (FF-G2M) distillation framework, which extracts both low-frequency and high-frequency knowledge from GNNs and injects it into MLPs. Extensive experiments show that FF-G2M improves over the vanilla MLPs by 12.6% and outperforms its corresponding teacher GNNs by 2.6% averaged over six graph datasets and three common GNN architectures.
Lirong Wu, Haitao Lin, Yufei Huang, Tianyu Fan, Stan Z. Li
2023-05-18T06:57:06Z
http://arxiv.org/abs/2305.10758v2
Extracting Low-/High- Frequency Knowledge from Graph Neural Networks and Injecting it into MLPs: An Effective GNN-to-MLP Distillation Framework ###### Abstract Recent years have witnessed the great success of Graph Neural Networks (GNNs) in handling graph-related tasks. However, MLPs remain the primary workhorse for practical industrial applications due to their desirable inference efficiency and scalability. To reduce their gaps, one can directly distill knowledge from a well-designed teacher GNN to a student MLP, which is termed as GNN-to-MLP distillation. However, the process of distillation usually entails a loss of information, and _"which knowledge patterns of GNNs are more likely to be left and distilled into MLPs?"_ becomes an important question. In this paper, we first factorize the knowledge learned by GNNs into low- and high-frequency components in the spectral domain and then derive their correspondence in the spatial domain. Furthermore, we identified a potential _information drowning_ problem for existing GNN-to-MLP distillation, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation; we have described in detail what it represents, how it arises, what impact it has, and how to deal with it. In this paper, we propose an efficient _Full-Frequency GNN-to-MLP_ (FF-62M) distillation framework, which extracts both low-frequency and high-frequency knowledge from GNNs and injects it into MLPs. Extensive experiments show that FF-62M improves over the vanilla MLPs by 12.6% and outperforms its corresponding teacher GNNs by 2.6% averaged over six graph datasets and three common GNN architectures. Codes are publicly available at: ## Introduction In many real-world applications, including social networks, chemical molecules, and citation networks, data can be naturally modeled as graphs. Recently, the emerging Graph Neural Networks (GNNs) [16, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] have demonstrated their powerful capability to handle various graph-related tasks [25, 26, 27, 28]. However, practical deployments of GNNs in the industry are still less popular due to inference efficiency and scalability challenges incurred by data dependency [21, 22]. In other words, GNNs generally rely on message passing to aggregate features from the neighborhood, but fetching and aggregating these nodes during inference can burden latency-sensitive applications. In contrast, Multi-Layer Perceptrons (MLPs) involve no data dependence between pairs of nodes and infer much faster than GNNs, but often with less competitive performance. Motivated by these complementary strengths and weaknesses, one solution to reduce their gaps is to perform GNN-to-MLP knowledge distillation [25, 26, 27, 28], which extracts knowledge from a well-trained teacher GNN and then distills it into a student MLP with the same network architecture (e.g., layer number and layer size). Most of the existing GNN-to-MLP distillation methods [25, 26, 27] focus on special designs on either student MLPs or teacher GNNs, but default to distill knowledge in a node-to-node fashion. For example, CPF [26] combines Label Propagation (LP) [13] into the student MLPs to improve classification performance and thus still suffers from the neighborhood-fetching latency caused by label propagation, which defeats the original intention of MLPs to be inference-efficient. 
In contrast, GLNN [26] directly distills knowledge from arbitrary GNNs to vanilla MLPs with the same network architecture. While the distilled MLPs of GLNN can be greatly improved by employing more powerful teacher GNNs, the process of distillation usually entails a loss of information [28], which may lead to sub-optimal student MLPs. In this paper, we look away from specific instantiations of teacher GNNs and student MLPs, but rather explore two fundamental questions: (1) _Can existing GNN-to-MLP distillation ensure that sufficient knowledge is distilled from teacher GNNs to student MLPs?_ If not, (2) _Which knowledge patterns of GNNs are more likely to be distilled into student MLPs?_ **Present Work.** In this paper, we identify a potential _information drowning_ problem for existing GNN-to-MLP distillation, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation. To illustrate this, we first factorize GNN knowledge into low- and high-frequency components using graph signal processing theory in the spectral domain and then derive their correspondence in the spatial domain. Moreover, we conduct a comprehensive investigation of the roles played by low- and high-frequency components in the distillation process and describe in detail what _information drowning_ represents, how it arises, what impact it has, and how to deal with it. Extensive experiments have shown that high-frequency and low-frequency knowledge are complementary to each other, and they can further improve performance on top of each other. In this paper, we propose a novel _Full-Frequency GNN-to-MLP_ (FF-G2M) distillation framework, which extracts both low- and high-frequency knowledge from teacher GNNs and injects it into student MLPs. ## Related Work **Graph Neural Networks (GNNs).** The early GNNs define graph convolution kernels in the spectral domain [1, 13] based on the graph signal processing theory, known as ChebyNet [13] and Graph Convolutional Networks (GCN) [14]. The later GNNs directly define updating rules in the spatial space and focus on the design of neighborhood aggregation functions. For instance, GraphSAGE [1] employs a generalized induction framework to generate embeddings for previously unseen nodes by aggregating known node features. Moreover, GAT [15] introduces the self-attention mechanism to assign different importance scores to neighbors for better information aggregation. We refer interested readers to the surveys [16, 17, 18] for more GNN architectures. **Graph Knowledge Distillation.** Despite the great progress, most existing GNNs share the de facto design that relies on message passing to aggregate features from neighborhoods, which may be one major source of latency in GNN inference. To address this problem, there are previous works that attempt to distill knowledge from large teacher GNNs to smaller student GNNs, termed as GNN-to-GNN [1, 17, 16, 18]. For example, the student model in RDD [19] and TinyGNN [16] is a GNN with fewer parameters but not necessarily fewer layers than the teacher GNN, which makes both designs still suffer from the neighborhood-fetching latency caused by data dependency. To enjoy the low-latency of MLPs and high-accuracy of GNNs, the other branch of graph knowledge distillation is to directly distill from large teacher GNNs to student MLPs, termed as GNN-to-MLP. The existing work on GNN-to-MLP distillation can be mainly divided into two branches: student MLPs-focused and teacher GNNs-focused. 
The former branch, such as CPF [16], _directly_ improves student MLPs by adopting deeper and wider network architectures or incorporating label propagation, both of which burden the inference latency. The other branch, such as GLNN [19], distills knowledge from teacher GNNs to vanilla MLPs with the same network architectures but without other computing-consuming operations; while the performance of their distilled MLPs can be _indirectly_ improved by employing more powerful GNNs, they still cannot match their corresponding teacher GNNs. Moreover, PGKD [17] proposes a Prototype-Guided Knowledge Distillation (PGKD) method, which does not require graph edges yet learns structure-aware MLPs. In this paper, we aim to develop a **model-agnostic** GNN-to-MLP distillation that is applicable to various GNN architectures. **Notations.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) be an attributed graph, where \(\mathcal{V}\) is the set of \(N\) nodes with features \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}]\in\mathbb{R}^{N\times d}\) and \(\mathcal{E}\) denotes the edge set. Each node \(v_{i}\in\mathcal{V}\) is associated with a \(d\)-dimensional feature vector \(\mathbf{x}_{i}\), and each edge \(e_{i,j}\in\mathcal{E}\) denotes a connection between nodes \(v_{i}\) and \(v_{j}\). The graph structure is denoted by an adjacency matrix \(\mathbf{A}\in[0,1]^{N\times N}\) with \(\mathbf{A}_{i,j}=1\) if \(e_{i,j}\in\mathcal{E}\) and \(\mathbf{A}_{i,j}=0\) if \(e_{i,j}\notin\mathcal{E}\). Consider a semi-supervised node classification task where only a subset of nodes \(\mathcal{V}_{L}\) with labels \(\mathcal{Y}_{L}\) is known; we denote the labeled set as \(\mathcal{D}_{L}=(\mathcal{V}_{L},\mathcal{Y}_{L})\) and the unlabeled set as \(\mathcal{D}_{U}=(\mathcal{V}_{U},\mathcal{Y}_{U})\), where \(\mathcal{V}_{U}=\mathcal{V}\backslash\mathcal{V}_{L}\). The node classification aims to learn a mapping \(\Phi:\mathcal{V}\rightarrow\mathcal{Y}\) with labeled data, so that it can be used to infer the labels \(\mathcal{Y}_{U}\). **Graph Neural Networks (GNNs).** Most existing GNNs rely on message passing to aggregate features from the neighborhood. A general GNN framework consists of two key computations for each node \(v_{i}\): (1) \(\mathrm{AGGREGATE}\): aggregating messages from the neighborhood \(\mathcal{N}_{i}\); (2) \(\mathrm{UPDATE}\): updating the node representation from its representation in the previous layer and the aggregated messages. Considering an \(L\)-layer GNN, the formulation of the \(l\)-th layer is as follows \[\mathbf{m}_{i}^{(l)}= \,\mathrm{AGGREGATE}^{(l)}\left(\left\{\mathbf{h}_{j}^{(l-1)}:v_{j}\in\mathcal{N}_{i}\right\}\right) \tag{1}\] \[\mathbf{h}_{i}^{(l)}= \,\mathrm{UPDATE}^{(l)}\left(\mathbf{h}_{i}^{(l-1)},\mathbf{m}_{i}^{(l)}\right)\] where \(1\leq l\leq L\), \(\mathbf{h}_{i}^{(0)}=\mathbf{x}_{i}\) is the input feature, and \(\mathbf{h}_{i}^{(l)}\) is the representation of node \(v_{i}\) in the \(l\)-th layer. **Multi-Layer Perceptrons (MLPs).** To achieve efficient inference, the vanilla MLPs (with the same network architecture as the teacher GNNs) are used as the student model by default in this paper.
For a \(L\)-layer MLP, the \(l\)-th layer is composed of a linear transformation, an activation function \(\sigma=\mathrm{ReLu}(\cdot)\), and a dropout function \(\mathrm{Dropout}(\cdot)\), as \[\mathbf{z}_{i}^{(l)}=\mathrm{Dropout}\left(\sigma\big{(}\mathbf{z}_{i}^{(l-1)} \mathbf{W}^{(l-1)}\big{)}\big{)},\quad\mathbf{z}_{i}^{(0)}=\mathbf{x}_{i} \tag{2}\] where \(\mathbf{W}^{(0)}\in\mathbb{R}^{d\times F}\) and \(\mathbf{W}^{(l)}\in\mathbb{R}^{F\times F}\)\((1\leq l<L)\) are weight matrices with the hidden dimension \(F\). In this paper, the network architecture of MLPs, such as the layer number \(L\) and layer size \(F\), is set the same as that of teacher GNNs. **GNN-to-MLP Knowledge Distillation.** The knowledge distillation is first introduced in [10] to handle mainly image data, where knowledge is transferred from a cumbersome teacher model to a simpler student model. The later works on GNN-to-MLP distillation [16, 19, 16, 17] extend it to the graph domain by imposing KL-divergence constraint \(\mathcal{D}_{KL}(\cdot,\cdot)\) between the softmax label distributions generated by teacher GNNs and student MLPs and directly optimizing the objective function as follows \[\mathcal{L}_{\mathrm{KD}}=\frac{1}{|\mathcal{V}|}\sum_{i\in\mathcal{V}} \mathcal{D}_{KL}\left(\mathrm{softmax}\left(\mathbf{z}_{i}^{(L)}\right),\mathrm{ softmax}\left(\mathbf{h}_{i}^{(L)}\right)\right) \tag{3}\] ## Knowledge Factorization from the Perspective of Spectral and Spatial Domain In this section, we first theoretically factorize the knowledge learned by GNNs into low- and high-frequency components in the spectral domain based on graph signal processing theory (Shuman et al., 2013). The normalized graph Laplacian matrix of graph \(\mathcal{G}\) is defined as \(\mathbf{L}=\mathbf{I}_{N}-\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{ \mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}}\), where \(\widetilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}_{N}\in\mathbb{R}^{N\times N}\) is an adjacency matrix with self-loop, \(\widetilde{\mathbf{D}}\in\mathbb{R}^{N\times N}\) is a diagonal degree matrix with \(\widetilde{\mathbf{D}}_{i,i}=\sum_{j}\widetilde{\mathbf{A}}_{i,j}\), and \(\mathbf{I}_{N}\) denotes the identity matrix. Since \(\mathbf{L}\) is a real symmetric matrix, it can be eigendecomposed as \(\mathbf{L}=\mathbf{U}\Lambda\mathbf{U}^{\top}\), where \(\Lambda=\mathrm{diag}\left([\lambda_{1},\lambda_{2},\cdots,\lambda_{N}]\right)\) with each eigenvalue \(\lambda_{l}\in[0,2]\) corresponding to an eigenvectors \(\mathbf{u}_{l}\) in \(\mathbf{U}\)(Chung and Graham, 1997). According to graph signal processing theory, we can directly take the eigenvector \(\{\mathbf{u}_{l}\}_{l=1}^{N}\) as bases. Given signal \(\mathbf{x}\in\mathbb{R}^{d}\), the graph Fourier transform and inverse Fourier transform (Sandryhaila and Moura, 2013; Ricaud et al., 2019) are defined as \(\widehat{\mathbf{x}}=\mathbf{U}^{\top}\mathbf{x}\) and \(\mathbf{x}=\mathbf{U}\widehat{\mathbf{x}}\). Thus, the convolutional \(*_{G}\) between the signal \(\mathbf{x}\) and convolution kernel \(\mathcal{F}\) can be defined as follows \[\mathcal{F}*_{G}\mathbf{x}=\mathbf{U}\left(\left(\mathbf{U}^{\top}\mathcal{F }\right)\odot\left(\mathbf{U}^{\top}\mathbf{x}\right)\right)=\mathbf{U} \mathbf{g}_{\theta}\mathbf{U}^{\top}\mathbf{x} \tag{4}\] where \(\odot\) denotes the element-wise product and \(\mathbf{g}_{\theta}\) is a parameterized diagonal matrix. 
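To make the spectral convolution of Eq. (4) concrete, the following is a minimal NumPy sketch (illustrative, not code from the paper): it builds the normalized Laplacian of a toy graph, performs the graph Fourier transform, and applies a diagonal spectral kernel \(\mathbf{g}_{\theta}\); choosing \(g_{\theta}(\lambda)\equiv 1\) recovers the identity mapping that serves as the starting point of the factorization discussed next.

```python
# Minimal NumPy sketch of Eq. (4): graph Fourier transform and spectral filtering.
# The toy graph, signal and filter choice are illustrative, not taken from the paper.
import numpy as np

# Adjacency of a small undirected toy graph (4 nodes).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_tilde = A + np.eye(4)                              # adjacency with self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(1)))
L = np.eye(4) - D_inv_sqrt @ A_tilde @ D_inv_sqrt    # normalized graph Laplacian

lam, U = np.linalg.eigh(L)                           # eigenvalues in [0, 2], Fourier basis U

x = np.array([1.0, 0.0, 2.0, -1.0])                  # a toy 1-d node signal
x_hat = U.T @ x                                      # graph Fourier transform

g_theta = np.ones_like(lam)                          # identity kernel g_theta(lambda) = 1
x_filtered = U @ (g_theta * x_hat)                   # Eq. (4); equals x up to round-off

assert np.allclose(x_filtered, x)
```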
Most of the existing GNN architectures can be regarded as a special instantiation on the convolutional kernel \(\mathcal{F}\) (i.e., the matrix \(\mathbf{g}_{\theta}\)). For example, GCN-Cheby parameterizes \(g_{\theta}\) with a polynomial expansion \(\mathbf{g}_{\theta}=\sum_{k=0}^{K-1}\alpha_{k}\Lambda^{k}\), and GCN defines the convolutional kernel as \(\mathbf{g}_{\theta}\!=\!\mathbf{I}_{N}-\Lambda\). Considering a special convolution kernel \(\mathcal{F}_{A}\!=\!\mathbf{I}_{N}\), we have \(\mathcal{F}_{A}*_{G}\mathbf{x}\!=\!\mathbf{U}\mathbf{I}_{N}\mathbf{U}^{\top}\mathbf{x}\!=\!\mathbf{U}\widehat{\mathbf{x}}\!=\!\mathbf{x}\), i.e., this is an identity mapping, where all information can be preserved. Next, we decompose the graph knowledge into low-frequency and high-frequency components (Bo et al., 2021; Wu et al., 2019) by factorizing \(\mathcal{F}_{A}\!=\!\mathbf{I}_{N}\) as follows \[\mathcal{F}_{A}\!=\!\mathbf{I}_{N}\!=\!\frac{1}{2}\!\left(\left(\underbrace{\mathbf{I}_{N}+\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}}}_{\text{Low-Pass Filter}\,\,\mathcal{F}_{M}}\right)+\left(\underbrace{\mathbf{I}_{N}-\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}}}_{\text{High-Pass Filter}\,\,\mathcal{F}_{H}}\right)\right)\] For a given signal \(\mathbf{x}\in\mathbb{R}^{d}\), e.g., a node feature, we have \(\mathcal{F}_{A}*_{G}\mathbf{x}=\frac{1}{2}\left(\mathcal{F}_{M}+\mathcal{F}_{H}\right)*_{G}\mathbf{x}=\frac{1}{2}(\mathcal{F}_{M}*_{G}\mathbf{x}+\mathcal{F}_{H}*_{G}\mathbf{x})=\mathbf{x}\), which means that any signal \(\mathbf{x}\) can be decomposed into the average of two components \(\mathcal{F}_{M}*_{G}\mathbf{x}\) and \(\mathcal{F}_{H}*_{G}\mathbf{x}\). **Analysis on the Spectral Domain.** Proposition 1 states what the two components \(\mathcal{F}_{M}*_{G}\mathbf{x}\) and \(\mathcal{F}_{H}*_{G}\mathbf{x}\) represent. **Proposition 1**: _The convolution kernel \(\mathcal{F}_{M}\) works as a low-pass filter, which filters out high-frequency information, and \(\mathcal{F}_{M}*_{G}\mathbf{x}\) represents low-frequency knowledge; \(\mathcal{F}_{H}\) works as a high-pass filter, which filters out low-frequency information, and \(\mathcal{F}_{H}*_{G}\mathbf{x}\) represents high-frequency knowledge._ **Proof 1**: _For an \(L\)-layer GNN, the signal \(\mathbf{x}\) is filtered by the \(L\)-order convolution kernel \(\mathcal{F}_{M}^{L}=(\mathbf{I}_{N}+\widetilde{\mathbf{D}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-\frac{1}{2}})^{L}=(2\mathbf{I}_{N}-\mathbf{L})^{L}\) to output \(\mathcal{F}_{M}^{L}*_{G}\mathbf{x}=\mathbf{U}(2\mathbf{I}_{N}-\Lambda)^{L}\mathbf{U}^{\top}\mathbf{x}\) with \(g_{\theta}^{L}(\lambda_{i})=(2-\lambda_{i})^{L}\). As shown in Fig. 1, \(g_{\theta}^{L}(\lambda_{i})\) decreases monotonically in the range \(\lambda_{i}\in[0,2]\) and reaches \(g_{\theta}^{L}(\lambda_{i}\!=\!2)=0\) at \(\lambda_{i}=2\), which mainly amplifies the low-frequency information and filters out the high-frequency information. Similarly, the \(L\)-order convolution kernel \(\mathcal{F}_{H}^{L}\) has \(g_{\theta}^{L}(\lambda_{i})=\lambda_{i}^{L}\). As shown in Fig.
1, \(g_{\theta}^{L}(\lambda_{i})\) increases monotonically in the range \(\lambda_{i}\in[0,2]\) and reaches \(g_{\theta}^{L}(\lambda_{i}=0)=0\) at \(\lambda_{i}=0\), which mainly filters out the low-frequency information but in turn amplifies the high-frequency information._

Figure 1: Eigenvalues _vs._ Amplitudes.

**Correspondence on the Spatial Domain.** We have derived that \(\mathcal{F}_{M}*_{G}\mathbf{x}\) and \(\mathcal{F}_{H}*_{G}\mathbf{x}\) represent mainly the low- and high-frequency components of signal \(\mathbf{x}\), and we can next derive their correspondences in the spatial domain, as follows \[\mathcal{F}_{M}*_{G}\mathbf{x}_{i} \rightarrow \mathbf{x}_{i}^{(low)}=\mathbf{x}_{i}+\sum_{j\in\mathcal{N}_{i}}\frac{\mathbf{x}_{j}}{\sqrt{|\mathcal{N}_{i}||\mathcal{N}_{j}|}} \tag{5}\] \[\mathcal{F}_{H}*_{G}\mathbf{x}_{i} \rightarrow \mathbf{x}_{i}^{(high)}=\mathbf{x}_{i}-\sum_{j\in\mathcal{N}_{i}}\frac{\mathbf{x}_{j}}{\sqrt{|\mathcal{N}_{i}||\mathcal{N}_{j}|}}\] By the derivation in Eq. (5), the low-frequency knowledge \(\mathcal{F}_{M}*_{G}\mathbf{x}\) is the sum of the node feature and its neighborhood features in the spatial domain. On the other hand, the high-frequency knowledge \(\mathcal{F}_{H}*_{G}\mathbf{x}\) represents the differences between the target node feature and its neighborhood features. There have recently been some novel GNN models (Bo et al., 2021; Pei et al., 2020; Zhu et al., 2020; Chien et al., 2021) that can capture both low- and high-frequency information simultaneously or adaptively. However, in this paper, we focus on the design of distillation objective functions and do not consider indirect performance improvements by employing these more powerful but complex GNNs. Instead, we consider the most commonly used GNNs, such as GCN (Kipf and Welling, 2016), GraphSAGE (Hamilton, Ying, and Leskovec, 2017), and GAT (Velickovic et al., 2017), all of which rely on multi-layer message passing to aggregate features of neighboring nodes that are multiple hops away, i.e., they essentially work as a low-pass filter \(\mathcal{F}_{M}^{L}\) or its variants. ## Roles Played by Low- and High-Frequency Knowledge during Distillation ### Rethinking the Core of Knowledge Distillation We rethink the core of knowledge distillation from three shallow-to-deep perspectives to highlight our motivations. * _Firstly_, knowledge distillation enables the representations of MLPs to "mimic" those of GNNs as closely as possible by imposing KL-divergence constraints between their softmax distribution probabilities. However, such a mimicking (or fitting) process is inevitably accompanied by a loss of information, especially high-frequency information, which explains why the performance of student MLPs is always hard to match with that of teacher GNNs. * _Secondly_, for a neural network framework, any change in the final representations is achieved indirectly by optimizing the mapping function, i.e., the network parameters. In this sense, knowledge distillation essentially optimizes the parameter matrices \(\{\mathbf{W}^{(l)}\}_{l=0}^{L-1}\) of the student MLPs to make it functionally approximate the convolution kernel of the teacher GNNs, which makes the student MLPs also serve as a low-pass filter \(\widetilde{\mathcal{F}}_{M}^{L}\) for graph data. * _Finally_, the low-pass filter in the spectral domain is equivalent to neighborhood aggregation in the spatial domain as derived in Eq. (5), which in essence can be considered as a special use of the graph topology.
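The spatial correspondence of Eq. (5), referenced in the last point above, can likewise be written out in a few lines. The sketch below is a toy NumPy illustration (not the authors' code): it forms the degree-normalized neighbor sum and verifies that averaging the low- and high-frequency components recovers the original signal, mirroring \(\mathcal{F}_{A}=\frac{1}{2}(\mathcal{F}_{M}+\mathcal{F}_{H})=\mathbf{I}_{N}\).

```python
# Minimal sketch of Eq. (5): the low-frequency part of a node feature is itself plus a
# degree-normalized sum over its neighbors; the high-frequency part is itself minus that
# sum. The toy graph and features are illustrative placeholders.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.5],
              [0.9, 0.4],
              [0.2, 1.0],
              [0.1, 1.1]])                      # toy node features (N x d)

deg = A.sum(1)
A_norm = A / np.sqrt(np.outer(deg, deg))        # edge weights 1 / sqrt(|N_i| |N_j|)

X_low = X + A_norm @ X                          # x_i + normalized neighbor sum
X_high = X - A_norm @ X                         # x_i - normalized neighbor sum

# Averaging the two components recovers the original signal.
assert np.allclose(0.5 * (X_low + X_high), X)
```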
To explore the roles played by graph topology during GNN-to-MLP distillation, we plot the _mean cosine similarity_ of nodes with their first-order neighbors for vanilla GCNs, vanilla MLPs, and Distilled MLPs (GLNN) on the Cora dataset in Fig. 2(a), from which we observe that the mean similarity of GCNs and GLNN gradually increases with training, while that of vanilla MLPs gradually decreases, which indicates that knowledge distillation has introduced graph topology as an inductive bias (as GCNs has done), while vanilla MLPs do not. As a result, the distilled MLPs can enjoy the benefits of topology-awareness in training but without neighborhood-fetching latency in inference. ### High-Frequency Information Drowning Next, we discuss a potential high-frequency information drowning problem from both spectral and spatial domains, i.e., the high-frequency information of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during the process of GNN-to-MLP knowledge distillation. **How information drowning arises?**_From the perspective of spectral domain_, the knowledge distillation optimizes the network parameters of the student MLPs to make it functionally approximate the convolution kernel of the teacher GNNs, i.e., \(\widetilde{\mathcal{F}}_{M}^{L}\approx\mathcal{F}_{M}^{L}\). The information loss induced by such approximation may be inconsequential for high-amplitude low-frequency information but can be catastrophic for those high-frequency information with very low amplitude, as shown in Fig. 2(c). As a result, compared to low-frequency information, high-frequency information is more likely to be drowned by these optimization errors. **What impact does information drowning have?**_From the perspective of spatial domain_, the information drowning may lead to distilled MLPs that, despite preserving neighborhood smoothing well, can easily neglect differences between nodes, such as pairwise distances. To illustrate this, we consider a target node \(v_{i}\) and its two neighboring nodes \(v_{j}\) and \(v_{k}\) in Fig. 2(d), where they are mapped closely by GNNs. In the process of knowledge distillation, the representations of these three nodes may be mapped around the representations of teacher GNNs, i.e., they are still mapped closely with most of their low-frequency information preserved; however, the relative distances between nodes, i.e., their high-frequency information, may be drowned dramatically. For example, node \(v_{i}\) is adjacent to node \(v_{j}\) but far from node \(v_{k}\) in the representation space of teacher GNNs. However, in the representation space of student MLPs, node \(v_{i}\) becomes closer to node \(v_{k}\) and farther from node \(v_{j}\). The curves of the pairwise distance differences between the teacher GCNs and the student MLP in Fig. 2(b) show that common knowledge distillation (e.g., GLNN) is not good at capturing high frequency information, compared to our proposed FF-G2M. Moreover, extensive qualitative and quantitative experiments have been provided to demonstrate the harmfulness of the identified high-frequency information drowning problem in the experimental section. The detailed experimental settings, including hyperparameters and evaluation metric definitions, are available in **Appendix B&E**. 
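The two diagnostics used in Fig. 2 can be computed roughly as follows. This is a plausible sketch only: the paper's exact metric definitions are given in its Appendix E (not reproduced here), and the representation and edge tensors are assumed inputs.

```python
# Plausible sketch of the two diagnostics discussed above: (i) mean cosine similarity
# between each node and its first-order neighbors, and (ii) the discrepancy between
# teacher and student pairwise distances over edges. Not the paper's exact definitions.
import torch
import torch.nn.functional as F

def mean_neighbor_cosine(h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Average cosine similarity over all (i, j) edges; higher = smoother neighborhoods."""
    src, dst = edge_index            # edge_index: (2, E) tensor of node indices (assumed)
    return F.cosine_similarity(h[src], h[dst], dim=-1).mean()

def pairwise_distance_gap(h_teacher, h_student, edge_index) -> torch.Tensor:
    """Mean absolute difference between teacher and student neighbor distances;
    lower = pairwise-difference (high-frequency) knowledge better preserved."""
    src, dst = edge_index
    d_t = (h_teacher[src] - h_teacher[dst]).norm(dim=-1)
    d_s = (h_student[src] - h_student[dst]).norm(dim=-1)
    return (d_t - d_s).abs().mean()
```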
## Full-Frequency GNN-to-MLP (FF-G2M) Knowledge Distillation The above discussions reached two important insights: (1) the inductive bias of graph topology plays an important role, and (2) it is mainly the low-frequency knowledge of graph data that has been distilled from the teacher GNNs to the student MLPs. Inspired by these two insights, we propose _Low-Frequency Distillation_ (LFD) and _High-Frequency Distillation_ (HFD) to fully capture the low-frequency and high-frequency knowledge learned by GNNs, respectively. An high-level overview of the proposed _Full-Frequency GNN-to-MLP_ (FF-G2M) framework is shown in Fig. 3. Figure 2: (a) Mean cosine similarity (the higher, the better) between nodes with their first-order neighbors on Cora. (b) Pairwise distance differences (the lower, the better) between teacher GCNs and student MLPs on Cora. (c)(d) Illustrations of how the high-frequency information drowning arises and what potential impact it has in the spectral and spatial domains, respectively. ### Low-Frequency Distillation (LFD) The node representations of teacher GNNs are generated by explicit message passing, so it mainly captures the low-frequency information of the graph data as analyzed earlier. Unlike aggregating features from neighborhoods as in GNNs, we directly distill (diffuse) knowledge from teacher GNNs into the neighborhoods of student MLPs in order to better utilize topological information and low-frequency knowledge captured by GNNs, formulated as follows \[\mathcal{L}_{\mathrm{LFD}}\!=\!\frac{1}{|\mathcal{E}|}\sum_{i\in\mathcal{V}}\sum_ {j\in\mathcal{N}_{i}\cup i}\mathcal{D}_{KL}\Big{(}\sigma(\mathbf{z}_{j}^{(L)} /\tau_{1}),\sigma(\mathbf{h}_{i}^{(L)}/\tau_{1})\Big{)} \tag{6}\] where \(\tau_{1}\) is the low-frequency distillation temperature, and \(\sigma=\mathrm{softmax}(\cdot)\) denotes an activation function. ### High-Frequency Distillation (HFD) As derived in Eq. (5), the high-frequency components in the spectral domain represent the differences between node feature and its neighborhood features in the spatial domain. Inspired by this, we propose High-Frequency Distillation (HFD), a GNN knowledge objective that trains student MLPs to preserve the neighborhood pairwise differences from the representation space of teacher GNNs. The neighborhood pairwise differences around node \(v_{i}\) are defined as the differences between the target node feature \(\mathbf{s}_{i}\) and its neighborhood features \(\{\mathbf{s}_{j}\mid j\in\mathcal{N}_{i}\}\), which can be computed by the kernel \(\mathcal{K}\left(\mathbf{s}_{i},\mathbf{s}_{j}\right)=|\mathbf{s}_{i}-\mathbf{ s}_{j}|\), where \(|\cdot|\) denotes the element-wise absolute values. The high-frequency distillation trains the student model to mimic the neighborhood pairwise differences from the teacher GNNs via KL-divergence constraints, which can be defined as follows \[\mathcal{L}_{\mathrm{HFD}}\!=\!\frac{1}{|\mathcal{E}|}\sum_{i\in \mathcal{V}}\sum_{j\in\mathcal{N}_{i}}\mathcal{D}_{\mathrm{KL}}\Big{(}\sigma \big{(}\mathcal{K}\big{(}\mathbf{z}_{i}^{(L)},\mathbf{z}_{j}^{(L)}\big{)}/ \tau_{2}\big{)}, \tag{7}\] \[\sigma\big{(}\mathcal{K}\big{(}\mathbf{h}_{i}^{(L)},\mathbf{h}_{ j}^{(L)}\big{)}/\tau_{2}\big{)}\Big{)}\] where \(\mathcal{K}\left(\cdot,\cdot\right)\) denotes the element-wise absolute values, and \(\tau_{2}\) is the high-frequency distillation temperature. ### Training Strategy The pseudo-code of the FF-G2M framework is summarized in **Appendix C**. 
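For concreteness, a minimal PyTorch sketch of the two objectives in Eqs. (6) and (7) is given below. It is an illustrative implementation, not the authors' released code: `z` and `h` denote student and (frozen) teacher outputs, `edge_index` is an assumed (2, E) directed edge list, and the KL argument order follows the equations as written.

```python
# Minimal sketch of the FF-G2M objectives: Eq. (6) diffuses each teacher node's
# distribution to the student representations of its neighborhood (plus the node itself),
# and Eq. (7) matches teacher/student neighborhood pairwise differences.
import torch
import torch.nn.functional as F

def kl(p_logits, q_logits, tau):
    """D_KL(softmax(p/tau) || softmax(q/tau)) per pair, following the written order."""
    p = F.softmax(p_logits / tau, dim=-1)
    log_p = F.log_softmax(p_logits / tau, dim=-1)
    log_q = F.log_softmax(q_logits / tau, dim=-1)
    return (p * (log_p - log_q)).sum(-1)

def lfd_loss(z, h, edge_index, tau1=1.0):
    src, dst = edge_index                           # j = src (neighbor), i = dst (center)
    node_ids = torch.arange(z.size(0), device=z.device)
    j = torch.cat([src, node_ids])                  # include the self term j = i
    i = torch.cat([dst, node_ids])
    return kl(z[j], h[i].detach(), tau1).mean()     # Eq. (6), averaged over pairs

def hfd_loss(z, h, edge_index, tau2=1.0):
    src, dst = edge_index
    diff_s = (z[dst] - z[src]).abs()                # K(z_i, z_j) = |z_i - z_j|
    diff_t = (h[dst] - h[src]).abs().detach()
    return kl(diff_s, diff_t, tau2).mean()          # Eq. (7), averaged over edges

# These two terms are combined with the cross-entropy loss on labeled nodes, as described next.
```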
To achieve GNN-to-MLP knowledge distillation, we first pre-train the teacher GNNs with the classification loss \(\mathcal{L}_{\mathrm{label}}=\frac{1}{|\mathcal{V}_{L}|}\sum_{i\in\mathcal{V}_ {L}}\mathcal{H}\big{(}y_{i},\sigma(\mathbf{h}_{i}^{(L)})\big{)}\), where \(\mathcal{H}(\cdot)\) denotes the cross-entropy loss and \(y_{i}\) is the ground-truth label of node \(v_{i}\). Finally, the total objective function to distill the low- and high-frequency knowledge from the teacher GNNs into the student MLPs is defined as follows \[\mathcal{L}_{\mathrm{total}}=\frac{\lambda}{|\mathcal{V}_{L}|}\sum_{i\in \mathcal{V}_{L}}\mathcal{H}\big{(}y_{i},\sigma(\mathbf{z}_{i}^{(L)})\big{)}\! +\!\big{(}1\!-\!\lambda\big{)}\big{(}\mathcal{L}_{\mathrm{LFD}}\!+\!\mathcal{L }_{\mathrm{HFD}}\big{)}\] where \(\lambda\) is the weights to balance the influence of the classification loss and two knowledge distillation losses. The time complexity analysis of FF-G2M is available in **Appendix D**. ### Discussion and Comparision In this subsection, we compare the proposed FF-G2M framework with the commonly used node-to-node distillation (e.g., GLNN) in Fig. 3. While the node-to-node distillation can map neighboring nodes closely in the representation space of MLPs, i.e., preserving low-frequency knowledge, it completely confounds the relative distance between node pairs, i.e., high-frequency knowledge is drowned, leading to a different (incorrect) class-boundary with the teacher GNNs. In terms of the proposed FF-G2M framework, the Low-Frequency Distillation distills (diffuses) the features aggregated from the neighborhood in the teacher GNNs back into their neighborhood of the student MLPs to better utilize the extracted low-frequency knowledge. Besides, High-Frequency Distillation directly distills the neighborhood pairwise differences from teacher GNNs into the student MLPs to better capture the high-frequency knowledge patterns, i.e., the relative positions between pairs of nodes. ## Experiments **Datasets.** The effectiveness of the FF-G2M framework is evaluated on six public real-world datasets, including Cora [14], Citeseer [11], Pubmed [15], Coauthor-CS, Coauthor-Physics, and Amazon-Photo [12]. For each dataset, following the data splitting settings of [13, 14], we select 20 nodes per class to construct a training set, 500 nodes for validation, and 1000 nodes for testing. A statistical overview Figure 3: Illustration of the Full-Frequency GNN-to-MLP (FF-G2M) distillation framework, where the dotted red lines denote the predicted class-boundary, the solid black lines denote feature aggregation from the neighborhood, and the dashed black lines denote the distillation of knowledge (neighborhood features and pairwise distances) from teacher GNNs to student MLPs. of these datasets is placed in **Appendix A**. Besides, we defer the implementation details and hyperparameter settings for each dataset to **Appendix B** and supplementary materials. **Baselines.** Three basic components in knowledge distillation are (1) teacher model, (2) student model, and (3) distillation loss. As a model-agnostic general framework, FF-G2M can be combined with any teacher GNN architecture. In this paper, we consider three types of teacher GNNs, including GCN [11], GraphSAGE [10], and GAT [21]. As for the student model, we default to using pure MLPs (with the same network architecture as the teacher GNNs) as the student model for a fair comparison. 
Finally, the focus of this paper is on designing distillation objectives rather than powerful teacher and student models. Therefore, we only take GLNN [13] as an important baseline to compare FF-G2M with the conventional node-to-node distillation approach. The experiments of all baselines and FF-G2M are implemented based on the standard implementation in the DGL library [23] using the PyTorch 1.6.0 library on an NVIDIA V100 GPU. Each set of experiments is run five times with different random seeds, and the averages are reported as metrics. ### Classification Performance Comparison This paper aims to explore which knowledge patterns _should_ and _how to_ be distilled into student MLPs, rather than designing more powerful teacher GNNs. Therefore, we consider three classical GNNs, including GCN, GraphSAGE, and GAT, as the teacher models and distill their knowledge into MLPs with the same network architecture. The experimental results on six datasets are reported in Table 1, from which we can make the following observations: (1) In general, more powerful teacher GNNs can lead to student MLPs with better classification performance. However, such improvements are usually very limited and do not work for all datasets and GNN architectures. For example, on the Citeseer dataset, the performance of GLNN drops below the vanilla implementation of the teacher GNNs by 0.4% (GraphSAGE) and 0.6% (GAT), respectively. (2) The proposed FF-G2M framework can consistently improve the performance of student MLPs across three GNN architectures on all six datasets. For example, FF-G2M can outperform the vanilla teacher GNNs by 2.63% (GCN), 2.58% (GraphSAGE), and 2.55% (GAT) averaged over six datasets, respectively. ### Qualitative and Quantitative Analysis Extensive qualitative and quantitative experiments are conducted to explore the existence and harmfulness of the information drowning problem and how FF-G2M solves it. **Qualitative Analysis on Visualizations.** We consider GCNs as the teacher model and compare its visualization with that of vanilla MLPs, GLNN, and FF-G2M on the Cora dataset (due to space limitations, more results can be found in **Appendix F**). We select a target node (id 27 for Cora) and analyze its position relative to its neighbors in Fig. 4, from which we observe that: (1) The vanilla MLPs map neighboring nodes apart, which indicates that they are not even good at capturing low-frequency information. (2) GLNN fails to capture the relative positions between the target node and its neighbors, i.e., high-frequency information. (3) FF-G2M well preserves the relative positions between nodes while mapping neighboring nodes closely, which suggests that it is good at capturing both low- and high-frequency information. For example, on the Cora dataset, the target node (id 27) is the closest to node (id 1810) and the farthest from node (id 2678) in the visualizations of both the teacher GCNs and FF-G2M's student MLPs. **Quantitative Analysis.** We further quantify how well each model captures low-frequency knowledge, measured by the _mean cosine similarity_ between nodes and their first-order neighbors, and high-frequency knowledge, measured by _KL-divergence_ between the pairwise distances of teacher GNNs and student MLPs, respectively. The detailed mathematical definitions of these two evaluation metrics are available in **Appendix E**. From the experimental results on the Cora dataset reported in Fig. 5, we make four observations: (1) The vanilla MLP does not consider the inductive bias of the graph topology at all and thus fails to capture the low- and high-frequency knowledge in the graph data.
(2) GLNN is capable of successfully capturing low-frequency information, i.e., neighborhood smoothing, but is not good at capturing high-frequency knowledge, i.e., difference information between pairs of nodes. (3) The proposed low- and high-frequency distillation has an advantage over GLNN in capturing one type of individual frequency but lags behind in another frequency. (4) The proposed FF-G2M combines both the two distillation and is better at capturing both low- and high-frequency knowledge than GLNN, especially the latter. ### Roles of Low- and High-frequency Knowledge To evaluate the roles played by low- and high-frequency knowledge in GNN-to-MLP distillation, we consider distillation with only \(\mathcal{L}_{\mathrm{LFD}}\) and \(\mathcal{L}_{\mathrm{HFD}}\), in addition to the full FF-G2M model. The experiments (with GCNs as the teacher model) on six datasets are reported in Table. 2, from which we observe that: (1) The proposed low-frequency distillation \(\mathcal{L}_{\mathrm{LFD}}\) makes fuller use of the graph topology and the low-frequency information from GNNs, and in turn outperforms GLNN that adopts node-to-node distillation on all six datasets. (2) While both low- and high-frequency distillation can work alone to improve the performance of vanilla MLPs, the former plays a _primary_ role and the latter a _secondary_ (auxiliary) role. More importantly, these two distillations are complementary to each other and can further improve performance on top of each other. (3) The FF-G2M (full model) considers both low- and high-frequency distillation and is capable of capturing full-frequency knowledge, and thus can far outperform GLNN on all six datasets. ## Conclusion In this paper, we factorize the knowledge learned by GNNs into low- and high-frequency components in the spectral and spatial domains and then conduct a comprehensive investigation on their roles played in GNN-to-MLP distillation. Our key finding is existing GNN-to-MLP distillation may suffer from a potential _information drowning_ problem, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation. Therefore, we propose a novel _Full-Frequency GNN-to-MLP_ (FF-G2M) knowledge distillation framework, which extracts both low- and high-frequency knowledge from GNNs and injects it into MLPs. As a simple but general framework, FF-G2M outperforms other leading methods across various GNN architectures and graph datasets. Limitations still exist; for example, this paper pays little attention to the special designs on teacher GNNs, and designing more expressive teachers to directly capture full-frequency knowledge may be another promising direction. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Method** & **Cora** & **Citeseer** & **Pubmed** & **Amazon-Photo** & **Coauthor-CS** & **Coauthor-Phy** \\ \hline Vanilla GCN & 82.2 \(\pm\) 0.5 & 71.6 \(\pm\) 0.4 & 79.3 \(\pm\) 0.3 & 91.8 \(\pm\) 0.6 & 89.9 \(\pm\) 0.7 & 91.9 \(\pm\) 1.2 \\ Vanilla MLP & 59.7 \(\pm\) 1.0 & 60.7 \(\pm\) 0.5 & 71.5 \(\pm\) 0.5 & 77.4 \(\pm\) 1.2 & 87.5 \(\pm\) 1.4 & 89.2 \(\pm\) 0.9 \\ GLNN (Zhang et al., 2021) & 82.8 \(\pm\) 0.5 & 72.7 \(\pm\) 0.4 & 80.2 \(\pm\) 0.6 & 91.4 \(\pm\) 1.0 & 92.7 \(\pm\) 1.0 & 93.2 \(\pm\) 0.5 \\ \hline \hline Low-Frequency KD w/ \(\mathcal{L}_{\mathrm{LFD}}\) & 83.4 \(\pm\) 0.9 & 73.7 \(\pm\) 0.6 & 81.0 \(\pm\) 0.5 & 92.1 \(\pm\) 0.8 & 93.2 \(\pm\) 0.8 & 93.7 \(\pm\) 0.8 \\ High-Frequency KD w/ \(\mathcal{L}_{\mathrm{HFD}}\) & 68.5 \(\pm\) 0.8 & 63.2 \(\pm\) 0.7 & 74.4 \(\pm\) 0.4 & 82.5 \(\pm\) 1.3 & 89.3 \(\pm\) 1.7 & 91.0 \(\pm\) 1.6 \\ FF-G2M (full model) & **84.3 \(\pm\) 0.4** & **74.0 \(\pm\) 0.5** & **81.8 \(\pm\) 0.4** & **94.2 \(\pm\) 0.4** & **93.8 \(\pm\) 0.5** & **94.4 \(\pm\) 0.9** \\ \hline \hline \end{tabular} \end{table} Table 2: Classification accuracy \(\pm\) std (%) on six real-world datasets. The best metrics are marked by **bold**. Figure 4: Representation 2D-Visualizations (by UMAP (McInnes et al., 2018)) of the teacher model and three student models on Cora. Each node is colored by its ground-truth label, and the numbers around the nodes denote the node ids. Figure 5: (a) Curves of mean cosine similarity (**the higher, the better**) between nodes with their first-order neighbors. (b) Curves of pairwise distance differences (**the lower, the better**) between the teacher GNNs and student MLPs. ## Acknowledgement This work is supported in part by Ministry of Science and Technology of the People's Republic of China (No. 2021YFA1301603) and National Natural Science Foundation of China (No. U21A20427).
2309.01580
Dissipative Landau-Zener transitions in a three-level bow-tie model: accurate dynamics with the Davydov multi-D2 Ansatz
We investigate Landau-Zener (LZ) transitions in the three-level bow-tie model (3L-BTM) in a dissipative environment by using the numerically accurate method of multiple Davydov D2 Ansatze. We first consider the 3L-TBM coupled to a single harmonic mode, study evolutions of the transition probabilities for selected values of the model parameters, and interpret the obtained results with the aid of the energy diagram method. We then explore the 3L-TBM coupled to a boson bath. Our simulations demonstrate that sub-Ohmic, Ohmic and super-Ohmic boson baths have substantially different influences on the 3L-BTM dynamics, which cannot be grasped by the standard phenomenological Markovian single-rate descriptions. We also describe novel bath-induced phenomena which are absent in two-level LZ systems.
Lixing Zhang, Maxim F. Gelin, Yang Zhao
2023-09-04T13:08:03Z
http://arxiv.org/abs/2309.01580v3
Dissipative Landau-Zener transitions in a three-level bow-tie model: accurate dynamics with the Davydov multi-D\({}_{2}\) Ansatz ###### Abstract We investigate Landau-Zener (LZ) transitions in the three-level bow-tie model (3L-BTM) in a dissipative environment by using the numerically accurate method of multiple Davydov D\({}_{2}\) Ansatze. We first consider the 3L-BTM coupled to a single harmonic mode, study evolutions of the transition probabilities for selected values of the model parameters, and interpret the obtained results with the aid of the energy diagram method. We then explore the 3L-BTM coupled to a boson bath. Our simulations demonstrate that sub-Ohmic, Ohmic and super-Ohmic boson baths have substantially different influences on the 3L-BTM dynamics, which cannot be grasped by the standard phenomenological Markovian single-rate descriptions. We also describe novel bath-induced phenomena which are absent in two-level LZ systems. ## I Introduction The Landau-Zener (LZ) transition, which occurs in a two-level system (TLS) with energy spacing tuned by an external field, appears as a diabatic transition when the energy levels draw near. This widely recognized phenomenon has been replicated through experiments across various physical systems, such as Rydberg lithium atoms in a strong electric field [1], accelerated optical lattices [2], and atoms in a periodic potential [3]. Moreover, LZ transition variants can be engineered and implemented in many QED devices to fulfill designated functions [4; 5; 6]. Taking into account the dissipative environment of the LZ transition, the driven spin-boson model can be borrowed to describe the dynamics of a dissipative LZ model [7; 8; 9]. By coupling a superconducting qubit to a transmission line, the dissipative LZ transition can be realized in a lab [10; 11], and by tuning the spin-bath coupling strength, a coherent-to-incoherent transition may emerge. The original LZ model captures only the crossing of two energy levels. By changing the number of energy levels and the strength of the external driving field, various avoided crossings can emerge at any points in the energy diagram. This leads to countless novel variants of the LZ model [12; 13; 14; 11; 15]. Experimental realizations of such models usually require systems with large spins or multiple states, such as nitrogen vacancies in diamond [16; 17], triple quantum dots [18; 19], multiple trap Bose-Einstein condensates [20], and Fe\({}_{8}\) molecular nano-magnets [21]. Among myriad variants of the LZ models, the simple yet representative three-level bow-tie model (3L-BTM) is chosen for study here [15]. The name "bow-tie" comes from the energy diagram of the model, where three energy levels join at one point in time. As has already been mentioned, any bare LZ-like model has to be considered as a zero-order approximation, because any quantum system under realistic conditions is coupled to the environment which causes consequential relaxation and decoherence processes in the system [22]. Dissipative variants of the 3L-BTM have been studied, but the environment was often approximated by phenomenological decoherence models [23], effective (non-Hermitian) Hamiltonians [24; 25] and Markovian Lindblad master equations [26]. Such oversimplified treatments of the environment may become insufficient, notably taking into account recent progress in engineering or emulating boson baths with arbitrary spectral densities [27; 28; 29]. 
On the other hand, accurate simulations of multilevel dissipative LZ systems coupled to realistic boson baths are rare [30], owing to substantial computational challenges. Indeed, conventional methods that are based on master equations of the reduced system density matrices necessitate Hilbert-space truncation to manage computational costs. The quasi-adiabatic path integral (QUAPI) method can, in principle, tackle the problem, and it was applied to the conventional LZ model in Refs. [31; 32]. However, considering dissipation in the form of time-correlation functions, this method has excessive memory requirements for large spin systems, such as the 3L-BTM explored in this work. To address these computational issues, in this study we employ the method of the multiple Davydov Ansatze. It has been shown that, with the increasing multiplicity (i.e., with the inclusion of a sufficient number of coherent states in the trial state), the method is capable of delivering a numerically accurate solution to the multidimensional time-dependent Schrodinger equation [33]. With its computational cost manageable and its accuracy benchmarked by several numerically "exact" computational protocols, the method of multiple Davydov Ansatze has been applied to many physical and chemical problems, such as disordered Tavis-Cummings models [34], exciton dynamics in transition metal dichalcogenides [35], and photon delocalization in a Rabi dimer model [36]. In the present work, the method of multiple Davydov Ansatze is used to scrutinize dynamics of LZ transitions in the dissipative 3L-BTM. The remainder of this paper is arranged as follows. In Sec. II, we introduce the model, the theoretical framework of the multi-D\({}_{2}\) Ansatz, and the observables of concern. In Sec. III, we present and discuss various obtained results. Sec. IV is the Conclusion. The convergence tests that demonstrate the accuracy of our calculations can be found in Appendix A. Additional pertinent technical details are given in Appendix B. ## II Methodology ### 3L-BTM coupled to a single boson mode The bare Hamiltonian of the 3L-BTM is akin to that of the original LZ model (\(\hbar=1\) from here onwards): \[\hat{H}_{\rm sp} = vtS_{z}+\Delta S_{x}=\begin{bmatrix}vt&\Delta&0\\ \Delta&0&\Delta\\ 0&\Delta&-vt\end{bmatrix} \tag{1}\] Here \(v\) is the scanning velocity, i.e., the rate of change of the external field. \(\Delta\) is the tunneling strength between the three states. Compared with the Hamiltonian of the LZ model, the Pauli matrices are replaced by the spin-1 operators \(S_{z}\) and \(S_{x}\)[37]. A single boson mode that is coupled to the 3L-BTM is then considered. It is described by the quantum harmonic oscillator Hamiltonian \[\hat{H}_{\rm m} = \Omega\hat{a}^{\dagger}\hat{a} \tag{2}\] where \(\Omega\) is the frequency of the boson mode, \(\hat{a}^{\dagger}\) and \(\hat{a}\) are the creation and annihilation operators of the mode, respectively. The experimental realization of the above Hamiltonian usually requires a superconducting quantum interference device (SQUID) [10; 11]. The SQUID is coupled to the spin system via the mutual inductance. The coupling Hamiltonian can be written as \[\hat{H}_{\rm cpl} = \Lambda(\hat{a}^{\dagger}+\hat{a})S_{x} \tag{3}\] where \(\Lambda\) specifies the off-diagonal coupling strength, which is related to the strength of the mutual inductance.
The addition of the three terms gives the Hamiltonian of the 3L-BTM coupled to a single boson mode: \[\hat{H}_{\rm sgl} = \hat{H}_{\rm sp}+\hat{H}_{\rm m}+\hat{H}_{\rm cpl} \tag{4}\] ### 3L-BTM coupled to a boson bath In reality, due to, for example, circuit impedance, the spin system undergoes dissipation/dephasing processes. These effects can be described by the coupling of the bare 3L-BTM to a series of harmonic oscillators mimicking a boson bath: \[\hat{H}_{\rm dsp} = \hat{H}_{sp}+\sum_{k}\eta_{k}(\hat{b}^{\dagger}_{k}+\hat{b}_{k})S_{x}+\sum_{k}\omega_{k}\hat{b}^{\dagger}_{k}\hat{b}_{k} \tag{5}\] Here \(\eta_{k}\) is the off-diagonal coupling strength, \(\omega_{k}\) is the frequency, and \(\hat{b}^{\dagger}_{k}\), \(\hat{b}_{k}\) are the creation and annihilation operators of the bath modes. The bath spectral density function can be written as \[J(\omega)=\sum_{k}(\eta_{k})^{2}\delta(\omega-\omega_{k})=2\alpha\omega_{c}^{1-s}\omega^{s}e^{-\omega/\omega_{c}} \tag{6}\] where \(\alpha\) is the system-bath coupling strength, \(\omega_{c}\) is the cut-off frequency, and \(s\) is the exponent that characterizes the bath. If \(s<1\), the bath is sub-Ohmic; if \(s=1\), the bath is Ohmic; if \(s>1\), the bath is super-Ohmic. Ohmic-type baths described by Eq. (6) are commonly used to model/emulate cavity QED devices [38]. For practical simulations, \(J(\omega)\) has to be discretized. For small \(s\), the coupling strengths of bath modes with different frequencies are unevenly distributed, and a linear discretization scheme may not be suitable. In order to address this problem, we adopt a "density" discretization scheme, similar to the one proposed in Ref. [39]. The "density" discretization scheme can be introduced as follows. Firstly, the frequency domain \([0,\omega_{m}]\) is divided into \(N\) intervals \([\omega_{k^{\prime}},\omega_{k^{\prime}+1}]\), where \(k^{\prime}=0,1,...,N-1\), \(\omega_{k^{\prime}=N}\equiv\omega_{m}\) is the maximum frequency considered, and \(N\) is the total number of frequency segments. Now we introduce the continuous density function \(\rho(\omega)\) of the discrete modes. The integration of \(\rho(\omega)\) from 0 to \(\omega_{m}\) must be equal to \(N\): \[\int_{0}^{\omega_{m}}d\omega\rho(\omega)=N \tag{7}\] To relate \(J(\omega)\) of Eq. (6) with \(\rho(\omega)\), we construct \(\rho(\omega)\) in the following form: \[\rho(\omega)=\frac{N}{\int_{0}^{\omega_{m}}d\omega^{\prime}J(\omega^{\prime})}J(\omega) \tag{8}\] By doing this, \(\rho(\omega)\) becomes proportional to \(J(\omega)\).
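The density-based discretization described here (and completed by the equal-weight interval and coarse-graining rules given in the following paragraph) can be sketched numerically as follows. This is an illustrative implementation with assumed grid sizes and default parameters, not the authors' code.

```python
# Minimal sketch of the "density" discretization of J(omega): interval boundaries are
# chosen so that each interval carries equal weight of rho(omega) (equivalently, of
# J(omega)), and each interval is replaced by one effective mode with a coarse-grained
# coupling eta_k and frequency omega_k.
import numpy as np

def spectral_density(w, alpha=0.1, s=1.0, wc=1.0):
    """Ohmic-family spectral density J(w) = 2 alpha wc^(1-s) w^s exp(-w/wc)."""
    return 2.0 * alpha * wc**(1.0 - s) * w**s * np.exp(-w / wc)

def discretize_bath(N=50, w_max=10.0, n_grid=200001, **kwargs):
    w = np.linspace(0.0, w_max, n_grid)
    J = spectral_density(w, **kwargs)
    cumJ = np.concatenate([[0.0], np.cumsum(0.5 * (J[1:] + J[:-1]) * np.diff(w))])
    targets = cumJ[-1] * np.arange(N + 1) / N      # equal-weight boundaries in cumulative J
    bounds = np.interp(targets, cumJ, w)

    eta, omega = np.zeros(N), np.zeros(N)
    for k in range(N):
        mask = (w >= bounds[k]) & (w <= bounds[k + 1])
        Jk = np.trapz(J[mask], w[mask])            # integral of J over the interval
        eta[k] = np.sqrt(Jk)                       # coupling of the effective mode
        omega[k] = np.trapz(J[mask] * w[mask], w[mask]) / Jk  # J-weighted mean frequency
    return omega, eta

omega_k, eta_k = discretize_bath()
# All eta_k come out (numerically) equal, since every interval carries the same weight of J.
```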
The boundaries of the intervals \([\omega_{k^{\prime}},\omega_{k^{\prime}+1}]\) are chosen to fulfill the requirement \[\int_{\omega_{k^{\prime}}}^{\omega_{k^{\prime}+1}}d\omega\rho(\omega)=1,\quad k^{\prime}=0,1,...,N-1 \tag{9}\] Then the equivalent frequency and coupling strength for each interval are obtained via the coarse-grained treatment [40]: \[\eta_{k}=\sqrt{\int_{\omega_{k^{\prime}}}^{\omega_{k^{\prime}+1}}d\omega J(\omega)},\quad\omega_{k}=\frac{\int_{\omega_{k^{\prime}}}^{\omega_{k^{\prime}+1}}d\omega J(\omega)\omega}{\eta_{k}^{2}} \tag{10}\] It is noted that this procedure produces equal coupling strengths for all the discretized modes, given by the expression \[\eta_{k}=\sqrt{\frac{1}{N}\int_{0}^{\omega_{m}}d\omega J(\omega)} \tag{11}\] ### The multi-D\({}_{2}\) Ansatz To obtain the system dynamics, the time-dependent Schrodinger equation is solved with the multi-D\({}_{2}\) Ansatz in the framework of the time-dependent variational principle [33]. The multi-D\({}_{2}\) Ansatz for \(\hat{H}_{\rm{dsp}}\) can be written as \[|\mathrm{D}_{2}^{M}(t)\rangle = \sum_{n=1}^{M}\sum_{s}^{+,-,0}A_{ns}|s\rangle\prod_{k}\mathcal{D}_{nk}|{\rm{vac}}\rangle \tag{12}\] Here \(M\) is the Ansatz multiplicity, \(|{\rm{vac}}\rangle\) is the vacuum state and \(\mathcal{D}_{nk}\) is the displacement operator of the \(k\)th bath mode, which can be written as \[\mathcal{D}_{nk}=\exp[\alpha_{nk}b_{k}^{\dagger}-\alpha_{nk}^{*}b_{k}] \tag{13}\] where \(\alpha_{nk}\) is the displacement of the effective bath mode and the asterisk denotes complex conjugation. The bath part of the wave function is represented by \(M\) coherent states in the Ansatz (i.e., \(n\) goes from 1 to \(M\)). \(|s\rangle\) (\(s=+,-,0\)) denote the three states of the 3L-BTM, each of which is assigned an amplitude \(A_{ns}\). \(A_{ns}\) and \(\alpha_{nk}\) are called the variational parameters. These parameters can be determined through the Euler-Lagrange equation under the Dirac-Frenkel time-dependent variational principle: \[\frac{d}{dt}\frac{\partial L}{\partial\dot{u}_{n}^{*}}-\frac{\partial L}{\partial u_{n}^{*}}=0,\ u_{n}\in[A_{ns},\alpha_{nk}] \tag{14}\] with \[L = \frac{i}{2}\left[\langle\mathrm{D}_{2}^{M}(t)|\frac{\overrightarrow{\partial}}{\partial t}|\mathrm{D}_{2}^{M}(t)\rangle-\langle\mathrm{D}_{2}^{M}(t)|\frac{\overleftarrow{\partial}}{\partial t}|\mathrm{D}_{2}^{M}(t)\rangle\right]-\langle\mathrm{D}_{2}^{M}(t)|\hat{H}_{\theta}|\mathrm{D}_{2}^{M}(t)\rangle. \tag{15}\] The collection of the Euler-Lagrange equations for all variational parameters yields the equations of motion (EOMs). The EOMs are essentially first-order differential equations, which can be solved simultaneously via, e.g., the \(4^{th}\)-order Runge-Kutta method. The complete set of the EOMs is presented in Appendix B. ### Observables According to Eq. (12), the multi-D\({}_{2}\) wave function is normalized, \[\langle D_{2}^{M}(t)|D_{2}^{M}(t)\rangle=\sum_{m,n}^{M}\sum_{s}^{+,-,0}A_{ms}^{*}(t)A_{ns}(t)S_{mn}=1 \tag{16}\] Here \[S_{mn} = \langle 0|\sum_{k}\mathcal{D}_{mk}^{\dagger}\mathcal{D}_{nk}|0\rangle = \exp\left[\sum_{k}\alpha_{mk}^{*}\alpha_{nk}-\frac{1}{2}(|\alpha_{mk}|^{2}+|\alpha_{nk}|^{2})\right] \tag{17}\] is the Debye-Waller factor.
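As a concrete illustration of Eqs. (16) and (17), the following minimal NumPy sketch (with assumed array shapes and random placeholder values) evaluates the Debye-Waller factors, normalizes a multi-D\({}_{2}\) state, and computes the populations of the three spin states, anticipating the transition probabilities defined in the next subsection.

```python
# Minimal sketch: given multi-D2 variational parameters A[n, s] (amplitudes, M x 3) and
# alpha[n, k] (displacements, M x K), compute the Debye-Waller factors S_mn, the norm of
# the Ansatz, and the spin-state populations. Shapes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
M, K = 4, 20
A = rng.normal(size=(M, 3)) + 1j * rng.normal(size=(M, 3))
alpha = 0.1 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))

# S_mn = exp( sum_k [ alpha*_mk alpha_nk - (|alpha_mk|^2 + |alpha_nk|^2) / 2 ] )
cross = alpha.conj() @ alpha.T                          # sum_k alpha*_mk alpha_nk
norms = (np.abs(alpha) ** 2).sum(axis=1)                # sum_k |alpha_mk|^2
S = np.exp(cross - 0.5 * (norms[:, None] + norms[None, :]))

norm = np.einsum('ms,ns,mn->', A.conj(), A, S).real     # Eq. (16) before normalization
A /= np.sqrt(norm)                                      # enforce <D2|D2> = 1

# Populations of the three spin states (the LZ transition probabilities of the next subsection).
P = np.einsum('ms,ns,mn->s', A.conj(), A, S).real
assert np.isclose(P.sum(), 1.0)
```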
The expectation value of any operator \(Q\) can be evaluated in the multi-D\({}_{2}\) framework as \[\langle Q(t)\rangle=\langle\mathrm{D}_{2}^{M}(t)|Q|\mathrm{D}_{2}^{M}(t)\rangle \tag{18}\] The LZ transition probabilities are therefore defined as \[\mathcal{P}_{s}(t)=\langle\mathrm{D}_{2}^{M}|\mathcal{P}_{s}|\mathrm{D}_{2}^{M}\rangle=\sum_{m,n}^{M}A_{ms}^{*}A_{ns}S_{mn} \tag{19}\] (\(s=+,-,0\)), where \(\mathcal{P}_{s}=|s\rangle\langle s|\) is the projection operator. Due to the unitarity of the Hamiltonian dynamics, the sum of the three transition probabilities equals 1 at any time \(t\); if two of the transition probabilities are available, the third one follows automatically.

## III Results and discussion

In the absence of the coupling to harmonic oscillators, transitions in multilevel LZ models have been extensively studied, and analytical solutions for the asymptotic transition probabilities \(\mathcal{P}_{s}(\infty)\) for the 3L-BTM and other multilevel models have been found [41; 42; 15]. If the 3L-BTM is initialized in \(|0\rangle\), then \(\mathcal{P}_{+}(t)=\mathcal{P}_{-}(t)\) due to the SU(2) symmetry of the Hamiltonian (i.e., the tunneling from \(|0\rangle\) to the \(|\pm\rangle\) states is equally likely). For LZ systems coupled to harmonic oscillators, the off-diagonal (\(S_{x}\)) coupling is qualitatively similar to tunneling, and the LZ transitions it induces reveal the oscillator frequency \(\Omega\) [7; 8]. The diagonal (\(S_{z}\)) coupling may affect the transition probability at long times and finite temperatures [43], but its effect on the system dynamics is almost trivial at zero temperature. As temperature effects are not considered in this work, we focus on the off-diagonal coupling. In Sec. III.A, we study the simple case of the 3L-BTM coupled to one harmonic oscillator, and analyze the dynamics and infinite-time transition probabilities. The dissipative regime is explored in Sec. III.B. Computational details and convergence tests of our multi-D\({}_{2}\) calculations can be found in Appendix A. Note that the actual values of time and other relevant parameters are decided by the choice of the unit frequency \(\omega\), which makes all these parameters dimensionless. The value of \(\omega\) has no effect on the dynamic observables.

### Dynamics of the 3L-BTM coupled to a single harmonic mode

First, we study the 3L-BTM coupled to a single harmonic oscillator, described by the Hamiltonian \(\hat{H}_{\rm sgl}\) of Eq. (4). In Fig. 1(a), we present the time-dependent energy diagram for this system. The instantaneous (time-dependent) eigenvalues of \(\hat{H}_{\rm sgl}\) are plotted from \(\omega t=-30\) to \(\omega t=30\) for \(\Lambda/\omega=0.1\), \(v/\omega^{2}=1\), \(\Delta/\omega=0.1\) and \(\Omega/\omega=10\). The energy diagram is symmetric with respect to \(\omega t=0\).
The energy levels can be separated into three groups based on their time gradients \(+v\), \(-v\), and \(0\), which correspond to the spin states \(|+\rangle\), \(|-\rangle\) and \(|0\rangle\), respectively. Each group contains numerous parallel energy levels that are separated by \(\Omega/\omega\) and correspond to different boson numbers. Only the first few levels are included in Fig. 1(a). To distinguish the different types of level crossings in Fig. 1(a), a different background color is used for each type of crossing. For crossings with gray and dark gray background colors, named type f1 and f2 crossings, respectively, LZ transitions are forbidden. Indeed, type f1 crossings involve three energy levels: \(|+,m+2n\rangle\), \(|0,m+n\rangle\) and \(|-,m\rangle\) (\(m\in\mathbb{N},n\geq 2\)). LZ transitions between these states require the emission/absorption of \(n\) bosons, which is forbidden since \(\hat{H}_{\rm cpl}\) supports \(\pm 1\) changes in the boson number only. Type f2 crossings involve only two energy levels, with spins \(+\) and \(-\). Direct LZ transitions between these levels are also forbidden, because the \(S_{x}\) operator supports tunneling only between adjacent spin states. For crossings with dark blue, light blue and red background colors, named type a1, a2 and a3 crossings, respectively, LZ transitions are allowed. For type a1 and a2 crossings, LZ transitions induce \(\pm 1\) changes in the boson number. The gap opened at these avoided crossings is determined by the coupling strength \(\Lambda\). It is noted that the rotating wave approximation (RWA) is not applied in \(\hat{H}_{\rm cpl}\). As the vibronic levels are bounded from below by the vacuum state, differences exist between type a1 crossings (which involve three energy levels) and type a2 crossings (which involve only two levels). Hence type a1 crossings are dynamically crucial if the 3L-BTM is initialized in higher vibronic states, while type a2 crossings are important if the 3L-BTM is initialized in the vacuum state or lower vibronic states. For type a3 crossings, LZ transitions involve a change of spin only. Such spin flips are caused by the tunneling, and the gap opened at the avoided crossing is solely decided by \(\Delta\). Type a3 is the only crossing type present in the bare 3L-BTM. Transitions between \(|+\rangle\) and \(|-\rangle\) are forbidden at type f2 crossings, but are allowed at type a3 crossings via the intermediate state \(|0\rangle\).
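The crossing pattern described above can be reproduced by diagonalizing \(\hat{H}_{\rm sgl}(t)\) in a truncated Fock basis. The short sketch below assumes the forms \(\hat{H}_{\rm sp}=vt\,S_{z}+\Delta S_{x}\), \(\hat{H}_{\rm m}=\Omega\hat{b}^{\dagger}\hat{b}\) and \(\hat{H}_{\rm cpl}=\Lambda(\hat{b}^{\dagger}+\hat{b})S_{x}\) with spin-1 operators \(S_{x,z}\); these forms are our reading of the level slopes \(\pm v,0\), the tunneling \(\Delta\), and the \(S_{x}\)-type coupling \(\Lambda\) discussed in the text, and should be checked against the model definitions given earlier in the paper.

```python
import numpy as np

# spin-1 operators in the basis {|+>, |0>, |->}
Sz = np.diag([1.0, 0.0, -1.0])
Sx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)

def H_sgl(t, v=1.0, Delta=0.1, Lam=0.1, Omega=10.0, nmax=4):
    """Assumed single-mode Hamiltonian v*t*Sz + Delta*Sx + Omega*b^dag*b
    + Lam*(b^dag + b)*Sx, built in a Fock basis truncated at nmax bosons."""
    dim_b = nmax + 1
    n = np.arange(dim_b)
    b = np.diag(np.sqrt(n[1:]), k=1)   # annihilation operator
    bdag = b.T
    Ib, Is = np.eye(dim_b), np.eye(3)
    return (np.kron(v * t * Sz + Delta * Sx, Ib)
            + np.kron(Is, Omega * np.diag(n))
            + Lam * np.kron(Sx, bdag + b))

# instantaneous eigenvalues over the sweep, as in the energy diagram of Fig. 1(a)
times = np.linspace(-30.0, 30.0, 601)
levels = np.array([np.linalg.eigvalsh(H_sgl(t)) for t in times])
# each column of `levels` is one adiabatic energy curve (sorted at every time step)
```

Plotting the columns of `levels` against \(\omega t\) reproduces the ladder of parallel levels separated by \(\Omega/\omega\) and the f1/f2/a1/a2/a3 crossing structure, up to the chosen boson-number truncation.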
Figure 1: (a) The time evolution of the eigenvalues of the Hamiltonian \(\hat{H}_{\rm sgl}\) from \(\omega t=-30\) to \(\omega t=30\) for \(\Lambda/\omega=0.1\), \(v/\omega^{2}=1\), \(\Delta/\omega=0.1\), \(\Omega/\omega=10\). The background colors represent different types of crossings. Gray: forbidden type f1; dark gray: forbidden type f2; dark blue: anti-crossing type a1; light blue: anti-crossing type a2; red: anti-crossing type a3. (b)-(d) Zoom-in plots of the three different types of anti-crossings. The frame colors correspond to the background colors in the top panel.

Displayed in Fig. 1(b)-(d) are zoom-in plots of the three types of avoided crossings: a1, a3 and a2. The frame colors of these plots correspond to the background colors of the crossings in panel (a). Fig. 1(c) shows the type a3 crossing corresponding to \(E/\omega=0\), \(\omega t=0\) in the energy diagram. Two gaps of equal size are opened simultaneously in time: between \(|+\rangle\) and \(|0\rangle\), and between \(|-\rangle\) and \(|0\rangle\). As a result, if the wave function is initialized in \(|0\rangle\), the transition probabilities to \(|\pm\rangle\) are equal at all times. Fig. 1(d) displays the type a2 crossing for \(E/\omega=0\), \(\omega t=-10\) (\(\omega t=10\)), which involves only a pair of states \(|+\rangle\) and \(|0\rangle\) (or \(|-\rangle\) and \(|0\rangle\)). In this case, transitions in the 3L-BTM are identical to those in the conventional two-level LZ model.

Figure 2: Time evolutions of the transition probabilities \(\mathcal{P}_{s}(t)\) from \(\omega t=-30\) to \(\omega t=30\). The colors distinguish the transition probabilities in different states: blue, \(\mathcal{P}_{+}(t)\); orange, \(\mathcal{P}_{0}(t)\); yellow-green, \(\mathcal{P}_{-}(t)\). The left column shows the probabilities to stay in the originally populated state, while the right column shows the probability of the spin flip. The rows (from top to bottom) correspond to initializations of the system in the states \(|+\rangle\), \(|0\rangle\), \(|-\rangle\), respectively. All transition probabilities are evaluated for \(\Lambda/\omega=0.1\), \(v/\omega^{2}=1\), \(\Delta/\omega=0.1\) and \(\Omega/\omega=10\).
In Fig. 2, the transition probabilities from \(\omega t=-30\) to \(\omega t=30\) are presented for different initializations. The top, middle, and bottom rows correspond to the wave function initialized in \(|+,\mathrm{vac}\rangle\), \(|0,\mathrm{vac}\rangle\) and \(|-,\mathrm{vac}\rangle\), respectively. The left column shows the probability to retain the initial spin direction, \(P_{\rm stay}\), and the right column shows the transition probability of the spin flip, \(P_{\rm flip}\). The transition probabilities for different states are designated by different colors, blue: \(\mathcal{P}_{+}(t)\), orange: \(\mathcal{P}_{0}(t)\) and yellow-green: \(\mathcal{P}_{-}(t)\). The parameters adopted for the plots in Fig. 2 are the same as for Fig. 1: \(\Lambda/\omega=0.1\), \(v/\omega^{2}=1\), \(\Delta/\omega=0.1\) and \(\Omega/\omega=10\).
In Figs. 2(a) and (b), the wave function is initialized in \(|+,\mathrm{vac}\rangle\). The time evolution of \(\mathcal{P}_{\pm}(t)\) can be divided into three stages. The first LZ transition corresponds to the type a3 crossing at \(\omega t=0\), \(E/\omega=0\) in the energy diagram. Clearly, \(\mathcal{P}_{+}(t)\) and \(\mathcal{P}_{0}(t)\) change considerably in time, while \(\mathcal{P}_{-}(t)\) remains almost zero. This indicates that the direct transition from \(|+\rangle\) to \(|0\rangle\) is much more probable than the indirect transition from \(|+\rangle\) to \(|-\rangle\) via \(|0\rangle\). As the type a3 crossing does not involve boson states, the boson degrees of freedom remain in the vacuum state \(|\mathrm{vac}\rangle\), in agreement with Ref. [44]. At \(\omega t=10\), the wave function goes simultaneously through a type a2 crossing (\(E/\omega=0\) in the energy diagram) and a type a1 crossing (\(E/\omega=10\) in the energy diagram). As both crossings are caused by the coupling to the boson mode, the result is decided by the value of this coupling. Since \(\Lambda/\omega=0.1\) is relatively small, indirect transitions from \(|+\rangle\) to \(|-\rangle\) are suppressed, and the direct transition from \(|+\rangle\) to \(|0\rangle\) is dominant. This is a clear indication of the similarity of the off-diagonal coupling and tunneling in LZ transitions. In Figs. 2(c) and (d), the wave function is initialized in \(|0,\mathrm{vac}\rangle\), and the LZ transition at \(\omega t=-10\) is of type a2 (\(E/\omega=0\) in the energy diagram). It involves only two levels, i.e., \(|0,\mathrm{vac}\rangle\) and \(|+,1\rangle\), hence \(\mathcal{P}_{-}(t)\) remains the same before and after this transition, in close similarity with the two-level LZ model [7]. The transition at \(\omega t=0\) has the same origin as the first transition which occurs when the wave function is initialized in \(|+,\mathrm{vac}\rangle\). As the initial state is now \(|0,\mathrm{vac}\rangle\), transitions from \(|0\rangle\) to \(|\pm\rangle\) are equally likely, and the increases of \(\mathcal{P}_{-}(t)\) and \(\mathcal{P}_{+}(t)\) after the transition are the same. The third transition at \(\omega t=10\) is the result of the combination of a single type a2 crossing and multiple type a1 crossings. After this transition, \(\mathcal{P}_{-}(t)\) and \(\mathcal{P}_{+}(t)\) converge asymptotically to almost the same values.
This is due to the fact that the energy diagram in Fig. 1 is symmetric relative to \(\omega t=0\). Since the wave function is initialized in \(|0,\mathrm{vac}\rangle\), the ensuing 3L-BTM dynamics can be understood as a superposition of the dynamics of a pair of two-level systems (see Figs. 2(c) and (d)). In Figs. 2(e) and (f), the wave function is initialized in \(|-,\mathrm{vac}\rangle\). Similar to the initialization in \(|+,\mathrm{vac}\rangle\), indirect transitions from \(|-\rangle\) to \(|+\rangle\) are suppressed. However, the system initialized in \(|-,\mathrm{vac}\rangle\) encounters a type a1 crossing at \(\omega t=-10\) (\(E/\omega=10\) in the energy diagram). As the type a1 crossing involves three energy levels, the first LZ transition at \(\omega t=-10\) affects the populations of all three states. As the transition from \(|-\rangle\) to \(|+\rangle\) is suppressed, only a trivial change in \(\mathcal{P}_{+}(t)\) is seen after \(\omega t=-10\). This is at variance with the outcome of the first LZ transition in the case of the \(|0,\mathrm{vac}\rangle\) initialization, where only two energy levels are involved. The second transition, at \(\omega t=0\), is similar to the transition which occurs after the wave function is initialized in \(|+,\mathrm{vac}\rangle\) or \(|0,\mathrm{vac}\rangle\). For the third LZ transition, the situation is more involved. If the wave function is initialized in \(|0,\mathrm{vac}\rangle\), the asymmetry of the type a1 crossing is cancelled due to the time symmetry of the energy diagram.
If the wave function is initialized in \(|-,\mathrm{vac}\rangle\) or \(|+,\mathrm{vac}\rangle\), this does not happen. For example, the LZ transition at \(\omega t=10\) increases \(\mathcal{P}_{+}(t)\) and \(\mathcal{P}_{-}(t)\) (\(\mathcal{P}_{0}(t)\) and \(\mathcal{P}_{-}(t)\)) and decreases \(\mathcal{P}_{0}(t)\) (\(\mathcal{P}_{+}(t)\)) if the wave function is initialized in \(|-,\mathrm{vac}\rangle\) (\(|+,\mathrm{vac}\rangle\)). To grasp the 3L-BTM dynamics in different parameter regimes, Fig. 3 displays the time-dependent LZ transition probabilities for different off-diagonal coupling strengths \(\Lambda/\omega\), tunneling strengths \(\Delta/\omega\), and boson mode frequencies \(\Omega/\omega\). For the left and middle columns, the wave function is initialized in \(|+,\mathrm{vac}\rangle\), and for the right column it is initialized in \(|0,\mathrm{vac}\rangle\). The scanning velocity is fixed at \(v/\omega^{2}=1\). In Figs. 3(a) and (d), \(\Lambda/\omega\) is changed from 0 to 0.4, \(\Omega/\omega=10\), and \(\Delta/\omega=0.1\).

Figure 3: Color maps of the time evolutions of the LZ transition probabilities with respect to different system parameters. (a) and (d): \(\Lambda/\omega\) changes from 0 to 0.4, \(\Delta/\omega=0.1\), \(\Omega/\omega=10\), the system is initialized in \(|+,\mathrm{vac}\rangle\). (b) and (e): \(\Delta/\omega\) changes from 0 to 0.4, \(\Lambda/\omega=0.1\), \(\Omega/\omega=10\), the system is initialized in \(|+,\mathrm{vac}\rangle\). (c) and (f): \(\Omega/\omega\) changes from 0 to 10, \(\Lambda/\omega=0.1\), \(\Delta/\omega=0.1\), the system is initialized in \(|0,\mathrm{vac}\rangle\). The upper panels (a), (b), (c) correspond to \(\mathcal{P}_{0}(t)\); the lower panels (d), (e), (f) correspond to \(\mathcal{P}_{-}(t)\). The scanning velocity is fixed at \(v/\omega^{2}=1\).
The contour plots in Figs. 3(a) and (d) are clearly separated by two vertical lines at \(\omega t=0\) and \(\omega t=10\), which correspond to the same LZ transitions that appear in Fig. 2(b). As has been mentioned, the first LZ transition is governed by the tunneling \(\Delta/\omega\), while the second LZ transition is governed by the coupling \(\Lambda/\omega\) to the boson mode. As \(\Delta/\omega\) is fixed, the impact of the first LZ transition is uniform for all transition probabilities, while the influence of the second LZ transition at \(\omega t=10\) depends significantly on \(\Lambda/\omega\). There are threshold values of \(\Lambda/\omega\) below which \(\mathcal{P}_{0}(t)\) and \(\mathcal{P}_{-}(t)\) do not change substantially during the second LZ transition. These threshold values are around 0.05 (0.11) for \(\mathcal{P}_{0}(t)\) (\(\mathcal{P}_{-}(t)\)). Between the two threshold values, that is for \(0.05<\Lambda/\omega<0.11\), the LZ transition changes \(\mathcal{P}_{0}(t)\) but does not change \(\mathcal{P}_{-}(t)\). This parameter regime corresponds to the situation illustrated by Figs. 2(a) and (b), where the direct transition from \(|+\rangle\) to \(|0\rangle\) is much stronger than the indirect transition from \(|+\rangle\) to \(|-\rangle\).

Figure 4: Time evolutions of the transition probabilities \(\mathcal{P}_{-}(t)\) (upper panels) and \(\mathcal{P}_{0}(t)\) (lower panels) from \(\omega t=-10\) to \(\omega t=40\) for the 3L-BTM coupled to a harmonic bath with the spectral density of Eq. (6). For (a) and (d), \(\alpha=0.002\), \(s=1\), and \(\Delta/\omega\) is changed from 0 to 0.5 with a step of 0.1. For (b) and (e), \(\Delta/\omega=0.1\), \(s=1\), and \(\alpha\) varies from 0.002 to 0.01 with a step of 0.002. For (a) and (b), the transition probability to \(|-\rangle\) is plotted, whereas for (d) and (e), the transition probability to \(|0\rangle\) is plotted. For (c) and (f), \(\alpha=0.002\) and \(\Delta/\omega=0.1\), and the transition probability to \(|-\rangle\) is plotted; in (c) the bath is sub-Ohmic (\(s=0.5,0.75,1\)) and in (f) it is super-Ohmic. For all panels, \(\omega_{c}/\omega=10\).
If \(\Lambda/\omega>0.11\), the second LZ transition significantly enhances \(\mathcal{P}_{-}(t)\) and \(\mathcal{P}_{0}(t)\).
A similar phenomenon can be seen in Figs. 3(b) and (e), where \(\Delta/\omega\) is changed from 0 to 0.4, \(\Omega/\omega=10\), and \(\Lambda/\omega=0.1\). There are also threshold values, \(\Delta/\omega=0.02\) (0.13), below which \(\mathcal{P}_{0}(t)\) (\(\mathcal{P}_{-}(t)\)) does not change substantially after the LZ transition at \(\omega t=0\). If \(\Delta/\omega>0.13\), the LZ transition at \(\omega t=0\) substantially increases \(\mathcal{P}_{-}(t)\) and \(\mathcal{P}_{0}(t)\). For large \(\Delta/\omega\), for example \(\Delta/\omega=1\), indirect transitions from \(|+\rangle\) to \(|-\rangle\) dominate over direct transitions from \(|+\rangle\) to \(|0\rangle\). Hence the population transfer occurs mainly between the \(|+\rangle\) and \(|-\rangle\) states, and \(\mathcal{P}_{0}(t)\) remains almost unchanged. This regime takes place in the 3L-BTM without boson coupling for large \(\Delta/\omega\) [45]. In Figs. 3(c) and (f), \(\Omega/\omega\) varies from 0 to 10, while \(\Lambda/\omega=\Delta/\omega=0.1\). The wave function is now initialized in \(|0,\mathrm{vac}\rangle\). Clearly, the boson mode causes additional LZ transitions, which occur at \(\omega t=-\Omega/\omega\) in \(\mathcal{P}_{0}(t)\) and at \(\omega t=\Omega/\omega\) in \(\mathcal{P}_{-}(t)\). Interestingly, changes in \(\Omega/\omega\) cause periodic variations in the steady-state transition probabilities and, for sufficiently large \(\Omega/\omega\), the periods and amplitudes of these variations substantially decrease and become negligible. These variations are caused by the interference of the LZ transitions at \(\omega t=\pm\Omega/\omega\) and \(\omega t=0\). For definiteness, let us consider \(\mathcal{P}_{0}(t)\). If the time separation between the two LZ transitions is small, the 3L-BTM has no time to evolve after the first transition at \(\omega t=-\Omega/\omega\), and the second transition at \(\omega t=0\) kicks in shortly after the first one.
Depending on the temporal separation between the two transitions, the first transition quenches at different times and yields different steady-state transition probabilities. Consequently, the steady-state transition probabilities vary with \(\Omega/\omega\). If the temporal separation between the two LZ transitions is large enough, the system after the first transition at \(\omega t=-\Omega/\omega\) will be fully relaxed before the second transition occurs at \(\omega t=0\). This eliminates oscillations in the steady-state transition probabilities with respect to different \(\Omega/\omega\). However, as the emergence of this phenomenon hinges upon the LZ transition occurring at \(\omega t=-\Omega/\omega\), it does not occur if the wave function is initialized in \(|+,\mathrm{vac}\rangle\).

### Dynamics of the 3L-BTM coupled to a dissipative bath

In the previous section, we considered the 3L-BTM coupled to a single harmonic mode. Here the 3L-BTM is coupled to an Ohmic-type bath described by the Hamiltonian \(\hat{H}_{\rm dsp}\) of Eq. (5). In Figs. 4(a), (b), (d), and (e), the effects of \(\Delta/\omega\) and \(\alpha\) on the dynamics of the 3L-BTM coupled to an Ohmic (\(s=1\)) bath are investigated. In Figs. 4(c) and (f), \(\Delta/\omega\) and \(\alpha\) are fixed, and \(s\) is varied to examine the dynamic differences caused by sub-Ohmic (panel (c)) and super-Ohmic (panel (f)) baths. All populations in Fig. 4 are evaluated for the wave function initialized in \(|+,\mathrm{vac}\rangle\).
In Figs. 4(a) and (d), we fix \(\alpha\) at 0.002 and change the tunneling strength \(\Delta/\omega\) from 0 to 0.5.
The population dynamics in Fig. 4(a) shows that, at \(\omega t>0\), the changes in the transition probabilities are mostly due to the coupling to the bath. This phase is characterized by an overall increase of \(\mathcal{P}_{-}(t)\) which is superimposed with coherent Stückelberg oscillations of decreasing amplitude and period (cf. Ref. [46]). Note that the gradient (that is, the rate of increase) of the transition probabilities is decided by the spectral density function. Since \(\alpha\) is fixed in Fig. 4(a), the \(\mathcal{P}_{-}(t)\) curves for different \(\Delta/\omega\) are roughly parallel to each other at \(\omega t>0\). The situation is different for \(\mathcal{P}_{0}(t)\), depicted in Fig. 4(d). If \(\Delta/\omega\) is small, the coupling to the bath enhances \(\mathcal{P}_{0}(t)\) at \(\omega t>0\), similar to the behaviour of \(\mathcal{P}_{-}(t)\). As \(\Delta/\omega\) becomes larger, \(\mathcal{P}_{0}(t)\) starts decreasing at \(\omega t>0\). This indicates a strong dependence of the bath-induced dissipation on \(\Delta/\omega\). Such a combined dissipation and tunneling effect is absent if the 3L-BTM is coupled to a single harmonic mode. In this latter case, if the wave function is initialized in \(|+,\mathrm{vac}\rangle\), the LZ transition in \(\mathcal{P}_{0}(t)\) induced by the tunneling \(\Delta/\omega\) is independent of the LZ transition induced by \(\Lambda/\omega\), and the steady-state transition probability is simply the sum of the two probabilities (see Fig. 3(e)).
The mutual "entanglement" of the tunneling (\(\Delta\)) and bath (\(\alpha\)) induced effects is also not seen in dissipative two-level LZ models [7]. This makes the bath-tunneling entanglement a signature of the LZ transitions in dissipative 3L-BTMs. In Figs. 4(b) and (e), \(\Delta/\omega\) is fixed at 0.1, and the bath coupling strength \(\alpha\) is changed from 0.002 to 0.01. Comparing \(\mathcal{P}_{-}(t)\) (Fig. 4(b)) and \(\mathcal{P}_{0}(t)\) (Fig. 4(e)), we arrive at the following interesting observation. When \(\alpha\) is small, \(\mathcal{P}_{0}(t)\) is larger than \(\mathcal{P}_{-}(t)\) throughout the entire time evolution. If \(\alpha\) becomes larger, \(\mathcal{P}_{-}(t)\) increases, too, but \(\mathcal{P}_{0}(t)\) decreases.
When \(\alpha=0.1\), \(\mathcal{P}_{-}(\infty)\approx 1\), while \(\mathcal{P}_{0}(\infty)\approx 0\). This again indicates that the indirect \(|+\rangle\rightarrow|-\rangle\) transition is dominant when the coupling strength is large.
In Figs. 4(c) and (f), \(\alpha\) and \(\Delta/\omega\) are fixed at 0.002 and 0.1, and \(s\) is changed from 0.5 to 1.5. Fig. 4(c) corresponds to sub-Ohmic baths with \(s\leq 1\), while Fig. 4(f) corresponds to super-Ohmic baths with \(s\geq 1\). In both the sub- and super-Ohmic regimes, increasing \(s\) leads to higher values of \(\mathcal{P}_{-}(\infty)\). Interestingly, equal increments of \(s\) yield smaller changes of \(\mathcal{P}_{-}(\infty)\) in the sub-Ohmic regime than in the super-Ohmic regime. The reason is that the value of \(\mathcal{P}_{-}(\infty)\) is governed by the integral of the spectral density function \(J(\omega)\) over the entire range of frequencies. With equal increments of \(s\), the change of this integral is smaller in the sub-Ohmic regime in comparison with the super-Ohmic regime. However, as \(s\) becomes smaller, the situation changes and sub-Ohmic baths cause a faster increase of \(\mathcal{P}_{-}(t)\), i.e., they produce larger gradients of \(\mathcal{P}_{-}(t)\) between \(\omega t=0\) and \(\omega t=20\).
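The integral underlying this argument can be evaluated in closed form if the upper limit is extended from \(\omega_{m}\) to infinity (a good approximation for \(\omega_{m}\gg\omega_{c}\)): \[\int_{0}^{\infty}d\omega\,J(\omega)=2\alpha\,\omega_{c}^{1-s}\int_{0}^{\infty}d\omega\,\omega^{s}e^{-\omega/\omega_{c}}=2\alpha\,\omega_{c}^{2}\,\Gamma(s+1).\] An increment \(\delta s\) therefore changes the total coupling weight by \(2\alpha\omega_{c}^{2}\,[\Gamma(s+1+\delta s)-\Gamma(s+1)]\), which is indeed smaller below \(s=1\) than above it (for example, \(\Gamma(1.5)\approx 0.886\), \(\Gamma(2)=1\), \(\Gamma(2.5)\approx 1.329\)).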
Such a faster increase is not observed in the super-Ohmic regime, where the gradient of \(\mathcal{P}_{-}(t)\) between \(\omega t=0\) and \(\omega t=20\) is approximately the same for all \(s\). The explanation is similar: in the sub-Ohmic regime, as \(s\) becomes smaller, the maximum of \(J(\omega)\) shifts towards lower frequencies, which causes a faster increase of \(\mathcal{P}_{-}(t)\).

## IV Conclusion

By employing the multi-D\({}_{2}\) Davydov Ansatz, we performed numerically "exact" simulations of the dissipative 3L-BTM dynamics. We considered a bare 3L-BTM coupled to a single harmonic mode as well as a bare 3L-BTM coupled to a boson bath with Ohmic, sub-Ohmic and super-Ohmic spectral densities. With the aid of the energy diagrams, we developed a useful qualitative method of characterizing and interpreting population-transfer pathways in dissipative 3L-BTMs. This method has revealed the mechanisms behind various LZ transitions in the 3L-BTM and uncovered their contributions to the steady-state populations. We have shown that vibrational splittings of the electronic levels of the 3L-BTM cause nontrivial crossing patterns in the energy diagram, which can be understood by inspecting sequential wavepacket scatterings on the relevant electronic/vibrational states. These scattering processes can be directly linked to the steady-state populations. We have demonstrated that the 3L-BTM dynamics is very sensitive to tunneling strengths, system-bath couplings, and characteristic frequencies of the bath. In particular, the presence of boson states breaks the SU(2) symmetry of the bare 3L-BTM, causing asymmetry of the pathways leading from the initial \(|0\rangle\) state to the final \(|+\rangle\) and \(|-\rangle\) states. In general, we found profound differences between the time evolution of the original two-level LZ model and the present 3L-BTM. In certain cases, however, the dynamics initiated in the lowest state \(|-\rangle\) of the 3L-BTM is almost insensitive to the presence of the upper \(|+\rangle\) state, which makes the 3L-BTM behave like an effective two-level LZ model. Our simulations prove that sub-Ohmic, Ohmic and super-Ohmic boson baths have different impacts on the 3L-BTM dynamics. In particular, the rise times, local maxima and subsequent decays of the 3L-BTM populations depend significantly on the parameters \(\alpha\) and \(s\) specifying the bath spectral density. Hence phenomenological Lindblad-like descriptions, which reduce all multifaceted bath-induced phenomena to a single relaxation rate, are inadequate for reproducing the actual dynamics of dissipative 3L-BTMs. The numerically accurate methodology with the multi-D\({}_{2}\) Davydov Ansatz developed in this work can help to interpret experiments on spin-1 systems, facilitate the development of QED devices based on these systems, and provide guidance for engineering and optimizing these devices. Note, finally, that the computational efficiency of the multi-D\({}_{2}\) Ansatz does not crucially depend on specific values of the system and bath parameters. By adopting the Thermo Field Dynamics framework, the multi-D\({}_{2}\) Ansatz can be turned into an accurate simulator of LZ systems at finite temperatures [47; 48]. Hence the versatile multi-D\({}_{2}\) machinery can become a method of choice for simulations of general multilevel dissipative LZ systems.

###### Acknowledgements.

The authors thank Lu Wang, Kewei Sun, Fulu Zheng, and Frank Grossmann for useful discussions, and Zongfa Zhang for providing access to computational resources.
Support from Nanyang Technological University "URECA" Undergraduate Research Programme and the Singapore Ministry of Education Academic Research Fund Tier 1 (Grant No. RG87/20) is gratefully acknowledged.

## Author Declarations

### Conflict of Interest

The authors have no conflicts to disclose.

### Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## Appendix A Convergence proof

Here we demonstrate the numerical convergence of the calculations of the present work. The chosen values of the multiplicities of the multi-D\({}_{2}\) Ansatz and of the other parameters ensure that the results presented in Sec. III are converged.

### Single-mode 3L-BTM

In Sec. III.A, we consider the 3L-BTM coupled to a single harmonic mode. The convergence of the results is determined by the multiplicity \(M\) of the multi-D\({}_{2}\) Ansatz: when \(M\) is large enough, the results are independent of \(M\) and convergence is reached. Fig. 5 depicts the population \(\mathcal{P}_{+}(t)\) calculated for different \(M\). The calculation is initialized in the state \(|-\rangle\), and the remaining parameters are fixed: \(\Delta/\omega=0.1\), \(v/\omega^{2}=1\), \(\Lambda/\omega=0.1\), and \(\Omega/\omega=10\). It can be seen that the difference between the \(\mathcal{P}_{+}(t)\) curves for \(M=8\) and \(10\) is negligible.

Figure 5: The time evolution of \(\mathcal{P}_{+}(t)\) from \(\omega t=-30\) to \(\omega t=30\) for different multiplicities \(M\). The calculation is initialized in the state \(|-\rangle\). The remaining parameters are as follows: \(\Delta/\omega=0.1\), \(v/\omega^{2}=1\), \(\Lambda/\omega=0.1\), and \(\Omega/\omega=10\).

### 3L-BTM coupled to harmonic bath

In Sec. III.B, we consider the 3L-BTM coupled to a harmonic bath. In this case, the results depend on the multi-D\({}_{2}\) multiplicity \(M\) as well as on the parameters specifying the discretization of the bath spectral density, viz. the maximum frequency \(\omega_{m}/\omega\) and the number of discrete modes \(N\). It is known that the smaller the exponent \(s\) in the bath spectral density of Eq. (6), the higher \(N\) is required to reach convergence. Therefore, in Fig. 6(a) we choose the smallest \(s\) used in the simulations of Sec. III.B, \(s=0.5\). It can be seen that different choices of \(N\) have no significant effect on \(\mathcal{P}_{+}(t)\).
In Fig. 6(b), a similar procedure is performed for \(M\). It can be seen that \(M=3\) is already sufficient to yield converged results. Fig. 6(c) shows \(\mathcal{P}_{+}(t)\) for different \(\omega_{m}\). It is observed that \(\mathcal{P}_{+}(t)\) stops depending on \(\omega_{m}\) for \(\omega t<\omega_{m}/\omega\); in other words, the choice of \(\omega_{m}/\omega\) has no influence on the dynamics as long as \(\omega t<\omega_{m}/\omega\). This indicates that if \(\omega_{m}/\omega>\omega t_{\max}\), where \(t_{\max}\) is the final time of the calculation, the results are converged.

Figure 6: Convergence test with respect to \(N\), \(M\) and \(\omega_{m}/\omega\) of the time evolution of \(\mathcal{P}_{+}\) from \(\omega t=-10\) to \(\omega t=50\). In each subplot, only one of the parameters is changed while the rest are fixed. (a) \(N=40\), \(60\) and \(80\), with \(M=4\), \(\omega_{m}=5\omega_{c}\) and \(s=0.5\). (b) \(M=3\), \(4\) and \(5\), with \(N=40\), \(\omega_{m}=5\omega_{c}\) and \(s=1\). (c) \(\omega_{m}=3\omega_{c},4\omega_{c},5\omega_{c}\), with \(M=4\), \(N=40\) and \(s=1.75\). The remaining parameters are the same for all subplots: \(\Delta=0.1\), \(v=1\), \(\alpha=0.002\) and \(\omega_{c}=10\omega\).

### Comparison of different discretization methods

In order to validate the performance of the density discretization method introduced in Sec. II.A, we benchmark it against the commonly used linear discretization method for the 3L-BTM coupled to the Ohmic bath. Fig. 7(a) shows \(\mathcal{P}_{+}(t)\) calculated by both methods using \(N=40\) discrete modes. The density discretization method yields the correct smooth evolution of \(\mathcal{P}_{+}(t)\) (see below), while a spurious stair-like pattern caused by undersampling can be seen in \(\mathcal{P}_{+}(t)\) calculated by the linear discretization method.
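The practical difference between the two schemes is where the discrete mode frequencies are placed. The sketch below is illustrative only: it assumes the same Ohmic-family form for \(J(\omega)\) as above, and a simple quantile rule based on \(J(\omega)/\omega\) for the density-based placement, which may differ in detail from the rule actually used in Sec. II.A.

```python
import numpy as np

# Assumed Ohmic-family spectral density (prefactor convention as before).
def J(w, s=1.0, alpha=0.002, wc=10.0):
    return 2.0 * np.pi * alpha * wc**(1.0 - s) * w**s * np.exp(-w / wc)

def linear_modes(N, w_max):
    # Equally spaced frequencies, independent of the shape of J.
    return w_max * (np.arange(N) + 0.5) / N

def density_modes(N, w_max, s=1.0, grid=100_000):
    # Place the k-th frequency at the (k + 1/2)/N quantile of a mode density
    # proportional to J(w)/w (one common choice of density discretization).
    wg = np.linspace(1e-6, w_max, grid)
    cdf = np.cumsum(J(wg, s) / wg)
    cdf /= cdf[-1]
    return np.interp((np.arange(N) + 0.5) / N, cdf, wg)

w_lin = linear_modes(40, w_max=50.0)   # w_max = 5 * wc, as in the tests above
w_den = density_modes(40, w_max=50.0)
# density_modes concentrates the 40 frequencies where J(w)/w is large, i.e. at
# low frequencies, while the linear grid places many modes in the exponential
# tail of J; the resulting undersampling is consistent with the stair-like
# artifacts seen with 40 linearly spaced modes in Fig. 7(a).
```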
As the number of modes in the linear discretization method increases (Fig. 7(b)), the spurious structures smoothen out. Finally, \(\mathcal{P}_{+}(t)\) produced by the linear discretization method with 80 modes overlaps with \(\mathcal{P}_{+}(t)\) calculated by the density discretization method with 40 discrete modes (Fig. 7(c)). This shows that the density discretization method converges faster than the linear discretization method.

Figure 7: A comparison between the density discretization and the linear discretization, testing the time evolution of \(\mathcal{P}_{+}\) from \(t=-10\omega^{-1}\) to \(t=50\omega^{-1}\) with different numbers of discrete modes \(N\). (a) 40 discrete modes are used for both discretization methods. (b) 40 modes are used for the density discretization, and 60 modes are used for the linear discretization. (c) 40 modes are used for the density discretization, and 80 modes are used for the linear discretization. The remaining parameters are the same for all subplots: \(\Delta=0.1\), \(v=1\), \(s=1\), \(\alpha=0.002\) and \(\omega_{c}=10\).

## Appendix B Equations of motion for the multi-\(\mathbf{D}_{2}\) Ansatz

For \(A_{m+}^{*}\): \[i\sum_{n=1}^{M}\Big{[}\dot{A}_{n+}+A_{n+}\sum_{k}(\dot{\alpha}_{nk}\alpha_{mk}^{*}-\frac{1}{2}\dot{\alpha}_{nk}\alpha_{nk}^{*}-\frac{1}{2}\alpha_{nk}\dot{\alpha}_{nk}^{*})\Big{]}S_{mn}\] \[=\sum_{n=1}^{M}\Big{[}A_{n+}vt+A_{n0}\Big{(}\Delta+\sum_{k}\eta_{k}(\alpha_{mk}^{*}+\alpha_{nk})\Big{)}+A_{n+}\Big{(}\sum_{k}\omega_{k}\big{(}\alpha_{mk}^{*}\alpha_{nk}\big{)}\Big{)}\Big{]}S_{mn} \tag{10}\]

For \(A_{m0}^{*}\): \[i\sum_{n=1}^{M}\Big{[}\dot{A}_{n0}+A_{n0}\sum_{k}(\dot{\alpha}_{nk}\alpha_{mk}^{*}-\frac{1}{2}\dot{\alpha}_{nk}\alpha_{nk}^{*}-\frac{1}{2}\alpha_{nk}\dot{\alpha}_{nk}^{*})\Big{]}S_{mn}\] \[=\sum_{n=1}^{M}\Big{[}(A_{n+}+A_{n-})\Big{(}\Delta+\sum_{k}\eta_{k}(\alpha_{mk}^{*}+\alpha_{nk})\Big{)}+A_{n0}\Big{(}\sum_{k}\omega_{k}\big{(}\alpha_{mk}^{*}\alpha_{nk}\big{)}\Big{)}\Big{]}S_{mn} \tag{11}\]

For \(\alpha_{mk}^{*}\): \[i\sum_{m,n}^{M}\sum_{s}^{+,-,0}\Big{[}A_{ms}^{*}\dot{A}_{ns}+A_{ms}^{*}A_{ns}\Big{(}\dot{\alpha}_{nk}+\alpha_{nk}\sum_{k^{\prime}}\dot{\alpha}_{nk^{\prime}}\alpha_{mk^{\prime}}^{*}-\frac{1}{2}\sum_{k^{\prime}}(\dot{\alpha}_{nk^{\prime}}\alpha_{nk^{\prime}}^{*}+\alpha_{nk^{\prime}}\dot{\alpha}_{nk^{\prime}}^{*})\Big{)}\Big{]}S_{mn}\] \[=\sum_{m,n}^{M}\Big{[}\Big{(}A_{m+}^{*}A_{n+}-A_{m-}^{*}A_{n-}\Big{)}vt\alpha_{nk}+\sum_{s}^{+,-,0}A_{ms}^{*}A_{ns}\Big{(}\omega_{k}\alpha_{nk}+\alpha_{nk}\sum_{k^{\prime}}\omega_{k^{\prime}}\alpha_{mk^{\prime}}^{*}\alpha_{nk^{\prime}}\Big{)}\] \[+\Big{(}A_{m0}^{*}A_{n+}+A_{m+}^{*}A_{n0}+A_{m-}^{*}A_{n0}+A_{m0}^{*}A_{n-}\Big{)}\Big{(}\Delta\alpha_{nk}+\eta_{k}+\alpha_{nk}\sum_{k^{\prime}}\eta_{k^{\prime}}(\alpha_{mk^{\prime}}^{*}+\alpha_{nk^{\prime}})\Big{)}\Big{]}S_{mn} \tag{12}\]
2303.05965
Scalable and Efficient Functional Map Computations on Dense Meshes
We propose a new scalable version of the functional map pipeline that allows to efficiently compute correspondences between potentially very dense meshes. Unlike existing approaches that process dense meshes by relying on ad-hoc mesh simplification, we establish an integrated end-to-end pipeline with theoretical approximation analysis. In particular, our method overcomes the computational burden of both computing the basis, as well as the functional and pointwise correspondence computation by approximating the functional spaces and the functional map itself. Errors in the approximations are controlled by theoretical upper bounds assessing the range of applicability of our pipeline. With this construction in hand, we propose a scalable practical algorithm and demonstrate results on dense meshes, which approximate those obtained by standard functional map algorithms at the fraction of the computation time. Moreover, our approach outperforms the standard acceleration procedures by a large margin, leading to accurate results even in challenging cases.
Robin Magnet, Maks Ovsjanikov
2023-03-10T14:54:21Z
http://arxiv.org/abs/2303.05965v1
# Scalable and Efficient Functional Map Computations on Dense Meshes ###### Abstract We propose a new scalable version of the functional map pipeline that allows to efficiently compute correspondences between potentially very dense meshes. Unlike existing approaches that process dense meshes by relying on ad-hoc mesh simplification, we establish an integrated end-to-end pipeline with theoretical approximation analysis. In particular, our method overcomes the computational burden of both computing the basis, as well the functional and pointwise correspondence computation by approximating the functional spaces and the functional map itself. Errors in the approximations are controlled by theoretical upper bounds assessing the range of applicability of our pipeline. With this construction in hand, we propose a scalable practical algorithm and demonstrate results on dense meshes, which approximate those obtained by standard functional map algorithms at the fraction of the computation time. Moreover, our approach outperforms the standard acceleration procedures by a large margin, leading to accurate results even in challenging cases. \(\bullet\)**Computing methodologies**\(\rightarrow\) Shape analysis; \(\bullet\)**Theory of computation**\(\rightarrow\) Computational geometry; ## 1 Introduction Processing and analyzing complex 3D objects is a major area of study with applications in computer graphics, medical imaging and other domains. The underlying structure of such data can be highly detailed and require dense point sets and meshes to capture important features. At the same time, shape analysis methods are often designed to only handle objects that consist of tens of thousands of points, thus requiring decimation algorithms to process meshes containing millions of points that can arise in real-world applications. While mesh simplification can lead to good results, it suffers from several drawbacks. First, the simplification process might lead to artifacts and significant loss of detail. Second, for many applications, it remains highly non-trivial to accurately transfer the results of analysis from the simplified to original shapes. Finally, the transfer process can introduce errors and aliasing artifacts. In this work, we focus on computing correspondences between non-rigid shapes. This is a long-standing problem in Geometry Processing and related fields, with a wide range of techniques developed in the past few years [14, 15]. A notable line of work in this domain uses the so-called functional map framework, which is based on manipulating correspondences as matrices in a reduced basis [13]. Methods based on this framework have recently achieved high accuracy on a range of difficult non-rigid shape matching tasks [16, 17, 18]. Unfortunately, these approaches require costly and time-consuming precomputation of the Laplacian basis and, potentially, other auxiliary data-structures [19]. As a result, these techniques do not scale well to densely sampled meshes and, thus, are most often applied on simplified shapes. Moreover, while accelerated versions of some methods [17] have recently been proposed, these lack theoretical approximation guarantees, and can be error-prone. At the same time, several approaches have recently been proposed for efficient approximation of the Laplace-Beltrami basis Figure 1: Our method produces point-to-point correspondences between dense meshes efficiently, using values only located at sparse samples, displayed in white. 
The source and target shapes from the _DeformingThings4D dataset [13]_ are composed of roughly \(100\ 000\) vertices, and correspondences are displayed using texture transfer. The map computation (including all preprocessing) took 60 seconds on a standard machine.

These approaches [18, 19] can successfully scale to very large meshes, and are especially effective for computing low-frequency eigenfunctions. While these methods have been shown to be efficient when, e.g., using approximated spectra as shape descriptors [14] or for individual shape processing, they can fall short when applied in _shape correspondence scenarios_. Conceptually, this is because the objectives and guarantees in [18, 19] only apply at a _global_ scale of individual shapes, instead of the local function approximation or function transfer required for functional and point-to-point map computation.

In this work, we take a step towards creating scalable and efficient non-rigid shape correspondence methods, which can handle very large meshes, and are backed by theoretical approximation bounds. We focus on the functional map framework [13] and especially its recent variants based on spectral upsampling, such as the ZoomOut method [14] and its follow-up works [15, 16, 17]. These methods are based on iteratively updating functional and point-to-point maps and have been shown to lead to high-quality results in a wide range of cases. Unfortunately, the two major steps, basis pre-computation and iterative updating of the pointwise maps, can be costly for dense shapes. To address this challenge, we propose an integrated pipeline that helps to make both of these steps scalable and moreover comes with approximation guarantees. For this, we first establish a new functional space inspired by [18], and demonstrate how it can be used to define an approximation of functional maps without requiring either a dense pointwise correspondence or even a basis on the dense meshes. We then provide theoretical approximation bounds for this construction, which, unlike the original definition in [13], is fully agnostic to the number of points in the original mesh. Following this analysis, we extend the approach introduced in [18] to improve our functional map approximation, and present an efficient and scalable algorithm for map refinement, based on our constructions, which eventually produces accurate results in a fraction of the time required for standard processing, as displayed on Figure 2.

## 2 Related Works

Our main focus is on designing a scalable and principled approach for non-rigid shape correspondence, within the functional map framework. We therefore review works that are most closely related to ours, especially those using spectral techniques for shape matching, and refer the interested reader to recent surveys [12, 13, 14, 15, 16, 17] for a more comprehensive overview of other approaches.

**Spectral methods in shape matching.** The idea of using the spectral properties of the Laplace-Beltrami operator, and especially its eigenfunctions, for shape correspondence has been investigated in many existing works. Early approaches focused on directly aligning the eigenfunctions, seen as descriptors [16, 18], or on using diffusion processes to derive descriptors or embedding spaces, e.g., [19, 18], among others. A more principled framework was introduced in [13], based on the idea of functional maps.
The overall strategy is to express the pull-back of functions as an operator in a reduced basis, and to formulate objective functions based on desirable properties of such an operator. The main advantage of this approach is that it leads to small-scale optimization problems, with the number of unknowns independent of the size of the underlying meshes. Despite the simplicity of the original approach, its performance is strongly dependent on accurate descriptors and hyper-parameter tuning. As a result, this basic strategy has been extended significantly in many follow-up works, based on geometric insights [15, 16, 17, 18], improved optimization strategies [19, 18, 17, 16], and richer correspondence models going beyond isometries across complete shapes [13, 14, 15], among others.

**Functional and pointwise maps.** While many approaches in the functional map literature focus on the optimization in the spectral domain, it has also been observed that the _interaction_ between pointwise and functional correspondences can lead to significant improvement in practice. This was used in the form of the Iterative Closest Point (ICP) refinement in the original article and follow-up works [13, 14, 15] and has since then been extended to map deblurring and denoising [1], as well as powerful refinement and even map optimization strategies [14, 15, 16, 17]. All of these works are based on the insight that manipulating maps in _both_ the spectral and spatial (primal) domains can lead to overall improvement in the quality of the results. Unfortunately, such approaches can often come at a cost of scalability, since the complexity of pointwise maps is directly dependent on the mesh resolution, making it difficult to scale them to highly dense meshes.

**Multi-resolution spectral approaches.** Our work is also related to multi-resolution techniques for approximating spectral quantities, as, e.g., in [16], and especially to recent developments for accurate and scalable eigen-solvers geared towards Laplacian eigenfunctions on complex meshes [18, 19]. The latter set of methods has been shown to lead to excellent performance and scalability on tasks involving individual shapes, such as computing their Shape-DNA [14] descriptors, or performing mesh filtering. Similarly, there exist several spectral coarsening and simplification approaches [15, 16, 17] that explicitly aim to coarsen operators, such as the Laplacian, while preserving their low-frequency eigenpairs. Unfortunately, these methods typically rely on the eigenfunctions on the dense shapes, while the utility of the former approaches in the context of _functional maps_ has not yet been fully analyzed and exploited, in part since, as we show below, this requires _local_ approximation bounds. Finally, we mention that our work is also related to hierarchical techniques, including functional maps between subdivision surfaces proposed in [12], and even more closely, to refinement via spectral upsampling [14]. However, the former approach relies on a subdivision hierarchy, while the acceleration strategy of the latter, as we discuss below, is based on a scheme that unfortunately can fail to converge even in the presence of full information.
**Limitations of existing techniques and our contributions.** To summarize, the scalability of existing functional maps-based methods is typically limited by two factors: first, the pre-processing costs associated with the computation of the eigenfunctions of the Laplace-Beltrami operator, and second, the complexity of simultaneously manipulating pointwise and functional correspondences. In this context, our key contributions include:

1. We define an approximation of the functional map, which requires only a sparse correspondence, and provide a theoretical basis for this construction.
2. We analyze the basis approximation approach in [16] for functional map computation, obtaining explicit theoretical upper bounds. We then modify this approach to improve the approximation guarantees, leading to more accurate maps.
3. We present a principled and scalable algorithm for functional map refinement, based on our constructions, which produces accurate results at a fraction of the time of comparable methods.

## 3 Method Overview

As mentioned above, our overall goal is to design a scalable pipeline for non-rigid shape matching that can handle potentially very dense meshes. We base our approach on the ZoomOut variant of the functional map framework [14]. However, our constructions can be easily extended to other recent functional maps methods, e.g., [13, 12], which share the same general algorithmic structure. Specifically, ZoomOut and related methods are based on two main building blocks: computing the eigenfunctions of the Laplace-Beltrami operator first, and then iterating between updating the point-to-point and functional maps. Our general pipeline is displayed on Figure 2 and consists of the following major steps. First, we generate for each shape a sparse set of samples and a factorized functional space using a modification of the approach introduced in [16], described in Sec. 5.3. Secondly, we use the approximation of the functional map that we introduce (Sec. 5.1) to define a scalable version of the ZoomOut algorithm producing a sparse pointwise map. Finally, we extend this sparse map to a dense pointwise map with sub-sample accuracy, by using the properties of the functional subspaces we consider.

The rest of the paper is organized as follows: in Section 4 we introduce the notations and background necessary for our approach. In Section 5.1, we introduce our functional map approximation based on the basis construction approach in [16]. Section 5.2 provides explicit approximation errors and Section 5.3 describes our modification of the method of [16], which helps to improve the theoretical upper bounds we obtained for functional map computation. Given these constructions, we show in Sec. 5.4 how ZoomOut-like algorithms can be defined, first by iteratively updating functional and pointwise maps in the reduced functional spaces, and then how the computed functional map can be extended onto the dense shapes efficiently. Section 5.5 provides implementation details, while Section 6 is dedicated to extensive experimental evaluation of our approach.

## 4 Notations & Background

### Notations

For a triangle mesh, we denote by \(\mathbf{W}\) and \(\mathbf{A}\) its stiffness and mass matrices that together define the (positive semi-definite) Laplace Beltrami Operator as \(L=\mathbf{A}^{-1}\mathbf{W}\).
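For readers who want to reproduce this step, the generalized eigenproblem associated with \(L=\mathbf{A}^{-1}\mathbf{W}\) can be solved with standard sparse solvers. The snippet below is a minimal illustrative sketch (not the authors' implementation), assuming \(\mathbf{W}\) and \(\mathbf{A}\) are already available as SciPy sparse matrices built with any standard cotangent/FEM routine.

```python
import numpy as np
import scipy.sparse.linalg as sla

def laplacian_eigenbasis(W, A, K=100):
    """Return the K smallest eigenpairs of W @ psi = lam * A @ psi.

    W: sparse stiffness (cotangent) matrix, A: sparse (lumped) mass matrix.
    A small negative shift keeps the shift-invert factorization well posed
    despite the zero eigenvalue of W (constant functions).
    """
    vals, vecs = sla.eigsh(W, k=K, M=A, sigma=-1e-8, which="LM")
    order = np.argsort(vals)
    # Eigenvectors of the generalized symmetric problem are A-orthonormal.
    return vals[order], vecs[:, order]
```

In shift-invert mode, `which="LM"` returns the eigenvalues closest to the shift, i.e., the low end of the spectrum, which is exactly the truncated basis used throughout the paper.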
Given two shapes \(\mathcal{N}\) and \(\mathcal{M}\) with, respectively, \(n\) and \(m\) vertices, any vertex-to-vertex map \(\mathcal{T}:\mathcal{N}\rightarrow\mathcal{M}\) can be represented as a binary matrix \(\mathbf{\Pi}\in\{0,1\}^{n\times m}\) with \(\mathbf{\Pi}_{ij}=1\) if and only if \(T(x_{i})=y_{j}\), where \(x_{i}\) denotes the \(i\)-th vertex on \(\mathcal{N}\) and \(y_{j}\) the \(j\)-th vertex in \(\mathcal{M}\). The eigenfunctions of the Laplace Beltrami operator can be obtained by solving a generalized eigenproblem: \[\mathbf{W}\psi_{i}=\lambda_{i}\mathbf{A}\psi_{i}, \tag{1}\] where in practice, we typically consider the eigenfunctions corresponding to the \(K\) smallest eigenvalues.

Figure 2: Overall pipeline of our method, using real data from [12]. Given two dense input shapes, we first generate an approximate eigenbasis computation by using a modified version of the approach introduced in [16] (Sec. 5.3). We then propose a new scalable version of ZoomOut (Sec. 5.4), which exploits our functional map approximation (Sec. 5.1) and comes with theoretical approximation bounds. Ultimately, this leads to dense pointwise correspondences between the two input shapes, visualized here via color transfer.

### Functional Maps and the ZoomOut algorithm

Functional maps were introduced in [17] as a means to perform dense non-rigid shape matching. The key insight is that any pointwise map \(T:\mathcal{N}\rightarrow\mathcal{M}\) can be transformed into a functional map via composition \(F_{T}:f\in\mathcal{F}(\mathcal{M})\mapsto f\circ T\in\mathcal{F}(\mathcal{N})\), where \(\mathcal{F}(\mathcal{S})\) is the space of real-valued functions on a surface \(\mathcal{S}\). Since \(F_{T}\) is linear, it can be represented as a matrix in the given basis for each space \(\left(\mathbf{w}_{i}^{\mathcal{M}}\right)_{i}\) and \(\left(\mathbf{w}_{i}^{\mathcal{N}}\right)_{i}\). If the basis on shape \(\mathcal{N}\) is orthonormal with respect to \(\mathbf{A}^{\mathcal{N}}\), the functional map \(\mathbf{C}\) can be expressed in the truncated basis of size \(K\) on each shape as a \(K\times K\) matrix: \[\mathbf{C}=\left(\mathbf{\Psi}^{\mathcal{N}}\right)^{\top}\mathbf{A}^{\mathcal{N}}\mathbf{\Pi}\mathbf{\Psi}^{\mathcal{M}}, \tag{2}\] where each basis function on \(\mathcal{M}\) (resp. \(\mathcal{N}\)) is stacked as a column of \(\mathbf{\Psi}^{\mathcal{M}}\) (resp. \(\mathbf{\Psi}^{\mathcal{N}}\)), \(\mathbf{\Pi}\) is the matrix representing the underlying pointwise map, and we use \({}^{\top}\) to denote the matrix transpose.

#### ZoomOut

Given the Laplace-Beltrami eigenbasis, the ZoomOut algorithm [14] allows to recover high-quality correspondences starting from an approximate initialization, by iterating between two steps: (1) converting a \(k\times k\) functional map to a pointwise map, (2) converting the pointwise map to a functional map of size \((k+1)\times(k+1)\). This method has also been extended to other settings, to both promote cycle consistency [14] and optimize various energies [14], among others. Unfortunately, although simple and efficient, the scalability of this approach is limited, first, by the precomputation of the Laplacian basis, and second, by the pointwise map recovery, which relies on possibly expensive nearest-neighbor search queries across dense meshes. Several ad-hoc acceleration strategies have been proposed in [13]. However, as we discuss below, these do not come with approximation guarantees and indeed can fail to converge in the limit of complete information.
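To make the two ZoomOut building blocks concrete, the sketch below implements Eq. (2) and the nearest-neighbor conversion with NumPy/SciPy. It is an illustrative sketch only: the function names, the plain k-d tree search and the simple upsampling loop are ours, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def fmap_from_p2p(p2p, Psi_M, Psi_N, A_N, k):
    """Eq. (2): C = Psi_N^T A_N Pi Psi_M, where the pointwise map is given as
    p2p[i] = index on M of the image of vertex i of N (so Pi @ Psi_M is a
    simple row selection)."""
    return Psi_N[:, :k].T @ (A_N @ Psi_M[p2p, :k])

def p2p_from_fmap(C, Psi_M, Psi_N):
    """Step (1): for each vertex of N, nearest row of Psi_M to the
    corresponding row of Psi_N @ C (spectral embedding alignment)."""
    k = C.shape[0]
    _, p2p = cKDTree(Psi_M[:, :k]).query(Psi_N[:, :k] @ C)
    return p2p

def zoomout(p2p, Psi_M, Psi_N, A_N, k_init=20, k_final=100):
    """Alternate the two conversions while growing the spectral size."""
    for k in range(k_init, k_final + 1):
        C = fmap_from_p2p(p2p, Psi_M, Psi_N, A_N, k)
        p2p = p2p_from_fmap(C, Psi_M, Psi_N)
    return C, p2p
```

On dense meshes, the k-d tree query over all vertices at every iteration is precisely the bottleneck that motivates the reduced construction of Section 5.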
### Eigenbasis approximation

To improve the scalability of spectral methods, recent works [13, 15] have tried to develop approximations of the Laplace Beltrami eigenbasis, via the reduction of the search space. Specifically, in [13], the authors first sample a set of \(p\) points \(\mathcal{S}=\{v_{1},\ldots,v_{p}\}\) on shape \(\mathcal{M}\) and create a set of \(p\) local functions \((u_{1},\ldots,u_{p})\), each centered on a particular sample point. Each function \(u_{j}\) is built from an unnormalized function \(\bar{u}_{j}\) supported on a geodesic ball of radius \(\rho\) around the sample \(v_{j}\), which decreases with the geodesic distance from the center: \[\bar{u}_{j}:x\in\mathcal{M}\mapsto\chi_{\rho}\left(d^{\mathcal{M}}(x,v_{j})\right)\in\mathbb{R} \tag{3}\] where \(d^{\mathcal{M}}\) is the geodesic distance on shape \(\mathcal{M}\) and \(\chi_{\rho}:\mathbb{R}_{+}\rightarrow\mathbb{R}\) is a differentiable non-increasing function with \(\chi_{\rho}(0)=1\) and \(\chi_{\rho}(x)=0\) for \(x\geq\rho\). Choices for \(\chi\) are discussed in Appendix A. Finally, local functions \(u_{j}\) are defined to satisfy the partition of unity by using: \[u_{j}(x)=\frac{\bar{u}_{j}(x)}{\sum_{k}\bar{u}_{k}(x)}\ \forall\ x\in\mathcal{M} \tag{4}\] Now considering only functions that lie in \(\text{Span}\left\{u_{1},\ldots,u_{p}\right\}\), the original eigendecomposition system in Eq. (1) reduces to a generalized eigenproblem of size \(p\times p\): \[\overline{\mathbf{W}}\,\overline{\phi}_{i}=\bar{\lambda}_{i}\overline{\mathbf{A}}\,\overline{\phi}_{i} \tag{5}\] with \(\overline{\mathbf{W}}=\mathbf{U}^{\top}\mathbf{W}\mathbf{U}\) and \(\overline{\mathbf{A}}=\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\) where \(\mathbf{W}\) and \(\mathbf{A}\) are the stiffness and area matrices of \(\mathcal{M}\), and \(\mathbf{U}\) a _sparse_ matrix whose columns are values of functions \(\{u_{j}\}_{j}\). Eigenvectors \(\overline{\phi}_{i}\) are \(p\)-dimensional vectors describing the coefficients with respect to \(\{u_{j}\}\), which define the approximated eigenvectors as \(\overline{\psi}_{i}=\mathbf{U}\overline{\phi}_{i}\). Note that since \(\overline{\phi}_{i}\) are orthonormal with respect to \(\overline{\mathbf{A}}\), this implies that \(\overline{\psi}_{i}\) are orthonormal with respect to \(\mathbf{A}\). While the original work [13] focused on global per-shape applications such as filtering and Shape-DNA [13] computation, we build on and modify this pipeline in order to obtain reliable functions to perform dense _shape correspondence_.

## 5 Our approach

In this section, we first present a functional map definition using the basis approximation strategy from [13], and provide theoretical bounds on the approximation error (Secs. 5.1 and 5.2 respectively). Based on these results, we then introduce our modification of [13] in Section 5.3, which we use in our approach in order to minimize the computed bound. Finally, we present our Extended ZoomOut algorithm and provide implementation details in Sections 5.4 and 5.5.

### Approximate Functional Map

As mentioned in Sec. 4.3, the eigenfunctions computed using the approach in [13] are, by construction, orthonormal with respect to the area matrix \(\mathbf{A}\). Thus, they can be used to compute a functional map following Eq. (2).
This leads to the following definition: **Definition 5.1** Given two shapes \(\mathcal{M}\) and \(\mathcal{N}\) with approximated eigenfunctions \(\left(\mathbf{\Psi}_{i}^{\mathcal{M}}\right)_{i}\) stacked as columns of matrix \(\overline{\mathbf{\Psi}}^{\mathcal{M}}\) (resp. with \(\mathcal{N}\)), the _reduced_ functional map associated to a pointwise map \(\mathbf{\Pi}:\mathcal{N}\rightarrow\mathcal{M}\) is defined as: \[\overline{\mathbf{C}}=\left(\overline{\mathbf{\Psi}}^{\mathcal{N}}\right)^{ \top}\mathbf{A}^{\mathcal{N}}\mathbf{\Pi}\overline{\mathbf{\Psi}}^{\mathcal{M}} \tag{6}\] Note that this functional map definition uses the approximated bases. However, it still relies on the knowledge of a full point-to-point map between complete (possibly very dense) shapes. To alleviate this constraint, we introduce another functional map \(\overline{\mathbf{C}}\) that only relies on maps between samples, independently from the original number of points: **Definition 5.2** Using the same setting as in Definition 5.1, with eigenfunctions arising from Eq. (5), \(\left(\overline{\phi}_{i}^{\mathcal{M}}\right)_{i}\) (resp. with \(\mathcal{N}\)) being stacked as columns of a matrix \(\overline{\mathbf{\Phi}}^{\mathcal{M}}\) (resp. with \(\mathcal{N}\)), given a pointwise map \(\overline{\mathbf{\Pi}}:\mathcal{S}^{\mathcal{N}}\rightarrow\mathcal{S}^{ \mathcal{M}}\), our _restricted_ functional map is defined as: \[\widehat{\mathbf{C}}=\left(\overline{\mathbf{\Phi}}^{\mathcal{N}}\right)^{ \top}\overline{\mathbf{A}}^{\mathcal{N}}\overline{\mathbf{\Pi}}\,\overline{ \mathbf{\Phi}}^{\mathcal{M}} \tag{7}\] Recall that, as mentioned in Sec 4.3\(\mathcal{S}\) denotes the sparse set of samples on each shape. Therefore, in order to define \(\widehat{\mathbf{C}}\), we only need to have access to a pointwise map between _the sample points_ on the two shapes. This restricted functional map \(\widehat{\mathbf{C}}\) is a pull-back operator associated to the reduced spaces \(\text{Span}\left\{\overline{\phi}_{k}^{\mathcal{M}}\right\}_{k}\) and \(\text{Span}\left\{\overline{\phi}_{k}^{\mathcal{N}}\right\}_{k}\) since both families are orthonormal with respect to \(\overline{\mathbf{\Lambda}}\). Furthermore, using the factorization \(\overline{\mathbf{\Psi}}=\mathbf{U}\overline{\mathbf{\Phi}}\) on each shape in (6) as well as the definition of \(\overline{\mathbf{\Lambda}}\), we remark that going from Eq. (6) to (7) only requires the approximation \(\mathbf{\Pi}\mathbf{U}^{\mathcal{M}}\simeq\mathbf{U}^{\mathcal{N}}\overline{ \mathbf{\Pi}}\), for which we will later on derive an upper bound in Proposition 5.2. Note that one might want to replace \(\overline{\mathbf{\Phi}}^{\mathcal{M}}\) by \(\overline{\mathbf{\Psi}}^{\mathcal{M}}\) in Eq. (7) so that the map \(\overline{\mathbf{\Pi}}\) actually transports pointwise values rather than coefficients. In practice as evaluated in Appendix B, we did not observe any improvement using this modification. The first benefit of the approximated functional map in Eq. (7) compared to the exact one in Eq. (6) is the exclusive use of small-sized matrices. Observe that functions \(\left(\overline{\mathbf{\Psi}}_{i}\right)_{i}\), are associated with the area and stiffness matrices \(\overline{\mathbf{\Lambda}}\) and \(\overline{\mathbf{W}}\), which define the \(L_{2}\) and \(W_{1}\) inner products, thus allowing to use _all_ functional map related algorithms in a straightforward way _without_ using any extra approximation or acceleration heuristics. 
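Both objects only involve \(p\times p\) matrices, so they can be computed directly with dense linear algebra. The following sketch (illustrative only, with assumed variable names, not the released code) implements Eq. (5) and Eq. (7) given the sparse matrix \(\mathbf{U}\) of local functions and a map between the two sample sets.

```python
import numpy as np
from scipy.linalg import eigh

def reduced_eigenbasis(U, W, A, K=100):
    """Eq. (5): restrict the Laplacian eigenproblem to Span{u_1, ..., u_p}.

    U is the sparse (n x p) matrix of local functions; W and A are the
    stiffness and area matrices of the dense shape.  Everything downstream
    only uses the returned p x K / p x p quantities."""
    W_bar = np.asarray((U.T @ W @ U).todense())
    A_bar = np.asarray((U.T @ A @ U).todense())
    vals, Phi_bar = eigh(W_bar, A_bar)        # dense generalized eigensolver
    return vals[:K], Phi_bar[:, :K], A_bar    # Phi_bar is A_bar-orthonormal

def restricted_fmap(Phi_bar_M, Phi_bar_N, A_bar_N, sample_map, k):
    """Eq. (7): C_hat built from a map between sample sets only, with
    sample_map[i] = index of the sample on M matched to sample i of N."""
    return Phi_bar_N[:, :k].T @ (A_bar_N @ Phi_bar_M[sample_map, :k])
```

Note that neither function touches an \(n\)-dimensional object after the two sparse triple products, which is what makes the refinement iterations of Section 5.4 independent of the original mesh resolution.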
Eventually a dense pointwise-map between complete shapes can be obtained by identifying the two pull-back operators \(\widehat{\mathbf{C}}\) and \(\overline{\mathbf{C}}\), as described later in Section 5.4. As we will see, the resulting correspondences outperform those obtained using remeshed versions of shape and nearest neighbor extrapolation, as our functional map produces sub-sample accuracy. Secondly, as shown in the following section, our approach is backed by strong theoretical convergence guarantees, providing bounds on approximation errors. In contrast, previous approaches, such as the accelerated version of ZoomOut [19] (Sec. 4.2.3) _might not_ converge to the true functional maps even when using all available information. Namely, Fast ZoomOut indeed samples \(q\) points on shapes \(\mathcal{M}\) and \(\mathcal{N}\), and approximates \(\overline{\mathbf{C}}\) using \[\mathbf{C}_{\text{F-ZO}}=\underset{\mathbf{X}}{\operatorname{argmin}}\,\| \mathbf{Q}^{\mathcal{N}}\mathbf{\Psi}^{\mathcal{N}}\mathbf{X}-\mathbf{\Pi} \mathbf{Q}^{\mathcal{M}}\mathbf{\Psi}^{\mathcal{M}}\|_{F}^{2} \tag{8}\] where \(\mathbf{Q}^{\mathcal{N}}\in\{0,1\}^{q\times n^{\mathcal{N}}}\) with \(\mathbf{Q}^{\mathcal{N}}_{ij}=1\) if and only if \(x_{j}\) is the \(i^{\text{th}}\) sample on shape \(\mathcal{N}\) (similarly for \(\mathcal{M}\)). Using all points means \(\mathbf{Q}\) matrices are identity. This approximation gives equal importance to all sampled points regardless of their area, and thus fails to converge towards the underlying \(\mathbf{C}\) as the number of samples increases. This means a near uniform sampling strategy is required in practice, which is difficult to achieve on very dense meshes. In the following section, we provide approximation error bounds for our functional map definition, which we later use to modify the approach from [17] to reduce these errors and obtain a more accurate and principled correspondence approach. ### Approximation Errors Most expressions above involve a given pointwise map \(\mathbf{\Pi}\) between surfaces \(\mathcal{N}\) and \(\mathcal{M}\). The following lemma provides simple assumptions to obtain a Lipschitz constant for its associated functional map, which will be very useful to derive bounds on the approximation errors of our estimators: **Lemma 5.1** Let \(\mathcal{M}\) and \(\mathcal{N}\) be compact surfaces and \(T:\mathcal{N}\to\mathcal{M}\) a diffeomorphism. Then there exists \(B_{T}\in\mathbb{R}\) so that: \[\|f\circ T\|_{\mathcal{N}}\leq B_{T}\,\|f\|_{\mathcal{M}}\quad\forall\,f\in L^ {2}(\mathcal{M}) \tag{9}\] the proof of which can be found in [10] (Proposition 3.3). Our overall goal is to use the newly designed functional map \(\overline{\mathbf{C}}\) within a ZoomOut-like functional map estimation algorithm. We therefore expect the approximated functional map to mimic the underlying map \(\mathbf{C}\) when the computed eigenvectors \(\overline{\mathbf{\Psi}}_{k}\) approximate well the true ones \(\Psi_{k}\). The following proposition bounds the error between the two functional maps: **Proposition 5.1** Let \(\overline{\mathbf{\Psi}}^{\mathcal{N}}\) (resp. \(\overline{\mathbf{\Psi}}^{\mathcal{M}}\)) and \(\mathbf{\Psi}^{\mathcal{N}}\) (resp. \(\mathbf{\Psi}^{\mathcal{M}}\)) the approximated and true first \(K\) eigenvectors of the Laplacian on \(\mathcal{N}\) (resp. \(\mathcal{M}\)). Let \(\mathbf{C}\) and \(\overline{\mathbf{C}}\) be the original and reduced (see Eq. (6)) functional maps of size \(K\), associated to the map \(T\). 
Suppose that \(T\) is a diffeomorphism, and let \(B_{T}\) be the bound given by Lemma 5.1. If there exists \(\epsilon\in\mathbb{R}_{+}^{+}\) so that for any \(j\in\{1,\dots,K\}\) : \[\|\mathbf{\Psi}_{j}^{\mathcal{N}}-\overline{\mathbf{\Psi}}_{j}^{\mathcal{N}}\|_ {\infty}\ \leq\epsilon\text{ and }\|\mathbf{\Psi}_{j}^{\mathcal{M}}-\overline{\mathbf{\Psi}}_{j}^{ \mathcal{M}}\|_{\infty}\ \leq\epsilon\] Then: \[\frac{1}{K}\,\|\mathbf{C}-\overline{\mathbf{C}}\|_{2}^{2}\leq\epsilon^{2}\left( 1+B_{T}^{2}\right) \tag{10}\] The proof can be found in Appendix C. This proposition ensures that a good estimation of the spectrum implies an accurate functional map approximation, and thus its good behavior within matching algorithms. A more fundamental error to control is the estimation error between the functional maps \(\overline{\mathbf{C}}\) from Def. 5.1 and \(\widehat{\mathbf{C}}\) from Def. 5.2. As mentioned above, the estimation relies on the identification \(\mathbf{\Pi}\overline{\mathbf{\Psi}}^{\mathcal{M}}\simeq\mathbf{U}^{\mathcal{N }}\overline{\mathbf{\Pi}}\ \overline{\mathbf{\Phi}}^{\mathcal{M}}\), where \(\overline{\mathbf{\Pi}}\) is a map between the two sets of samples \(\mathcal{S}^{\mathcal{N}}\) and \(\mathcal{S}^{\mathcal{M}}\), which we expect to be similar to \(\overline{\mathbf{\Pi}}\) on these spaces. This approximation treats equivalently the two following procedures: 1) interpolating between values on \(\mathcal{S}^{\mathcal{M}}\) then transferring using the map \(\mathbf{\Pi}\), 2) transferring values on \(\mathcal{S}^{\mathcal{M}}\) to values on \(\mathcal{S}^{\mathcal{N}}\) using \(\overline{\mathbf{\Pi}}\) and then interpolating on \(\mathcal{N}\). The following proposition bounds the error of this approximation: **Proposition 5.2** Let \(T:\mathcal{N}\to\mathcal{M}\) be a pointwise map between the shapes represented by \(\mathbf{\Pi}\), and let \(B_{T}\) be the bound given by Lemma 5.1. Suppose that \(T_{|\mathcal{S}^{\mathcal{N}}}:\mathcal{S}^{\mathcal{N}}\to\mathcal{S}^{ \mathcal{M}}\) is represented by \(\overline{\mathbf{\Pi}}\). Let \(\alpha=\min_{j}u_{j}^{\mathcal{M}}(v_{j})\in[0,1]\). Suppose further that there exists \(\epsilon>0\) so that for any \(k\in\{1,\dots,K\}\) and \(x,y\in\mathcal{S}^{\mathcal{M}}\): \[d^{\mathcal{M}}(x,y)\leq\rho^{\mathcal{M}}\Rightarrow|\overline{\mathbf{\Psi}}_{ k}^{\mathcal{M}}(x)-\overline{\mathbf{\Psi}}_{k}^{\mathcal{M}}(y)|\leq\epsilon \tag{11}\] and \[d^{\mathcal{M}}(x,y)\leq\rho^{\mathcal{M}}\Rightarrow|\overline{\mathbf{\Phi}}_{ k}^{\mathcal{M}}(x)-\overline{\mathbf{\Phi}}_{k}^{\mathcal{M}}(y)|\leq\epsilon. \tag{12}\] Then \[\frac{1}{K}\,\Big{\|}\mathbf{\Pi}\overline{\mathbf{\Psi}}^{\mathcal{M}}-\mathbf{U} ^{\mathcal{N}}\overline{\mathbf{\Pi}}\ \overline{\mathbf{\Phi}}^{\mathcal{M}}\Big{\|}_{\mathcal{N}}^{2}\leq\epsilon^{2}( 1-\alpha)+\epsilon^{2}B_{T}^{2} \tag{13}\] The proof is given in Appendix D. This proposition shows that the estimation error depends on two parameters: 1) the variation \(\epsilon\) of the eigenfunctions w.r.t to the sample distance \(\varrho\), 2) the _self-weights_\(u_{j}(v_{j})\) from the local functions defined in the basis approximation. Note that since the basis functions \(u_{j}\) verify \(0\leq u_{j}\leq 1\) and satisfy the partition of unity, they can be interpreted as interpolation weights from values at sampled points to values on the entire shape. 
This makes the dependence in \(\alpha\) more intuitive, as our approximation relies on the local identification of basis coefficients with function values. A discussion on the numerical values of the quantities used in Proposition 5.2 are provided in Appendix E. In the following, we will therefore seek to modify the basis approximation [2] in order to maximize \(\alpha\) while retaining both the quality of the approximation of the true Laplacian spectrum, necessary to apply functional maps-related algorithms. ### Improved Eigenbasis Approximation In this section, we propose a modification of the approach from [2], based on the theoretical bounds introduced above. For the rest of this section, we focus on a single shape, as the basis computations are done on each shape independently. As seen from Prop. 5.2, high self weights allow to stabilize our functional map approximation. Interestingly with the construction in [2], the value \(u_{j}(v_{j})\) only depends on the geodesic distance between \(v_{j}\) and other sampled points \(v_{i}\) for \(i\neq j\): \[u_{j}(v_{j})=\frac{1}{1+\sum_{i\neq j}\tilde{u}_{i}(v_{j})}. \tag{14}\] where \(\tilde{u}_{i}\) are the un-normalized local functions. We modify the pipeline from [2] in order to increase these values as follows: we first define a per-sample radius \(\rho_{j}\) for \(j\in\{1,\dots,p\}\) instead of a single global value \(\rho\). Given a sample point \(v_{j}\) with a small self-weight \(u_{j}(v_{j})\), radius \(\rho_{j}\) is kept untouched as it has no influence on the self-weight, but we instead reduce the radius \(\rho_{i}\) of its most influential neighbor - that is the radius of the point \(v_{i}\) with the highest value \(\tilde{u}_{i}(v_{j})\). Following Eq. (14) this eventually increases the value of \(u_{j}(v_{j})\). Note that this modification doesn't change the value \(u_{i}(v_{i})\) and increase the self weights of all its neighbors. This way all self weights are non-decreasing during the algorithm, with at least one of them increasing. This extra adaptation additionally comes at a negligible computational cost as it only requires re-evaluating \(u_{j}\) at a set of fixed vertices. In particular, this does not require additional local geodesic distance computations. More details are provided in Sec. 5.5, and the algorithm to compute these new functions is displayed in Algorithm 1. We observe that the adaptive radius strategy generates better local functions than those introduced in [2], especially for non-uniform sampling, as can be seen on a surface from the DFaust dataset [1] in Figure 3. Note that since we focus on _local_ analysis, a desirable property of the local interpolation function is the consistency across different shapes when only values at the samples are provided. With a single global radius, we see on Figure 3 that these functions can be heavily distorted by the normalization procedure, which is corrected by our approach. However, increasing the self-weights too close to 1 also deteriorates the results, as any vertex \(x\) within the radius of a single sample will be given the value of the sample point. There thus exists a limit at which this procedure ceases to be helpful, and the only solution then lies in increasing the number of samples on the shape. 
``` 0: Mesh \(\mathcal{M}\), samples \((v_{k})_{k}\), initial \(\rho_{0}\), threshold \(\epsilon\) 1:\(\rho_{j}\leftarrow\rho_{0}\quad\forall j\) 2: Compute local functions \(U\) with radius \(\rho:\) (3), (4) 3: Add sample points if necessary 4:while some \(k\) with \(u_{k}(v_{k})<\epsilon\)do 5:\(j\leftarrow\underset{i\neq k}{\operatorname{argmax}}\;u_{i}(v_{k})\) 6:\(\rho_{j}\leftarrow\rho_{j}/2\) 7: update all \(u\) using Eq. (3), (4) 8:endwhile 9: Add unseen vertices in the sample ``` **Algorithm 1** Computation of local functions with adaptive radius The positive effect of our adaptive radius algorithm for functional map estimation is further visualized in Figure 4, where given a single pointwise map \(T\), we display the exact functional map on the approximated spaces \(\overline{\mathbf{C}}\), and two approximated functional maps \(\widehat{\mathbf{C}}\), one being computed with a shared radius [2] and the other with our adaptive radius scheme.We highlight that the ground truth functional map actually differ for each approximation \(\widehat{\mathbf{C}}\) as the reduced functional spaces are modified, which makes values not directly comparable. However, we observe that the two ground truth maps have nearly identical sparsity structure (see Appendix F), which is why we only display one in Figure 4. Note that using the adaptive radius strategy then generates a sparsity pattern on matrix \(\widehat{\mathbf{C}}\) very close to the ground truth one. Figure 4: Effect of the adaptive radius on functional map approximation. Top row displays a pointwise map \(T\) from the right mesh to the left mesh using color transfer. Bottom row displays \(\overline{\mathbf{C}}\) (Left), \(\widehat{\mathbf{C}}\) when using the pipeline from [2] (Middle) and our functional map \(\widehat{\mathbf{C}}\) (Right). Figure 3: Example of a local function \(u_{j}\) (red color) centered on \(v_{j}\) (red vertex), visualized without (Left) and with (Right) our adaptive radius strategy. Other samples \(v_{k}\) are displayed in black. ### Scalable ZoomOut In light of the previous discussions and theoretical analysis, we now describe how to use the approximated functional map \(\widehat{\mathbf{C}}\) within a standard ZoomOut pipeline [16]. Our complete pipeline is summarized in Algorithm 2, where the notation \(\overline{\boldsymbol{\Phi}}_{1:k}\) indicates that we only use the first \(k\) column of matrix \(\overline{\boldsymbol{\Phi}}_{1:k}\). ``` 0: Meshes \(\mathcal{M}\) and \(\mathcal{N}\), threshold \(\epsilon\), initial map 1: Sample \(\mathcal{S}^{\mathcal{M}}\) and \(\mathcal{S}^{\mathcal{N}}\) using Poisson Disk Sampling 2: Compute \(\mathbf{U}^{\mathcal{M}}\) and \(\mathbf{U}^{\mathcal{N}}\) using Algo. 
1 3: Approximate eigenvectors \(\overline{\boldsymbol{\Phi}}^{\mathcal{M}}\) and \(\overline{\boldsymbol{\Phi}}^{\mathcal{M}}\) solving (5) 4: Set \(\overline{\boldsymbol{\Psi}}^{\mathcal{M}}=\mathbf{U}^{\mathcal{M}}\overline{ \boldsymbol{\Phi}}^{\mathcal{M}}\) and \(\overline{\boldsymbol{\Psi}}^{\mathcal{N}}=\mathbf{U}^{\mathcal{N}}\overline{ \boldsymbol{\Phi}}^{\mathcal{N}}\) 5: Obtain \(\overline{\boldsymbol{\Pi}}\) between samples using the initial map 6:for\(k=k_{\text{init}}\cdot k_{\text{final}}\)do 7:\(\widehat{\mathbf{C}}=\left(\overline{\boldsymbol{\Phi}}_{1:k}^{\mathcal{N}} \right)^{\top}\overline{\boldsymbol{\Lambda}}^{\mathcal{N}}\overline{ \boldsymbol{\Pi}}\,\overline{\boldsymbol{\Phi}}_{1:k}^{\mathcal{M}}\) 8:\(\overline{\boldsymbol{\Pi}}=\text{NNsearch}\left(\overline{\boldsymbol{\Phi}}_ {1:k}^{\mathcal{M}},\overline{\boldsymbol{\Phi}}_{1:k}^{\mathcal{N}}\widehat {\mathbf{C}}\right)\) potentially using (16) 9:endfor 10:\(\boldsymbol{\Pi}=\text{NNsearch}\left(\overline{\boldsymbol{\Psi}}_{1:k}^{ \mathcal{M}},\overline{\boldsymbol{\Psi}}_{1:k}^{\mathcal{N}}\widehat{\mathbf{ C}}\right)\) 11:Return\(\boldsymbol{\Pi}\) ``` **Algorithm 2** Scalable ZoomOut As mentioned earlier, using \(\widehat{\mathbf{C}}\) and matrices \(\overline{\boldsymbol{\Lambda}}\) and \(\overline{\boldsymbol{\Psi}}\) allows to apply the ZoomOut algorithm directly, as if it was applied on remeshed versions of the shapes with only \(p\) vertices. This results in a refined functional map \(\widehat{\mathbf{C}}^{*}\) and a refined pointwise map _between samples_\(\overline{\boldsymbol{\Pi}}^{*}\). The last remaining non-trivial task consists in converting the refined functional map into a global pointwise map \(\boldsymbol{\Pi}^{*}\) between the original dense meshes. Standard approaches using remeshed versions of the shapes extend maps via nearest neighbors, resulting in locally constant maps. Instead, we identify \(\widehat{\mathbf{C}}\) and \(\overline{\mathbf{C}}\), which then allows us to compute the pointwise map \(\boldsymbol{\Pi}^{*}\) by solving the standard least square problem: \[\boldsymbol{\Pi}^{*}=\underset{\boldsymbol{\Pi}}{\text{argmin}}\,\|\overline{ \boldsymbol{\Psi}}^{\mathcal{N}}\widehat{\mathbf{C}}^{*}-\boldsymbol{\Pi} \overline{\boldsymbol{\Psi}}^{\mathcal{M}}\|_{\boldsymbol{\Lambda}^{\mathcal{ N}}}^{2}. \tag{15}\] Since \(\mathbf{A}\) is diagonal this problem reduces to a nearest neighbor search for each vertex \(x\in\mathcal{N}\). This way, the obtained pointwise map is no longer locally constant which results in a significant gain of quality with respect to typical approaches. On meshes containing millions of vertices, this nearest neighbor search can, however, still be very slow. In these cases, we propose to use the computed pointwise map \(\overline{\boldsymbol{\Pi}}\) as a guide to reduce the search space as follows: for \(x\in\mathcal{N}\), we first select the indices of its nearest sample points \(N(x)=\{j\mid w_{j}^{\mathcal{N}}(x)>0\}\), and create the set of possible _images_ as the points in \(\mathcal{M}\) close to the image of this set under the map \(\overline{\boldsymbol{\Pi}}\), that is \[\mathcal{I}(x)=\{y\mid\exists j\in N(x),\;u_{\widehat{T}(j)}(y)>0\} \tag{16}\] where \(\hat{T}\) is the function representation of \(\overline{\boldsymbol{\Pi}}\). 
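A direct way to implement this guided conversion is sketched below, assuming the sparse matrices \(\mathbf{U}^{\mathcal{N}}\), \(\mathbf{U}^{\mathcal{M}}\), the approximated bases and the sample-to-sample map are available; the per-vertex loop is written naively for clarity, and the names are illustrative rather than taken from the released code.

```python
import numpy as np

def guided_dense_p2p(C_hat, Psi_bar_M, Psi_bar_N, U_N, U_M, sample_map):
    """Eqs. (15)-(16): per-vertex nearest-neighbor search in the approximated
    spectral embeddings, restricted to candidate images selected through the
    sparse local functions and the sample-level map."""
    k = C_hat.shape[0]
    emb_M = Psi_bar_M[:, :k]                  # embeddings of vertices of M
    emb_N = Psi_bar_N[:, :k] @ C_hat          # transported embeddings of N
    U_N = U_N.tocsr()                         # row x: values u_j(x) on N
    U_M = U_M.tocsc()                         # column j: support of u_j on M
    p2p = np.empty(emb_N.shape[0], dtype=int)
    for x in range(emb_N.shape[0]):
        neigh = U_N[x].indices                # samples j with u_j(x) > 0
        cand = np.unique(np.concatenate(
            [U_M[:, sample_map[j]].indices for j in neigh]))
        dists = np.linalg.norm(emb_M[cand] - emb_N[x], axis=1)
        p2p[x] = cand[np.argmin(dists)]
    return p2p
```

Because the diagonal area weights do not change the per-vertex argmin, this restricted search solves Eq. (15) exactly whenever the true nearest neighbor lies in the candidate set of Eq. (16).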
Since local functions \(u_{j}\) are compactly supported, in practice, they are stored as sparse vectors and extracting the set of possible images of a given vertex can therefore be done efficiently through simple indexing queries.

### Implementation

We implement the complete algorithm in Python and provide the code at [https://github.com/RobinMagnet/Scalable_FM](https://github.com/RobinMagnet/Scalable_FM). Following [17], we generate sparse samples \(\mathcal{S}\) using Poisson Disk sampling, and run a fixed-radius Dijkstra algorithm starting at all sampled points \(v_{j}\) to build local functions \(u_{j}\). Values can be stored in a sparse \(n\times p\) matrix where \(p\) is the number of samples. Note that the adaptive radius algorithm presented in Section 5.3 does not require additional geodesic distance computations. Furthermore, finding the set of potential images for a point as described in Section 5.4 simply reduces to checking non-zero indices in a sparse matrix. More details are provided in Appendix G.

## 6 Results

In this section we evaluate our method, focusing on two aspects. Firstly, we verify that our method outperforms existing approaches in terms of speed at all steps of the pipeline, that is, pre-processing as well as the ZoomOut algorithm. Secondly, we show that this gain in speed comes at a minimal cost in terms of quantitative metrics. In particular, we verify that although our pipeline relies on sparse samples, we eventually obtain clear sub-sample accuracy in the correspondences.

### Timings

The method introduced in [17] aimed at approximating the spectrum of the Laplace Beltrami Operator efficiently. As mentioned above, the additional building blocks we introduced in Section 5.3 come at a nearly negligible computational cost, as the main bottleneck lies in local geodesic distance computations, which are not recomputed. The main benefit of our method appears when considering the processing time of the ZoomOut algorithm. Indeed, since our algorithm does not involve any \(n\)-dimensional matrices, its running time becomes entirely agnostic to the original number of vertices. Only the final conversion step, which converts the refined functional map into a dense point-wise map, scales with the number of vertices. Table 1 displays an example of timings when applying the ZoomOut algorithm between two meshes with respectively 50 and 200 thousand vertices. We compare the standard ZoomOut algorithm (ZO), the Fast ZoomOut algorithm (Fast ZO), the standard ZoomOut applied to remeshed versions of the shapes with nearest neighbor extrapolation (R+ZO) and our complete pipeline with \(p=3000\) samples on each shape. Notice that farthest point sampling used in Fast ZoomOut can become quite slow on dense meshes compared to Poisson sampling, which explains the similar preprocessing timings between our method and Fast ZoomOut.

\begin{table} \begin{tabular}{c c c c c|c} \hline \hline methods & _Preprocess_ & _LBO_ & _ZoomOut_ & _Conversion_ & _Total (s)_ \\ \hline ZO & 1 & 132 & 410 & 83 & 626 \\ Fast ZO & 10 & 132 & 1 & 44 & 187 \\ R + ZO & 14 & 2 & 3 & 1 & 21 \\ \hline Ours & 10 & 7 & 5 & 44 & 65 \\ \hline \hline \end{tabular} \end{table} Table 1: Timing in seconds for different methods when processing a pair with 50K and 200K vertices and applying ZoomOut from spectral size 20 to 100.
The SHREC19 dataset [19] consists of 430 pairs of human shapes with different connectivity, all of which come with initial correspondences. Meshes in this dataset have on average 38 000 vertices, with the smallest and largest number of vertices having respectively 4700 and 200 000 vertices. Due to the limitations of existing shape matching methods, a remeshed version of this dataset is commonly used. In contrast, we display results on the _complete dense dataset_, and show that our method obtains similar results as ZoomOut [19] in only a fraction of the required time. **Metrics** We evaluate different methods using standard metrics [18] for dense shape correspondence, that is accuracy, coverage and smoothness. The accuracy of a computed dense map \(T:\mathcal{N}\rightarrow\mathcal{M}\) gives the average geodesic distance between \(T(x)\) and \(T^{*}(x)\) for all \(x\in\mathcal{N}\) where \(T^{*}\) denotes the ground truth map. Note that since maps on SHREC19 are only evaluated on a small subset of 6890 points this metric only captures partial information, and locally constant maps can still achieve high accuracy. Coverage and smoothness metrics provide additional information on the quality of correspondences and are sensitive to locally constant correspondences. Coverage is defined as the ratio of area covered by the pointwise map, and smoothness is the Dirichlet energy defined as the squared \(L^{2}\) norm of the gradient of the transferred coordinates. **ZoomOut** We compare our method (Ours) using 3000 samples first to the same algorithm without adaptive radius (Ours w/o radius), to the standard ZoomOut [19] algorithm applied on the dense meshes (ZO) and on remeshed versions with 3000 vertices (R+ZO). We don't compare to other standard shape matching baseline [1, 2] first since we only wish to approximate results from ZoomOut, but also because these baselines don't scale to high number of vertices. Additionally, despite the lack of theoretical guarantees, we evaluate a new version of Fast ZoomOut which uses functional map approximation (8) on the approximated functional space \(\overline{\mathcal{F}}\) introduced in section 5.1 (Ours + Fast ZO). Table 2 shows the values of the evaluation metrics on the SHREC19 dataset where the accuracy curves can be found on Figure 6, and Figure 5 shows an example of a map computed on two dense meshes. We see that all methods but R+ZO produce similar metrics although processing times vary significantly. In contrast, the fastest method R+ZO produces locally constant maps as seen on Figure 5, which results in poor coverage and smoothness metrics. While our results are similar to ZoomOut and Fast ZoomOut, we stress that our results were obtained at a fraction of the processing time of ZoomOut, and come with theoretical upper bounds and control parameters on approximations which Fast ZoomOut does not have. \begin{table} \begin{tabular}{c|c c c} \hline \hline methods & _Accuracy_ & _Coverage_ & _Smoothness_ \\ \hline Init & 60.18 & 26.5 \% & 9.5 \\ \hline GT & – & 33.0 \% & 10.43 \\ ZO & **26.84** & **61.5** \% & 6.2 \\ R + ZO & 28.57 & 18.0 \% & 15.0 \\ Ours w/o radius & 71.35 & 29 \% & 52.2 \\ Ours + Fast ZO & 29.5 & 59.7 \% & 6.4 \\ \hline Ours & 27.78 & 56.7 \% & **5.6** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of different methods on the complete SHREC19 dataset. Blue highlights the best two methods. Figure 5: Qualitative results on the SHREC19 dataset. 
Although processing times differ heavily, there is no significant difference between our method and results from ZoomOut. However, remeshing the surface before ZoomOut results in locally constant correspondences. **Sub-sample accuracy** On Figure 7, we provide a result using texture transfer after applying our scalable ZoomOut on a pair of real scans of humerus bones obtained using a CT scanner [1]. This Figure shows how our algorithm obtains sub-sample accuracy, as the transferred texture remains smooth even though samples are quite sparse on each shape. We display similar results using texture transfer on the SHREC19 dataset on Figure 8 and in Appendix H, which provides further details on the shapes. **Adaptive Radius** While results on Table 2 highlight the efficiency of the adaptive radius scheme, we additionally evaluate how this heuristic improves the estimation \(\Delta=\|\mathbf{C}-\mathbf{\widehat{C}}\|\) presented in section 5. For this we simply compute \(\mathbf{C}\) and \(\mathbf{\widehat{C}}\) with \(K=20\) for all initial maps of the SHREC19 dataset, and evaluate the norms of the estimation errors \(\Delta\), which we provide in Table 3. In this experiment we notice our method improves the baseline by two orders of magnitude. ## 7 Conclusion, Limitations and Future Work In this paper we introduced a new scalable approach for computing correspondences between non-rigid shapes, represented as possibly very dense meshes. Our method is based on the efficient approach for estimating the Laplace-Beltrami eigenbasis [15] using optimization of coefficients of local extension functions built from a sparse set of samples. Key to our approach is a careful analysis of the relation between functional spaces on the samples and those on the original dense shapes. For this, we extend the approach proposed in [15] and demonstrate how better behaved local functions can be obtained with very little additional effort. We use this construction to define a functional map approximation that only relies on information stored at the samples, and provide theoretical guarantees for this construction. Finally, we use these insights to propose a scalable variant of the ZoomOut algorithm [17], which allows us to compute high-quality functional and point-to-point maps between very dense meshes at a fraction of the cost of the standard approach. Although our method achieves high-quality results, it still has several limitations. First, it relies heavily on the mesh structure, and is not directly applicable to other representations, such as point clouds. Second, our method depends on a critical hyperparameter, which is the number of samples. We have observed that 3000 samples perform well on a very wide range of settings, but it would be interesting to investigate the optimal number, depending on the size of the spectral basis. Furthermore, we use Poisson sampling as advocated in [15], which gives good results in practice. However, the optimal choice of the sampling procedure, depending on the geometric properties of shapes under consideration, would be an equally interesting avenue for investigation. Lastly, an out-of-core implementation, capable of handling meshes with tens of millions to billions of vertices, while possible in principle, would be an excellent practical future extension of our approach. **Acknowledgments** The authors thank the anonymous reviewers for their valuable comments and suggestions. Parts of this work were supported by the ERC Starting Grant No. 
758800 (EXPROTEA) and the ANR AI Chair AIGRETTE.
2310.13568
Local symmetry groups for arbitrary wavevectors
We present an algorithm for the determination of the local symmetry group for arbitrary k-points in 3D Brillouin zones. First, we test our implementation against tabulated results available for standard high-symmetry points (given by universal fractional coordinates). Then, to showcase the general applicability of our methodology, we produce the irreducible representations for the ``non-universal high-symmetry" points, first reported by Setyawan and Curtarolo [Comput. Mater. Sci. 49, 299 (2010)]. The present method can be regarded as a first step for the determination of elementary band decompositions and symmetry-enforced constraints in crystalline topological materials.
Emanuele Maggio, Andriy Smolyanyuk, Jan M. Tomczak
2023-10-20T15:06:19Z
http://arxiv.org/abs/2310.13568v1
# Local symmetry groups for arbitrary wavevectors ###### Abstract We present an algorithm for the determination of the local symmetry group for arbitrary \(\mathbf{k}\)-points in 3D Brillouin zones. First, we test our implementation against tabulated results available for standard high-symmetry points (given by universal fractional coordinates). Then, to showcase the general applicability of our methodology, we produce the irreducible representations for the "non-universal high-symmetry" points, first reported by Setyawan and Curtarolo [Comput. Mater. Sci. 49, 299 (2010)]. The present method can be regarded as a first step for the determination of elementary band decompositions and symmetry-enforced constraints in crystalline topological materials. ## I Introduction Topological materials have entered the centre stage of both theoretical [1; 2; 3; 4; 5] and experimental [6; 7; 8; 9; 10] investigations in condensed matter physics, thanks to possible applications they offer [11; 12; 13; 14] and to the vast array of conceptual challenges they pose. Limiting the discussion to theoretical aspects, we can summarise the most fundamental question as follows: given a specific material (_i.e._ a crystalline solid with a definite chemical composition and stoichiometry) and its (weakly interacting) band structure, is it possible to continuously connect it to an atomic limit? In other words: when can one represent the electronic structure of the material in terms of its chemical constituents and when do single particle dispersions emerge that define globally a (topologically) non-trivial state of matter? Generalising the original approach by Dyson [15], efforts for classifying systems of free electrons with and without time-reversal symmetry have come to fruition for systems without discrete spatial symmetries [16]. Towards a more general classification, discrete spatial symmetries pose considerable difficulties to incorporate in full generality [17], yet, they appear to protect the topological nature of systems featuring an inversion centre [18], mirror reflections [19; 20] or non-symmorphic symmetry elements [21; 22; 23; 24; 25]. Further, to study topological semimetals, the concept of band crossing has been often used, which implies identifying a certain symmetry (that must be preserved along the energy band) and any obstruction of the band connectivity at high symmetry \(\mathbf{k}\)-points in a way that preserves said symmetry [26; 27]. On the other hand, for the class of Weyl semimetals, the application of symmetry criteria is problematic, since, owing to the degeneracies of the electronic wavefunction with spin-orbit coupling, such band crossings can occur at any point in reciprocal space [28]. Recently, in chiral crystals, the interplay at high symmetry points between structural symmetries and time inversion has been investigated [29], thus allowing the identification of symmetry-enforced band crossings also for these systems [30]. Additionally, a connection has been highlighted between topological invariants, which correspond to certain values of symmetry indicators, and Berry phases evaluated over closed loops in the Brillouin zone [28]. Systematic approaches to the identification of symmetry indicators for topological materials have been developed recently [31; 32; 33; 34; 35; 36; 37]: a common theme of these methods is how the materials' symmetries are taken into account through their action on the single particle wavefunctions and the corresponding energy dispersion bands. 
Specifically, the overarching rationale for identifying a topological material is through an obstruction of the decomposition of energy band dispersions in reciprocal space into representations of the (topologically trivial) atomic insulator, that is, to de termine its elementary band decomposition. This approach banks on a seminal idea, due to Zak [38, 39], where space group representations are induced starting from either those of high symmetry points in reciprocal space, or Wyckoff positions in direct space. For a topologically trivial material, it is surely possible to decompose one such representation in terms of the elementary bands, and an obstruction to do so, identifies a topological band structure. The key ingredient for either the identification of symmetry indicators or the decomposition into elementary bands is the local symmetry group of the \({\bf k}\)-points considered, a. k. a. the little group of the wavevector. These groups have long been tabulated [40] for a conventional set of \({\bf k}\)-points, and recently [41] a digitalised version of the old tables has been produced. The same tables are also accessible on the Bilbao Crystallographic Server [42]. In a detailed study Setyawan and Curtarolo [43] report additional high symmetry \({\bf k}\)-points [44] necessary to identify continuous dispersion paths in reciprocal space. More recent efforts [45, 46] have further relaxed the definition for the high symmetry \({\bf k}\)-points and provided a common frame for different crystallographic conventions. For all the newly tabulated \({\bf k}\)-points, the coordinates (reported in Figs. 3-10 in the primitive Wigner-Seitz cell) come to depend on the lattice parameters [43, 45]. We refer to these \({\bf k}\)-points as "non-universal" to contrast them with the conventional set of high symmetry points for which the coordinates can be represented by universal fractions independent of the lattice parameters. It must be pointed out that these "non-universal" \({\bf k}\)-points have the same stabiliser group as the high symmetry line they occupy and they can be chosen as line representatives if also the line lies on the Brillouin zone boundary; on the other hand, if such "non-universal" \({\bf k}\)-points are the only intersection with the Brillouin zone boundary, the corresponding local symmetry group will have to include additional translations that act trivially on \({\bf k}\)-points inside the Brillouin zone, hence resulting in a different local symmetry group than the rest of the high-symmetry line. In Figure 1, we schematically exemplify the distinction between "non-universal" and universal \({\bf k}\)-points: while for the shown body-centred tetragonal Brillouin zone for lattice constants \(a>c\), _e.g._, the P-point \([\frac{1}{4},\frac{1}{4},\frac{1}{4}]\) is a universal rational momentum-coordinate, the "non-universal" Z-point [\(\eta\), \(\eta\), \(-\eta\)] explicitly depends on the lattice parameters as \(\eta=(1+c^{2}/a^{2})/4\). In all the above approaches the high symmetry points are identified through the action of space group operations that leave it invariant (up to the addition of a reciprocal lattice vector), _i.e._ via the \({\bf k}\)-point stabiliser \(G_{\bf k}\). Yet, as we have already highlighted above, the key ingredient through which \({\bf k}\)-points enter the classification of electronic band structures is the irreducible representation of the little group of the wavevector, \(G_{\bf k}^{*}\). 
However, an evaluation of the latter poses a challenge for the usual method (see section II.2 for a summary) at "non-universal" k-points, since also the corresponding little group of the wavevector turns out to depend on structural details. On the Bilbao Crystallographic server this issue is avoided by reporting only the stabiliser group of the symmetry line, which implies that the irreducible representations will not contain the translational elements that act on the Brillouin zone boundary. The characters corresponding to these translations can be obtained easily in the case of a split extension, but they are less straightforward to evaluate for non-symmorphic space groups. While neglecting them might suffice for most practical applications, in this work we outline and implement a theory that treats universal and "non-universal" \({\bf k}\)-points on an equal footing by proposing an algorithm that can identify the little group for an arbitrary \({\bf k}\)-point. Specifically, we apply the method to the lattice parameter-dependent high-symmetry points and present the character tables defining their irreducible representations; the explicit matrix representations are also reported in the supplementary information. The paper is organized as follows: we summarise in section II.2 some background about the conventional method to construct \(G_{\bf k}^{*}\), and in section II.3 we instead devise a more general approach for identifying group extensions, without any dependence from the lattice parameters. Our algorithm allows to consider all high symmetry points for the 230 space groups in 3 dimensions. Details of our computational implementations are reported in section III and we finally list the groups \(G_{\bf k}^{*}\) and their character tables in section IV. ## II Theory ### Background In this section we provide for completeness some background for the ensuing presentation in section II.3, and we refer to Refs. [47; 48] for a more detailed discussion. To ease the notation, we drop the subscript \(\mathbf{k}\) from the groups introduced previously, hence we deal in general with an Abelian group \(M\) that is a normal subgroup of \(G^{*}\), whereas \(G\cong G^{*}/M\) need not be a subgroup of \(G^{*}\). We assume that there is a homomorphism \(\pi:G^{*}\to G\) that makes the following sequence of groups \[e\to M\to G^{*}\xrightarrow{\pi}G\to e \tag{1}\] exact. That means that \(\ker\{\pi\}=M\), hence we can find a transversal (_i.e._ a set of coset representatives) for \(G\) in \(G^{*}\), \(t:G\to G^{*}\), such that \(t(e)=e\), with \(e\) the identity element (of the appropriate group). The elements of \(G^{*}\) can then be written as \((t(x),m)\) for \(x\in G\) and \(m\in M\). For any two elements \(x,y\in G\) one has \(\pi(t(xy))=xy=\pi(t(x))\pi(t(y))=\pi(t(x)t(y))\), hence there is a unique element \(\mu(x,y)\in M\) such that \[t(xy)\mu(x,y)=t(x)t(y). \tag{2}\] The elements \(\mu\) of \(M\) are 2-cocycles, but also known as Schur's multipliers and represent the "obstructions" in \(G^{*}\) that do not allow \(G\) to be a subgroup of \(G^{*}\). Since the little group of \(\mathbf{k}\) (here: \(G^{*}\)) comprises of both point group operations and translations, its identification is equivalent to the construction of the correct group extension of \(M\) by \(G\), where \(M\) can now be identified with the subgroup of lattice translations. We find it more convenient to stick to the multiplicative notation for elements of \(M\), instead of using the common additive notation. 
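As a simple illustration of the product rule in Eq. (2) (our own toy example, not one of the cases tabulated below), consider a glide operation written in Seitz notation, \(g=(x\overline{y}z|00\frac{1}{2})\), whose point part is the mirror \(m_{y}\). Choosing the transversal \(t(m_{y})=g\) and \(t(e)=e\), one finds \[t(m_{y})\,t(m_{y})=(x\overline{y}z|00\frac{1}{2})^{2}=(xyz|001),\qquad\text{so that}\qquad\mu(m_{y},m_{y})=(xyz|001),\] since \(m_{y}m_{y}=e\) and \(t(e)=e\). The Schur multiplier \(\mu(m_{y},m_{y})\) is thus a non-trivial lattice translation; obstructions of exactly this kind are what prevent \(G\) from being realised as a subgroup of \(G^{*}\) for non-symmorphic space groups.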
A defining feature of the group \(G\) is to leave the lattice \(M\) invariant, and if we indicate the action of an element \(x\in G\) on \(m\in M\) by \(m^{x}=m^{t(x)}\), we have a homomorphism from \(G\) to the group of transformations of \(M\) into itself \(\alpha:G\rightarrow\text{Aut}(M),x\mapsto x^{\alpha}\), which will be required in section II.3. ### Summary of the standard approach The method of choice in solid state physics for the construction of the little group of the wavevector is due to Herring [49; 50] and it has been applied to all 230 space groups in Ref. [40] for the restricted list of \(\mathbf{k}\)-points with rational coordinates. The method produces an extension of the stabiliser of the \(\mathbf{k}\)-point by introducing the map \(s:M\rightarrow\mathbb{C}\), \(s:=\text{exp}[2\mathrm{i}\pi\mathbf{k}\cdot\mu]\) that effectively identifies non-trivial translations; the value of \(s\) is then adjoined to each element of \(G\) to label the entries of the new group \(G^{*}\), whose multiplication table is generated following the point group multiplication rule for the first index and the product rule in Eq. (2) with the definition above for the second index [50]. The action of these translations on the point group operations is trivial for symmorphic space groups, meaning that the resulting little group of the wavevector is the direct product of the stabiliser with the cyclic group generated by the relevant translations. For non-symmorphic space groups this may not be the case and \(G^{*}\) is in general a group with a more complex structure, with non-trivial relations involving point group operations and translations. The group's order can also be determined from knowledge of the \(\mathbf{k}\)-point alone [51]: \(|G^{*}|=p|G|\) with \(p=\text{lcm}(p_{1},p_{2},p_{3})\) for a \(\mathbf{k}\)-point whose coordinates are the rational numbers \(\mathbf{k}=[\frac{q_{1}}{p_{1}},\frac{q_{2}}{p_{2}},\frac{q_{3}}{p_{3}}]\). For the "non-universal" \(\mathbf{k}\)-points such an approach is clearly not viable, since the order of the resulting group would depend on the crystallographic lattice constants, rather than be uniquely determined by the symmetries at play for that particular wavevector. In the following section we suggest an alternative construction for the identification of the little group of the wavevector, thus filling a conspicuous gap in the recent literature. Fig. 1: Schematic representation of the classification for \(\mathbf{k}\)-points on the Brillouin zone boundary. Here we provide a consistent implementation for the two cases depicted. ### Central Extension's Automorphism group To overcome the limitations stated above, we propose to leverage an idea due to Wells [52; 53] which allows to characterise the group extension \(G^{*}\) by studying the transformations that map \(G^{*}\) onto itself, that is by studying the group \(\text{Aut}(G^{*})\). In particular, for transformations \(\theta\in\text{Aut}(M)\) and \(\phi\in\text{Aut}(G)\), there might not be a transformation \(\gamma\in\text{Aut}(G^{*})\) that induces the pair \((\theta,\phi)\). A pair \((\theta,\phi)\in\text{Aut}(M)\times\text{Aut}(G)\) is compatible if it fulfils the condition \(\theta x^{\alpha}\theta^{-1}=\left(x^{\phi}\right)^{\alpha}\) for all \(x\in G\), in other words, \((\theta,\phi)\) are compatible if they preserve the conjugation action of \(G\) on \(M\). Compatible pairs form a group [47; 53] \(\text{Comp}(G^{*},M)\) contained in \(\text{Aut}(M)\times\text{Aut}(G)\). 
So, if there is an automorphism \(\gamma\) of \(G^{*}\) that keeps \(M\) fixed (as a set, so elements of \(M\) could be permuted among each other) then one could define the map \(\tau:\text{Aut}_{M}(G^{*})\rightarrow\text{Aut}(M)\times\text{Aut}(G)\) as \(\tau(\gamma)=(\theta,\phi)\), which is crucial for the definition of the following exact sequence constructed by Wells [52]: \[e\to Z_{\alpha}^{1}\left(G,M\right)\xrightarrow{\psi}\text{Aut}_{M}\left(G^{*}\right)\xrightarrow{\tau}\text{Comp}(G^{*},M)\to H_{\alpha}^{2}(G,M)\to e. \tag{3}\] The sequence above connects the groups that we have just constructed with the second cohomology group \(H^{2}\) and with the group of 1-cocycles \(Z^{1}\). In particular, since the sequence is exact, one has that \(\mathrm{im}\{\psi\}=\ker\{\tau\}\), thus it is sufficient to study the image and the kernel of \(\tau\) to fully characterise the transformations of \(G^{*}\) that we are interested in. Since the sequence above is exact, these transformations are all mapped into the identity element of \(H_{\alpha}^{2}\). More specifically, the following theorem allows to explicitly construct the 2-cocycles that specify our little group of the wavevector [52]: if \(\gamma\in\text{Aut}_{M}(G^{*})\) then there is a triplet \((\theta,\phi,\chi)\in\text{Aut}(M)\times\text{Aut}(G)\times V/M\) such that \[\gamma\left((t(x),m)\right)=(\phi(x),\chi(x)\theta(m)) \tag{4}\] \[\theta\left(m^{x}\right)=\theta(m)^{\phi(x)} \tag{5}\] \[\mu\left(\phi(x),\phi(y)\right)\theta\left(\mu(x,y)^{-1}\right)=\left(\chi(x)^{-1}\right)^{\phi(y)}\chi(y)^{-1}\chi(xy). \tag{6}\] The proof can be found in Ref. [53], but let us comment briefly on the quantities involved: \(\chi(x)\) is a map from \(G\) to translations modulo \(M\) (\(V\) denotes the vector space of translations defined in the usual sense), it will depend in general on the choice made for the coset representative \(t(x)\) of the point group elements \(x\in G\), but once this choice is made \(\chi(x)\) is unique and it is defined as \(\gamma((t(x),e))=(\phi(x),\chi(x))\). Clearly, \(\chi\) will also be acted on by \(\theta\), but this action can be basically recast as a different choice for \(t(x)\), hence we omit this dependence in the equations above. 
In practice, \(\chi(x)\) can either be chosen to be identically zero if the space group is symmorphic and the Bravais lattice is primitive, or it is a known function otherwise. Eq. (5) is a restatement of the compatible pair condition, whereas Eq. (6) provides a transformation law for the 2-cocycles under the action of a compatible pair: the left hand side of Eq. (6) is a 2-cocycle (let's call it \(\mu^{(\theta,\phi)}(x,y)\)). Then, the condition \(\gamma\in\ker\{\tau\}\) corresponds to the case \(\mu^{(\theta,\phi)}(x,y)=e\), _i.e._ the extension \(G^{*}\) is a direct product of the stabiliser of the wavevector with a group of translations, whereas for \(\gamma\in\mathrm{im}\{\tau\}\) one has \(\mu^{(\theta,\phi)}(x,y)\neq e\) and the calculation of the little group of the wavevector can proceed in analogy with Herring's method. When \(\gamma\in\ker\{\tau\}\), Eq. (6) is identically zero and a generator \(t\) for the translation group is needed; to this end we use Hopf's formula as reported in Ref. [54]. The element \(t\) will then belong to \((G^{*})^{\prime}\cap M\), where \((G^{*})^{\prime}\) is the commutator subgroup of \(G^{*}\). If the extension is Abelian, \((G^{*})^{\prime}\) is trivial and no such translation exists. In this case the local symmetry group of \(\mathbf{k}\) coincides with its stabiliser. On the other hand, when \(t\neq e\), we are left with the task of determining the order of such a translation group, that is the integer \(p\) such that \(t^{p}=e\). While in the Herring's method such a choice is made (heuristically) as summarised in section II.2, for the case of "non-universal" \(\mathbf{k}\)-points we make the estimate for \(|G^{*}|=p|G|\), with \(p\) the smallest prime factor of the point group order \(|G|\), thus removing the lattice parameters' dependence in the \(\mathbf{k}\)-point coordinates: we denote this "abridged" wavevector \(\tilde{\mathbf{k}}\). In this way, we are separating out the effect of the non-trivial translations on the stabiliser of the \(\mathbf{k}\)-point from the trivial translations that depend on the specifics of the lattice parameters. This choice is motivated by the fact that an upper bound on the order of \(G^{*}\) is given by the order of the semidirect product \(M\rtimes G\), which is just the product of the orders of the two groups \(M\) and \(G\). The constraint on \(|M|\) in this case originates from the requirement that \(|M|\) and \(|G|\) ought not to be coprime: if that were the case the resulting second cohomology group \(H^{2}(G,M)\) would be trivial and the group extension \(G^{*}\) would split. On the other hand, with our choice we ensure that a more complex structure of the extension group could be captured (since \(H^{2}(G,M)\) is not trivial) while keeping the order of the resulting group minimal. In section IV we will provide an example when the minimal choice for the group order is too restrictive and values of \(p^{n}\) with \(n>1\) natural, have to be considered instead. Our reasoning is, perhaps, better explained with an example: let us consider the X-point for the Orthorhombic body-centred lattice, with coordinates \([-\xi,\xi,\xi]\) (see Figure 6). By setting the lattice parameters to \(a=\frac{1}{5},\ b=\frac{1}{4}\ c=\frac{1}{3}\), one gets \(\xi=\frac{17}{50}\), hence the phase factor \(s=\exp[2\mathrm{i}\pi\mathbf{k}\cdot m]\) will be equal to 1 only for multiples of 50. 
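This arithmetic is easy to check; the short Python sketch below uses exact fractions and assumes the Setyawan–Curtarolo expression \(\xi=(1+a^{2}/c^{2})/4\) for this coordinate (an assumption on our part, following Ref. [43], rather than a formula quoted above).

```python
from fractions import Fraction

a, b, c = Fraction(1, 5), Fraction(1, 4), Fraction(1, 3)   # lattice parameters from the text

# Assumed expression for the body-centred orthorhombic X-point coordinate
# (b does not enter this particular coordinate):
xi = (1 + a**2 / c**2) / 4
print(xi)                      # 17/50, as quoted above

# s = exp[2*pi*i*k.m] at k = [-xi, xi, xi] equals 1 only when the integer multiplying xi
# is a multiple of its denominator:
q = xi.denominator
assert q == 50 == 2 * 5**2     # an order-2 part and an order-25 part
```

The denominator factorises as \(2\cdot 5^{2}\), which is exactly the split into an order-2 and an order-25 translation subgroup used in the argument that follows.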
A translation group of order 50 contains a subgroup of order 2 and a subgroup of order 25 (the order of a subgroup must divide the order of the group), with the latter acting trivially on \(G\), because \(G\) can not contain five-fold rotations (owing to the crystallographic restriction) and thus its order \(|G|\) will be coprime to \(5^{2}\). The translation subgroup of order 2, on the other hand, can act non-trivially on \(G\) and the resulting extension \(G^{*}\) is what we tabulate. In order for the \(s\) coefficients to be able to reflect the periodicity of the translation subgroup acting non-trivially, we evaluate them using the "abridged" wavevector \(\tilde{\mathbf{k}}=\frac{1}{p}[1,1,1]\), for the example at hand, and where \(p=2\) in this example, has been introduced previously. In general, the construction of the abridged vector replaces only the lattice dependent coordinates with the factor \(\frac{1}{p}\). To give an overall summary of the approach we employ (which is discussed in refs. [47, 53]) we can then say that in order to construct an extension \(G^{*}\) the knowledge of the 2-cocycles \(\mu(x,y)\) is required owing to Eq. (2): to get a handle on these translations it is useful to study the behaviour of the resulting extension (in Eq. (1)) under automorphisms. A special class of such automorphisms is \(\mathsf{Comp}(G^{*},M)\): the different (non-isomorphic) extensions \(G^{*}\) correspond to the orbits of \(\mathsf{Comp}(G^{*},M)\) on \(H^{2}_{\alpha}(G,M)\) (for a proof see SS 2.7.4 in ref. [47]). The knowledge of \(H^{2}_{\alpha}(G,M)\) is thus not necessary, as we only need to characterise the behaviour of \(\gamma\in\mathsf{Aut}_{M}(G^{*})\) with respect to it. To this end, one employs the exact sequence in Eq. (3), with the map \(\tau\) playing an important role, since it allows to enumerate the two cases that can occur: if \(\gamma\in\ker\{\tau\}=\mathrm{im}\{\psi\}\), then the automorphism \(\gamma\) takes values in \(Z^{1}_{\alpha}\), implying \(\chi(xy)=\chi(x)^{y}\chi(y)\) and thus Eq. (6) is equal to the identity. In this case the corresponding extension \(G^{*}\) splits and we have suggested a scheme for the evaluation of such direct product in the previous paragraph. For the case when \(\gamma\in\mathrm{im}\{\tau\}\) one has that the function \(\chi(x)\) forms a 2-coboundary and Eq. (6) provides already a recipe for the construction of the Schur's multipliers as the function \(\chi(x)\) is a known function over the entire space-group. ## III Computational methods In our code, we tabulate the generators of the 230 space groups in the conventional unit cell settings in accordance with the International Tables for Crystallography (vol. A) [55]. Consistently, we identify the \(\mathbf{k}\)-point coordinates in the Wigner-Seitz reciprocal lattice unit cell of the additional wavevectors reported in Ref. [43]. For completeness we report in Figure 3-10 these Brillouin zones and the coordinates of the wavevectors with respect to the primitive basis vectors (also reproduced in the pictures). The algorithm then proceeds to construct the stabiliser of the wavevector and hence its little group, following either of the methods reported in section II. In a limited number of instances, the local symmetry group generated can be bigger than our theoretical estimate outlined previously: this happens when the non-trivial translation \(\mu^{(\theta,\phi)}\) is collinear with the centering vectors in the non-primitive unit cell. 
For non-symmorphic space groups, the action of the centering translations can be non-trivial and the resulting order of the extension exceeds the estimate by a factor proportional to the number of centering vectors present in the resulting group, typically by a factor two, thus leading to a local symmetry group of order \(p^{2}|G|\). In order to better investigate the group structure (see the discussion in section IV), the algorithm can list all subgroups of index \(n\), thanks to a one-to-one correspondence between (standardised and complete) coset tables with \(n\) rows and the subgroups of the given group; for a detailed exposition of the backtrack search strategy employed in the algorithm's implementation we refer to chapter 5 in Refs. [56, 47] or to Ref. [57]. Once the little group of the wavevector has been constructed we determine its irreducible representations (over the complex numbers) by computing its character table. To this end we have implemented the Dixon algorithm [58, 59, 60, 47], which makes use of modular arithmetic to efficiently diagonalise the class constant matrices. In order to simultaneously diagonalise the class constant matrices, the order in which the individual matrices are fed into the algorithm plays a role, since it is possible for a given matrix to have eigenvalues with multiplicity bigger than one, and such a degeneracy can be resolved only by a specific choice for the next matrix (see SS 7.7 in ref. [47]). Furthermore, the expression for the class matrices themselves depends on the choice for the group's generators [50]. Selecting the generating elements and the permutation order of the matrices at random typically allows us to retrieve the full character table within a few attempts; should that not be the case, we proceed by computing the characters of the group's Abelianisation and then enforce congruences among characters of high dimensional irreducible representations still missing, as suggested in Ref. [61]. In Figure 2 we report a flow chart for our implementation. We cross-check our implementation against the tabulated results for universal \(\mathbf{k}\)-points in Ref. [40]. Additionally, we also check that the character table preserves the group multiplication rule, that is, the computed characters are actually a homomorphism \(\zeta:G^{*}\to\mathbb{C}\). This is enforced by requiring the convolution of characters to fulfill the orthogonality condition [62]: \[\sum_{g\in G^{*}}\overline{\zeta_{r}}(g)\zeta_{s}(hg)=\frac{\delta_{rs}\zeta_{r}(h)|G^{*}|}{\zeta_{r}(e)} \tag{7}\] for each element \(h\in G^{*}\) and for all irreducible representations \(r,s\); the overbar denotes complex conjugation in the expression above. Our algorithm automatically checks Eq. (7) when generating the character table, besides the usual orthogonality relations, which are special cases of the equation above. We point out that verifying Eq. (7) amounts to a global check of the group structure, and it can only be tested laboriously taking as input the character tables provided in Ref. [40], since the corresponding abstract groups are tabulated therein. Finally, we touch on the computational methods to obtain the actual representation matrices starting from the character table. The algorithm that we are about to discuss is largely borrowed from the articles by Blokker [63, 64] and we also refer to the classic ref. [62] for an in-depth discussion of the theoretical aspects. 
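Before turning to that algorithm, the following standalone Python toy example (not the implementation described above) illustrates the consistency check of Eq. (7) on the character table of \(S_{3}\); the characters of \(S_{3}\) are real, so the complex conjugation is omitted.

```python
from itertools import permutations

G = list(permutations(range(3)))                 # S3 as permutation tuples

def mul(p, q):                                   # composition (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def conj_class(p):                               # classes of S3, labelled by fixed points
    fixed = sum(p[i] == i for i in range(3))
    return {3: "e", 1: "transposition", 0: "3-cycle"}[fixed]

# character table of S3: trivial, sign and the 2-dimensional standard representation
chars = {
    "trivial":  {"e": 1, "transposition": 1,  "3-cycle": 1},
    "sign":     {"e": 1, "transposition": -1, "3-cycle": 1},
    "standard": {"e": 2, "transposition": 0,  "3-cycle": -1},
}

def chi(r, g):
    return chars[r][conj_class(g)]

# Eq. (7): sum_g chi_r(g) * chi_s(h g) = delta_rs * chi_r(h) * |G| / chi_r(e)
ok = all(
    sum(chi(r, g) * chi(s, mul(h, g)) for g in G)
    == ((chi(r, h) * len(G)) // chars[r]["e"] if r == s else 0)
    for r in chars for s in chars for h in G
)
print("Eq. (7) holds for every h and every pair (r, s):", ok)
```

In the pipeline described above, the same identity is of course checked over the computed little groups \(G^{*}\) rather than over \(S_{3}\).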
Denoting by \(\Gamma^{r}\) the regular representation of a group \(G\), one can form the sum of elements in the \(i^{\mathrm{th}}\) conjugacy class, which we call \(\mathfrak{C}_{i}\). Then the operators \(\mathfrak{p}_{j}\) are uniquely determined idempotent operators defined in the group algebra \(\mathbb{C}G\): \[\mathfrak{p}_{j}=\frac{d_{j}}{|G|}\sum_{i=1}^{n}\overline{\zeta_{j}}(g_{i})\mathfrak{C}_{i} \tag{8}\] where \(n\) is the number of conjugacy classes, \(d_{j}=\zeta_{j}(e)\) is the representation degree and \(g_{i}\) is a representative element for the \(i^{\mathrm{th}}\) class. The set of class functions is spanned by the centre of the group algebra \(Z[\mathbb{C}G]\), hence our objective is to form a representation for \(Z[\mathbb{C}G]\) by projecting the regular representation over the basis \(\mathbf{t}_{i}\) for the range of the operator \(\mathfrak{p}_{i}\): \[\Gamma_{i}^{Z}(g)=\mathbf{t}_{i}^{\dagger}\Gamma^{r}(g)\mathbf{t}_{i}.\] The matrices \(\Gamma_{i}^{Z}\) will in general be reducible representations of degree \(d_{i}^{2}\) containing the irreducible representation \(\Gamma_{i}\) only \(d_{i}\) times. The associated eigenvalues will then have multiplicity \(d_{i}\) and the corresponding eigenvectors form an orthonormal basis \(\mathbf{B}_{g}^{(i)}\) for the subspace of \(Z[\mathbb{C}G]\) associated with the group element \(g\in G\). Since the projection operators commute with the group elements, to obtain the other entries of the basis one can act with the remaining elements of \(G\) to span the whole centre of the group algebra. Finally, the irreducible representation matrices are obtained by a similarity transform: \[\Gamma_{i}(g)=(B_{g}^{(i)})^{\dagger}\Gamma_{i}^{Z}(g)B_{g}^{(i)}. \tag{9}\] In order to induce the space group representations, a further constraint has to be imposed on the irreducible representations obtained in Eq. (9): \(\zeta(\mu)\neq\zeta(e)\), that is, the lattice translation \(\mu\) must not act trivially by having its character belong to the kernel of the irreducible representation. All matrix representations for the local symmetry groups computed in this work are reported in the supporting information, and the selection of the allowed representations is left to the final user. Figure 2: Flow chart for the algorithm computing the character tables. A preliminary set of operations is carried out at the beginning of the code, which includes the evaluation of conjugacy classes and the maximum allowed dimension for irreducible representations (Irreps) \(d_{max}\). If the group is not Abelian, the Dixon algorithm tries to compute the higher dimensional (HD) Irreps, _i.e._ Irreps with degree bigger than 1. If the number of iterations \(n_{t}\) exceeds a threshold (\(N_{max}\)), the program attempts to compute HD characters using modularity constraints in the search. ## IV Results and discussion We identify the (non-trivial) little group of the wavevectors corresponding to those first obtained in Ref. [43] for all space groups having the relevant Bravais lattice. These groups are listed in tables 1 to 6; for completeness we also report the corresponding Brillouin zones and wavevector coordinates in Figs. 3 to 10, as we follow the standard crystallographic convention of the International Tables of Crystallography. In particular, for the orthorhombic lattice we consider only the so-called standard setting with \(a<b<c\) out of the six settings that are allowed by symmetry in orthorhombic systems [55]. 
Any non-standard choice for the lattice vectors orientation simply rotates the Brillouin zone in reciprocal space, leaving its shape unaffected, and so are the local symmetry groups at the "non-universal" wavevectors. For each of the space groups considered we report the generators of the Abelian little groups (tables XXXVIII-XLIII, the notation \(\mathbf{0}\) therein indicates the null translation) and character tables for wavevectors that have a non-Abelian little group (tables VII-XXXVII, the notation \(\zeta_{n}\) therein indicates the \(n\)-th root of unity): we decided to individually tabulate the character table for each of these groups, even though two (or more) of them might be isomorphic. This is to make explicit the connection between the classes of the abstract group and the symmetry operation of the specific little group at hand. In the previous section we have mentioned the instance of our algorithm identifying a local symmetry group of order exceeding the simple estimate provided in section 2, which occurs when the inclusion of centering vectors (that belong to the same coset as the identity element once the lattice translations have been factored out the space group) is necessary when non-symmorphic symmetry elements are present. As an example, consider the A wavevector for space group #63: if no restrictions are imposed on the local symmetry group order, the program identifies, starting from a point symmetry of order 4, a group of order 16, featuring two centering vectors \(t_{1}=(\frac{1}{2},\frac{1}{2},0)\) and \(t_{2}=(\frac{1}{2},\frac{1}{2},1)\). The translations \(t_{1}\) and \(t_{2}\) are non-equivalent, as it can be checked by evaluating the Herring phase factors, and there is no group operation that conjugates them, as such they must belong to singleton conjugacy classes; since the group does not contain any symmetry element of order 8, one can readily identify the group in question as the Pauli group \(\mathbb{Z}_{4}\circ\mathsf{D_{8}}\). To further verify the correctness of the group structure presented above we compute the presentation for the group \(\mathbb{Z}_{4}\circ\mathsf{D_{8}}\): this consists of the generating elements: 1. \((xyz|t_{2})\), 2. \((xyz|110)\), 3. \((x\overline{yz}|000)\), 4. \((xy\overline{z}|00\frac{1}{2})\), together with a set of relators (computed following ref. [65]) that enforce constraints on the group structure. Among these relators, particularly crucial is the relationship 434 = 32 (where the numbers refer to the generators as listed above and the group product is the Seitz rule), thanks to this relator one can identify the two translations \((xyz|001)\) and \((xyz|110)=t_{H}\): this is a particular instance of the fact that the Herring mapping for the construction of the extension \(G^{*}\) is not necessarily consistent with the Seitz rule in Eq. (2) being taken as the group operation, as pointed out in Refs. [66, 67]. This happens since for an entry \([x,s]\) there might be more than one translation \(m\in M\) that are identified by the same value of the Herring phase factor \(s\). The quaternion group \(\mathsf{Q_{8}}\) can hence be constructed from the two elements \(c=(xy\overline{z}|\frac{1}{2}\frac{1}{2}\frac{1}{2})\) and \(b=(x\overline{y}z|00\frac{1}{2})\); in particular one can verify that \(c^{4}=e,b^{2}=c^{2}=t_{H}\) and that the action of one element on the other is non-trivial with \(bcb^{-1}=c^{-1}\). 
Another subgroup that can be readily identified is \(\mathsf{D_{8}}\), by choosing the elements \(a=(xy\overline{z}|00\frac{1}{2})\) and \(b\) as above. By direct calculation one can observe that \(a^{2}=b^{4}=e\) and \(aba=b^{-1}\), as long as the Herring translation is now taken to be \(t_{H}=(xyz|001)\). The overall symmetry group \(\mathbb{Z}_{4}\circ\mathsf{D_{8}}\) contains both groups \(\mathsf{Q_{8}}\) and \(\mathsf{D_{8}}\) as normal subgroups of index two and only by considering this larger symmetry group one has a complete picture of the symmetries at play. For space group #63, these can thus be rationalised as arising from the presence of two equivalent translation vectors (along the \(z\)-direction and along the \(x+y\)-direction) unrelated by the space group operations. During the submission stage, we became aware of a recent publication [68] that addresses a related problem of evaluating the action of non-symmorphic symmetry operations on generic wavevectors with the aid of projective representations. Since symmetries in direct space (considered in this work) and in reciprocal space (considered in ref. [68]) can be put in correspondence with one another, projective representations and the identification of the appropriate extension with the Herring method are concurrent for the determination of the local symmetry group's irreducible representations. In conclusion, we have proposed an algorithm for the evaluation of the local symmetry group for arbitrary \(\mathbf{k}\)-points, including those whose coordinates explicitly depend on lattice parameters and that for that reason could not be dealt with using the consolidated approach by Herring. We think that our code, that we are planning to release to the general public, could be a useful development to be integrated in the induction of space group representations and can help strengthen the connection between symmetry indicators and their evaluation as Berry phases along closed loops in the Brillouin zone [28], along which "non-universal" \(\mathbf{k}\)-points can be found. ###### Acknowledgements. The authors acknowledge support from the Austrian Science Fund (FWF) through project BandIIT P 33571. Calculations were performed in part on the Vienna Scientific Cluster (VSC). The authors acknowledge TU Wien Bibliothek for financial support through its Open Access Funding Programme.
2302.14016
Regularity of CR maps into uniformly pseudoconvex hypersurfaces and applications to proper holomorphic maps
We study regularity properties of CR maps in positive codimension valued in pseudoconvex manifolds which carry a nontrivial Levi foliation. We introduce an invariant which can be used to deduce that any sufficiently regular CR map from a minimal manifold into such a foliated target is either generically smooth or geometrically highly constrained, and to show generic smoothness of sufficiently regular CR transversal CR maps between pseudoconvex hypersurfaces. As an application, we discuss boundary regularity of proper holomorphic maps into bounded symmetric domains.
Josef Greilhuber, Bernhard Lamel
2023-02-27T18:15:34Z
http://arxiv.org/abs/2302.14016v1
Regularity of CR maps into uniformly pseudoconvex hypersurfaces and applications to proper holomorphic maps ###### Abstract. We study regularity properties of CR maps in positive codimension valued in pseudoconvex manifolds which carry a nontrivial Levi foliation. We introduce an invariant which can be used to deduce that any sufficiently regular CR map from a minimal manifold into such a foliated target is either generically smooth or geometrically highly constrained, and to show generic smoothness of sufficiently regular CR transversal CR maps between pseudoconvex hypersurfaces. As an application, we discuss boundary regularity of proper holomorphic maps into bounded symmetric domains. 2010 Mathematics Subject Classification: 32H40,32H02,32M15 ## 1. Introduction This paper is devoted to the study of regularity of CR maps into smooth Levi-degenerate hypersurfaces foliated by complex manifolds and the application of these results to boundary regularity of proper holomorphic maps in positive codimension. The positive codimensional case is much more challenging than the equidimensional case in many regards, but also has some specific features which make natural answers to regularity problems for mappings a bit different. We refer the reader to the discussion in [19], where the authors point out some of the salient points, and summarize here those which are important for our approach. We consider a CR map \(h\colon M\to M^{\prime}\) with \(M\subset\mathbb{C}^{N}\) a CR submanifold, and \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) a hypersurface (we shall simply refer to \(M\) as the source, and \(M^{\prime}\) as the target). All structures and manifolds in this paper are assumed to be smooth unless explicitly stated otherwise; our technique and our results extend to other categories as well, as we will outline after stating and discussing our results in the smooth setting first. For the purpose of this discussion, the following observations are important. First of all, the typical conclusion of a regularity statement in higher codimension is that of _generic smoothness_ of the map given some a priori regularity, i.e. smoothness on a dense, open subset. We cannot drop the a priori regularity across a certain threshhold, as for example Low [22] and Stensones [26] showed. The typical a priori bound that we are going to use are linear in the codimension of the map considered, which is typical for all known results except for the notable exception of mappings between spheres of "small" codimension, see e.g. Huang [12]. Automatic generic smoothness of all CR maps with such an a priori regularity follows if the target is of D'Angelo finite type (that is, if it does not contain any formal holomorphic curves), as shown in [20]. However, the condition that \(M^{\prime}\) is of finite type is definitely _not necessary_ if one excludes certain typical examples of non-smooth CR maps. For example, since there always exist a nowhere smooth CR function \(\varphi\) (actually, with arbitrary finite smoothness prescribed) near a strictly pseudoconvex points, if the target contains a complex curve \(\gamma(\zeta)\), one obtains a nowhere smooth CR map \(\gamma\circ\varphi\). However, as was already observed in [20] in the case where the target is the tube over the light cone, in many geometrically interesting situations, this type of behavior is the only exceptional example. Our first main theorem describes one such situation. 
In order to formulate it, we need to introduce an invariant measuring the (non)degeneracy of a foliation by complex manifolds. Consider a foliation \(\eta\) of \(M^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\) by complex manifolds \(\eta_{p}\), where \(\eta_{p}\) denotes the leaf of \(\eta\) containing \(p\). To each \(p\in M^{\prime}\), we associate \[\nu_{p}:=\max_{0\neq V_{p}\in T_{p}\eta}\dim_{\mathbb{C}}\ker\left(\bar{L}_{p} \to\mathbb{P}_{T\mathbb{C}^{N^{\prime}}/T\eta}(\bar{L}_{p}V)\right)-\dim_{ \mathbb{C}}\eta.\] Here \(\bar{L}_{p}V\) denotes the componentwise derivative at \(p\) of an (arbitrary) smooth extension \(V\) of \(V_{p}\), and we project onto the quotient space \(T\mathbb{C}^{N^{\prime}}\), where \(T\eta\subset T\mathbb{C}^{N^{\prime}}|_{M}\) denotes the tangent bundle of \(\eta\), i.e. \(T_{p}\eta=T_{p}\eta_{p}\). It turns out that this yields a well defined invariant \(\nu_{p}\) because as we shall show in section 3.1 the map associating to \(\bar{L}\in T^{0,1}M^{\prime}\) and \(V\in\Gamma(T\eta)\) the section \(\mathbb{P}_{T\mathbb{C}^{N^{\prime}}/T\eta}(\bar{L}V)\) of the quotient bundle \(T\mathbb{C}^{N^{\prime}}/T\eta\) is tensorial. **Theorem 1**.: _Let \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) be a uniformly pseudoconvex hypersurface with Levi foliation \(\eta\), satisfying \(\nu_{p^{\prime}}=0\) for all \(p^{\prime}\in M^{\prime}\), and let \(M\) be a connected minimal CR submanifold. Then any \(C^{N^{\prime}-1}\)-regular CR map \(h:M\to M^{\prime}\) is either generically smooth, or it maps \(M\) entirely into a single leaf of the foliation, i.e. \(h(M)\subseteq\eta_{h(p)}\) for any \(p\in M\)._ If \(\nu\) is nonzero, our second main theorem still guarantees automatic regularity if the number of positive Levi eigenvalues of the source manifold \(M\) is large enough and \(h\) is assumed to be CR transversal (which is automatically satisfied in many applications). **Theorem 2**.: _Let \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) be a uniformly pseudoconvex hypersurface, and \(M\subset\mathbb{C}^{N}\) a pseudoconvex hypersurface with at least \(n_{+}\) positive Levi eigenvalues. Then any CR-transversal CR map \(h:M\to M^{\prime}\) of regularity \(C^{N^{\prime}-n_{+}}\cap C^{2}\) is generically smooth near any point \(p\) which satisfies \(\nu_{h(p)}<n_{+}\)._ We note that under a different set of assumptions (which are, as we will see later, not directly related to our invariant \(\nu\)) Xiao [29, Thm. 1] obtains everywhere regularity: to be precise, in his case, he considers maps from strongly pseudoconvex hypersurfaces into uniformly pseudoconvex hypersurfaces of the same signature which are \(2\)-nondegenerate; the point is that such maps are automatically \(2\)-nondegenerate in the sense of [16], so that one can apply theorems from [15, 17]. We discuss the connection with our result later in section 4.1, and point out here that \(\nu=0\) implies, in particular, \(2\)-nondegeneracy of the target; our assumptions, however, do not imply that that the maps we are considering are \(2\)-nondegenerate. Let us furthermore point out the low codimension results in the paper of Kossovskiy, Xiao, and the second author [14]. 
As an application of Theorem 2 we obtain the following boundary regularity result for proper holomorphic maps: **Corollary 1**.: _Let \(\Omega\subseteq\mathbb{C}^{N}\) and \(\Omega^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\) be domains, and \(M\subset\partial\Omega\), \(M^{\prime}\subset\partial\Omega^{\prime}\) be two hypersurfaces contained in the respective domains' smooth boundary part. Assume that \(\Omega^{\prime}\) is uniformly pseudoconvex at \(M^{\prime}\), and \(M\) is pseudoconvex with at least \(n_{+}\) positive Levi eigenvalues. Then every holomorphic map \(H:\Omega\to\Omega^{\prime}\) which extends as a \(C^{N^{\prime}-n_{+}}\cap C^{2}\)-regular map to \(M\) and maps \(M\) into \(M^{\prime}\), is generically smooth (on \(\bar{\Omega}\)) near any \(p\in M\) satisfying \(\nu_{H(p)}<n_{+}\)._ In Section 5 we discuss in detail proper holomorphic maps into pseudoconvex domains as an application of Theorem 2, and prove a theorem which deals with maps into boundaries of classical symmetric domains. By a careful study of the geometry of the smooth boundary components, we can calculate the invariant \(\nu\) in each case, and obtain the following theorem, which significantly extends the results obtained by Xiao in [29]: the source manifolds we can consider are not required to be strictly pseudoconvex any longer. The price we have to pay is that we have to assume higher a priori regularity and only obtain generic smoothness. **Theorem 3**.: _Let \(M\subset\mathbb{C}^{N}\) be a \(C^{\infty}\)-smooth pseudoconvex hypersurface, and assume that its Levi form has exactly \(n_{+}\) positive eigenvalues everywhere. Denote by \(M^{\prime}\) the smooth part of the boundary of a classical symmetric domain \(\Omega\subseteq\mathbb{C}^{N^{\prime}}\). Then every CR-transversal CR map \(h:M\to M^{\prime}\) of regularity \(C^{N^{\prime}-n_{+}}\) is smooth on a dense open subset of \(M\), given that_ 1. \(\Omega=D_{I}^{m,n}\) _for_ \(m,n\geq 2\) _and_ \(n_{+}\subseteq\{m+n-3,m+n-2\}\)_,_ 2. \(\Omega=D_{II}^{m}\) _for_ \(m\geq 4\) _and_ \(n_{+}\subseteq\{2m-7,\ldots,2m-4\}\)_,_ 3. \(\Omega=D_{III}^{m}\) _for_ \(m\geq 2\) _and_ \(n_{+}=m-1\) _or_ 4. \(\Omega=D_{IV}^{m}\) _for_ \(m\geq 2\)_,_ \(M\) _is minimal and_ \(n_{+}\leq m-2\)_._ Let us note that Theorem 3 in particular applies to the setting of (appropriate) _boundary values of proper mappings between classical symmetric domains_. The conditions on the number of positive Levi eigenvalues given in Theorem 3 are sharp in the following sense: In the case of \(D_{I}^{m,n}\), \(D_{II}^{m}\) and \(D_{III}^{m}\), there are pseudoconvex hypersurfaces \(M\) satisfying \(n_{+}=m+n-4\), \(n_{+}=2m-8\) and \(n_{+}=m-2\), respectively, such that there exist nowhere smooth, but arbitrarily often continuously differentiable CR-transversal CR embeddings \(h:M\to M^{\prime}\). On the other hand, there exists no CR transversal map \(h:M\to M^{\prime}\) at all if the number of positive Levi eigenvalues of \(M\) exceeds the upper limit given in Theorem 3, which is just the number of positive Levi eigenvalues of \(M^{\prime}\). **Remark 1**.: We first remark that one can obtain results in the real-analytic category with exactly the same assumptions. The conclusion in this setting is that the map \(h\) extends to a holomorphic map in a full neighbourhood of an open, dense subset of the source manifold. For this, one uses the result of Mir [23] on real-analytic regularity. 
We also remark that if both source and target are real-algebraic, one can use the proof of the algebraicity result of Coupet, Meylan, and Sukhov [6] to conclude that the map \(h\) is real-algebraic (this conclusion is global in nature). **Remark 2**.: An interesting observation in the algebraic case is that Theorem 3 and Corollary 1 yield a complete list of pairs of classical symmetric domains \((\Omega_{1},\Omega_{2})\) such that every proper holomorphic map \(H:\Omega_{1}\to\Omega_{2}\), which extends to \(\partial\Omega_{1}\) with sufficient initial regularity, and does not map \(\partial\Omega_{1}\) entirely into the non-smooth part of \(\partial\Omega_{2}\), is necessarily algebraic. **Remark 3**.: We finally note that all of our results above apply as well in the case where the source manifold is not embedded, but rather an "abstract" CR structure with the microlocal extension property; in that case, we have to apply the results of [21] instead of [20]. We tried to avoid this more technical aspect in the presentation here, the reader is invited to make the obvious changes to the formulations if needed. This paper developed from the first author's master's thesis [11], where (slightly weaker) versions of Theorem 1, Theorem 2 and Theorem 3 are first proven. ## 2. Preliminaries ### CR manifolds and the Levi foliation In this section, we will recall some basic notions and fix notation. We will be considering smooth CR submanifolds of complex Euclidean space, which we will denote by \(M\subset\mathbb{C}^{N}\) or \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\), respectively, and CR maps \(h:M\to M^{\prime}\). We will write \(T^{0,1}M=\mathbb{C}TM\cap T^{0,1}\mathbb{C}^{N}\), and denote by \(J\) the standard complex structure operator. A continuously differentiable map \(h:M\to M^{\prime}\) is called a CR map if it preserves the CR structure of its domain, i.e. if \(h_{*}T^{0,1}_{q}M\subseteq T^{0,1}_{h(q)}M^{\prime}\) for all \(q\in M\). If we denote by \(\iota\) the embedding of \(M^{\prime}\) into \(\mathbb{C}^{N^{\prime}}\), an equivalent characterization is that \(\iota\circ h:M\to\mathbb{C}^{N^{\prime}}\) is a CR map, which just means that each coordinate component of \(h\) is a CR function. We recall that the Levi form of a hypersurface \(M\subset\mathbb{C}^{N^{\prime}}\) is defined by \(\mathcal{L}(\bar{L},\bar{\Gamma})=\frac{1}{2i}[\bar{L},\Gamma]\mod T^{0,1}M \oplus T^{1,0}M\); given a defining function \(\rho\) for \(M\), we define \(\Theta=i(\partial\rho-\bar{\partial}\rho)\), and refer to \(\mathcal{L}_{\Theta}(\bar{L},\bar{\Gamma})=\frac{1}{2i}\Theta([\bar{L},\Gamma])\) as a scalar Levi form for \(M\). Our target manifolds will be _uniformly pseudoconvex hypersurfaces_, i.e. real hypersurfaces of \(\mathbb{C}^{N^{\prime}}\) with positive semidefinite Levi form, and a constant number of zero and positive eigenvalues everywhere, respectively. It will turn out that these are foliated by complex manifolds. In this paper, a _foliation_\(\eta\) of an \(n\)-dimensional (real) manifold \(M\) is a collection \(\{\eta_{q}:q\in M\}\) of \(k\)-dimensional immersed submanifolds, where \(q\in\eta_{q}\), which partitions \(M\), i.e. 
\(\eta_{p}\) and \(\eta_{q}\) are either disjoint or identical for any two \(p,q\in M\), and such that for any \(p\in M\), there exists a neighborhood \(O\) of \(p\) and coordinates \(\phi:O\to\mathbb{R}^{n}\) such that for any \(q\in O\), _the connected component_ of \(\eta_{q}\cap O\) containing \(q\) is just given by the coordinate plane \((\phi_{1}(p),\dots,\phi_{n-k}(p),\cdot,\dots,\cdot)\cap\phi(O)\). The bundle \(T\eta:=\bigcup_{q\in M}T_{q}\eta_{q}\) of tangent spaces to leaves then forms a smooth integrable distribution on \(TM\). We will also consider the bundle \(T^{0,1}\eta:=\bigcup_{q\in M}T^{0,1}_{q}\eta_{q}\) of CR tangent spaces to leaves, and always write \(T_{q}\eta:=T_{q}\eta_{q}\) and \(T^{0,1}_{q}\eta:=T^{0,1}_{q}\eta_{q}\) for simplicity. If the rank of the Levi form of a CR manifold \(M\) is constant in a neighborhood \(U\) of a point \(p\in M\), there exists a foliation \(\eta\) of \(U\) by complex manifolds, such that the Levi null space at any \(q\in U\) is precisely given by the CR tangent space at \(q\) to the the leaf of the foliation through \(q\), henceforth denoted by \(T^{0,1}_{q}\eta\). This foliation, discovered in the hypersurface case by Sommer [25] and proven to exist in general CR submanifolds by Freeman [10] is thus called the _Levi foliation_. **Theorem 4**.: _Let \(M\subset\mathbb{C}^{N}\) be a CR manifold, and suppose that its Levi form has constant rank. Then there is a foliation \(\eta\) of \(U\) by complex manifolds, such that the Levi null spaces \(\mathcal{N}_{q}\subseteq T^{0,1}_{q}M\) for \(q\in U\) are given by \(T^{0,1}_{q}\eta\)._ Proof.: Let \(N_{q}=\{\frac{1}{2}(\bar{L}_{q}+L_{q}),\bar{L}_{q}\in\mathcal{N}_{q}\}\). Because the rank of the Levi null space is constant across \(M\), the union \(N=\bigcup_{q\in M}N_{q}\) yields a smooth real distribution on \(M\). By Frobenius' theorem, integrability of this distribution is equivalent to the submodule \(\Gamma_{q}(N)\) of germs of sections of \(N\) at \(q\) being closed under taking Lie brackets, for every \(q\in M\). Since for any \(\bar{L},\bar{\Gamma}\in\Gamma(T^{0,1}M)\) we have \[[\tfrac{1}{2}(\bar{L}+L),\tfrac{1}{2}(\bar{\Gamma}+\Gamma)]=\tfrac{1}{4}[\bar {L},\bar{\Gamma}]+\tfrac{1}{4}[L,\Gamma]+\tfrac{1}{4}[L,\bar{\Gamma}]+\tfrac{1 }{4}[\bar{L},\Gamma]=-\Im\left(\mathcal{L}(\bar{L},\bar{\Gamma})\right),\] a given germ of a vector field \(\frac{1}{2}(\bar{L}+L)\in\Gamma_{q}(T^{c}M)\) is a section of \(N\) if and only if \([\tfrac{1}{2}(\bar{L}+L),\Gamma_{q}(T^{c}M)]\subseteq\Gamma_{q}(T^{c}M)\). Taking two sections \(V,W\in\Gamma_{q}(N)\), we see thus that \([V,W]\subseteq[\Gamma_{q}(N),\Gamma_{q}(T^{c}M)]\subseteq\Gamma_{q}(T^{c}M)\), and by the Jacobi identity, \[[[V,W],\Gamma_{q}(T^{c}M)] \subseteq[V,[W,\Gamma_{q}(T^{c}M)]]-[W,[V,\Gamma_{q}(T^{c}M)]]\] \[\subseteq[V,\Gamma_{q}(T^{c}M)]-[W,\Gamma_{q}(T^{c}M)]\subseteq \Gamma_{q}(T^{c}M),\] showing that \(\Gamma_{q}(N)\) is indeed closed under taking Lie brackets. Therefore, the Levi foliation exists, and since \(N_{q}\subset T_{q}\mathbb{C}^{N}\) is a complex subspace for any \(q\in M\), the leaves of this foliation are complex manifolds. ### Irregular CR maps and formal holomorphic foliations Even though we are interested in the regularity of mappings, our results are obtained in a contrapositive way: We show that the existence of irregular maps forces some geometric property (namely, the existence of complex varieties, see Theorem 5 below). 
As a guiding principle, we therefore review a couple of natural instances in which irregular maps exist. We begin by considering CR functions, in a slight adaptation of [4, Theorem 2.7].

**Example 1**.: _Let \(M\subset\mathbb{C}^{N}\) be a strongly pseudoconvex CR hypersurface and \(p\in M\). Then there exists a neighborhood \(O\subseteq\mathbb{C}^{N}\) of \(p\) such that for each \(k\in\mathbb{N}_{\geq 1}\) there is a \(C^{k}\)-smooth CR function \(\phi:O\cap M\to\mathbb{C}\) which is nowhere smooth on \(O\cap M\)._

As an immediate consequence, there exist nowhere smooth CR maps from \(M\) into \(M^{\prime}\) if the target manifold \(M^{\prime}\) contains a complex curve \(\Gamma\). Indeed, any parametrization \(t\mapsto\gamma(t)\) of \(\Gamma\) is a smooth CR immersion of \(\mathbb{C}\) into \(M^{\prime}\), hence \(\gamma\circ\phi:M\to M^{\prime}\) provides a nowhere smooth CR map of regularity \(C^{k}\). We obtain another, more general set of examples from targets of the form \(M^{\prime}=\hat{M}\times\mathbb{C}\subset\mathbb{C}^{N+1}\) and CR functions \(\hat{h}:M\to\hat{M}\). Here, the map \((\hat{h},\phi):M\to\hat{M}\times\mathbb{C}\) is a CR map, since each of its components is a CR map, and it is nowhere smooth because \(\phi\) is. In [20], Lamel and Mir prove a result in the other direction, essentially stating that near a generic point, any nowhere smooth CR map formally exhibits the structure of these latter examples.

### The formal foliation theorem

Before we state the main technical theorem that we are going to use, we introduce some necessary concepts. A _formal holomorphic submanifold_ \(\Gamma\) of dimension \(r\) at a point \(p\in\mathbb{C}^{N^{\prime}}\) is simply a formal power series \(\Gamma\in\mathbb{C}[\![t_{1},\dots,t_{r}]\!]^{N^{\prime}}\), \(\Gamma=\sum_{\alpha\in\mathbb{N}^{r}}\gamma_{\alpha}t^{\alpha}\) satisfying \(\gamma_{0}=p\) and \(\operatorname{rk}\left(\Gamma^{\prime}(0)\right)=r\). It is _tangential to infinite order_ to a set \(S\subseteq\mathbb{C}^{N^{\prime}}\) if for any germ of a \(C^{\infty}\)-smooth function \(\rho\) vanishing on \(S\), the composition of \(\Gamma\) with the Taylor series of \(\rho\) at \(p\) vanishes to infinite order. If \(M\) is a CR manifold and \((\Gamma_{q})_{q\in M}\) is a family of such formal holomorphic submanifolds, we call this family a _CR family_ if each of its coefficients is a CR map \(M\to\mathbb{C}^{N^{\prime}}\). It turns out that the structural property of the target which forces smoothness of CR maps is the number of different directions into which successive CR derivatives of gradients of defining functions can point. This motivates the introduction of the following numerical invariants. For a CR map \(h:M\to\mathbb{C}^{N^{\prime}}\), let

\[r_{0}(p) :=\dim_{\mathbb{C}}\left\langle\left\{\rho_{w}\circ h(p):\rho\in\mathscr{I}_{h(M)}(h(p))\right\}\right\rangle,\]
\[r_{k}(p) :=\dim_{\mathbb{C}}\left\langle\left\{\bar{L}_{1}\dots\bar{L}_{j}(\rho_{w}\circ h)(p):\rho\in\mathscr{I}_{h(M)}(h(p)),\bar{L}_{1},\dots,\bar{L}_{j}\in\mathcal{V}_{p}(M),0\leq j\leq k\right\}\right\rangle,\]

where we write \(\mathcal{V}_{p}(M)\) for the set of germs of CR vector fields at \(p\), and \(\mathscr{I}_{S}(p)\) for the ideal of germs of smooth functions at \(p\) which vanish on a given set \(S\). The _complex gradient_ \(\rho_{w}=\left(\frac{\partial\rho}{\partial w_{1}},\dots,\frac{\partial\rho}{\partial w_{N^{\prime}}}\right)\) is considered here as a vector in \(\mathbb{C}^{N^{\prime}}\).
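To illustrate these invariants in the simplest possible case (this example is added here only for orientation and is not used in the sequel), consider \(M=\mathbb{S}^{3}\subset\mathbb{C}^{2}\) and \(h=\mathrm{id}\). With \(\rho=|w_{1}|^{2}+|w_{2}|^{2}-1\) we have \(\rho_{w}\circ h=(\bar{w}_{1},\bar{w}_{2})\), so \(r_{0}(p)=1\) at every point, and applying the CR vector field \(\bar{L}=w_{2}\frac{\partial}{\partial\bar{w}_{1}}-w_{1}\frac{\partial}{\partial\bar{w}_{2}}\) yields

\[\bar{L}(\rho_{w}\circ h)=(w_{2},-w_{1}),\qquad\det\begin{pmatrix}\bar{w}_{1}&\bar{w}_{2}\\ w_{2}&-w_{1}\end{pmatrix}=-\left(|w_{1}|^{2}+|w_{2}|^{2}\right)=-1\neq 0,\]

so \(r_{1}(p)=2=N^{\prime}\) everywhere on the sphere, reflecting its Levi nondegeneracy.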
The function \(q\mapsto r_{k}(q)\) is integer valued and lower semicontinuous as it is given by the rank of a collection of continuously varying vectors. Of course, \(r_{k}(p)\) is only defined if \(h\in C^{k}\), since \(\rho_{w}\circ h\) is only as regular as \(h\) is. To extract a global invariant of \(h\), let \(r_{k}\) be the maximum value such that \(r_{k}(p)\geq r_{k}\) on a dense open subset of \(M\). We are now in a position to state the formal foliation theorem of Lamel and Mir (Theorem 2.2 in [20]).

**Theorem 5**.: _Let \(M\subset\mathbb{C}^{N}\) be a \(C^{\infty}\)-smooth minimal CR submanifold, \(k,l\in\mathbb{N}\) with \(0\leq k\leq l\leq N^{\prime}\) and \(N^{\prime}-l+k\geq 1\) be given integers and \(h:M\to\mathbb{C}^{N^{\prime}}\) be a CR map of class \(C^{N^{\prime}-l+k}\). Assume that \(r_{k}\geq l\) and that there exists a non-empty open subset \(M_{1}\) of \(M\) where \(h\) is nowhere \(C^{\infty}\). Then there exists a dense open subset \(M_{2}\subseteq M_{1}\) such that for every \(p\in M_{2}\), there exists a neighborhood \(V\subseteq M_{2}\) of \(p\), an integer \(r\geq 1\) and a \(C^{1}\)-smooth CR family of formal complex submanifolds \((\Gamma_{\xi})_{\xi\in V}\) of dimension \(r\) through \(h(V)\) for which \(\Gamma_{\xi}\) is tangential to infinite order to \(h(M)\) at \(h(\xi)\), for every \(\xi\in V\)._

The dimension \(r\) of the family of holomorphic manifolds in the statement of this theorem merely serves as a reminder that in concrete cases, one can hope for a dimension of more than one. Since no condition is given for when this might occur, for black-box applications of this theorem we will have to be satisfied with CR families of holomorphic curves with nonvanishing derivative, which can always be obtained by simply restricting \(\Gamma_{q}=\sum_{\alpha\in\mathbb{N}^{r}}\gamma_{\alpha}(q)t^{\alpha}\) to \(t=(t_{1},0,\ldots,0)\). Let us remark that if \(h\) is not \(C^{\infty}\)-smooth on a dense open subset of \(M\), there exists an open subset \(O\subseteq M\) such that \(h\) is nowhere \(C^{\infty}\)-smooth on \(O\). The reason is simply that the set of all points \(p\in M\) such that \(h\) is \(C^{\infty}\)-smooth on a neighborhood of \(p\) is open. If this set is not dense, then the complement of its closure is a non-empty open subset of \(M\), where, by definition, \(h\) is nowhere \(C^{\infty}\)-smooth. Another interesting point to note is that while the formal complex manifolds obtained from Theorem 5 are tangential to infinite order to the image \(h(M)\), infinite tangency to a non-smooth set is not nearly as strong as one might think at first sight. As a toy example, take a nowhere smooth, but \(C^{1}\), function \(\phi:\mathbb{R}\to\mathbb{R}\) and consider its graph \(S:=\{(x,\phi(x)):x\in\mathbb{R}\}\subset\mathbb{R}^{2}\). Then any function \(\rho\in C^{\infty}(\mathbb{R}^{2})\) vanishing on \(S\) must already vanish to infinite order there by the following argument: if either \(\rho_{x}\) or \(\rho_{y}\) did not vanish at a point \((x,\phi(x))\), the implicit function theorem would yield a smooth parametrization of \(S\) near that point, which does not exist. Thus both \(\rho_{x}\) and \(\rho_{y}\) vanish on \(S\), and the argument may proceed ad infinitum. The \(y\)-axis is therefore tangential to infinite order to \(S\) in the sense of Theorem 5, while not even being tangential to first order in the usual sense.
However, if \(h(M)\subseteq M^{\prime}\) for some smooth manifold \(M^{\prime}\), then tangency to infinite order to \(h(M)\) clearly implies tangency to infinite order to \(M^{\prime}\). To apply Theorem 5, we need \(0\leq k\leq l\leq N^{\prime}\) such that \(r_{k}\geq l\). It is always possible to choose \(k=l=0\), but if \(h\) maps \(M\) into a CR submanifold \(M^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\), a slight improvement holds (Lemma 6.1 in [20]). **Lemma 1**.: _Let \(M\subset\mathbb{C}^{N}\) be a \(C^{\infty}\)-smooth CR submanifold and \(h:M\to\mathbb{C}^{N^{\prime}}\) be a continuous CR map. If there exists a \(C^{\infty}\)-smooth CR submanifold \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) such that \(h(M)\subseteq M^{\prime}\), then \(r_{0}\geq N^{\prime}-n^{\prime}\), where \(n^{\prime}=\dim_{CR}M^{\prime}\). In particular, if \(M^{\prime}\) is maximally real, then \(r_{0}=N^{\prime}\)._ If it is guaranteed that enough CR directions tangential to \(h(M)\) exist along which \(M^{\prime}\) behaves like a Levi nondegenerate manifold, we can say more about the first derivatives of gradients, yielding a bound on \(r_{1}\). We record here for later use a result similar to Lemma 6.2. in [20]. **Lemma 2**.: _Consider a \(C^{\infty}\)-smooth CR submanifold \(M\subset\mathbb{C}^{N}\), a \(C^{\infty}\)-smooth real hypersurface \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) and a continuously differentiable CR map \(h:M\to M^{\prime}\) mapping \(p\in M\) to \(p^{\prime}\in M^{\prime}\). If \(h\) is immersive at \(p\) and a scalar Levi form \(\mathcal{L}_{\Theta}\) of \(M^{\prime}\) restricts to a nondegenerate Hermitian form on \(h_{*}T_{p}^{0,1}M\), then \(r_{1}\geq\dim_{CR}M+1\) on a neighborhood of \(p\)._ Proof.: Since we are in a purely local setting, we may assume that \(\mathcal{L}_{\Theta}\) arises from a defining function \(\rho\) of \(M^{\prime}\), such that for any two CR vectors \(\bar{\Gamma}=\sum_{j=1}^{N^{\prime}}\bar{\Gamma}_{j}\frac{\partial}{\partial \bar{w}_{j}}|_{p^{\prime}}\) and \(\bar{L}=\sum_{k=1}^{N^{\prime}}\bar{L}_{k}\frac{\partial}{\partial w_{k}}|_{p^{ \prime}}\) we have \[\mathcal{L}_{\Theta}(\bar{\Gamma},\bar{L})=\sum_{j,k=1}^{N^{\prime}}\frac{ \partial^{2}\rho}{\partial w_{j}\partial\bar{w}_{k}}(p^{\prime})\Gamma_{j} \bar{L}_{k}.\] By definition \(\bar{L}\rho_{w}=\sum_{j=1}^{N^{\prime}}\bar{L}_{k}\frac{\partial^{2}\rho}{ \partial w_{j}\partial\bar{w}_{k}}(p^{\prime})\), so using the standard scalar product on \(\mathbb{C}^{N^{\prime}}\) we can express \(\mathcal{L}_{\Theta}(\bar{\Gamma},\bar{L})=\big{(}(\bar{\Gamma}_{1},\ldots, \bar{\Gamma}_{N^{\prime}})|\bar{L}\rho_{w}\big{)}_{\mathbb{C}^{N^{\prime}}}\). Nondegeneracy of the restricted Levi form on \(h_{*}T_{p}^{0,1}M\) precisely means that the map \(h_{*}\bar{L}\mapsto\mathcal{L}_{\Theta}(\cdot,h_{*}\bar{L})\) is an isomorphism of \(h_{*}T_{p}^{0,1}M\) and the space of antilinear functionals on \(h_{*}T_{p}^{0,1}M\). Since \(h\) is immersive, \(h_{*}\) is an isomorphism between \(T_{p}^{0,1}M\) and \(h_{*}T_{p}^{0,1}M\). The map associating to each \(\bar{L}\in T_{p}^{0,1}M\) the antilinear functional \(\mathcal{L}_{\Theta}(\cdot,h_{*}\bar{L})=\big{(}\cdot|\bar{L}(\rho_{w}\circ h )\big{)}_{\mathbb{C}^{N^{\prime}}}\) is thus an isomorphism, in particular implying that \(\dim_{\mathbb{C}}\{\bar{L}(\rho_{w}\circ h):\bar{L}\in T_{p}^{0,1}M\}=\dim_{CR}M\). 
Furthermore, the complex gradient \(\rho_{w}(p^{\prime})\) itself is linearly independent of \(\bar{L}(\rho_{w}\circ h)\) for any nonzero \(\bar{L}\in T_{p^{\prime}}^{0,1}M^{\prime}\) by the following argument. For any \(\bar{\Gamma}=\sum_{j=1}^{N^{\prime}}\bar{\Gamma}_{j}\frac{\partial}{\partial w _{j}}|_{p^{\prime}}\in T_{p^{\prime}}^{0,1}M^{\prime}\), tangency implies that \[\Gamma\rho=\sum_{j=1}^{N^{\prime}}\Gamma_{j}\frac{\partial\rho}{\partial w_{j }}(p^{\prime})=\big{(}(\bar{\Gamma}_{1},\ldots,\bar{\Gamma}_{N^{\prime}})|\rho _{w}(p^{\prime})\big{)}_{\mathbb{C}^{N^{\prime}}}=0.\] Thus \(\rho_{w}(p^{\prime})\) lies in the orthogonal complement of \(\Big{\{}(\bar{\Gamma}_{1},\ldots,\bar{\Gamma}_{N^{\prime}}):\bar{\Gamma}\in T _{p^{\prime}}^{0,1}M^{\prime}\Big{\}}\) while \(\bar{L}(\rho_{w}\circ h)\) does not, showing linear independence. This implies \(r_{1}(p)\geq\dim_{CR}M+1\) and since \(r_{1}\) is lower semicontinuous and integer valued, \(r_{1}\geq\dim_{CR}M+1\) holds on a neighborhood of \(p\) as claimed. As we will have to treat non-immersive maps as well, let us note the following simple, but slightly clunky consequence of the previous proof. **Corollary 2**.: _If for a \(C^{\infty}\)-smooth CR submanifold \(M\subset\mathbb{C}^{N}\), a \(C^{\infty}\)-smooth real hypersurface \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) and a continuously differentiable CR map \(h:M\to M^{\prime}\) mapping \(p\in M\) to \(p^{\prime}\in M^{\prime}\) there exists a CR submanifold \(S\subset M\) containing \(p\), such that the restricted map \(h|_{S}\) satisfies the hypothesis of Lemma 2, then \(r_{1}\geq\dim_{CR}S+1\) on a neighborhood of \(p\) in \(M\)._ Proof.: Take a basis \((\bar{L}_{j})_{j=1}^{\dim_{CR}S}\) of \(T_{p}^{0,1}S\). Retracing the proof of Lemma 2, we see that for any defining function \(\rho\) of \(M\), the vectors \(\rho_{w}(p^{\prime})\) and \(\bar{L}_{j}(\rho_{w}\circ h),1\leq j\leq\dim_{CR}S\) are linearly independent, hence \(r_{1}(p)\geq\dim_{CR}S+1\). But \(r_{1}(q)\) is lower semicontinuous in \(q\) on \(M\), hence \(r_{1}\geq\dim_{CR}S+1\) in a neighborhood of \(p\) as claimed. ## 3. The invariant \(\nu\) and the proof of theorem 1 As an example of a hypersurface foliated by complex manifolds, where an unconditional regularity result must necessarily fail, Lamel and Mir consider the _tube over the light cone_\(M^{\prime}:=\{(z_{1},\ldots,z_{N^{\prime}-1},z_{N^{\prime}}):\Re(z_{1})^{2}+ \cdots+\Re(z_{N^{\prime}-1})^{2}=\Re(z_{N^{\prime}})^{2},\ z_{N^{\prime}}\neq 0\}\). They obtain the following result (Corollary 2.6 in [20]). **Theorem 6**.: _Let \(M\subset\mathbb{C}^{N}\) be a \(C^{\infty}\)-smooth minimal CR submanifold and \(M^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\) be the tube over the light cone. Then every CR map \(h:M\to M^{\prime}\), of class \(C^{N^{\prime}-1}\) and of rank \(\geq 3\), is \(C^{\infty}\)-smooth on a dense open subset of \(M\)._ The proof given in [20] and [19] makes quite ingenious use of the simple structure of \(M^{\prime}\), and is thus not easily adaptable to more general settings. In this section, we shall carefully define the invariant \(\nu\) mentioned in the introduction (5), and show how it can be used to generalize the observation of theorem 6 to the more general situation of pseudoconvex hypersurfaces whose Levi form is of constant rank. We will later see that this class of examples covers not only the tube over the light cone, but also the smooth part of the boundary of all classical irreducible symmetric domains. 
Mappings into such targets will be discussed in section 6.

### Maps into uniformly pseudoconvex hypersurfaces

In view of the Levi foliation (Theorem 4), Theorem 5 might allow for nowhere smooth maps into a uniformly pseudoconvex hypersurface, since there at least exist complex manifolds tangential to infinite order to the target manifold, contrary to the simpler case of manifolds of D'Angelo finite type. Indeed, any formal complex manifold tangential to infinite order to \(M^{\prime}\) is necessarily tangential to the Levi foliation.

**Lemma 3**.: _Let \(M^{\prime}\subset\mathbb{C}^{N^{\prime}}\) be a uniformly pseudoconvex hypersurface with its Levi foliation \(\eta\), and let \(p^{\prime}\in M^{\prime}\). Suppose there exists a formal complex curve \(\Gamma=p^{\prime}+t\gamma_{t}+\frac{t^{2}}{2}\gamma_{tt}+\dots\) tangential to second order to \(M^{\prime}\) at \(p^{\prime}\). Then \(\gamma_{t}\in T_{p^{\prime}}\eta\)._

Proof.: A formal complex curve \(\Gamma=p^{\prime}+t\gamma_{t}+\frac{t^{2}}{2}\gamma_{tt}+\dots\) is tangential to second order to \(M^{\prime}\) if and only if the curve \(\tilde{\gamma}(t)=p^{\prime}+t\gamma_{t}+\frac{t^{2}}{2}\gamma_{tt}\) arising from the truncated power series is. Choosing a positive semidefinite scalar Levi form \(\mathcal{L}_{\Theta}\) arising from a defining function \(\rho\), we have that \(\mathcal{L}_{\Theta}(\frac{1}{2}(\gamma_{t}+iJ\gamma_{t}),\frac{1}{2}(\gamma_{t}+iJ\gamma_{t}))=\frac{\partial^{2}}{\partial t\partial\bar{t}}|_{t=0}\rho\circ\tilde{\gamma}=0\), since \(\rho\circ\tilde{\gamma}\) vanishes to second order. But by Theorem 4, the null space of \(\mathcal{L}_{\Theta}|_{p^{\prime}}\) is given by \(T_{p^{\prime}}^{0,1}\eta\), implying that \(\gamma_{t}\in T_{p^{\prime}}\eta\).

Our main technical tool will be a tensorial quantity measuring obstructions to the existence of CR sections of \(T\eta\). We denote by \(T^{\perp}\eta:=\bigcup_{p\in M^{\prime}}(T_{p}\eta_{p})^{\perp}\) the bundle of orthogonal complements in \(T\mathbb{C}^{N^{\prime}}\) of tangent spaces to leaves.

**Lemma 4**.: _Let \(M^{\prime}\) be a manifold endowed with a foliation \(\eta\) by complex manifolds. There exists a tensor field \(R\in\mathcal{V}(M^{\prime})^{*}\otimes\Gamma(T\eta)^{*}\otimes\Gamma(T^{\perp}\eta)\) such that for every \(\bar{L}\in\mathcal{V}(M^{\prime})\) and \(\psi\in\Gamma(T\eta)\), we have \(\mathbb{P}_{T^{\perp}\eta_{p}}(\bar{L}|_{p}\psi)=R_{p}(\bar{L}|_{p},\psi|_{p})\). For any \(V_{p}\in T_{p}\eta_{p}\), the kernel of \(R_{p}(\cdot,V_{p})\) contains \(T_{p}^{0,1}\eta_{p}\)._

Proof.: Define \(R(\bar{L},\psi)=\mathbb{P}_{T^{\perp}\eta}(\bar{L}\psi)\). Evidently, \(R\) is \(C^{1}\)-linear in the first slot, as directional derivatives always are. For two sections \(V\) and \(W\) of \(T\eta\) and \(f\in C^{1}(M)\), we have that \(\bar{L}|_{p}(V+fW)=\bar{L}|_{p}V+f\bar{L}|_{p}W+(\bar{L}|_{p}f)W\). The last term is canceled by the projection onto \(T_{p}^{\perp}\eta_{p}\), thus \(R\) is also \(C^{1}\)-linear in the second slot, hence \(R\) is a tensor. Consider now \(V_{p}\in T_{p}\eta_{p}\). We may construct a section \(V\in\Gamma(T\eta)\) satisfying \(V|_{p}=V_{p}\), which is holomorphic on \(\eta_{p}\) and smooth on \(M^{\prime}\). First, we choose a holomorphic parametrization \(\phi\) for \(\eta_{p}\), extend \(\phi_{*}^{-1}V_{p}\) to a constant vector field \(\tilde{V}\) and note that \(\phi_{*}\tilde{V}\) is holomorphic, since \(D\phi\) has holomorphic components and \(\tilde{V}\) is constant.
To obtain a vector field, we then simply extend the result smoothly to \(M^{\prime}\). But now, \(\bar{L}|_{p}V=0\) if \(\bar{L}\in\Gamma(T^{0,1}\eta)\), since \(V\) is holomorphic on \(\eta_{p}\) and \(\bar{L}|_{p}\) only takes derivatives along \(\eta\). Therefore, \(T_{p}^{0,1}\eta\subseteq\ker R_{p}(\cdot,V_{p})\).

Next we need to deal with the issue that the CR map \(h\) of interest is neither assumed to be immersive nor \(C^{\infty}\)-smooth. After carefully verifying that nothing goes wrong, we will obtain the following result on obstructions to the existence of nowhere smooth CR maps.

**Proposition 1**.: _Consider a uniformly pseudoconvex hypersurface \(M^{\prime}\) with its Levi foliation \(\eta\), a CR manifold \(M\) and a \(C^{1}\)-smooth CR map \(h:M\to M^{\prime}\) mapping a point \(p\in M\) to \(p^{\prime}\in M^{\prime}\). Suppose there exists a \(C^{1}\)-smooth CR family of formal complex curves \((\Gamma_{q})_{q\in O}\) defined on a neighborhood \(O\subseteq M\) of \(p\) such that \(\Gamma_{q}\) is tangential to second order to \(M^{\prime}\) at \(h(q)\) for each \(q\in O\). Then \(\gamma_{t}(p)\in T_{p^{\prime}}\eta_{p^{\prime}}\), and \(h_{*}T_{p}^{0,1}M\subseteq\ker R_{p^{\prime}}(\cdot,\gamma_{t}(p))\)._

Proof.: By Lemma 3, we know that at each point \(q\in O\), \(\gamma_{t}(q)\in T_{h(q)}\eta_{h(q)}\), since \(\Gamma_{q}\) is a formal complex curve tangential to second order to \(M^{\prime}\) at \(h(q)\). Consider now \(\bar{L}\in\mathcal{V}(M)\) such that \(h_{*}\bar{L}|_{p}\neq 0\). Choosing a two-dimensional real submanifold \(S\subseteq O\) such that \(\bar{L}|_{p}\) is tangential to \(S\), the derivative of \(h|_{S}\) has full rank at \(p\), and hence \(h|_{S}\) is a local embedding around \(p\). We may thus extend \(\gamma_{t}\circ h^{-1}|_{h(S)}\), defined on \(h(S)\), to a section \(\tilde{\gamma}_{t}\in\Gamma(T\eta)\) defined on an open neighborhood of \(p^{\prime}\). Since \(\gamma_{t}\) and \(\tilde{\gamma}_{t}\circ h\) agree on \(S\) and \(\bar{L}|_{p}\) only takes derivatives along \(S\), it follows that

\[R_{p^{\prime}}(h_{*}\bar{L}|_{p},\gamma_{t}(p))=\mathbb{P}_{T_{p^{\prime}}^{\perp}\eta_{p^{\prime}}}\left(h_{*}\bar{L}|_{p}\tilde{\gamma}_{t}\right)=\mathbb{P}_{T_{p^{\prime}}^{\perp}\eta_{p^{\prime}}}\left(\bar{L}|_{p}(\tilde{\gamma}_{t}\circ h)\right)=\mathbb{P}_{T_{p^{\prime}}^{\perp}\eta_{p^{\prime}}}\left(\bar{L}|_{p}\gamma_{t}\right)=0,\]

implying that \(h_{*}\bar{L}|_{p}\in\ker R_{p^{\prime}}(\cdot,\gamma_{t}(p))\).

In order to apply Theorem 5 to our situation, we are going to use the numerical quantity \(\nu\) already mentioned in the introduction, which measures the size of \(\ker R(\cdot,V)\), as well as a method of computing it.

**Lemma 5**.: _Let \(M^{\prime}\) be a uniformly pseudoconvex hypersurface with Levi foliation \(\eta\). For \(p^{\prime}\in M^{\prime}\), we define_

\[\nu_{p^{\prime}}=\max_{0\neq V\in T_{p^{\prime}}\eta}\dim_{\mathbb{C}}\ker R_{p^{\prime}}(\cdot,V)-\dim_{\mathbb{C}}\eta.\]

_Then \(\nu\) is a biholomorphic invariant, and the function \(q\mapsto\nu_{q}\) is nonnegative, integer valued and upper semicontinuous._

Proof.: For biholomorphism invariance, let \(H:\mathbb{C}^{N^{\prime}}\to\mathbb{C}^{N^{\prime}}\) be a local biholomorphism near \(p^{\prime}\), and consider \(H(M^{\prime})\). Its Levi foliation is \(H(\eta)\).
For all \(\bar{L}\in\mathcal{V}(M^{\prime})\) and \(V\in\Gamma(T\eta)\), we have \((H_{*}\bar{L})(H_{*}V)=H_{*}(\bar{L}V)\) as the Jacobian of \(H\) is holomorphic again and thus commutes with taking antiholomorphic derivatives. Furthermore, projecting \(H_{*}(T\eta^{\perp})\) onto \(TH(\eta)^{\perp}\) is an isomorphism as \(H_{*}(T\eta^{\perp})\) is another complement to \(H_{*}(T\eta)=TH(\eta)\). Therefore \(R\) transforms under biholomorphism via pre- and postcomposition with linear isomorphisms, so \(\nu\) is invariant. Here we chose \(T\eta^{\perp}\) instead of the more natural, isomorphic bundle \(T\mathbb{C}^{N^{\prime}}/T\eta\) because it simplifies later calculations. Clearly \(\nu_{q}\) is integer-valued, and since \(T_{q}^{0,1}\eta\subseteq\ker R(\cdot,V)\), it is nonnegative. Let \(\dim_{\mathbb{C}}\eta=:K\). To show upper semicontinuity, we need to show that for all \(p^{\prime}\) in \(M^{\prime}\) and \(l\in\mathbb{N}\), \(\nu_{p^{\prime}}\leq l\) implies \(\nu_{q}\leq l\) for all \(q\) in a neighborhood of \(p^{\prime}\). We observe first that for \(l\in\mathbb{N}\), \(\dim_{\mathbb{C}}\ker R_{p^{\prime}}(\cdot,V)\leq K+l\) if and only if \(\mathrm{rk}R(\cdot,V)\geq N^{\prime}-1-K-l\). This condition is equivalent to some \((N^{\prime}-1-K-l)\)-minor of the matrix representation of \(R(\cdot,V)\) with respect to a choice of smooth local frames of \(T^{0,1}M^{\prime}\) and \(T^{\perp}\eta\) being nonzero. By homogeneity, we have \(\nu_{p^{\prime}}\leq l\) if and only if for each \(V\in\mathbb{S}^{2K-1}\subseteq T_{p^{\prime}}\eta\), a (possibly different) such minor of \(R(\cdot,V)\) does not vanish. Hence, \(\nu_{p^{\prime}}\leq l\) if and only if the square sum \(\sum_{j}|m_{j}|^{2}\) over all such minors does not vanish on \(\{p^{\prime}\}\times\mathbb{S}^{2K-1}\). By compactness of the sphere, there is a neighborhood \(O\) of \(p^{\prime}\) such that \(\sum_{j}|m_{j}|^{2}\) does not vanish on \(O\times\mathbb{S}^{2K-1}\), showing that \(\nu_{q}\leq l\) on \(O\), which implies upper semicontinuity.

To calculate \(\nu_{p^{\prime}}\), the following setup will be helpful.

**Lemma 6**.: _Let \(M^{\prime}\) be a pseudoconvex hypersurface foliated by complex manifolds of dimension \(K\). Consider a point \(p^{\prime}\in M^{\prime}\) and an \((N^{\prime}-K)\)-dimensional complex manifold \(\Sigma\) through \(p^{\prime}\) such that \(T_{p^{\prime}}\eta\oplus T_{p^{\prime}}\Sigma=T_{p^{\prime}}\mathbb{C}^{N^{\prime}}\). If \(S:=M^{\prime}\cap\Sigma\) is strongly pseudoconvex, then the Levi form of \(M^{\prime}\) has exactly \(K\) zero eigenvalues on an open neighborhood of \(p^{\prime}\), making \(M^{\prime}\) a uniformly pseudoconvex hypersurface and \(\eta\) its Levi foliation._

_Furthermore, as in Lemma 4, the map \(R^{S}\in\mathcal{V}(S)^{*}\otimes\Gamma(T\eta|_{S})^{*}\otimes\Gamma(T^{\perp}\eta|_{S})\) given by \(R^{S}(\bar{L},V)=\mathbb{P}_{T^{\perp}\eta}(\bar{L}V)\) is a tensor, and \(\max_{0\neq V\in T_{p^{\prime}}\eta}\dim_{\mathbb{C}}\ker R_{p^{\prime}}^{S}(\cdot,V)=\nu_{p^{\prime}}\)._

Proof.: Let \(\Theta\) be a characteristic form on \(M^{\prime}\) such that the respective scalar Levi form \(\mathcal{L}_{\Theta}(\bar{L}_{1},\bar{L}_{2})=\frac{1}{2i}\Theta([L_{1},\bar{L}_{2}])\) is positive semidefinite.
If \(S\) is strongly pseudoconvex at \(p^{\prime}\), then \(\mathcal{L}_{\Theta}|_{T^{0,1}_{p^{\prime}}S}\) is strictly positive definite, hence, by elementary linear algebra, \(\mathcal{L}_{\Theta}\) has at least \(N^{\prime}-1-K\) positive eigenvalues and as the \(K\)-dimensional leaf \(\eta_{p^{\prime}}\) through \(p^{\prime}\) is a complex manifold and therefore Levi-flat, the other \(K\) eigenvalues have to be zero. Consider now \(R_{p^{\prime}}(\bar{L},V)\) for \(\bar{L}\in\mathcal{V}(M^{\prime})\) and \(V\in T_{p^{\prime}}\eta_{p^{\prime}}\). Decompose \(\bar{L}|_{p^{\prime}}=U+W\) for \(U\in T^{0,1}_{p^{\prime}}S\) and \(W\in T^{0,1}_{p^{\prime}}\eta_{p^{\prime}}\). As proven in Lemma 4, \(R_{p^{\prime}}(W,V)=0\), hence \(R_{p^{\prime}}(\bar{L},V)=0\) iff \(R_{p^{\prime}}(U,V)=R^{S}_{p^{\prime}}(U,V)=0\). This implies that \(\ker R_{p^{\prime}}(\cdot,V)=\ker R^{S}_{p^{\prime}}(\cdot,V)\oplus T^{0,1}_{p^{\prime}}\eta_{p^{\prime}}\), proving the second claim.

Figure 1. The setup from Lemma 6.

### The case \(\nu=0\)

If the kernel of \(R\) is of minimal dimension even at a single point, Proposition 1 implies Theorem 1, fully generalizing the result on the tube over the light cone; before we give the proof, we first show that the tube over the light cone satisfies \(\nu=0\).

**Example 2**.: _Let \(M^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\) be the tube over the light cone. It is foliated by complex lines; at any point \(p^{\prime}\in M^{\prime}\) with \(\Re(p^{\prime})\neq 0\) the Levi form of \(M^{\prime}\) has exactly one zero eigenvalue, and \(\nu_{p^{\prime}}=0\)._

Proof.: Recall that the tube over the light cone is defined as the set of points \(z\in\mathbb{C}^{N^{\prime}}\) such that \(\Re(z_{1})^{2}+\cdots+\Re(z_{N^{\prime}-1})^{2}=\Re(z_{N^{\prime}})^{2}\). It is a smooth real hypersurface where \(\Re(z_{N^{\prime}})\neq 0\), and foliated by complex lines \(q+t\left(\Re(q_{1}),\ldots,\Re(q_{N^{\prime}-1}),\Re(q_{N^{\prime}})\right)\), \(q\in M^{\prime}\). Indeed, let us check that

\[\Re\left(q_{1}+t\Re(q_{1})\right)^{2}+\cdots+\Re\left(q_{N^{\prime}-1}+t\Re(q_{N^{\prime}-1})\right)^{2}=\left(1+\Re(t)\right)^{2}\left(\Re(q_{1})^{2}+\cdots+\Re(q_{N^{\prime}-1})^{2}\right)=\left(1+\Re(t)\right)^{2}\Re(q_{N^{\prime}})^{2}=\Re\left(q_{N^{\prime}}+t\Re(q_{N^{\prime}})\right)^{2}.\]

The hypersurface \(M^{\prime}\) is pseudoconvex, since the tube over the interior of the light cone is convex. The hypersurface \(\Sigma=\{z\in\mathbb{C}^{N^{\prime}}:z_{N^{\prime}}=p^{\prime}_{N^{\prime}}\}\) through \(p^{\prime}\in M^{\prime}\) is transversal to \(\eta_{p^{\prime}}\) and intersects \(M^{\prime}\) in \(S=\{z\in\mathbb{C}^{N^{\prime}}:z_{N^{\prime}}=p^{\prime}_{N^{\prime}},\Re(z_{1})^{2}+\cdots+\Re(z_{N^{\prime}-1})^{2}=\Re(p^{\prime}_{N^{\prime}})^{2}\}\), which is a strongly pseudoconvex CR submanifold of \(\Sigma\) because it is a tube over a strongly convex real manifold. To obtain the setup of Lemma 6, it now suffices to calculate \(R^{S}_{p^{\prime}}(\bar{L}|_{p^{\prime}},V|_{p^{\prime}})\) for a single section \(V\) of \(T\eta\) (since \(T\eta\) is one-dimensional). Take \(V(q)=(\Re(q_{1}),\ldots,\Re(q_{N^{\prime}-1}),\Re(q_{N^{\prime}}))\) for \(q\in S\), and consider a CR vector \(\bar{L}|_{p^{\prime}}\in T^{0,1}_{p^{\prime}}S\). Since \(\bar{L}|_{p^{\prime}}\Re(q_{N^{\prime}})=0\), \(\bar{L}|_{p^{\prime}}V\in T_{p^{\prime}}\Sigma\)
and because \(T_{p^{\prime}}\Sigma\) and \(T_{p^{\prime}}\eta\) lie in general position, \(R^{S}_{p^{\prime}}(\bar{L}|_{p^{\prime}},V|_{p^{\prime}})=0\) if and only if \(\bar{L}|_{p^{\prime}}V=0\), i.e. \(\bar{L}|_{p^{\prime}}(\Re(q_{j}))=0\) for \(j=1,\ldots,N^{\prime}\). But since \(\bar{L}|_{p^{\prime}}q_{j}=0\), this is the case if and only if \(\bar{L}|_{p^{\prime}}\bar{q}_{j}=0\) as well, hence \(\bar{L}|_{p^{\prime}}\in T^{0,1}_{p^{\prime}}S\cap T^{1,0}_{p^{\prime}}S=\{0\}\), which proves that \(\nu_{p^{\prime}}=0\).

Proof of Theorem 1.: If \(h\) is not generically smooth, there exists an open set \(U_{0}\subseteq M\) where \(h\) is nowhere smooth. Lemma 1 always yields \(r_{0}\geq 1\), thus we may apply Theorem 5, with \(k=0\) and \(l=1\), to obtain a point \(p\in M\), a neighborhood \(U\subseteq U_{0}\) of \(p\) and a continuously differentiable CR family of formal complex curves \((\Gamma_{\xi})_{\xi\in U}\) such that \(\Gamma_{\xi}\) is tangential to \(M^{\prime}\) to infinite order at \(h(\xi)\). For any \(q\in U\) we have \(\nu_{h(q)}=0\), i.e. \(\ker R(\cdot,\gamma_{t}(q))=T^{0,1}_{h(q)}\eta\), hence by Proposition 1 we find \(h_{*}T^{0,1}_{q}M\subseteq T^{0,1}_{h(q)}\eta\). Near points \(q\in U\) where \(h\) is regular enough, this means that \(h^{-1}(\eta_{h(q)})\) integrates the complex tangent bundle and thus, by minimality of \(M\), has to contain an open neighborhood of \(q\). First, take a small open set \(O^{\prime}\subseteq h(U)\) where coordinates adapted to the foliation may be chosen, and hence the restricted foliation \(\eta|_{O^{\prime}}\) is equipped with a manifold structure. Let \(\pi:O^{\prime}\to\eta|_{O^{\prime}}\) denote the projection onto the foliation, given by \(\pi(q)=\eta_{q}\cap O^{\prime}\). Since the rank of a continuously differentiable map is lower semicontinuous, we can find an open subset \(\tilde{U}\subseteq h^{-1}(O^{\prime})\) where \(\pi\circ h\) is of constant rank. By the rank theorem, \(h^{-1}\circ\pi^{-1}(\eta_{h(q)})=h^{-1}(\eta_{h(q)})=:E_{q}\) is a submanifold of \(\tilde{U}\), and \(V\in T_{q}M\) satisfies \((\pi\circ h)_{*}V=0\) if and only if \(V\in T_{q}E_{q}\). But as \(h_{*}T^{0,1}_{q}M\subseteq T^{0,1}_{h(q)}\eta\), we infer that \(T^{0,1}_{q}M\subseteq\mathbb{C}T_{q}E_{q}\) for any \(q\in\tilde{U}\), which by minimality implies that \(E_{q}\) is an open neighborhood of \(q\) in \(\tilde{U}\) already. We have thus shown that a nonempty open subset of \(U\) is mapped into a single leaf \(\eta_{h(q)}\) of the foliation of \(M^{\prime}\). It is left to prove that actually all of \(M\) will be mapped into \(\eta_{h(q)}\). Let \(\mathcal{K}\) denote the closure of the set of all points \(q\in M\) which possess an open neighborhood that is mapped entirely into \(\eta_{h(q)}\). We will show that \(\mathcal{K}\) is open. For \(p\in\mathcal{K}\), take a neighborhood \(O^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\) of \(h(p)\) where _the connected component_ of \(\eta_{h(q)}\cap O^{\prime}\) containing \(h(p)\) is given as the vanishing set of a holomorphic map \(\pi:O^{\prime}\to\mathbb{C}^{K}\). Then \(\pi\circ h:h^{-1}(O^{\prime})\to\mathbb{C}^{K}\) is a CR map, and since \(p\in\mathcal{K}\), \(\pi\circ h\) must vanish on open sets arbitrarily close to \(p\). But as any CR function on a connected minimal submanifold which vanishes on an open subset already vanishes identically (a consequence e.g.
of [3, Theorem III.3.13]), all of \(h^{-1}(O^{\prime})\) must be mapped entirely into \(\eta_{h(q)}\) and thus \(p\) lies in the interior of \(\mathcal{K}\). Now, \(\mathcal{K}\) is both open and closed, and thus connectedness of \(M\) implies \(\mathcal{K}=M\).

## 4. CR transversal maps and proof of Theorem 2

If we want to treat positive \(\nu_{p^{\prime}}\), we have to assume more about the map and the source manifold; for example, if \(M^{\prime}=M\times\mathbb{C}\) for a strictly pseudoconvex \(M\subset\mathbb{C}^{N}\), then \(\nu=N-1\) everywhere, and our usual example \(z\mapsto(z,\varphi(z))\) for some finitely smooth but nonsmooth CR function \(\varphi\) will yield nonsmooth CR maps. The approach we take here is based on the fact that if \(h_{*}T^{0,1}_{p}M\) and \(T_{p^{\prime}}\eta_{p^{\prime}}\) intersect trivially, then we can allow \(\nu_{p^{\prime}}\) to be greater than zero provided that \(h_{*}T^{0,1}_{p}M\) has enough dimensions. In particular, this occurs if \(M\) is a uniformly pseudoconvex hypersurface with sufficiently many positive Levi eigenvalues, and \(h\) satisfies a commonly considered nondegeneracy condition, that of _CR-transversality_.

**Definition 1** (CR-transversality).: A CR map \(h:M\to M^{\prime}\) between hypersurfaces \(M\) and \(M^{\prime}\) is called CR-transversal at \(p\in M\) if \(T^{0,1}_{h(p)}M^{\prime}+T^{1,0}_{h(p)}M^{\prime}+h_{*}\mathbb{C}T_{p}M=\mathbb{C}T_{h(p)}M^{\prime}\).

If \(M\) is actually strongly pseudoconvex and \(h:M\to M^{\prime}\) is CR-transversal, then \(h_{*}T_{p}^{0,1}M\) has maximal dimension and intersects \(T_{p^{\prime}}^{0,1}\eta_{p^{\prime}}\) trivially. This is very well known and a key component in the proof of many regularity results (see e.g. [29]). We summarize for later use the following statement:

**Lemma 7**.: _Consider a pseudoconvex hypersurface \(M^{\prime}\), a strongly pseudoconvex hypersurface \(M\) and a \(C^{2}\)-smooth CR-transversal CR map \(h:M\to M^{\prime}\) mapping \(p\in M\) to \(p^{\prime}\in M^{\prime}\). Then \(h\) is an immersion, \(\dim h_{*}T_{p}^{0,1}M=\dim_{CR}M\), and \(h_{*}T_{p}^{0,1}M\cap\mathcal{N}_{p^{\prime}}=\{0\}\), where \(\mathcal{N}_{p^{\prime}}\subseteq T_{p^{\prime}}^{0,1}M^{\prime}\) denotes the null space of the Levi form of \(M^{\prime}\) at \(p^{\prime}\)._

With this fact in mind, it is clear that strict pseudoconvexity of \(M\) and CR-transversality of \(h\) come together to imply that there are many CR directions available along \(h(M)\) where some obstructions to the existence of CR families of infinitely tangential formal complex curves, encoded in \(R\), might exist. If \(M\) is just pseudoconvex, these CR directions may be obtained by considering a strongly pseudoconvex slice of \(M\) of maximal dimension; this observation is the basis of Theorem 2.

Proof of Theorem 2.: Take an open neighborhood \(U\subseteq M^{\prime}\) of \(p^{\prime}\) where \(\nu_{q^{\prime}}<n_{+}\) for all \(q^{\prime}\in U\). Let \(\Sigma\subseteq\mathbb{C}^{N}\) be a complex manifold of dimension \(1+n_{+}\) such that \(T_{p}\Sigma+T_{p}^{c}M=T_{p}\mathbb{C}^{N}\), and such that the Levi form of \(M\) is positive definite when restricted to \(T_{p}^{0,1}M\cap T_{p}^{0,1}\Sigma\). Since both transversality and positive definiteness are open conditions, after possibly shrinking \(U\), the intersection \(S:=\Sigma\cap M\cap h^{-1}(U)\) is a strongly pseudoconvex CR submanifold of \(M\) with \(n_{+}=\dim_{CR}S\).
Note that \(T_{q}S\) for \(q\in S\) always contains a transversal tangent vector, hence the restricted map \(h|_{S}\) is CR transversal if and only if \(h\) is. By Lemma 7, \(h|_{S}\) is an immersion, \(\dim h_{*}T_{q}^{0,1}S=n_{+}\) and \(h_{*}T_{q}^{0,1}S\cap T_{h(q)}^{0,1}\eta=\{0\}\) for any \(q\in S\), where \(\eta\) denotes the Levi foliation of \(M^{\prime}\), and thus \(T^{0,1}\eta\) is the Levi null space. This is precisely the situation of Corollary 2, hence \(r_{1}\geq 1+\dim_{CR}S=1+n_{+}\), after possibly restricting \(U\) again. Assume now that \(h\) was nowhere smooth on an open set \(\tilde{O}\subset h^{-1}(U)\). Then Theorem 5, with \(k=1\), \(l=1+n_{+}\), would yield \(q\in\tilde{O}\) (mapped to \(q^{\prime}\in M^{\prime}\)) and a continuously differentiable CR family of formal complex curves \((\Gamma_{\xi})_{\xi\in\tilde{O}_{1}}\) defined on a neighborhood \(\tilde{O}_{1}\subseteq\tilde{O}\) of \(q\). But now Proposition 1 implies \(h_{*}T_{q}^{0,1}M\subseteq\ker R_{q^{\prime}}(\cdot,\gamma_{t}(q))\), thus we find that

\[K+\nu_{q^{\prime}} \geq\dim_{\mathbb{C}}\ker R_{q^{\prime}}(\cdot,\gamma_{t}(q))\geq\dim_{\mathbb{C}}\left(T_{q^{\prime}}^{0,1}\eta+h_{*}T_{q}^{0,1}M\right)\geq\dim_{\mathbb{C}}\left(T_{q^{\prime}}^{0,1}\eta+h_{*}T_{q}^{0,1}S\right)=K+n_{+},\]

which contradicts the assumption \(\nu_{q^{\prime}}<n_{+}\). Therefore \(h\) must be \(C^{\infty}\)-smooth on a dense open subset of \(O:=h^{-1}(U)\).

### Connection to \(2\)-nondegeneracy

In this section, we discuss the relationship of the invariant \(\nu\) to finite nondegeneracy. We first recall that a CR submanifold \(M\subset\mathbb{C}^{N}\) is called \(k\)-nondegenerate, for \(k\in\mathbb{N}\), at a point \(p\in M\) if

\[T_{p}^{0,1}\mathbb{C}^{N}=\left\langle\{\bar{L}_{1}\cdots\bar{L}_{j}\rho_{w}(p):\rho\in\mathscr{I}_{M}(p),\bar{L}_{1},\ldots,\bar{L}_{j}\in\mathcal{V}_{p}(M),0\leq j\leq k\}\right\rangle;\]

equivalently, its identity map satisfies \(r_{k}(p)=N\), where the \(r_{k}\) are defined in section 2.3. It turns out that \(1\)-nondegeneracy is equivalent to Levi-nondegeneracy, by essentially the same argument as in the proof of Lemma 2 in section 3.1. For a pseudoconvex hypersurface which is not strictly pseudoconvex, the next step is \(2\)-nondegeneracy.

**Proposition 2**.: _Let \(M\subset\mathbb{C}^{N}\) be a uniformly Levi-degenerate hypersurface with \(N-K-1\) nonzero Levi eigenvalues and Levi foliation \(\eta\), and for \(p\in M\), consider \(\nu_{p}\) as in Lemma 5. Then the following are equivalent:_

* \(M\) _is_ \(2\)_-nondegenerate at_ \(p\)_,_
* _There exists_ no _germ of a section_ \(V\in\Gamma_{p}(T^{1,0}\eta)\) _such that_ \([\bar{L}_{1},[\bar{L}_{2},V]]|_{p}\in T^{1,0}_{p}M\oplus T^{0,1}_{p}M\) _for all_ \(\bar{L}_{1},\bar{L}_{2}\in\Gamma_{p}(T^{0,1}M)\)_,_
* \(\nu_{p}<N-K-1\)_, i.e._ \(\nu_{p}\) _is not maximal._

Proof.: First, we note that by the product rule and the fact that two smooth defining functions of hypersurfaces differ by a smooth nonzero factor, it is sufficient to consider a single defining function \(\rho\). To work in a covariant setting, we consider \(T^{\prime}M=\langle dz_{1}|_{M},\ldots,dz_{N}|_{M}\rangle\subset\mathbb{C}T^{*}M\), the space of \((1,0)\)-forms on \(M\). Let the Lie derivative of a \((1,0)\)-form \(\omega\) with respect to a CR vector field \(\bar{L}\) be denoted by \(\mathcal{T}_{\bar{L}}\omega\).
By suitably extending \(\bar{L}\) and \(\omega\) to a neighborhood of \(p\) in \(\mathbb{C}^{N}\), we compute \[\left(\mathcal{T}_{\bar{L}}\omega\right)_{j} =\mathcal{T}_{\bar{L}}\omega(\frac{\partial}{\partial z_{j}})=d \omega(\bar{L},\frac{\partial}{\partial z_{j}})+\frac{\partial}{\partial z_{j }}\omega(\bar{L})=d\omega(\bar{L},\frac{\partial}{\partial z_{j}})\] \[=\bar{L}\omega(\frac{\partial}{\partial z_{j}})-\frac{\partial}{ \partial z_{j}}\omega(\bar{L})-\omega([\bar{L},\frac{\partial}{\partial z_{j} }])=\bar{L}\omega_{j},\] using Cartan's magic formula and the fact that \(\omega\) annihilates the \(0,1\)-vector fields \(\bar{L}\) and \([\bar{L},\frac{\partial}{\partial z_{j}}]\). Hence, in this setting, taking Lie derivatives means just taking derivatives component-wise. For a defining function \(\rho\), consider now \(i\partial\rho=i\sum_{j}\rho_{z_{j}}dz_{j}\). This form differs from the real contact form \(\Theta=i(\partial\rho-\bar{\partial}\rho)\) only by a multiple of \(d\rho\), which vanishes along \(M\). By the previous calculation, \(M\) is \(2\)-nondegenerate if and only if \(\partial\rho\), \(\mathcal{T}_{\bar{L}_{1}}\rho\) and \(\mathcal{T}_{\bar{L}_{2}}\mathcal{T}_{\bar{L}_{1}}\rho\) together span all of \(T^{\prime}_{p}M\), where \(\bar{L}_{1}\) and \(\bar{L}_{2}\) range across all germs of CR vector fields at \(p\). The hypersurface \(M\) is \(2\)-_degenerate_ at \(p\) if and only if there exists a nonzero vector \(V\in\mathbb{C}T_{p}M\) such that \(\partial\rho(V)=0\), \((\mathcal{T}_{\bar{L}_{1}}\partial\rho)(V)=0\) and \((\mathcal{T}_{\bar{L}_{2}}\mathcal{T}_{\bar{L}_{1}}\partial\rho)(V)=0\). The first condition ensures that \(V\in T^{1,0}_{p}M+T^{0,1}_{p}M\), and since \(T^{\prime}M\) annihilates \(T^{0,1}M\), we may assume without loss of generality that \(V\in T^{1,0}_{p}M\). Next, we calculate \[0=(\mathcal{T}_{\bar{L}_{1}}\partial\rho)(V)=\tfrac{1}{i}(\mathcal{T}_{\bar{L} _{1}}\Theta)(V)=\tfrac{1}{i}d\Theta(\bar{L}_{1},V)=2\mathcal{L}_{\Theta}(\bar {L}_{1},\bar{V}),\] for all \(\bar{L}_{1}\in T^{0,1}M\), hence \(V\in T^{1,0}\eta\). We extend \(V\) to a local section of \(T^{1,0}\eta\). Then, the third condition yields \[0 =i(\mathcal{T}_{\bar{L}_{2}}\mathcal{T}_{\bar{L}_{1}}\partial \rho)(V)=\left(\mathcal{T}_{\bar{L}_{2}}d\Theta(\bar{L}_{1},\cdot)\right)=d \left(d\Theta(\bar{L}_{1},\cdot)\right)(\bar{L}_{2},V)\] \[=\bar{L}_{2}d\Theta(\bar{L}_{1},V)-Vd\Theta(\bar{L}_{1},\bar{L}_{2 })-d\Theta(\bar{L}_{1},[\bar{L}_{2},V])\] \[=-\bar{L}_{1}\Theta([\bar{L}_{2},V])+[\bar{L}_{2},V]\Theta(\bar{ L}_{1})+\Theta([\bar{L}_{1},[\bar{L}_{2},V]])=\Theta([\bar{L}_{1},[\bar{L}_{2},V]]).\] Thus, \(M\) is \(2\)-degenerate at \(p\) if and only if there exists a section \(V\in\Gamma_{p}(T^{1,0}\eta)\) such that \([\bar{L}_{1},[\bar{L}_{2},V]]|_{p}\in T^{1,0}_{p}M\oplus T^{0,1}_{p}M\), for all \(\bar{L}_{1},\bar{L}_{2}\in\Gamma_{p}(T^{0,1}M)\). Writing \(V=\sum_{j}V^{j}\frac{\partial}{\partial z_{j}}\) and \(\bar{L}_{2}=\sum_{j}\bar{L}^{j}_{2}\frac{\partial}{\partial z_{j}}\), we find that \([\bar{L}_{2},V]=\sum_{j}(\bar{L}_{2}V^{j})\frac{\partial}{\partial z_{j}}-\sum_ {j}(V\bar{L}^{j}_{2})\frac{\partial}{\partial z_{j}}\). As \([\bar{L}_{1},[\bar{L}_{2},V]]\in T^{1,0}_{p}M\oplus T^{0,1}_{p}M\) for any \(\bar{L}_{1}\in\mathcal{V}_{p}\), the \((1,0)\)-part of \([\bar{L}_{2},V]\) must lie in the Levi null space of \(M\), i.e. \(\sum_{j}(\bar{L}_{2}V^{j})\frac{\partial}{\partial z_{j}}|_{p}\in T^{1,0}_{p}\eta\). 
Almost tautologically, this is the case if and only if \(V_{r}:=(V^{1},\ldots,V^{N})\in T\eta\subseteq T\mathbb{C}^{N}\) satisfies \(\bar{L}_{2}V_{r}\in T\eta\) for all CR vector fields \(\bar{L}_{2}\), i.e. if and only if \(\nu_{p}=N-K-1\).

The calculation of \(\nu\) for boundaries of classical symmetric domains in section 6 thus also provides a way of concluding that these hypersurfaces are \(2\)-nondegenerate. In [29], Xiao proves that a merely \(C^{2}\)-smooth CR map from a strongly pseudoconvex hypersurface with \(n_{+}\) positive Levi eigenvalues into a \(2\)-nondegenerate uniformly pseudoconvex hypersurface of precisely \(n_{+}\) positive Levi eigenvalues must be \(C^{\infty}\)-smooth _everywhere_. The point is that, under these conditions on the Levi eigenvalues, the image of the source manifold already contains all relevant vector fields to conclude that the map itself is \(2\)-nondegenerate in the sense of Lamel [18], and thus as regular as the source and target manifolds themselves. The invariant \(\nu_{p}\) contains more subtle information than mere \(2\)-nondegeneracy of the target manifold, as is evinced by the sharp bounds on the number of Levi eigenvalues of the source manifold achieved in Theorem 3. One could hope that the condition \(\nu_{p^{\prime}}<n_{+}\) from Theorem 2 already suffices to conclude that the CR map \(h\) itself is \(2\)-nondegenerate (at least on a dense open subset), but this is very likely true only in Xiao's special case. It is not at all easy to find interesting examples of this behaviour, as everywhere finitely nondegenerate, pseudoconvex hypersurfaces of Levi number exceeding \(2\) are extremely scarce and notoriously hard to construct; we refer the reader to the discussion in Baouendi, Ebenfelt, and Zaitsev's paper [2].

## 5. Applications to holomorphic maps

In this section, we give the proof of Corollary 1 in which we apply our results on CR transversal CR maps between smooth hypersurfaces in complex Euclidean space to holomorphic maps which extend to CR maps between the smooth part of their source and target domains' boundaries. Before presenting the proof, we need to collect some preliminary results. It is a well known fact that such holomorphic maps give rise to CR transversal boundary maps, as long as the target domain satisfies a suitable convexity condition (cf. [9] for strongly pseudoconvex or [29] for convex target domains). Indeed, mere pseudoconvexity of the target suffices to guarantee CR transversality of the boundary map.

**Proposition 3**.: _Let \(\Omega\subseteq\mathbb{C}^{N}\), \(\Omega^{\prime}\subseteq\mathbb{C}^{N^{\prime}}\) be domains and let \(M\subseteq\partial\Omega\), \(M^{\prime}\subseteq\partial\Omega^{\prime}\) be smooth real hypersurfaces contained in the smooth parts of \(\partial\Omega\) and \(\partial\Omega^{\prime}\), respectively. Suppose that \(\Omega^{\prime}\) is pseudoconvex at \(M^{\prime}\)._

_Then any holomorphic map \(F:\Omega\to\Omega^{\prime}\) which extends to a map of regularity \(C^{1,\epsilon}\) on \(\Omega\cup M\) and maps \(M\) into \(M^{\prime}\) is CR transversal along \(M\)._

This proposition as well as its proof parallel Proposition 9.10.5 in [1], but as the latter result is only stated for equidimensional mappings with smooth boundary extension, a proof for Proposition 3 shall nevertheless be presented. The proof hinges on the following observation by Diederich & Fornaess [7, p. 133, Remark b].
**Lemma 8**.: _Let \(\Omega\subseteq\mathbb{C}^{N}\) be a pseudoconvex domain and \(p\in\partial\Omega\) be a point in the smooth part of its boundary. Take any \(\eta\in(0,1)\). Then there exists a neighborhood \(U\) of \(p\) and a defining function \(\rho\) of \(\Omega\) on \(U\) such that \(-(-\rho)^{\eta}\) is strictly plurisubharmonic on \(\Omega\cap U\)._

As paper [7] is mainly interested in global properties of pseudoconvex domains, the proof of Lemma 8 is merely hinted at in a remark. For a full proof, see [1, Thm. 2.2.17].

Proof of Proposition 3.: Suppose that \(F:\Omega\to\Omega^{\prime}\) extends to a CR map that is not CR-transversal at a point \(p\in M\). Choose \(0<\delta<\epsilon\), and let \(\eta=\frac{1+\delta}{1+\epsilon}\). By Lemma 8, there exists a neighborhood \(U\) of \(F(p)\) and a defining function \(\rho\in C^{\infty}(U,\mathbb{R})\) for \(M^{\prime}\) such that \(U\cap\Omega^{\prime}=\rho^{-1}(-\infty,0)\) and such that \(-(-\rho)^{\eta}\) is strictly plurisubharmonic on \(U\cap\Omega^{\prime}\). By assumption, the normal derivative of \(\rho\circ F\) vanishes at \(p\), hence \(\rho\circ F\) has a critical point at \(p\). By Hölder continuity of the derivative, \(\rho\circ F(z)=\mathcal{O}(|z-p|^{1+\epsilon})\) near \(p\). But this implies \(-(-\rho)^{\eta}\circ F(z)=\mathcal{O}(|z-p|^{1+\delta})\) on \(F^{-1}(U\cap\Omega^{\prime})\), and thus the normal derivative of \(-(-\rho)^{\eta}\circ F\) at \(p\) vanishes as well. Since \(-(-\rho)^{\eta}\circ F\), as a pull-back of a subharmonic function along a holomorphic map, is subharmonic as well, and since \(-(-\rho)^{\eta}\circ F\) clearly has a local maximum at \(p\), the normal derivative of \(-(-\rho)^{\eta}\circ F\) at \(p\) is nonzero by the Hopf lemma, a contradiction.

A holomorphic map inherits the regularity of its induced boundary map, immediately allowing the transferral of Theorem 2 to holomorphic maps.

Proof of Corollary 1.: As \(\Omega^{\prime}\) is pseudoconvex at \(M^{\prime}\), Proposition 3 yields CR transversality of the boundary map \(h:=H|_{M}\). Thus, the hypothesis of Theorem 2 is met, and \(h\) is \(C^{\infty}\)-smooth on a dense open subset \(O\subseteq M\) of a neighborhood of \(p\) in \(M\). By Theorem 7.5.1 in [1], \(H\) then extends to a \(C^{\infty}\)-smooth map on \(O\cup\Omega\).

**Remark 4**.: In particular, if \(H:\Omega\to\Omega^{\prime}\) is a _proper_ holomorphic map and extends to a \(C^{N^{\prime}-n_{+}}\)-smooth map on \(\bar{\Omega}\), it maps \(M\) into the topological boundary of \(\Omega^{\prime}\). If a point \(p\in M\) is known to be mapped to some \(p^{\prime}\in M^{\prime}\), an open neighborhood \(U\subseteq M\) of \(p\) is then also mapped into \(M^{\prime}\), and the hypothesis of Corollary 1 is satisfied.

## 6. Maps into boundaries of classical symmetric domains

Before we discuss the CR geometry of the boundaries of the classical symmetric domains that we need to apply our results, let us recall some basic facts. We call a bounded domain \(\Omega\subset\mathbb{C}^{N}\) a _bounded symmetric domain_ if it exhibits a biholomorphic involution \(h_{p}:\Omega\to\Omega\) for every point \(p\in\Omega\) which has \(p\) as an _isolated_ fixed point and which satisfies \(Dh_{p}(p)=-\mathbb{I}_{N}\) (cf. [27]). A bounded domain \(\Omega\) may be equipped with the _Bergman metric_, a Hermitian metric with the property that each biholomorphism on \(\Omega\) is an isometry.
Considered together with this metric, a bounded symmetric domain \(\Omega\) becomes a special case of a _Hermitian symmetric space_, i.e. a manifold equipped with a Hermitian metric such that each point is an isolated fixed point of some involutive isometry. It can be shown that the group of isometries of such manifolds acts transitively; therefore they can be expressed as the coset space of the stabilizer group of \(\Omega\), defined as the group of isometries leaving a chosen point fixed, in the full isometry group of \(\Omega\) (cf. [8]). This allows the classification of bounded symmetric domains by Lie group techniques. According to [27], any bounded symmetric domain is biholomorphic to a direct product of _irreducible_ bounded symmetric domains. Irreducible bounded symmetric domains fall into four series of classical symmetric domains as well as two exceptional cases (as classified by Cartan, cf. [5]). The study of regularity of proper holomorphic maps into classical symmetric domains, and consequently of CR maps into their boundaries, has been taken up by Xiao in [29]; for important applications of maps between classical symmetric domains, we refer the reader to e.g. Kim and Zaitsev's paper on rigidity of these maps [13]. We will adopt Xiao's naming convention for the classical symmetric domains, which differs from Cartan's original numbering only in swapping domains of the third and fourth kind.

Finally, let us briefly recall the singular value decomposition from linear algebra. A matrix \(A\in\mathbb{C}^{m\times n}\), \(m\leq n\), may always be decomposed as \(A=U\Sigma V^{*}\), where

1. \(U\in\mathbb{C}^{m\times m}\) is a unitary matrix, forming a basis of eigenvectors for \(AA^{*}\),
2. \(\Sigma\in\mathbb{C}^{m\times n}\) is a diagonal matrix with nonnegative entries, and
3. \(V\in\mathbb{C}^{n\times n}\) is another unitary matrix, forming a basis of eigenvectors for \(A^{*}A\).

The diagonal entries of \(\Sigma\), \(0\leq\sigma_{1}\leq\cdots\leq\sigma_{m}\), are called the _singular values_ of \(A\). They are given by the square roots of the eigenvalues of the (Hermitian, positive semidefinite) matrix \(AA^{*}\), or equivalently by the square roots of the \(m\) largest eigenvalues of \(A^{*}A\). The largest singular value of \(A\) yields the operator norm of \(A\) with respect to the standard scalar product on \(\mathbb{C}^{m}\) and \(\mathbb{C}^{n}\). The matrix \(V\) of _right singular vectors_ may be freely chosen among the orthonormal eigenvector bases of \(A^{*}A\), which then fixes \(U\Sigma=AV\), and therefore those columns of \(U\) corresponding to nonzero singular values, the _left singular vectors_.

#### 6.0.1. Classical domains of the first kind

We will denote the examples in the first series by \(D_{I}^{m,n}\) for \(1\leq m\leq n\). According to Cartan [5], they may be realized as

\[D_{I}^{m,n}=\{Z\in\mathbb{C}^{m\times n}:\mathbb{I}_{m}-ZZ^{*}\text{ is strictly positive definite}\}.\]

The condition \(\mathbb{I}_{m}-ZZ^{*}>0\) is equivalent to the largest singular value of \(Z\) being strictly bounded by one, i.e. \(\|Z\|<1\), where \(\|\cdot\|\) always denotes the usual Euclidean matrix norm (or vector norm, respectively). The boundary of \(D_{I}^{m,n}\) is thus given by the set of matrices of norm \(1\), equivalently, by those matrices which have \(1\) as their largest singular value. This set is a smooth manifold where exactly one singular value is \(1\).
To see this, consider the characteristic polynomial \(P(\lambda)=\det(\lambda\mathbb{I}_{m}-ZZ^{*})\) of \(ZZ^{*}\), which has a simple zero at \(1\) by assumption. Now \(\rho(Z):=\det(\mathbb{I}_{m}-ZZ^{*})\) has nonvanishing gradient, since

\[\rho(Z+\mu Z)=\det(\mathbb{I}_{m}-|1+\mu|^{2}ZZ^{*})=|1+\mu|^{2m}P(|1+\mu|^{-2})\]

has nonvanishing derivative, providing us with a defining equation. Let us denote this smooth piece of the boundary by \(M_{I}^{m,n}\). Because \(M_{I}^{m,n}\) bounds the convex region \(D_{I}^{m,n}\), it is a pseudoconvex real hypersurface. The singular value decomposition will translate to a foliation of \(M_{I}^{m,n}\) by complex (in fact, complex linear) manifolds, setting \(M_{I}^{m,n}\) up as an interesting example case for applying Theorem 2. The following result should be compared to Proposition 1.2 in [29], where only strongly pseudoconvex hypersurfaces in \(\mathbb{C}^{m+n-1}\) are considered.

**Proposition 4**.: _Let \(m\geq n\geq 2\) and \(M\) be a pseudoconvex hypersurface with at least \(n_{+}\geq m+n-3\) positive Levi eigenvalues. Then every CR-transversal CR map \(h\) of regularity \(C^{mn-n_{+}}\) from \(M\) into \(M_{I}^{m,n}\) is generically smooth._

In the course of our proof, we will be utilizing the boundary orbit theorem, which states that the Lie group of biholomorphic automorphisms of \(D_{I}^{m,n}\) also acts transitively on \(M_{I}^{m,n}\) by ambient biholomorphisms (see e.g. [28] or [24, proof of Lemma 2.2.3]). This allows us to analyze \(M_{I}^{m,n}\) around points which are particularly easy to understand from the matrix model alone, namely the rank one matrices in \(M_{I}^{m,n}\). Indeed, suppose \(h:M\to M_{I}^{m,n}\) is nowhere smooth on a neighborhood \(O\subset M\) of a point \(p\in M\). Any matrix \(ab^{*}\) for vectors \(a\in\mathbb{C}^{m}\), \(b\in\mathbb{C}^{n}\) of unit norm is contained in \(M_{I}^{m,n}\), since its only nonzero singular value is \(1\). By the boundary orbit theorem, there exists a biholomorphic map \(F_{h(p)}\) defined on a neighborhood of \(h(p)\) mapping \(h(p)\) to \(ab^{*}\in M_{I}^{m,n}\) and \(M_{I}^{m,n}\) into itself. Then \(\tilde{h}:=F_{h(p)}\circ h\) is a CR-transversal CR map taking \(p\) to \(ab^{*}\), which is nowhere smooth on \(O\) as well. At \(ab^{*}\), we check directly that the prerequisites to apply Theorem 2 are fulfilled.

**Lemma 9**.: _Let \(a\in\mathbb{C}^{m},b\in\mathbb{C}^{n}\) be unit vectors. Around \(ab^{*}\), the pseudoconvex hypersurface \(M_{I}^{m,n}\) is foliated by \((m-1)\times(n-1)\)-dimensional complex (linear) manifolds. Its Levi form has exactly \(m+n-2\) positive eigenvalues, and \(\nu_{ab^{*}}=m+n-4\)._

If \(\tilde{h}\) were nowhere smooth around \(p\), this would contradict Theorem 2, as \(\nu_{ab^{*}}=m+n-4<n_{+}\). This proves Proposition 4.

Proof of Lemma 9.: Let \(\Sigma\subset\mathbb{C}^{m\times n}\) be the set of \(m\times n\) matrices of rank (exactly) \(1\), which is an \((m{+}n{-}1)\)-dimensional holomorphic manifold containing \(ab^{*}\). In linear coordinates such that \(a=(1,0,\ldots,0)^{T}\in\mathbb{C}^{m}\) and \(b=(1,0,\ldots,0)^{T}\in\mathbb{C}^{n}\), \(\Sigma\) is parametrized holomorphically by

\[(z_{1},\ldots,z_{m},w_{2},\ldots,w_{n})\mapsto(z_{1},\ldots,z_{m})^{T}(1,w_{2},\ldots,w_{n})\]

around \(ab^{*}=(1,0,\ldots,0)^{T}(1,0,\ldots,0)\). To see explicitly that this map is one-to-one near \(ab^{*}\), for a matrix \(Z\in\Sigma\), let \(w\) be the (unique) intersection of \((\ker Z)^{\perp}\) and \(b+\langle b\rangle^{\perp}\).
Then \(w^{*}=(1,w_{2},\ldots,w_{n})\) and \(Z(w)/\|w\|^{2}=(z_{1},\ldots,z_{m})^{T}\). The hypersurface \(S:=\Sigma\cap M_{I}^{m,n}\) of rank one matrices with norm \(1\) is strongly pseudoconvex. Indeed, because \(\|uv^{*}\|=\|u\|\|v\|\), a defining equation for \(S\) is given by \[\rho(z_{1},\ldots,z_{m},w_{2},\ldots,w_{n})=(|z_{1}|^{2}+\cdots+|z_{m}|^{2})(1 +|w_{2}|^{2}+\cdots+|w_{n}|^{2})-1=0,\] with (real) Hessian \(2\mathbb{I}_{2(m+n-1)}\) at \(ab^{*}\), implying that \(S\) is actually strongly convex. The singular value decomposition expresses any matrix \(A\in M_{I}^{m,n}\) as \(uv^{*}+B\), where \(u\) and \(v\) are unit singular vectors (unique up to simultaneous multiplication by \(\lambda\in\mathbb{S}^{1}\)) corresponding to the lone singular value \(1\), and the uniquely determined matrix \(B\in\mathbb{C}^{m\times n}\) satisfies \(Bv=0\), \(u^{*}B=0\) and \(\|B\|<1\). Conversely, every matrix \(uv^{*}+B\) of this type lies in \(M_{I}^{m,n}\). The set of all \(B\in\mathbb{C}^{m\times n}\) with \(u^{*}B=0\) and \(Bv=0\) is an \((m-1)\times(n-1)\)-dimensional vector space, and thus the affine planes \[\eta_{uv^{*}}:=\{uv^{*}+B:B\in\mathbb{C}^{m\times n},u^{*}B=0,Bv=0\}\] for \(uv^{*}\in S\) provide the desired foliation \(\eta\) of \(M_{I}^{m,n}\) near \(ab^{*}\). The tangent bundle \(T\eta\) at \(S\) is just given by \(T_{uv^{*}}\eta=\{B\in\mathbb{C}^{m\times n}:Bv=0,u^{*}B=0\}\). Having established the setup from Lemma 6, all that remains is to compute the tensor \(R^{S}\) at \(ab^{*}\). Take \(B_{0}\in T_{ab^{*}}\eta\), \(B_{0}\neq 0\). If we define \(B(Z)\) for \(Z\in\mathbb{C}^{m\times n}\) by \[B(Z)=(\mathbb{I}_{m}-ZZ^{*})B_{0}(\mathbb{I}_{n}-Z^{*}Z),\] then \(B(uv^{*})\) provides a section of \(T\eta\) along \(S\) satisfying \(B(ab^{*})=B_{0}\), since \[u^{*}B(uv^{*}) =u^{*}(\mathbb{I}_{m}-uu^{*})B_{0}(\mathbb{I}_{n}-vv^{*})=0,\] \[B(uv^{*})v =(\mathbb{I}_{m}-uu^{*})B_{0}(\mathbb{I}_{n}-vv^{*})v=0\text{ and}\] \[B(ab^{*}) =(\mathbb{I}_{m}-aa^{*})B_{0}(\mathbb{I}_{n}-bb^{*})=\mathbb{I}_ {m}B_{0}\mathbb{I}_{n}=B_{0}.\] Returning to \(S\), we work out that \(\mathfrak{V}:=\{a\beta^{*}+\alpha b^{*}:\alpha\in\left\langle a\right\rangle^ {\perp}\subset\mathbb{C}^{m},\beta\in\left\langle b\right\rangle^{\perp}\subset \mathbb{C}^{n}\}\) is the complex tangent space of \(S\) at \(ab^{*}\). To show that \(\mathfrak{V}\) is tangential, we take two curves \(\gamma_{1}:(-\varepsilon,\varepsilon)\to\mathbb{S}^{2m-1}\) and \(\gamma_{2}:(-\varepsilon,\varepsilon)\to\mathbb{S}^{2n-1}\) through \(a\) and \(b\), respectively, satisfying \(\dot{\gamma}_{1}(0)=\alpha\) and \(\dot{\gamma}_{2}(0)=\beta\). Now \(\gamma_{1}\gamma_{2}^{*}\) defines a curve in \(S\), and \(\frac{d}{dt}|_{t=0}\left(\gamma_{1}(t)\gamma_{2}(t)^{*}\right)=a\beta^{*}+ \alpha b^{*}\). The space \(\mathfrak{V}\) is parametrized in a complex linear way by \((\alpha,\bar{\beta})\mapsto a\bar{\beta}^{T}+\alpha b^{*}\), where \((\alpha,\bar{\beta})\) lies in the \((m{+}n{-}2)\)-dimensional complex subspace of \(\mathbb{C}^{m+n}\) defined by \(a^{*}\alpha=0\), \(\bar{\beta}^{T}b=\beta^{*}b=0\). To check that this map is indeed injective, test \(\alpha b^{*}+a\beta^{*}\) from right and left with \(b\) and \(a^{*}\), respectively, to obtain \(\alpha\) and \(\bar{\beta}\) again. Since the complex tangent space of \(S\) has only \(\dim_{\mathbb{C}}\Sigma-1=m+n-2\) dimensions, \(T_{ab^{*}}^{c}S=\mathfrak{V}\) follows. 
Consider a CR vector \(\bar{L}|_{ab^{*}}=\frac{1}{2}(X+iJX)\) for \(X\in T_{ab^{*}}^{c}S\) and write \(X=a\beta^{*}+\alpha b^{*}\). Then the holomorphic curve \(\gamma(t)=(a+t\alpha)(b+\bar{t}\beta)^{*}\) is tangential to \(\bar{L}|_{ab^{*}}\) at \(t=0\). Observing that both \(\|a+t\alpha\|^{2}=1+|t|^{2}\|\alpha\|^{2}\) and \(\|b+\bar{t}\beta\|^{2}\) are constant to first order, we obtain \[\bar{L}|_{ab^{*}}B =\frac{d}{d\bar{t}}\Big{|}_{t=0}B\circ\gamma(t)=\frac{d}{d\bar{t}} \Big{|}_{t=0}\left(\mathbb{I}_{m}-\gamma(t)\gamma(t)^{*}\right)B_{0}\left( \mathbb{I}_{n}-\gamma(t)^{*}\gamma(t)\right)\] \[=\frac{d}{d\bar{t}}\Big{|}_{t=0}\left(\mathbb{I}_{m}-\|b+\bar{t} \beta\|^{2}(a+t\alpha)(a+t\alpha)^{*}\right)B_{0}\left(\mathbb{I}_{n}-\|a+t \alpha\|^{2}(b+\bar{t}\beta)(b+\bar{t}\beta)^{*}\right)\] \[=-a\alpha^{*}B_{0}-B_{0}\beta b^{*}.\] Recall that the scalar product in \(\mathbb{C}^{m\times n}\) may be written as \((A|B)=\operatorname{tr}(A^{*}B)\). By commuting matrices inside the trace we see that for any \(Z\) in \(T_{ab^{*}}\eta\), \[\operatorname{tr}\left((\bar{L}|_{ab^{*}}B)^{*}Z\right)=-\operatorname{tr}(B_ {0}^{*}\alpha a^{*}Z+b\beta^{*}B_{0}^{*}Z)=-\operatorname{tr}(B_{0}^{*}\alpha a ^{*}Z)-\operatorname{tr}(B_{0}^{*}Zb\beta^{*})=0,\] since \(a^{*}Z=Zb=0\). This means that \(R_{ab^{*}}^{S}(\bar{L}|_{ab^{*}},B_{0})=-a\alpha^{*}B_{0}-B_{0}\beta b^{*}\), because the projection onto \(T^{\perp}\eta\) is already taken care of, and \(\bar{L}|_{ab^{*}}\in\ker R_{ab^{*}}^{S}(\cdot,B_{0})\) if and only if \(a\alpha^{*}B_{0}+B_{0}\beta b^{*}=0\). Testing this with \(a^{*}\) and \(b\) from left and right, respectively, we obtain \(\alpha\in\ker B_{0}^{*}\) and \(\beta\in\ker B_{0}\). Since \(B_{0}^{*}a=0\) and \(B_{0}b=0\) already, both kernels have codimension at least one in \(\langle a\rangle^{\perp}\) and \(\langle b\rangle^{\perp}\), respectively, thus \(\dim_{\mathbb{C}}\ker R_{ab^{*}}^{S}(\cdot,B_{0})\leq m+n-4\), implying \(\nu_{ab^{*}}=m+n-4\). Proposition 4 gives all dimensions where a statement this simple is meaningful and possible. If \(M\) has more than \(m+n-2\) positive Levi eigenvalues, there is no CR-transversal map from \(M\) to \(M_{I}^{m,n}\), since, by Lemma 7, the target manifold would need to have at least as many positive Levi eigenvalues as the source. If \(M\) has less than \(m+n-3\) positive Levi eigenvalues, there are nowhere smooth CR-transversal CR maps into \(M_{I}^{m,n}\) of arbitrarily high regularity. **Example 3**.: _Let \(\hat{S}\) be the strongly pseudoconvex hypersurface given by the \((m-1)\times(n-1)\) matrices of rank one and norm \(1\). Then \(\hat{S}\) has \(m+n-4\) positive Levi eigenvalues. Take a \(C^{k}\), but nowhere \(C^{\infty}\)-smooth CR function \(\phi\) on \(\hat{S}\) with \(|\phi|<1\). Then \(h(Z)=\begin{pmatrix}Z&0\\ 0&\phi\end{pmatrix}\) gives a nowhere smooth CR-transversal CR map \(h:\hat{S}\to M_{I}^{m,n}\) of regularity \(C^{k}\)._ Proof.: Regularity is obvious from the component-wise definition. CR-transversality always holds for the graph map of a CR function, i.e. \(h:M\to M\times\mathbb{C}\), \(h(p)=(p,\phi(p))\), since \(T^{c}(M\times\mathbb{C})\cong T^{c}M\times\mathbb{C}\), \(T(M\times\mathbb{C})\cong TM\times\mathbb{C}\) and \(\mathbb{P}_{TM}\circ h_{*}\cong\operatorname{id}\) together imply that any transversal vector \(v\in TM\setminus T^{c}M\) maps into a transversal vector again. That \(h(Z)\in M_{I}^{m,n}\) follows from the singular value computations in the proof of Lemma 9. #### 6.0.2. 
Classical domains of the second kind These classical symmetric domains, denoted by \(D_{II}^{m}\), \(m\geq 2\), are given as the sets of _skew symmetric_ complex \(m{\times}m\) matrices with norm less than \(1\). Equivalently, \[D_{II}^{m}=\left\{Z\in\mathbb{C}^{m\times m}:Z^{T}=-Z,\mathbb{I}_{m}-Z^{*}Z>0\right\}.\] Every nonzero singular value of a skew symmetric matrix \(Z\) occurs with even multiplicity. Suppose \(u\) is a right singular vector corresponding to a singular value \(\sigma\), which is equivalent to \(Z^{*}Zu=\sigma^{2}u\). Then \(v:=\sigma^{-1}\overline{Zu}\) is another right singular vector corresponding to \(\sigma\), since it follows from \(Z^{*}=\bar{Z}^{T}=-\bar{Z}\) that \(Z^{*}Zv=-\sigma^{-1}\bar{Z}Z\overline{Zu}=-\sigma^{-1}\bar{Z}\overline{\bar{Z}Zu}=\sigma^{-1}\bar{Z}\overline{Z^{*}Zu}=\sigma\bar{Z}\bar{u}=\sigma^{2}v\), and \(v^{*}v=\sigma^{-2}u^{T}Z^{T}\bar{Z}\bar{u}=\sigma^{-2}\overline{u^{*}Z^{*}Zu}=1\). Furthermore, \(v\) and \(u\) are orthogonal, and \(u=-\sigma^{-1}\overline{Zv}\): \[\sigma u^{*}v=u^{*}\bar{Z}\bar{u}=(u^{*}\bar{Z}\bar{u})^{T}=u^{*}\bar{Z}^{T}\bar{u}=-u^{*}\bar{Z}\bar{u}\Rightarrow u^{*}v=0,\] \[-\sigma^{-1}\overline{Zv}=-\sigma^{-2}\overline{Z\overline{Zu}}=-\sigma^{-2}\bar{Z}Zu=\sigma^{-2}Z^{*}Zu=u.\] The boundary of \(D^{m}_{II}\) is given by those skew symmetric matrices with norm \(1\). It is a smooth manifold where exactly the largest two singular values are \(1\). We will denote this smooth piece of the boundary by \(M^{m}_{II}\). Let us postpone checking that \(M^{m}_{II}\) is a manifold to the proof of Lemma 10. **Proposition 5**.: _Let \(m\geq 4\) and \(M\) be a pseudoconvex hypersurface with at least \(n_{+}\geq 2m-7\) positive Levi eigenvalues. Then any CR-transversal CR map \(h\) of regularity \(C^{\frac{m(m-1)}{2}-n_{+}}\) from \(M\) into \(M^{m}_{II}\) is generically smooth._ Completely analogously to the situation of Proposition 4, this follows from the boundary orbit theorem for \(D^{m}_{II}\), which allows us to map each point in \(M^{m}_{II}\) to \(p^{\prime}:=ab^{T}-ba^{T}\) for orthonormal \(a,b\in\mathbb{C}^{m}\) by an automorphism of \(M^{m}_{II}\), and from the following structural properties. **Lemma 10**.: _Let \(a,b\in\mathbb{C}^{m}\) be orthonormal vectors. Around \(p^{\prime}:=ab^{T}-ba^{T}\in M^{m}_{II}\), the pseudoconvex hypersurface \(M^{m}_{II}\) is foliated by \(\frac{(m-2)(m-3)}{2}\)-dimensional complex (linear) manifolds. Its Levi form has exactly \(2m-4\) positive eigenvalues, and \(\nu_{p^{\prime}}=2m-8\)._ Proof.: As the intersection of the linear subspace of skew symmetric matrices with the convex matrix norm unit ball, \(D^{m}_{II}\) is convex and \(M^{m}_{II}\) is a pseudoconvex hypersurface. The set \(\Sigma\) of skew symmetric matrices of rank two is a \((2m{-}3)\)-dimensional complex manifold around \(p^{\prime}\). In coordinates where \(a=(1,0,\dots,0)^{T}\) and \(b=(0,1,0,\dots,0)^{T}\), it is parametrized around \(p^{\prime}\) by \[(z_{3},...,z_{m},w_{2},...,w_{m})\mapsto(1,0,z_{3},...,z_{m})^{T}(0,w_{2},...,w_{m})-(0,w_{2},...,w_{m})^{T}(1,0,z_{3},...,z_{m}).\] To check surjectivity, let \(\bar{u}\) and \(\bar{v}\) be two right singular vectors corresponding to the only nonzero singular value \(\sigma\), chosen such that \(Z\bar{u}=-\sigma v\) and \(Z\bar{v}=\sigma u\). Then \(Z=u(\sigma v)^{T}-(\sigma v)u^{T}\). Since \(a^{*}(ab^{T}-ba^{T})\bar{b}=1\), \(a^{*}Z\bar{b}\neq 0\) near \(p^{\prime}\), implying that at least one of \(a^{*}u\) or \(a^{*}v\) is nonzero.
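As a brief numerical aside (again an illustration only, not part of the argument), the even multiplicity of the singular values of skew symmetric matrices and the construction \(v=\sigma^{-1}\overline{Zu}\) of the partner singular vector can be observed directly:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5

A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Z = A - A.T                                    # complex skew symmetric: Z^T = -Z

s = np.linalg.svd(Z, compute_uv=False)
print(np.round(s, 6))                          # nonzero singular values come in equal pairs (one zero for odd m)

# If u is a right singular vector for sigma, then v = conj(Z u)/sigma is another one, orthogonal to u.
sigma = s[0]
u = np.linalg.svd(Z)[2][0].conj()              # a right singular vector for the largest singular value
v = np.conj(Z @ u) / sigma
print(np.allclose(Z.conj().T @ (Z @ v), sigma**2 * v),   # Z^* Z v = sigma^2 v
      np.isclose(np.vdot(u, v), 0))                      # u and v are orthogonal
```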
By substituting \((-v,u)\) for \((u,v)\) if necessary, we can arrange \(a^{*}u\neq 0\). Let \(\tilde{u}=u\), \(\tilde{v}=\sigma(v-\frac{a^{*}v}{a^{*}u}u)\), then \(a^{*}\tilde{v}=0\) and \(Z=\tilde{u}\tilde{v}^{T}-\tilde{v}\tilde{u}^{T}\). Note that \(a^{*}Z\bar{b}\neq 0\) now implies \(b^{*}\tilde{v}\neq 0\). Let \(z=\frac{1}{a^{*}\tilde{u}}\tilde{u}-\frac{b^{*}\tilde{u}}{(a^{*}\tilde{u})(b^{*}\tilde{v})}\tilde{v}\) and \(w=(a^{*}\tilde{u})\tilde{v}\). Then we have \(a^{*}z=1\), \(b^{*}z=0\), \(a^{*}w=0\) and \(Z=zw^{T}-wz^{T}\), proving that \(Z\) is in the range of our parametrization. To check that it is an immersion, it suffices to calculate, at \(p^{\prime}\), \(\frac{\partial}{\partial z_{j}}(zw^{T}-wz^{T})=e_{j}e_{2}^{T}-e_{2}e_{j}^{T}\), \(3\leq j\leq m\) and \(\frac{\partial}{\partial w_{k}}(zw^{T}-wz^{T})=e_{1}e_{k}^{T}-e_{k}e_{1}^{T}\), \(2\leq k\leq m\), since these are evidently \(\mathbb{C}\)-linearly independent matrices. The set \(S=\Sigma\cap M^{m}_{II}\) of skew symmetric rank two matrices with norm \(1\) is a strictly pseudoconvex hypersurface in \(\Sigma\). To show this, first note that for orthogonal vectors \(\alpha,\beta\in\mathbb{C}^{m}\), we have \[\|\alpha\beta^{T}-\beta\alpha^{T}\|^{2}=\|\big{(}\alpha\beta^{T}-\beta\alpha^{T}\big{)}^{*}\left(\alpha\beta^{T}-\beta\alpha^{T}\right)\|=\|\|\alpha\|^{2}\bar{\beta}\beta^{T}+\|\beta\|^{2}\bar{\alpha}\alpha^{T}\|\] \[=\|\alpha\|^{2}\|\beta\|^{2}\|\mathrm{diag}(1,1,0,\dots,0)\|=\|\alpha\|^{2}\|\beta\|^{2}.\] The standard Euclidean scalar product on \(\mathbb{C}^{m\times m}\) coincides with the Frobenius scalar product \((A|B)=\mathrm{tr}(A^{*}B)\). For a matrix \(Z=\alpha\beta^{T}-\beta\alpha^{T}\) with orthogonal \(\alpha,\beta\in\mathbb{C}^{m}\), the Frobenius norm works out to \[\sqrt{\mathrm{tr}(Z^{*}Z)}=\sqrt{\mathrm{tr}(\|\alpha\|^{2}\bar{\beta}\beta^{T}+\|\beta\|^{2}\bar{\alpha}\alpha^{T})}=\sqrt{2}\|\alpha\|\|\beta\|.\] Therefore, the Frobenius norm and the matrix norm agree up to a constant on \(\Sigma\), and \(S=\Sigma\cap M^{m}_{II}=\Sigma\cap\sqrt{2}\,\mathbb{S}^{2m^{2}-1}\) is strongly pseudoconvex, as it is given by the intersection of a complex manifold with a strongly convex hypersurface. The singular value decomposition expresses \(Z\in M^{m}_{II}\) as \(uv^{T}-vu^{T}+B\), where \(\bar{u}\) and \(\bar{v}\) are right singular vectors corresponding to the double singular value \(1\) satisfying \(Z\bar{u}=-v\) and \(Z\bar{v}=u\), and \(B\) satisfies \(B\bar{u}=B\bar{v}=0\), \(u^{*}B=v^{*}B=0\) and \(\|B\|<1\). By linearity, we have \(B^{T}=-B\), implying that \(B\bar{u}=B\bar{v}=0\) and \(u^{*}B=v^{*}B=0\) are equivalent. In coordinates where \(u=(1,0,0,\ldots,0)\) and \(v=(0,1,0,\ldots,0)\), the conditions \(B=-B^{T}\) and \(B\bar{u}=B\bar{v}=0\) simply mean that \(B\) is a skew symmetric matrix with the first two rows and columns empty. We conclude that the affine planes \(\eta_{uv^{T}-vu^{T}}=uv^{T}-vu^{T}+\{B\in\mathbb{C}^{m\times m}:B=-B^{T},B\bar{u}=B\bar{v}=0\}\) for \(uv^{T}-vu^{T}\in S\) provide a foliation of \(M^{m}_{II}\) by \(\frac{(m-2)(m-3)}{2}\)-dimensional complex manifolds, and that \(M^{m}_{II}\), as an embedded piece of a vector bundle over \(S\), is indeed a manifold. The complex tangent space \(T^{c}_{p^{\prime}}S\) at \(p^{\prime}=ab^{T}-ba^{T}\) will be given by the complex vector space \(\mathfrak{V}:=\{a\beta^{T}-\beta a^{T}+\alpha b^{T}-b\alpha^{T}:\alpha,\beta\in\langle a,b\rangle^{\perp}\subset\mathbb{C}^{m}\}\).
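The foliation just obtained can be probed numerically as well. The following sketch (an illustrative aside with an arbitrary choice of \(m\); not part of the proof) assembles a leaf point \(uv^{T}-vu^{T}+B\) with \(B\) skew symmetric, \(B\bar{u}=B\bar{v}=0\) and \(\|B\|<1\), and confirms that it is a skew symmetric matrix whose two largest singular values equal \(1\), i.e. a point of \(M^{m}_{II}\).

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6

# Orthonormal u, v in C^m.
Q, _ = np.linalg.qr(rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2)))
u, v = Q[:, 0], Q[:, 1]

# A skew symmetric B with B conj(u) = B conj(v) = 0 and ||B|| < 1:
# sandwich a generic skew matrix with the Hermitian projector P that kills conj(u) and conj(v).
P = np.eye(m) - np.outer(u.conj(), u) - np.outer(v.conj(), v)
S = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
S = S - S.T
B = P.T @ S @ P                       # still skew symmetric, annihilates conj(u) and conj(v)
B *= 0.5 / np.linalg.svd(B, compute_uv=False)[0]

Z = np.outer(u, v) - np.outer(v, u) + B
print(np.allclose(Z, -Z.T))                                   # Z is skew symmetric
print(np.round(np.linalg.svd(Z, compute_uv=False), 6))        # singular values: 1, 1, then the (paired) ones of B
```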
To show tangency, consider the complex curve \(\gamma(t)=(a+t\alpha)(b+t\beta)^{T}-(b+t\beta)(a+t\alpha)^{T}\), with tangent vector \(\gamma_{t}(0)=a\beta^{T}-\beta a^{T}+\alpha b^{T}-b\alpha^{T}\). It is contained in \(\Sigma\) and tangential to \(M^{m}_{II}\), the latter because \(\|\gamma(t)\|^{2}=\|a+t\alpha\|^{2}\|b+t\beta\|^{2}-|(a+t\alpha)^{*}(b+t\beta) |^{2}=\|a\|^{2}\|b\|^{2}+\mathcal{O}(|t|^{2})\), hence \(\gamma_{t}(0)\in T^{c}_{p^{\prime}}S\). Since \(\mathfrak{V}\) is isomorphic to \(\langle a,b\rangle^{\perp}\) by the map \(\gamma_{t}(0)\mapsto\big{(}\gamma_{t}(0)\bar{b},-\gamma_{t}(0)\bar{a}\big{)}\), it has \(2m-4=\dim_{\mathbb{C}R}S\) dimensions, and \(T^{c}_{p^{\prime}}S=\mathfrak{V}\). Given \(B_{0}\in T_{p^{\prime}}\eta\), the map \(B(Z)=(\mathbb{I}_{m}-ZZ^{*})B_{0}(\mathbb{I}_{m}-Z^{*}Z)\) again provides a section of \(T\eta\) along \(S\), since for orthonormal \(u,v\in\mathbb{C}^{m}\), \[B(uv^{T}-vu^{T}) =(\mathbb{I}_{m}-uu^{*}-vv^{*})B_{0}(\mathbb{I}_{m}-\bar{u}u^{T} -\bar{v}v^{T})=-B(uv^{T}-vu^{T})^{T}\] \[B(uv^{T}-vu^{T})\bar{u} =(\mathbb{I}_{m}-uu^{*}-vv^{*})B_{0}(\bar{u}-\bar{u})=0,\text{ and }B(uv^{T}-vu^{T})\bar{v}=0.\] Taking a CR vector \(\bar{L}|_{p^{\prime}}\in T^{0,1}_{p^{\prime}}S\) with real part \(\frac{1}{2}\left(a\beta^{T}-\beta a^{T}+\alpha b^{T}-b\alpha^{T}\right)\) and the curve \(\gamma(t)=(a+t\alpha)(b+t\beta)^{T}-(b+t\beta)(a+t\alpha)^{T}\), we first obtain \[\gamma(t)\gamma(t)^{*} =\|a+t\alpha\|^{2}(b+t\beta)(b+t\beta)^{*}+\|b+t\beta\|^{2}(a+t \alpha)(a+t\alpha)^{*}\] \[-t\bar{t}(\beta^{T}\bar{\alpha})(a+t\alpha)(b+t\beta)^{*}-t\bar{t }(\alpha^{T}\bar{\beta})(b+t\beta)(a+t\alpha)^{*}\] \[=(b+t\beta)(b+t\beta)^{*}+(a+t\alpha)(a+t\alpha)^{*}+\mathcal{O} (|t|^{2}),\] \[\gamma(t)^{*}\gamma(t) =\overline{(b+t\beta)}(b+t\beta)^{T}+\overline{(a+t\alpha)}(a+t \alpha)^{T}+\mathcal{O}(|t|^{2}),\] which simplifies the calculations for \(R^{S}_{p^{\prime}}\) significantly. We obtain \[\bar{L}|_{p^{\prime}}B =\frac{d}{d\bar{t}}\Big{|}_{t=0}B\circ\gamma(t)=\frac{d}{d\bar{t} }\Big{|}_{t=0}\left(\mathbb{I}_{m}-\gamma(t)\gamma(t)^{*}\right)B_{0}\left( \mathbb{I}_{m}-\gamma(t)^{*}\gamma(t)\right)\] \[=\frac{d}{d\bar{t}}\Big{|}_{t=0}\Big{(}\big{(}\mathbb{I}_{m}-(b+t \beta)(b+t\beta)^{*}-(a+t\alpha)(a+t\alpha)^{*}\big{)}B_{0}\] \[\cdot\big{(}\mathbb{I}_{m}-\overline{(b+t\beta)}(b+t\beta)^{T}- \overline{(a+t\alpha)}(a+t\alpha)^{T}\big{)}\big{)}+\mathcal{O}(|t|^{2})\Big{)}\] \[=-b\beta^{*}B_{0}-a\alpha^{*}B_{0}-B_{0}\beta b^{T}-B_{0}\bar{ \alpha}a^{T}.\] By the same calculations as in the proof of Lemma 9, we find that this already gives \(R^{S}_{p^{\prime}}(\bar{L}|_{p^{\prime}},B_{0})=-b\beta^{*}B_{0}-a\alpha^{*}B_{ 0}-B_{0}\beta\bar{b}^{T}-B_{0}\bar{\alpha}a^{T}\), and that \(\bar{L}|_{p^{\prime}}\in\ker R^{S}_{p^{\prime}}(\cdot,B_{0})\) if and only if \(\alpha,\beta\in\ker\bar{B}_{0}\). As a nonzero skew symmetric matrix, \(\bar{B}_{0}\) has at least two nonzero singular values, hence \(\operatorname{codim}_{\mathbb{C}}\ker\bar{B}_{0}\geq 2\). Since \(\bar{B}_{0}a=\bar{B}_{0}b=0\), and \(\alpha,\beta\in\langle a,b\rangle^{\perp}\), we obtain \(\operatorname{codim}_{\mathbb{C}}\ker R^{S}_{p^{\prime}}(\cdot,B_{0})\geq 4\) and thus \(\nu_{p^{\prime}}=2m-8\). As in Proposition 4, there are counterexamples to regularity if \(M\) has exactly \(2m-8\) positive Levi eigenvalues. **Example 4**.: _Let \(\hat{S}\subset M^{m-2}_{II}\) be the strongly pseudoconvex hypersurface of antisymmetric \((m-2)\times(m-2)\) matrices of rank two and norm \(1\). 
It has \(2m-8\) positive Levi eigenvalues. Given a \(C^{k}\)-smooth, but nowhere \(C^{\infty}\)-smooth CR function \(\phi\) on \(\hat{S}\) strictly bounded by \(1\), the map \(h:\hat{S}\to M^{m}_{II}\) given by_ \[h(Z)=\begin{pmatrix}Z&0&0\\ 0&0&-\phi\\ 0&\phi&0\end{pmatrix}\] _is a \(C^{k}\)-smooth, but nowhere \(C^{\infty}\)-smooth CR-transversal CR function._ #### 6.0.3. Classical domains of the third kind Domains of the third kind \(D^{m}_{III}\) are given by the sets of _symmetric_ complex \(m{\times}m\) matrices with norm less than \(1\). Equivalently, \[D^{m}_{III}=\left\{Z\in\mathbb{C}^{m{\times}m}:Z^{T}=Z,\mathbb{I}_{m}-Z^{*}Z>0 \right\}.\] Here the regularity result obtained from Theorem 2 only holds for \(M\subset\mathbb{C}^{m}\). **Proposition 6**.: _Let \(m\geq 2\) and \(M\) be a pseudoconvex hypersurface with at at least \(n_{+}\geq m-1\) positive Levi eigenvalues. Then every CR-transversal CR map \(h\) of regularity \(C^{\frac{m(m+1)}{2}-n_{+}}\) from \(M\) into \(M^{m}_{III}\) is generically smooth._ Let us note in passing that a nontrivial CR transversal CR map from \(M\) into \(M^{m}_{III}\) can only exist if the number of positive Levi eigenvalues of \(M\) does not exceed \(m-1\), thus this result only truly concerns uniformly pseudoconvex hypersurfaces. Proposition 6 is a consequence of the boundary orbit theorem for \(D^{m}_{III}\), which tells us that every point \(Z\in M^{m}_{III}\) may be mapped to \(aa^{T}\) for a unit vector \(a\in\mathbb{C}^{m}\) by an ambient biholomorphism mapping \(M^{m}_{III}\) into itself. Almost completely analogously to the case of \(M^{m,n}_{I}\), the following structural properties hold. **Lemma 11**.: _Let \(a\in\mathbb{C}^{m}\) be a unit vector. Around \(aa^{T}\in M^{m}_{III}\), the pseudoconvex hypersurface \(M^{m}_{III}\) is foliated by \(\frac{m(m-1)}{2}\)-dimensional complex (linear) manifolds. Its Levi form has exactly \(m-1\) positive eigenvalues, and \(\nu_{aa^{T}}=m-2\)._ Proof.: As the intersection of the convex set of matrices of norm less than \(1\) with the linear subspace of symmetric matrices, \(D^{m}_{III}\) is convex, and thus \(M^{m}_{III}\) is pseudoconvex. Let \(\Sigma\) be the \(m\)-dimensional complex manifold of symmetric matrices of rank \(1\). Near \(aa^{T}\), it is parametrized by \(z\mapsto zz^{T}\) for \(z\in\mathbb{C}^{m}\) with \(\Re(a^{*}z)>0\). To check bijectivity, write \(Z=\sigma uv^{*}\) for singular vectors \(u,v\in\mathbb{C}^{m}\) and the nonzero singular value \(\sigma\). Since \(u\) and \(v\) lie in the one-dimensional kernels of \(ZZ^{*}-\sigma^{2}\mathbb{I}_{m}=Z\bar{Z}-\sigma^{2}\mathbb{I}_{m}\) and \(Z^{*}Z-\sigma^{2}\mathbb{I}_{m}=\bar{Z}Z-\sigma^{2}\mathbb{I}_{m}\), respectively, we infer by Cramer's rule that \(\lambda u=\bar{v}\) for some \(\lambda\in\mathbb{S}^{1}\). Letting \(z:=\sigma^{\frac{1}{2}}\lambda^{-\frac{1}{2}}\bar{v}=\sigma^{\frac{1}{2}} \lambda^{\frac{1}{2}}u\), we find that \(Z=zz^{T}\). The only indeterminacy here - the choice of sign for the root \(\lambda^{\frac{1}{2}}\) - is fixed by requiring \(\Re(z^{*}a)>0\). The real hypersurface \(S\subset\Sigma\) of rank one matrices with norm \(1\) is strongly pseudoconvex. Indeed, as \(\|zz^{T}\|=\|z\|^{2}\), we have that \(z\in\mathbb{S}^{2m-1}\) iff \(zz^{T}\in S\), and the map \(z\mapsto zz^{T}\) provides a holomorphic double cover of \(S\) by \(\mathbb{S}^{2m-1}\), showing that \(S\cong\mathbb{R}P^{2m-1}\). 
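As a numerical aside (not part of the proof), the identity \(\|zz^{T}\|=\|z\|^{2}\) and the recovery of \(z\) up to sign from the singular value decomposition of \(zz^{T}\), exactly as in the bijectivity argument above, can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4

z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
Z = np.outer(z, z)                                          # the symmetric rank-one matrix z z^T

print(np.allclose(Z, Z.T))                                  # Z^T = Z
s = np.linalg.svd(Z, compute_uv=False)
print(np.isclose(s[0], np.linalg.norm(z) ** 2))             # ||z z^T|| = ||z||^2

# Recover z up to an overall sign: Z = sigma u v^* with conj(v) = lambda u for some |lambda| = 1,
# and z = sigma^{1/2} lambda^{1/2} u up to sign.
U, s, Vh = np.linalg.svd(Z)
u, v = U[:, 0], Vh[0].conj()
lam = np.vdot(u, v.conj())                                  # lambda = u^* conj(v)
z_rec = np.sqrt(s[0]) * np.sqrt(lam) * u
print(np.allclose(z_rec, z) or np.allclose(z_rec, -z))      # z is recovered up to sign
```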
The complex affine planes \(\eta_{ww^{T}}:=\{ww^{T}+B:B\bar{w}=0,B^{T}=B\}\) for \(w\in\mathbb{S}^{2m-1}\) provide a foliation of \(M^{m}_{III}\) near \(aa^{T}\). As in the proof of Lemma 9, the singular value decomposition expresses \(Z\in M^{m}_{III}\) as \(uv^{*}+B\), where \(u\), \(v\) are unit vectors (unique up to simultaneous multiplication by \(\lambda\in\mathbb{S}^{1}\)), and \(B\) satisfies \(B^{*}u=Bv=0\) and \(\|B\|<1\). Since as before, \(u\) and \(v\) lie in the one-dimensional kernels of \(Z\bar{Z}-\mathbb{I}_{m}\) and \(\bar{Z}Z-\mathbb{I}_{m}\), respectively, we may express \(Z=ww^{T}+B\) for \(w\in\mathbb{S}^{2m-1}\), implying \(B^{T}=B\) by linearity. The condition \(Bv=B^{*}u=0\) simplifies to \(B\bar{w}=0\). In coordinates where \(w=(1,0,\ldots,0)^{T}\in\mathbb{C}^{m}\), \(B\bar{w}=0\) just means that the first column is empty, a condition that is clearly linearly independent of \(B^{T}=B\) Therefore, the space defined by \(B^{T}=B\), \(B\bar{w}=0\) is a complex vector space of \(\frac{m(m-1)}{2}\) dimensions for \(w\) near \(a\). Given \(B_{0}\in T_{aa^{T}}\eta\), we prove that \(B(Z)=(\mathbb{I}_{m}-ZZ^{*})B_{0}(\mathbb{I}_{m}-Z^{*}Z)\) provides a section of \(T\eta\) along \(S\). For \(ww^{T}\in S\), \[B(ww^{T})\bar{w} =(\mathbb{I}_{m}-ww^{*})B_{0}(\mathbb{I}_{m}-\bar{w}w^{T})\bar{w} =(\mathbb{I}_{m}-ww^{*})B_{0}(\bar{w}-\bar{w})=0,\] \[B(ww^{T})^{T} =(\mathbb{I}_{m}-(\bar{w}w^{T})^{T})B_{0}^{T}(\mathbb{I}_{m}-(ww^ {*})^{T})=B(ww^{T})\text{ and }\] \[B(aa^{T}) =(\mathbb{I}_{m}-aa^{*})B_{0}(\mathbb{I}_{m}-\bar{a}a^{T})=B_{0}.\] Consider a CR vector \(\bar{L}|_{aa^{T}}\in T^{0,1}_{aa^{T}}S\). Complex tangent vectors \(\alpha\in T^{c}_{a}\mathbb{S}^{2m-1}\) are characterized by \(\alpha^{*}a=0\). Since \(z\mapsto zz^{T}\) is holomorphic and onto, we can just plug a suitable complex tangent \(t\mapsto a+t\alpha\) into this map to obtain a curve \(\gamma(t)=(a+t\alpha)(a+t\alpha)^{T}\) such that \(\bar{L}|_{aa^{T}}=\gamma_{*}\frac{d}{dt}|_{t=0}\). Then, after rewriting \(\gamma(t)=(a+t\alpha)(\bar{a}+\bar{t}\bar{\alpha})^{*}\), we obtain by the exact same calculation as in Lemma 9 that \(R^{S}_{aa^{T}}(\bar{L}|_{aa^{T}},B_{0})=-a\alpha^{*}B_{0}-B_{0}\bar{\alpha}a^ {T}\). By multiplying from the right with \(\bar{a}\), we find that \(\bar{L}|_{aa^{T}}\in\ker R^{S}_{aa^{T}}(\cdot,B_{0})\) if and only if \(B_{0}\bar{\alpha}=0\). Since \(\bar{a}\in\ker B_{0}\), the codimension of the kernel of \(B_{0}\) in \(\langle\bar{a}\rangle^{\perp}\) equals the codimension of the full kernel of \(B_{0}\), hence \(\dim_{\mathbb{C}}\ker R^{S}_{aa^{T}}(\cdot,B_{0})=m-1-\operatorname{codim}_{ \mathbb{C}}\ker B_{0}\leq m-2\), implying \(\nu_{aa^{T}}=m-2\). Here a counterexample for regularity of CR-transversal maps from source manifolds with less than \(m-1\) positive Levi eigenvalues may be constructed in the exact same fashion as in the case of \(M_{I}^{m,n}\). Let us instead consider a slightly different example map into \(M_{III}^{m}\). **Example 5**.: _Let \(\phi\) be a nowhere smooth CR function of regularity \(C^{k}\) on \(\mathbb{S}^{2m-3}\) strictly bounded by \(1\). 
Then the map \(h:\mathbb{S}^{2m-3}\to M_{III}^{m}\) given by_ \[h(z)=\frac{1}{2}(z_{1},\ldots,z_{m-1},1)^{T}(z_{1},\ldots,z_{m-1},1)+\frac{ \phi(z)}{2}(z_{1},\ldots,z_{m-1},-1)^{T}(z_{1},\ldots,z_{m-1},-1)\] _is a nowhere smooth CR-transversal CR embedding of regularity \(C^{k}\)._ Proof.: We first consider the map \(H:\mathbb{C}^{m-1}_{z}\times\mathbb{C}_{w}\) given by \[H(z,w)=\frac{1}{2}(z_{1},\ldots,z_{m-1},1)^{T}(z_{1},\ldots,z_{m-1},1)+\frac{ w}{2}(z_{1},\ldots,z_{m-1},-1)^{T}(z_{1},\ldots,z_{m-1},-1).\] It is a holomorphic immersion on \(\mathbb{C}^{m-1}\times\mathbb{B}^{1}\subset\mathbb{C}^{m-1}\times\mathbb{C}\), since \[\frac{\partial}{\partial z_{j}}H(z,w) =\frac{1+w}{2}\left(e_{j}\big{(}z_{1},\ldots,z_{m-1},\frac{1-w}{1 +w}\big{)}+\big{(}z_{1},\ldots,z_{m-1},\frac{1-w}{1+w}\big{)}e_{j}^{T}\right),\] \[\frac{\partial}{\partial w}H(z,w) =\frac{1}{2}(z_{1},\ldots,z_{m-1},-1)^{T}(z_{1},\ldots,z_{m-1},-1),\] where \(e_{j}\) denotes the \(j^{th}\) standard unit vector. The matrices \(\frac{\partial}{\partial z_{j}}H(z,w)\) for \(1\leq j\leq m-1\) are linearly independent, since their last columns - given by \(\frac{1-w}{2}e_{j}\) - are. Observing that \(\frac{\partial}{\partial z_{j}}H(z,w)_{m,m}=0\), but \(\frac{\partial}{\partial w}H(z,w)_{m,m}=\frac{1}{2}\neq 0\), we conclude that all partial derivatives of \(H\) are linearly independent, hence \(H\) is immersive. From the adapted singular value decomposition used in the proof of Lemma 11, we see that \(H\) maps \(\mathbb{S}^{2m-3}\times\mathbb{B}^{1}\) injectively into \(M_{III}^{m}\). Considering the graph map \(\Phi:\mathbb{S}^{2m-3}\to\mathbb{S}^{2m-3}\times\mathbb{C}\), \(\Phi(z)=(z,\phi(z))\), which clearly is a \(C^{k}\), but nowhere \(C^{\infty}\)-smooth CR embedding of \(\mathbb{S}^{2m-3}\), we may write \(h=H\circ\Phi\), showing that \(h\) is a \(C^{k}\), but nowhere \(C^{\infty}\)-smooth CR immersion of \(\mathbb{S}^{2m-3}\) into \(M_{III}^{m}\). Note that it is CR-transversal, since \(H\) is transversal to \(M_{III}^{m}\), and \(\Phi\) was CR-transversal. Since \(h\) is an injective immersion of the compact sphere, it is an embedding. #### 6.0.4. Classical domains of the fourth kind Somewhat different from the first three series of classical symmetric domains, the models for these domains, denoted by \(D^{m}_{IV}\) for \(m\geq 2\), are defined by simple quartic inequalities, first given in [5]. \[D^{m}_{IV}=\left\{z\in\mathbb{C}^{m}:z^{*}z<1,1+|z^{T}z|^{2}-2z^{*}z>0\right\}.\] The binding inequality is the second one. Indeed, a point \(z\in\partial D^{m}_{IV}\) satisfying \(z^{*}z=1\) also satisfies \(|z^{T}z|\leq 1\) by Cauchy's inequality, thus \(1+|z^{T}z|^{2}-2z^{*}z\leq 0\). A low-dimensional toy image to have in mind is that of a lens-shaped region defined by \(y^{2}-\frac{1}{4}\left(1-x^{2}\right)^{2}<0\), where we discard the unbounded region by requiring \(x^{2}+y^{2}<1\). The smooth part of the boundary of \(D^{m}_{IV}\), which we will denote by \(M^{m}_{IV}\), is given by those \(z\in\mathbb{C}^{m}\) satisfying \(1+|z^{T}z|^{2}-2z^{*}z=0\) and \(z^{*}z<1\). In fact, \(D^{m}_{IV}\) is biholomorphic to the tube domain over the light cone from Example 2. The tube domain over the future light cone is given by \(\left\{(z_{1},\ldots,z_{m-1},z_{m})\in\mathbb{C}^{m}:\Re(z_{1})^{2}+\cdots+ \Re(z_{m-1})^{2}<\Re(z_{m})^{2},\ \Re(z_{m})>0\right\}\). 
An explicit biholomorphism between the tube domain over the future light cone and \(D^{m}_{IV}\) is given in [29] as \[(z_{1},\ldots,z_{m-1},z_{m})\mapsto\sqrt{2}i\left(2\frac{z_{1}}{\mathcal{F}(z+\mathbf{i})},\ldots,2\frac{z_{m-1}}{\mathcal{F}(z+\mathbf{i})},\frac{1+\mathcal{F}(z)}{\mathcal{F}(z+\mathbf{i})}\right),\] where \(\mathbf{i}\) denotes the vector \((0,\ldots,0,i)\in\mathbb{C}^{m}\) and where \(\mathcal{F}(z):=z_{m}^{2}-z_{1}^{2}-\cdots-z_{m-1}^{2}\) for any \(z\in\mathbb{C}^{m}\). Let us nevertheless reprove the regularity result for \(D^{m}_{IV}\) by computing the necessary quantities directly from Cartan's representation. As an example point in \(M^{m}_{IV}\) to base our calculations on, take \(a:=(\frac{1}{2},\frac{i}{2},0,\ldots,0)^{T}\). Here, \(a^{T}a=0\) and \(a^{*}a=\frac{1}{2}\). In contrast to the first three kinds of classical domains, \(M^{m}_{IV}\) will necessarily behave exactly like the tube over the light cone. **Proposition 7**.: _Let \(m\geq 2\) and \(M\) be a minimal CR manifold. Then every CR map \(h\) of regularity \(C^{m-1}\) from \(M\) into \(M^{m}_{IV}\) which is of (real) rank \(\geq 3\) is \(C^{\infty}\)-smooth on a dense open subset of \(M\). If \(h\) is CR transversal and for some \(n_{+}\geq 1\), \(M\) has at least \(n_{+}\) positive Levi eigenvalues almost everywhere, then initial regularity may be dropped to \(C^{m-n_{+}}\)._ This is an immediate consequence of the boundary orbit theorem for \(M^{m}_{IV}\), which allows us to take any point in \(M^{m}_{IV}\) to \((\frac{1}{2},\frac{i}{2},0,\ldots,0)\) by an ambient biholomorphism, and of Theorem 1 (Theorem 2 for CR transversal \(h\) and \(n_{+}\geq 1\)). The relevant structural properties of \(M^{m}_{IV}\) of course do not differ at all from those of the tube over the light cone (Example 2). **Lemma 12**.: _Let \(a\in\mathbb{C}^{m}\) be such that \(a^{T}a=0\) and \(a^{*}a=\frac{1}{2}\). Around \(a\in M^{m}_{IV}\), the pseudoconvex hypersurface \(M^{m}_{IV}\) is foliated by complex lines. Its Levi form has exactly \(m-2\) positive eigenvalues, and \(\nu_{a}=0\)._ Proof.: The complex quadric \(\Sigma\) defined by \(z^{T}z=0\) is a manifold where \(z\neq 0\). Its intersection \(S\) with \(M^{m}_{IV}\) is given by \(S=\{w\in\mathbb{C}^{m},w^{T}w=0,w^{*}w=\frac{1}{2}\}\). As it is the intersection of a complex manifold with the strongly pseudoconvex sphere given by \(w^{*}w=\frac{1}{2}\), it is strongly pseudoconvex itself. Near a point \(w\in S\), the complex line given by \(\eta_{w}(t)=w+t\bar{w}\) is contained in \(M^{m}_{IV}\) (for \(|t|<1\)). This is proven by straightforward calculation. Since \(w^{*}w=\frac{1}{2}\) and \(w^{T}w=0\), we observe \((w^{*}+\bar{t}w^{T})(\bar{w}+\bar{t}w)=\bar{t}\) and similar cancellations, and arrive at \[1+|\eta_{w}^{T}(t)\eta_{w}(t)|^{2}-2\eta_{w}(t)^{*}\eta_{w}(t)\] \[=1+(w+t\bar{w})^{*}\overline{(w+t\bar{w})}(w+t\bar{w})^{T}(w+t\bar{w})-2(w+t\bar{w})^{*}(w+t\bar{w})\] \[=1+(w^{*}+\bar{t}w^{T})(\bar{w}+\bar{t}w)(w^{T}+t\bar{w}^{T})(w+t\bar{w})-2(w^{*}+\bar{t}w^{T})(w+t\bar{w})\] \[=1+\bar{t}t-(1+\bar{t}t)=0.\] It remains to calculate the tensor \(R^{S}(\cdot,\bar{w})\) at \(a\in S\), since the section \(\bar{w}\) already spans \(T\eta\) along \(S\). A vector \(v\in T_{a}^{c}S\) is characterized by \((a+tv)^{T}(a+tv)=\mathcal{O}(|t|^{2})\) and \((a+tv)^{*}(a+tv)=\frac{1}{2}+\mathcal{O}(|t|^{2})\), which is equivalent to \(a^{T}v=a^{*}v=0\). Take a CR vector \(\bar{L}|_{a}\in T_{a}^{0,1}S\) with real part \(\frac{1}{2}v\), and consider the holomorphic curve \(\gamma(t)=a+tv\).
Then \(\bar{L}|_{a}\bar{w}=\frac{d}{d\bar{t}}|_{t=0}\overline{(a+tv)}=\bar{v}\), and we find that \(\bar{v}\in T_{a}^{\perp}\eta_{a}=\langle\bar{a}\rangle^{\perp}\) already, since \(a^{*}v=0\). Therefore \(R_{a}^{S}(\bar{L}|_{a},\bar{a})=\bar{v}\) only vanishes if \(v\) and thus \(\bar{L}|_{a}\) vanish, implying \(\nu_{a}=0\).
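The quartic model lends itself to a quick numerical check as well. The following sketch (an illustrative aside; the dimension and the sample values of \(t\) are arbitrary) verifies for a vector \(w\) with \(w^{T}w=0\) and \(w^{*}w=\frac{1}{2}\) that the line \(\eta_{w}(t)=w+t\bar{w}\) satisfies the boundary equation \(1+|z^{T}z|^{2}-2z^{*}z=0\) while staying inside the unit ball for \(|t|<1\).

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4

# Build w with w^T w = 0 and w^* w = 1/2, modeled on a = (1/2, i/2, 0, ..., 0):
# take real orthonormal x, y and set w = (x + i y)/2.
x = rng.standard_normal(m); x /= np.linalg.norm(x)
y = rng.standard_normal(m); y -= (x @ y) * x; y /= np.linalg.norm(y)
w = 0.5 * (x + 1j * y)
print(np.isclose(w @ w, 0), np.isclose(np.vdot(w, w).real, 0.5))

def rho(z):
    """Defining function of the boundary of D_IV: 1 + |z^T z|^2 - 2 z^* z."""
    return 1 + abs(z @ z) ** 2 - 2 * np.vdot(z, z).real

for t in [0.1, 0.2 - 0.3j, 0.05j]:
    eta = w + t * np.conj(w)
    print(np.isclose(rho(eta), 0), np.vdot(eta, eta).real < 1)   # on the hypersurface, inside the unit ball
```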
2308.01326
Fundamentals of the Oldroyd-B model revisited: Tensorial vs. vectorial theory
The standard derivation of the Oldroyd-B model starts from a coupled system of the momentum equation for the macroscopic flow on the one hand, and Fokker-Planck dynamics for molecular dumbbells on the other. The constitutive equation is then derived via a closure based upon the second moment of the end-to-end vector distribution. We here present an alternative closure that is rather based upon the first moment, and gives rise to an even simpler constitutive equation. We establish that both closures are physically sound, since both can be derived from (different) well-defined non-equilibrium ensembles, and both are consistent with the Second Law of thermodynamics. In contrast to the standard model, the new model has a free energy and a dissipation rate that are both regular at vanishing conformation tensor. We speculate that this might perhaps alleviate the well-known high Weissenberg number problem, i. e. severe numerical instabilities of the standard model at large flow rates. As the new model permits a trivial solution (vanishing conformation tensor, vanishing polymer stress), an extension may be needed, which includes Langevin noise in order to model thermal fluctuations.
Aaron Brunk, Joydip Chaudhuri, Maria Lukacova-Medvidova, Burkhard Duenweg
2023-08-02T15:42:37Z
http://arxiv.org/abs/2308.01326v1
# Fundamentals of the Oldroyd-B model revisited: Tensorial vs. vectorial theory ###### Abstract The standard derivation of the Oldroyd-B model starts from a coupled system of the momentum equation for the macroscopic flow on the one hand, and Fokker-Planck dynamics for molecular dumbbells on the other. The constitutive equation is then derived via a closure based upon the second moment of the end-to-end vector distribution. We here present an alternative closure that is rather based upon the first moment, and gives rise to an even simpler constitutive equation. We establish that both closures are physically sound, since both can be derived from (different) well-defined non-equilibrium ensembles, and both are consistent with the Second Law of thermodynamics. In contrast to the standard model, the new model has a free energy and a dissipation rate that are both regular at vanishing conformation tensor. We speculate that this might perhaps alleviate the well-known high Weissenberg number problem, i. e. severe numerical instabilities of the standard model at large flow rates. As the new model permits a trivial solution (vanishing conformation tensor, vanishing polymer stress), an extension may be needed, which includes Langevin noise in order to model thermal fluctuations. ## I Introduction The Oldroyd-B model [1; 2; 3; 4; 5] is one of the most popular models to describe the rheology of polymer solutions. Except for the incompressibility condition for the flow field, and the momentum conservation equation, it features a constitutive equation for the stress, which is an additional time-dependent equation of motion in order to take into account viscoelasticity, or, in other words, the fact that intramolecular relaxation happens on a time scale that is comparable to the relevant time scales of the flow. Despite its popularity, the model is known to run into severe stability problems when solving the equations numerically and studying Weissenberg numbers that are too large -- recall that the dimensionless Weissenberg number is defined as the product of shear rate and molecular relaxation time. This well-known "high Weissenberg number problem" (HWNP) has been confirmed and discussed in numerous computer simulations, e. g. (to name just one out of many) in Ref. [6]. The problem must still be considered as essentially unsolved. The standard derivation of the model [1] starts from a two-scale description: On the one hand, the flow on the macroscale is described by the Navier-Stokes equation, augmented by an additional polymeric stress. On the other hand, the polymer component is sketched by an extremely simple model -- a set of non-interacting Hookean dumbbells whose dynamics is described by a simple Langevin equation (or equivalent Fokker-Planck equation, FPE) which describes the balance between spring forces, friction forces and stochastic forces. One then observes that the Kramers (or virial) expression for the microscopic polymer-induced stress is directly related to the so-called conformation tensor, i. e. the tensor product of the dumbbell's end-to-end vector with itself. Then, without deep justification, one simply averages the tensor with the time-dependent non-equilibrium distribution function for the end-to-end vector, i. e. the solution of the FPE. On the one hand, this provides a prescription how to obtain the polymer stress on the macroscale from the microscopic configurations and the microscale dynamics. 
On the other hand, the Fokker-Planck dynamics gives rise to an equation of motion for the such-defined average conformation tensor. Because of the simplicity of the microscopic model, this equation happens to be closed, and is nothing but the constitutive equation of the standard Oldroyd-B model. For good reasons, this procedure of derivation has been termed "closure", in order to express the fact that there is a certain amount of arbitrariness and approximation that underlies the reasoning. In the present paper, we wish to point out that the standard closure corresponds to a _specific choice_ of the underlying statistical-mechanical non-equilibrium ensemble, which is by no means the only choice that is possible. Indeed, from the Mori-Zwanzig formalism [7; 8] we know that the very first step to derive simplified dynamic equations is the identification (or, actually, _choice_) of so-called "slow variables", i. e. observables whose dynamics is assumed to be so slow that they should be taken into account as degrees of freedom on the macroscale. Therefore, it is only the remaining "fast" variables that one should average over. The common derivation of the standard closure does not clearly make that distinction, and this is what makes it conceptually slightly obscure. Note that the identification of slow variables, combined with averaging over the fast ones, may be viewed as the definition of a non-equilibrium _ensemble_. As a matter of fact, we will demonstrate that the standard Oldroyd-B equation may be derived within that conceptual framework by simply postulating that the slow variable is actually the conformation tensor as such. However, other ensembles may be defined as well, simply by picking other slow variables. In the present paper, we wish to analyze the consequences of rather taking the end-to-vector as the ensemble-defining observable, and to demonstrate that this is a valid choice, leading to a new fully consistent theory. There are further differences concerning the number of dumbbells that contribute to the observable; these details will become clear below. The modified ensemble then gives rise to a modified constitutive equation, and is therefore an alternative closure, which we call the "modified Oldroyd-B model". It can be verified that both closures give rise to a macroscopic model that satisfies the Second Law of thermodynamics; however, the free energy is different for the two closures (which is not surprising, given that the ensembles differ). From a practical point of view, it turns out that the modified model is based not on the _second_ moment of the end-to-end vector distribution function (as is the standard version), but rather on its _first_. There seems to be widespread belief in the community that basing the theory on the first moment is fundamentally impossible, because it is believed to vanish identically, for symmetry reasons. The argument is simply that a dumbbell is intrinsically symmetric, such that a simple re-labeling of the two beads would turn the dumbbell's end-to-end vector \(\bar{q}\) into \(-\bar{q}\). However, nothing prevents us from just picking one of the two orientations and then basing the description on this. The symmetry only means that we may as well just pick the other orientation, and nevertheless obtain completely identical results. Indeed, let us assume that we have a small volume element which contains a number of dumbbells. 
The typical model assumption is that the element is (i) small enough, such that the flow within it may be considered as homogeneous (with constant velocity gradient tensor), but also (ii) large enough, such that the contributing dumbbells may be described in terms of a probability density for the end-to-end vector, \(P(\bar{q})\). Typically, the nonzero velocity gradient will result in some local alignment of dumbbells, and we now wish to map that configuration (say, at some initial time \(t=0\)) onto a probability density \(P(\bar{q})\). Now, going through the molecules and picking their orientation at random will indeed produce a density with vanishing first moment, and that property will then continue to be present in the dynamics. However, we can also adopt a different convention: We pick some (arbitarily chosen) orientation for one particular molecule and then choose the orientation of the other molecules such that they are maximally aligned with the original one. It is important to realize that this has to be done not only within the selected volume element, but also in all the other ones, whose orientations have to be mutually consistent in order to make sure that \(\bar{q}\) can, in the continuum limit, be described as a smooth vector field. It is clear that with such a labeling convention we will get a \(P(\bar{q})\) which does, at \(t=0\), exhibit a nonzero first moment. As the dynamics of \(P\) is assumed to be smooth with respect to time, the nonzero first moment will then also exist for some later times. It should be noted that this argument of course only works in a non-equilibrium (shear-aligned) situation -- in strict thermal equilibrium the first moment is of course indeed always zero, simply because in this situation the attempt to pick the orientations in a mutually aligned fashion will be unsuccessful. The whole discussion of kinetic theory later in the paper should therefore be read with the notion in mind that we do admit a nonzero first moment, by imposing a suitable initial condition, which in turn corresponds to a specific labeling convention of the molecules. An interesting feature of the modified model is that it removes the singularity of the free energy at vanishing conformation tensor, which is present for the standard version. This singularity also occurs in the dissipation rate, indicative of the system's ability to absorb arbitrarily large amounts of energy. The well-known HWNP [6] may perhaps be related to this; therefore one may speculate that the new model might perhaps have a reduced or even non-existent HWNP. For this reason, we think it is worthwhile to subject the new model to thorough numerical tests; this is however deferred to future work. It should be mentioned that we originally realized the ensemble problems, and the ambiguity of the closure, not by the reasoning presented here. Rather, we had set out to derive hydrodynamic equations for viscoelastic phase separation, i. e. the spinodal decomposition of viscoelastic fluids. We did this with a completely different theoretical approach: Starting from essentially the same phantom dumbbell picture that underlies the derivation of the Oldroyd-B model, we subjected the system to coarse-graining, which resulted in a field-theoretic Hamiltonian and a field-theoretic dissipation rate. 
Combining this with the GENERIC formalism [9; 10; 11], and various simplifying approximations, we arrived at a set of equations, which, when assuming a homogeneous system that does not unmix, reduces exactly to the Oldroyd-B system -- however, _not_ in the standard version but the modified one. We then realized that the difference was due to a difference in the underlying non-equilibrium ensemble, and made some remarks along this line in our publication [12], which describes the derivation in detail. However, a detailed and careful analysis of the issue, and in particular a systematic comparison of the two variants, was missing. The present paper is intended to provide these missing items, in order to put the new variant on a sound theoretical basis. We believe these considerations are relevant not only within the framework of the Oldroyd-B model, but for rheology in general. We will start out from the standard formulation of the Oldroyd-B model, and then generalize it to also include the modified version. This generalized model is then subjected to theoretical scrutiny, where we discuss the conservation of the conformation tensor's positivity and establish the validity of the Second Law. We only then discuss the underlying statistical mechanics, and first outline the general formalism of a non-equilibrium ensemble based upon the notion of slow ob servables. This allows us to construct the free energies of both variants from the partition function, and also their constitutive equations from Fokker-Planck theory, for which we show the Second Law as well. We will conclude with a brief summary. ## II Basic formulation Our starting point are the equations of motion of the Oldroyd-B model. Denoting the fluid flow velocity field with \(\vec{v}(\vec{r},t)\), where \(\vec{r}\) is the spatial coordinate and \(t\) denotes time, we may introduce the operator for the convective derivative via \[D_{t}=\frac{\partial}{\partial t}+\vec{v}\cdot\nabla=\partial_{t}+\vec{v}\cdot \nabla, \tag{1}\] or, in terms of Cartesian components \(\alpha,\beta,\ldots\), for which we assume the Einstein summation convention, \[D_{t}=\partial_{t}+v_{\alpha}\frac{\partial}{\partial r_{\alpha}}=\partial_{ t}+v_{\alpha}\partial_{\alpha}. \tag{2}\] The incompressibility condition for the flow is written as \[\nabla\cdot\vec{v}=0, \tag{3}\] and the momentum conservation equation reads \[\rho D_{t}\vec{v}=-\nabla p+\nabla\cdot\vec{T}, \tag{4}\] or, in components, \[\rho D_{t}v_{\alpha}=-\partial_{\alpha}p+\partial_{\beta}T_{\alpha\beta}. \tag{5}\] Here \(\rho\) is the constant mass density of the fluid, \(p\) is the scalar pressure (which should be viewed as a Lagrange multiplier or "reaction stress" enforcing incompressibility), and \(\vec{T}\) the viscoelastic stress tensor. As \(\vec{T}\) is symmetric, it does not matter if the derivative in \(\nabla\cdot\vec{T}\) acts on the first or the second index. Introducing the velocity gradient tensor \(\vec{\kappa}\) via \[\kappa_{\alpha\beta}=\partial_{\beta}v_{\alpha}, \tag{6}\] we can define the upper convected derivative of a symmetric second-rank tensor as \[\delta_{\vec{T}}\vec{T}=D_{t}\vec{T}-\vec{\kappa}\cdot\vec{T}-\vec{T}\cdot \vec{\kappa}^{\prime T}. \tag{7}\] Here the dot product is to be understood as a conventional matrix product, i. 
e., in components the definition reads \[\delta_{\vec{T}}T_{\alpha\beta} = D_{t}T_{\alpha\beta}-\kappa_{\alpha\gamma}T_{\gamma\beta}-T_{ \alpha\gamma}\kappa_{\beta\gamma} \tag{8}\] \[= D_{t}T_{\alpha\beta}-T_{\beta\gamma}\partial_{\gamma}v_{\alpha} -T_{\alpha\gamma}\partial_{\gamma}v_{\beta}.\] It should be noted that from a formal point of view one may also define other convected derivatives (for example, the so-called "lower convected derivative" [13]); however, in the standard Oldroyd-B model the upper convected derivative occurs, as a direct consequence of its kinetic-theory derivation. Furthermore, we can define the deformation rate tensor \[\vec{D}=\frac{1}{2}\left(\vec{\kappa}+\vec{\kappa}^{T}\right). \tag{9}\] In particular, we find for the upper convected derivative of the unit tensor \[\delta_{\vec{\lambda}}\overleftarrow{1}=-2\vec{D}. \tag{10}\] Finally, we introduce the solvent viscosity \(\eta_{s}>0\), the polymer viscosity \(\eta_{p}\geq 0\), the total viscosity \(\eta=\eta_{s}+\eta_{p}\), a molecular relaxation time \(\lambda>0\), an associated modulus \(G=\eta_{p}/\lambda\), and the dimensionless ratio \[\Gamma=\frac{\eta_{p}}{\eta_{s}}\geq 0, \tag{11}\] which may be viewed as a parameter which tells us how strong viscoelastic effects are expected to be. Note that these parameters are all macroscopic, with no direct reference to a molecular picture, with the exception of \(\lambda\), which denotes the typical time scale for molecular processes (without specifying details about these processes). With these notational preliminaries, which may be used for essentially any rheological model for polymer solutions, we can now focus on the specific formulation of the Oldroyd-B model. According to Refs. 1 or 5, its constitutive equation can be written in the following compact form: \[\vec{T}+\lambda\delta_{\vec{\lambda}}\overleftarrow{T}=2\eta\vec{D}+2\eta_{s} \lambda\delta_{\vec{D}}. \tag{12}\] For the further development it is useful to slightly re-write the equations. Firstly, we recall the standard constitutive relation for the solvent stress \(\vec{\tau}^{(s)}\), which is that of a Newtonian fluid: \[\vec{\tau}^{(s)}=2\eta_{s}\vec{D}, \tag{13}\] such that, due to incompressibility, \[\nabla\cdot\vec{\tau}^{(s)}=\eta_{s}\nabla^{2}\vec{v}. \tag{14}\] We can thus transform the right-hand side (RHS) of Eq. 12 as follows: \[2\eta\vec{D}+2\eta_{s}\lambda\delta_{\vec{\lambda}}\overleftarrow {D} \tag{15}\] \[= 2\eta_{s}\left(\vec{D}+\lambda\delta_{\vec{\lambda}}\overleftarrow {D}\right)+2\eta_{p}\vec{D}\] \[= \vec{\tau}^{(s)}+\lambda\delta_{\vec{\tau}}\overleftarrow{(s)}- \eta_{p}\delta_{\vec{\lambda}}\overleftarrow{1}\] \[= \vec{\tau}^{(s)}+\lambda\delta_{\vec{\tau}}\overleftarrow{(s)}-G \lambda\delta_{\vec{\lambda}}\overleftarrow{1}.\] As the total stress \(\vec{T}\) is the sum of the polymer stress \(\vec{\tau}^{(p)}\) and the solvent stress \(\vec{\tau}^{(s)}\), we may thus subtract the latter contribution from Eq. 12 and obtain \[\vec{\tau}^{(p)}+\lambda\delta_{\vec{\tau}}\overleftarrow{(p)}=-G\lambda \delta_{\vec{\tau}}\overleftarrow{1}. \tag{16}\] We now decompose \[\vec{\tau}^{(p)}=\vec{\sigma}-G\overleftarrow{1} \tag{17}\] and obtain \[\overleftarrow{\sigma}-G\overleftarrow{1}+\lambda\delta\overleftarrow{\sigma}=0 \tag{18}\] or \[\delta\overleftarrow{\sigma}=-{1\over\lambda}\left(\overleftarrow{\sigma}-G \overleftarrow{1}\right), \tag{19}\] which is thus just a transformed way of writing the Oldroyd-B constitutive equation. 
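As a quick consistency check of the algebra above (added purely as an illustration; the parameter values are arbitrary and NumPy is assumed), one may verify numerically that for a steady, spatially homogeneous simple shear flow -- where \(D_{t}=0\) and the upper convected derivative of a constant tensor \(X\) reduces to \(-\vec{\kappa}\cdot X-X\cdot\vec{\kappa}^{T}\) -- a solution of the transformed constitutive equation, Eq. 19, also satisfies the original form, Eq. 12:

```python
import numpy as np

# Arbitrary illustrative parameters (steady, homogeneous simple shear).
eta_s, eta_p, lam, gdot = 1.0, 2.0, 0.5, 3.0
G = eta_p / lam
eta = eta_s + eta_p

kappa = np.array([[0.0, gdot], [0.0, 0.0]])          # velocity gradient, kappa_ab = d_b v_a
D = 0.5 * (kappa + kappa.T)                          # deformation rate tensor
I = np.eye(2)

def ucd(X):
    """Upper convected derivative of a constant, homogeneous tensor field (D_t X = 0)."""
    return -kappa @ X - X @ kappa.T

# Steady state of Eq. (19) in simple shear, written in closed form.
sigma = G * np.array([[1 + 2 * (lam * gdot) ** 2, lam * gdot],
                      [lam * gdot, 1.0]])
print(np.allclose(ucd(sigma), -(sigma - G * I) / lam))           # Eq. (19) holds

# The same state, expressed as a total stress, satisfies the original constitutive equation, Eq. (12).
tau_s = 2 * eta_s * D
T = (sigma - G * I) + tau_s
lhs = T + lam * ucd(T)
rhs = 2 * eta * D + 2 * eta_s * lam * ucd(D)
print(np.allclose(lhs, rhs))                                     # Eq. (12) holds as well
```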
In terms of the transformed variables, the momentum equation reads \[\rho D_{t}\vec{v}=-\nabla p+\eta_{s}\nabla^{2}\vec{v}+\nabla\cdot\overleftarrow{ \sigma}. \tag{20}\] In this form, the equations are more amenable to theoretical analysis. We now introduce a dimensionless tensor \(\overleftarrow{C}\), commonly referred to as "conformation tensor", via \[\overleftarrow{\sigma}=G\overleftarrow{C}=\Gamma{\eta_{s}\over\lambda} \overleftarrow{C}. \tag{21}\] With this re-parametrization, the constitutive equation reads \[\delta\overleftarrow{C}=-{1\over\lambda}\left(\overleftarrow{C}- \overleftarrow{1}\right), \tag{22}\] while the momentum equation is transformed to \[\rho D_{t}\vec{v}=-\nabla p+\eta_{s}\nabla^{2}\vec{v}+\Gamma{\eta_{s}\over \lambda}\nabla\cdot\overleftarrow{C}. \tag{23}\] Since the parameters \(\rho\), \(\eta_{s}\), and \(\lambda\) have physical dimensions that are mutually different, we may set them to unity in order to define a natural unit system for the problem. We thus obtain for the non-dimensionalized set of Oldroyd-B equations: \[\nabla\cdot\vec{v} = 0, \tag{24}\] \[D_{t}\vec{v}+\nabla p-\Gamma\nabla\cdot\overleftarrow{C} = \nabla^{2}\vec{v},\] (25) \[\delta\overleftarrow{C} = -\overleftarrow{C}+\overleftarrow{1}. \tag{26}\] In the last two equations, we have grouped the contributions in such a way that all the terms on the left-hand side (LHS) exhibit the same behavior under time reversal, while the terms on the RHS exhibit the opposite behavior. This allows us to clearly identify the terms on the RHS as the _dissipative_ contributions, while those on the LHS are the time derivatives and the _conservative_ contributions. This necessary split-up, which comes directly from time-reversal symmetry, is central for the further development. In what follows, we will discuss a _generalized_ Oldroyd-B model. We wish to keep the momentum equation as-is, because it involves little more than the mere definition of stress. Similarly, we wish to leave the LHS of Eq. 26 unchanged -- the description of the kinematics of \(\overleftarrow{C}\) in terms of the upper convected derivative is widely accepted in the rheology community, and shall not be challenged in the present paper either -- even less so, as it is a clear and unrettable consequence of the kinetic theory outlined below. However, the dissipative part of Eq. 26 shall be generalized to \[\delta\overleftarrow{C}=-\overleftarrow{C}+\alpha_{d}\overleftarrow{1}, \tag{27}\] where we introduce a dimensionless scalar parameter \(\alpha_{d}\) (the subscript \(d\) may be read as "dynamics" or "dissipation"). In other words, we generalize the dissipative part of the constitutive equation to a first-order polynomial in \(\overleftarrow{C}\), where the pre-factor of the linear term is fixed by the requirement of relaxational dynamics, plus the chosen unit system. As we do not wish to discuss models in which the dissipative terms depend non-linearly on \(\overleftarrow{C}\), this is the most general form. ## III Positivity of the conformation tensor It will become clear later that positive (semi-)definiteness of \(\overleftarrow{C}\) plays a crucial role in the theory. Let us therefore assume that at time \(t=0\) we start out with a tensor that is positive (semi-)definite (throughout the volume of the system), and investigate under which conditions this property is conserved under the dynamics. 
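Before developing the general argument, a small numerical illustration may be useful (an aside; the shear rate, time step, and values of \(\alpha_{d}\) are arbitrary). We integrate Eq. 27 for a spatially homogeneous start-up of simple shear, so that \(D_{t}=\partial_{t}\), with a simple first-order update written as a congruence plus a positive semi-definite source; the eigenvalues of the conformation tensor then stay nonnegative, and for \(\alpha_{d}=1\) the solution approaches the familiar Oldroyd-B steady state.

```python
import numpy as np

gdot = 2.0                                   # dimensionless shear rate (equal to the Weissenberg number here)
kappa = np.array([[0.0, gdot], [0.0, 0.0]])
I = np.eye(2)
dt, n_steps = 1e-3, 20000                    # integrate up to t = 20 in the natural units introduced above

# One first-order step of Eq. (27) for homogeneous flow.  Since kappa is nilpotent for simple
# shear, I + dt*kappa equals exp(dt*kappa) exactly; the step is a congruence (times exp(-dt))
# plus a positive semi-definite source, so positivity is preserved by construction.  This is
# just one convenient illustrative scheme, not the splitting discussed below.
A = I + dt * kappa

for alpha_d in (0.0, 0.5, 1.0):
    C = I.copy()
    min_eig = np.inf
    for _ in range(n_steps):
        C = np.exp(-dt) * (A @ C @ A.T) + alpha_d * dt * I
        min_eig = min(min_eig, np.linalg.eigvalsh(C).min())
    print(alpha_d, min_eig >= -1e-12, np.round(C, 2))

# Output: the smallest eigenvalue encountered stays nonnegative (up to rounding) for all alpha_d >= 0,
# and for alpha_d = 1 the late-time C is close to [[1 + 2*gdot**2, gdot], [gdot, 1]],
# the familiar Oldroyd-B steady state in simple shear (in the units with lambda = 1 chosen above).
```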
At \(t=0\) the spectral decomposition of \(\overleftarrow{C}\) reads \[\overleftarrow{C}(t=0)=\sum_{k}\lambda_{k}\vec{n}^{(k)}\vec{n}^{(k)T}, \tag{28}\] where \(\lambda_{k}>0\) are the positive eigenvalues, and \(\vec{n}^{(k)}\) the corresponding normalized eigenvectors, \(\vec{n}^{(k)}\cdot\vec{n}^{(k)}=1\). This means we can also write \[\overleftarrow{C}(t=0)=\sum_{k}\vec{p}^{(k)}(t=0)\,\vec{p}^{(k)T}(t=0) \tag{29}\] with \(\vec{p}^{(k)}(t=0)=\lambda_{k}^{1/2}\vec{n}^{(k)}\). We therefore try the ansatz \[\overleftarrow{C}(t)=\sum_{k}\vec{p}^{(k)}(t)\,\vec{p}^{(k)T}(t), \tag{30}\] which is manifestly positive semi-definite, and find \[\delta\overleftarrow{C}+\overleftarrow{C} = \sum_{k}\left[\left(D_{t}\,\vec{p}^{(k)}\right)\vec{p}^{(k)T}+ \vec{p}^{(k)}\left(D_{t}\,\vec{p}^{(k)T}\right)\right. \tag{31}\] \[\left.-\left(\overleftarrow{\kappa}\cdot\vec{p}^{(k)}\right)\vec {p}^{(k)T}-\vec{p}^{(k)}\left(\overleftarrow{\kappa}\cdot\vec{p}^{(k)}\right) ^{T}+\vec{p}^{(k)}\,\vec{p}^{(k)T}\right]\] \[= \sum_{k}\left[\left(D_{t}\,\vec{p}^{(k)}-\overleftarrow{\kappa} \cdot\vec{p}^{(k)}+{1\over 2}\vec{p}^{(k)}\right)\vec{p}^{(k)T}\right.\] \[\left.+\vec{p}^{(k)}\left(D_{t}\,\vec{p}^{(k)}-\overleftarrow{ \kappa}\cdot\vec{p}^{(k)}+{1\over 2}\vec{p}^{(k)}\right)^{T}\right].\] This, however, means that if we propagate the vectors \(\vec{p}^{(k)}\) according to \[D_{t}\,\vec{p}^{(k)}-\overleftarrow{\kappa}\cdot\vec{p}^{(k)}+{1\over 2}\vec{p}^{( k)}=0, \tag{32}\] then Eq. 30 will solve \[\delta\overleftarrow{C}+\overleftarrow{C}=0. \tag{33}\] We thus see that for \(\alpha_{d}=0\) the constitutive equation conserves the property of positive semi-definiteness. The question if in this case also strict positive-definiteness is conserved, such that all eigenvalues, if assumed to be positive at \(t=0\), will remain positive throughout the dynamics, is more subtle. We note that for strict positive-definiteness we need that the vectors \(\vec{p}^{(k)}\) form a basis, meaning that their number must equal the spatial dimension, that they must be nonzero, and that they are linearly independent. It should be noted that we need only linear independence and not mutual orthogonality. As a matter of fact, one should expect that the dynamics will typically not maintain mutual orthogonality, even if it holds at \(t=0\). It is also reasonable to assume (although not straightforward to prove) that linear independence will remain intact during the dynamics. If also each of the vectors never goes through zero, then indeed strict positive definiteness is maintained -- but this is far from obvious for arbitrary flows. Fortunately, however, it will become clear below that for \(\alpha_{d}=0\) positive semi-definiteness is sufficient for the model to be sound. For general \(\alpha_{d}\), we replace the true dynamics by a discretized and thus approximate dynamics that is inspired by numerical time-stepping schemes. 
We introduce a small time step \(h\) and propagate the system for the duration of one time step as follows: (i) Update the dynamical variables by solving the system \[\nabla\cdot\vec{v} = 0, \tag{34}\] \[D_{t}\vec{v}+\nabla p-\Gamma\nabla\cdot\overleftarrow{C} = \nabla^{2}\vec{v},\] (35) \[\delta\overleftarrow{C}+\overleftarrow{C} = 0, \tag{36}\] for the duration of \(h/2\); (ii) update them according to \[\partial_{t}\vec{v} = 0, \tag{37}\] \[\partial_{t}\overleftarrow{C} = \alpha_{d}\overleftarrow{1}, \tag{38}\] for the duration of \(h\); and (iii) update them for \(h/2\), again following the prescription given in step (i). This scheme is known as Strang splitting, which yields the correct dynamics up to errors of order \(h^{2}\). Now, it is clear from the previous analysis that steps (i) and (iii) do not alter the positive-semidefinite character of \(\overleftarrow{C}\), provided that the initial value at the beginning of the update is positive-semidefinite. Step (ii) will also maintain positive semi-definiteness, provided \[\alpha_{d}\geq 0, \tag{39}\] which we will from now on postulate as a necessary condition for a valid generalized Oldroyd-B model. As a matter of fact, for \(\alpha_{d}>0\) the eigenvalues are increased during step (ii), and thus in this case the dynamics even conserves positive-definiteness. This is in accord with known results from the literature [14]. ## IV Hamiltonian I: Analysis of Conservative Dynamics For the conservative part of the dynamics we have the equations of motion \[\nabla\cdot\vec{v} = 0, \tag{40}\] \[D_{t}\vec{v}+\nabla p-\Gamma\nabla\cdot\overleftarrow{C} = 0,\] (41) \[\delta\overleftarrow{C} = 0. \tag{42}\] Per construction, these equations are time-reversal symmetric. However, for being conservative, the dynamics must also conserve the underlying Hamiltonian \(\mathcal{H}\), which should, in the present isothermal setting, be interpreted as the Helmholtz free energy of the system. As the model is a local field theory, the Hamiltonian must be written as a functional of the fields \(\vec{v}\) and \(\overleftarrow{C}\). As the kinetic energy density is \(\vec{v}^{2}/2\), the only reasonable functional that is consistent with the requirement of a _local_ free energy density has the form (in \(d\) spatial dimensions) \[\mathcal{H}=\int d\vec{r}\left[\frac{\vec{v}^{2}}{2}+f(\overleftarrow{C}) \right], \tag{43}\] where \(f\) is a scalar function of the conformation tensor. It should be noted that for any volume integral we assume either a bounded domain with periodic boundary conditions, or an infinite domain with rapidly decaying fields, such that an integration by parts will not involve any boundary terms. Clearly, \(f\) must be chosen in such a way that the Hamiltonian is conserved, i. e. \[\frac{d\mathcal{H}}{dt}=0. \tag{44}\] To this end, we first consider the equation of motion \[\partial_{t}v_{\alpha}=-v_{\beta}\partial_{\beta}v_{\alpha}-\partial_{\alpha} p+\Gamma\partial_{\beta}C_{\alpha\beta}. \tag{45}\] Multiplying with \(v_{\alpha}\) and integrating over space yields, after some integration by parts, and making use of incompressibility, \[\frac{d}{dt}\int d\vec{r}\frac{\vec{v}^{2}}{2}=-\Gamma\int d\vec{r}\kappa_{ \alpha\beta}C_{\alpha\beta}. \tag{46}\] Similarly, we study \[\partial_{t}C_{\alpha\beta}=-v_{\gamma}\partial_{\gamma}C_{\alpha\beta}+C_{ \beta\gamma}\kappa_{\alpha\gamma}+C_{\alpha\gamma}\kappa_{\beta\gamma}. 
\tag{47}\] Multiplying with \(\chi_{\alpha\beta}:=\partial f/\partial C_{\alpha\beta}\), followed by integration over space, yields, again after some integration by parts, and index re-labeling, making use of the symmetry of \(\overleftarrow{C}\) and \(\overleftarrow{X}\), \[\frac{d}{dt}\int d\vec{r}f \tag{48}\] \[= \int d\vec{r}\left[-v_{\gamma}\chi_{\alpha\beta}\partial_{\gamma} C_{\alpha\beta}+\chi_{\alpha\beta}C_{\beta\gamma}\kappa_{\alpha\gamma}+\chi_{ \alpha\beta}C_{\alpha\gamma}\kappa_{\beta\gamma}\right]\] \[= -\int d\vec{r}v_{\gamma}\partial_{\gamma}f+2\int d\vec{r}\,\kappa _{\alpha\beta}C_{\beta\gamma}\chi_{\gamma\alpha}\] \[= 2\int d\vec{r}\,\kappa_{\alpha\beta}C_{\beta\gamma}\chi_{\gamma \alpha}.\] Therefore \[0=\frac{d\mathcal{H}}{dt}=\int d\bar{r}\,\kappa_{\alpha\beta}\left[2C_{\beta\gamma} \chi_{\gamma\alpha}-\Gamma C_{\alpha\beta}\right]. \tag{49}\] Now, the conservation of \(\mathcal{H}\) must hold for any velocity gradient tensor. An obvious solution to this problem is therefore the requirement \[2C_{\beta\gamma}\chi_{\gamma\alpha}=\Gamma C_{\alpha\beta} \tag{50}\] or \[\chi_{\alpha\gamma}C_{\gamma\beta}=\frac{\Gamma}{2}C_{\alpha\beta}. \tag{51}\] This should hold for any conformation tensor, in particular also for non-degenerate (invertible) tensors. This yields \[\chi_{\alpha\beta}=\frac{\Gamma}{2}\delta_{\alpha\beta}, \tag{52}\] which implies, setting an unimportant integration constant to zero, \[f=\frac{\Gamma}{2}C_{\alpha\beta}\delta_{\alpha\beta}=\frac{\Gamma}{2}C_{ \alpha\alpha}=\frac{\Gamma}{2}\mathrm{tr}\overset{\leftrightarrow}{C}. \tag{53}\] However, this is not the only solution to the problem. To see this, let us set \[\chi_{\alpha\beta}=\frac{\Gamma}{2}\delta_{\alpha\beta}-\frac{\Gamma}{2}\chi _{\alpha\beta}^{\prime}, \tag{54}\] and insert this into Eq. 49. This yields \[0=\frac{d\mathcal{H}}{dt}=\int d\bar{r}\,\kappa_{\alpha\beta}C_{\beta\gamma} \chi_{\gamma\alpha}^{\prime}. \tag{55}\] In other words, we should find a non-trivial solution \(\overset{\leftrightarrow}{\chi}^{\prime}\) to the matrix equation \[\mathrm{tr}\left(\overset{\leftrightarrow}{\kappa}\overset{ \leftrightarrow}{C}\overset{\leftrightarrow}{\chi}^{\prime}\right)=0. \tag{56}\] Now, in the situation that \(\overset{\leftrightarrow}{C}\) is fully non-degenerate, we may set \[\overset{\leftrightarrow}{\chi}^{\prime}=\alpha_{H}(\overset{ \leftrightarrow}{C})\overset{\leftrightarrow}{C}^{-1}, \tag{57}\] with some scalar function \(\alpha_{H}(\overset{\leftrightarrow}{C})\) (the subscript \(H\) stands for "Hamiltonian"), such that we get \[\alpha_{H}(\overset{\leftrightarrow}{C})\mathrm{tr}\overset{ \leftrightarrow}{\kappa}=0, \tag{58}\] which does indeed hold, since \(\overset{\leftrightarrow}{\kappa}\) is traceless because of incompressibility. In total, we thus find \[\overset{\leftrightarrow}{\chi}=\frac{\Gamma}{2}\left[\overset{ \leftrightarrow}{1}-\alpha_{H}(\overset{\leftrightarrow}{C})\overset{ \leftrightarrow}{C}^{-1}\right], \tag{59}\] where in case of a degenerate conformation tensor we have to set \(\alpha_{H}=0\). We thus find that the conservative part of the dynamics does not determine the Hamiltonian uniquely. Rather we find a fairly large class of Hamiltonians which all are conserved. For a given function \(\alpha_{H}(\overset{\leftrightarrow}{C})\), one obtains the free energy density \(f\) by integrating the relation \(\partial f/\partial C_{\alpha\beta}=\chi_{\alpha\beta}\). 
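The cancellation behind this large class of conserved Hamiltonians is easy to confirm numerically (an illustrative aside; matrix size and parameters are arbitrary, and \(\alpha_{H}\) is taken to be a constant): for a traceless velocity gradient tensor, the integrand of Eq. 49 vanishes identically once \(\overleftarrow{\chi}\) has the form of Eq. 59.

```python
import numpy as np

rng = np.random.default_rng(5)
d, Gamma, alpha_H = 3, 1.7, 0.8

# A random symmetric positive definite conformation tensor and a random traceless velocity gradient.
A = rng.standard_normal((d, d))
C = A @ A.T + 0.1 * np.eye(d)
kappa = rng.standard_normal((d, d))
kappa -= np.trace(kappa) / d * np.eye(d)          # incompressibility: tr(kappa) = 0

chi = 0.5 * Gamma * (np.eye(d) - alpha_H * np.linalg.inv(C))     # Eq. (59) with constant alpha_H

# Integrand of Eq. (49): kappa_ab [2 C_bg chi_ga - Gamma C_ab] = 2 tr(kappa C chi) - Gamma tr(kappa C).
production = 2 * np.trace(kappa @ C @ chi) - Gamma * np.trace(kappa @ C)
print(np.isclose(production, 0.0))                # the Hamiltonian is conserved for any alpha_H
```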
## V Hamiltonian II: Analysis of Dissipative Dynamics The dissipative part of the equations of motion is written as \[\nabla\cdot\vec{v} = 0, \tag{60}\] \[\partial_{t}\vec{v} = \nabla^{2}\vec{v},\] (61) \[\partial_{t}\overset{\leftrightarrow}{C} = -\overset{\leftrightarrow}{C}+\alpha_{d}\overset{\leftrightarrow} {1}. \tag{62}\] If we assume that the conformation tensor is non-degenerate, we may, making use of Eq. 59, write the last equation as \[\partial_{t}\overset{\leftrightarrow}{C} = -\frac{2}{\Gamma}\overset{\leftrightarrow}{C}\cdot\overset{ \leftrightarrow}{\chi}+\left[\alpha_{d}-\alpha_{H}(\overset{\leftrightarrow }{C})\right]\overset{\leftrightarrow}{1}, \tag{63}\] \[\partial_{t}C_{\alpha\beta} = -\frac{2}{\Gamma}C_{\alpha\gamma}\chi_{\gamma\beta}+\left[\alpha_{ d}-\alpha_{H}(\overset{\leftrightarrow}{C})\right]\delta_{\alpha\beta}. \tag{64}\] We now multiply Eq. 61 with \(\vec{v}\), integrate over space, and apply integration by parts, to obtain the standard viscous dissipation rate \[\frac{d}{dt}\int d\bar{r}\,\frac{\dot{v}^{2}}{2}=-\int d\bar{r}\left(\partial _{\beta}v_{\alpha}\right)\left(\partial_{\beta}v_{\alpha}\right)\leq 0. \tag{65}\] Similarly, multiplying Eq. 64 with \(\chi_{\alpha\beta}\) yields \[\partial_{t}f=-\frac{2}{\Gamma}\chi_{\beta\alpha}C_{\alpha\gamma}\chi_{\gamma \beta}+\left[\alpha_{d}-\alpha_{H}(\overset{\leftrightarrow}{C})\right]\chi_{ \alpha\alpha}. \tag{66}\] Now, the Second Law of thermodynamics means that the dynamics should satisfy \[\frac{d\mathcal{H}}{dt}\leq 0. \tag{67}\] From Eqs. 65 and 66 one sees that the only term that causes a potential violation of the Second Law is the second term of Eq. 66 -- note that the first term is strictly non-positive, due to the positive-semidefiniteness of \(\overset{\leftrightarrow}{C}\). In other words, the condition for the Second Law is \[\left[\alpha_{d}-\alpha_{H}(\overset{\leftrightarrow}{C})\right]\mathrm{tr} \overset{\leftrightarrow}{\chi}\leq 0. \tag{68}\] One possibility to achieve that goal is to set \(\alpha_{H}(\overset{\leftrightarrow}{C})=\alpha_{d}\), in which case the term simply vanishes. In other words, we may thus pick, out of the large class of possible Hamiltonians, one particular Hamiltonian, for which indeed the Second Law holds. We then obtain \[\overset{\leftrightarrow}{\chi} = \frac{\Gamma}{2}\left[\overset{\leftrightarrow}{1}-\alpha_{d} \overset{\leftrightarrow}{C}^{-1}\right], \tag{69}\] \[f = \frac{\Gamma}{2}\left[\mathrm{tr}\overset{\leftrightarrow}{C}- \alpha_{d}\mathrm{tr}\ln\overset{\leftrightarrow}{C}\right], \tag{70}\] where we have set an unimportant integration constant to zero. For \(\alpha_{d}>0\), this Hamiltonian is bounded from below, as it should be. For \(\alpha_{d}=0\), this is also the case, due to the positive-semidefiniteness of \(\overset{\leftrightarrow}{C}\). We can thus always satisfy the Second Law by choosing an appropriate Hamiltonian. If one then has additional arguments, e. g. from a microscopic model, why that particular Hamiltonian should be the physically correct one, then the latter can be used as well. Indeed, this is the route of the standard Oldroyd-B model: Here one derives the Hamiltonian with \(\alpha_{d}=1\) from a microscopic model, and the equation of motion (again with \(\alpha_{d}=1\)) from the corresponding kinetic theory. The main point of this paper is however that this choice is less compelling than one might think. 
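As a simple numerical illustration of these statements, one may follow the zero-flow relaxation \(\partial_{t}\overleftarrow{C}=-\overleftarrow{C}+\alpha_{d}\overleftarrow{1}\), whose exact solution is \(\overleftarrow{C}(t)=\alpha_{d}\overleftarrow{1}+(\overleftarrow{C}_{0}-\alpha_{d}\overleftarrow{1})e^{-t}\), and monitor the free energy density of Eq. 70. The sketch below (Python with NumPy assumed; \(\Gamma\), \(\alpha_{d}\) and the initial tensor are arbitrary test values) checks that the eigenvalues remain positive and that \(f\) decreases monotonically, as the Second Law requires.

```python
import numpy as np

# Arbitrary test values: alpha_d = 1 reproduces the standard model, alpha_d = 0 the modified one.
Gamma, alpha_d = 1.0, 1.0

def f(C):
    # Free energy density of Eq. (70); tr ln C is evaluated from the eigenvalues.
    lam = np.linalg.eigvalsh(C)
    return 0.5 * Gamma * (np.sum(lam) - alpha_d * np.sum(np.log(lam)))

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
C0 = A @ A.T + 0.1 * np.eye(3)                     # positive-definite initial condition

prev = np.inf
for t in np.linspace(0.0, 10.0, 201):
    # Exact solution of dC/dt = -C + alpha_d * 1 at vanishing flow.
    C = alpha_d * np.eye(3) + (C0 - alpha_d * np.eye(3)) * np.exp(-t)
    assert np.linalg.eigvalsh(C).min() > 0.0       # positive-definiteness is preserved
    assert f(C) <= prev + 1e-12                    # the free energy decreases monotonically
    prev = f(C)

print(np.round(C, 4))                              # close to alpha_d times the unit tensor
```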
The choice \(\alpha_{d}=0\) is, from the microscopic point of view, physically acceptable as well, and, in our opinion, probably even preferable. We will outline these arguments in more detail below. Are there other solutions of the problem posed in Eq. 68? For arbitrary functions \(\alpha_{H}(\overleftarrow{C})\) this analysis is difficult, but for the case that \(\alpha_{H}\) is simply a constant independent of \(\overleftarrow{C}\) it is fairly easy. If \(\lambda_{k}>0\) are the eigenvalues of \(\overleftarrow{C}\), then \[\left(\alpha_{d}-\alpha_{H}\right)\mathrm{tr}\overleftarrow{\chi}=\frac{\Gamma}{2}\left(\alpha_{d}-\alpha_{H}\right)\sum_{k}\left(1-\alpha_{H}\lambda_{k}^{-1}\right). \tag{71}\] Assuming \(\alpha_{d}\neq\alpha_{H}\), and also \(\alpha_{H}>0\) (negative values are prohibited, since then the Hamiltonian would not be bounded from below), we see that this expression does _not_ have a definite sign, since \(1-\alpha_{H}\lambda_{k}^{-1}\) changes sign upon varying \(\lambda_{k}\). The only exception is \(\alpha_{H}=0\), in which case the expression does not depend on the conformation tensor at all. In this case, however, \(\alpha_{d}\leq 0\) is needed for the Second Law. Since on the other hand we need \(\alpha_{d}\geq 0\) for maintaining the positivity of \(\overleftarrow{C}\), the only possibility that remains is \(\alpha_{d}=0\), which brings us back to the original case \(\alpha_{d}=\alpha_{H}\). To summarize our results so far: We have discussed a class of generalized Oldroyd-B models, with constitutive equation \[\delta\overleftarrow{C}=-\overleftarrow{C}+\alpha_{d}\overleftarrow{1}, \tag{72}\] where \(\alpha_{d}\geq 0\), such that the dynamics conserves the positive semi-definiteness of \(\overleftarrow{C}\). We have seen that the conservative part of the dynamics conserves the Hamiltonian \[\mathcal{H}=\int d\vec{r}\left[\frac{\vec{v}^{2}}{2}+f(\overleftarrow{C})\right], \tag{73}\] with \[f(\overleftarrow{C})=\frac{\Gamma}{2}\left[\mathrm{tr}\overleftarrow{C}-\alpha_{H}\mathrm{tr}\ln\overleftarrow{C}\right], \tag{74}\] where \(\alpha_{H}\geq 0\) to make sure that the Hamiltonian is bounded from below. Furthermore, we have seen that the Second Law requires that \(\alpha_{d}=\alpha_{H}=\alpha\) (we will omit the indices \(d\) and \(H\) from now on). In this case the constitutive equation can be written in the canonical form \(\delta\overleftarrow{C}=\) "transport coefficient times thermodynamic driving force", where the negative of the latter is given by \[\overleftarrow{\chi}=\frac{\partial f}{\partial\overleftarrow{C}}=\frac{\Gamma}{2}\left[\overleftarrow{1}-\alpha\overleftarrow{C}^{-1}\right], \tag{75}\] and the constitutive equation reads \[\delta\overleftarrow{C}=-\frac{2}{\Gamma}\overleftarrow{C}\cdot\overleftarrow{\chi}. \tag{76}\] Similarly, the dissipation rate assumes the canonical expression required by the GENERIC formalism [9; 10; 11], i.e. a quadratic form in the driving forces: \[\partial_{t}f=-\frac{2}{\Gamma}\mathrm{tr}\left(\overleftarrow{\chi}\cdot\overleftarrow{C}\cdot\overleftarrow{\chi}\right). \tag{77}\] The basic principles of equilibrium and non-equilibrium thermodynamics are thus satisfied, and this is true for any value \(\alpha\geq 0\). The standard Oldroyd-B model is the special case \(\alpha=1\). Interestingly, for \(\alpha>0\) the minimal Hamiltonian and also the minimum dissipation rate do not occur at \(\overleftarrow{C}=0\) but rather at \(\overleftarrow{C}=\alpha\overleftarrow{1}\), where \(\overleftarrow{\chi}=0\).
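For a spatially homogeneous imposed shear flow the momentum equation is satisfied by the linear shear profile itself and only the constitutive equation evolves, so the splitting scheme described earlier (steps (i)-(iii)) can be written down in a few lines. The sketch below (Python with NumPy; shear rate, time step and \(\alpha\) are arbitrary test values, and the relaxation rate is unity as in the reduced units used here) uses the exact positivity-preserving propagator \(\overleftarrow{C}\to B\overleftarrow{C}B^{T}\) with \(B=e^{s(\overleftarrow{\kappa}-\overleftarrow{1}/2)}\) for steps (i) and (iii), and \(\overleftarrow{C}\to\overleftarrow{C}+\alpha h\overleftarrow{1}\) for step (ii). For \(\alpha=1\) it relaxes to the familiar steady state of simple shear; for \(\alpha=0\) the conformation tensor decays towards zero, consistent with the trivial decaying solution of the modified model discussed later in the paper.

```python
import numpy as np

# Arbitrary test values; reduced units with unit relaxation rate, as in Eq. (72).
alpha, gdot, h = 1.0, 0.5, 1e-3                    # try alpha = 0 for the modified model
kappa = np.array([[0.0, gdot], [0.0, 0.0]])        # imposed homogeneous simple shear, v_x = gdot * y

def flow_relax_substep(C, s):
    # Exact propagator of dC/dt = kappa C + C kappa^T - C over a time s:
    # C -> B C B^T with B = exp(s (kappa - 1/2)); since kappa^2 = 0, B = e^{-s/2} (1 + s kappa).
    B = np.exp(-0.5 * s) * (np.eye(2) + s * kappa)
    return B @ C @ B.T                             # congruence: positive semidefiniteness is preserved

C = np.eye(2)                                      # equilibrium initial condition
for _ in range(int(20.0 / h)):                     # propagate for twenty relaxation times
    C = flow_relax_substep(C, 0.5 * h)             # step (i), duration h/2
    C = C + alpha * h * np.eye(2)                  # step (ii), duration h
    C = flow_relax_substep(C, 0.5 * h)             # step (iii), duration h/2
    assert np.linalg.eigvalsh(C).min() >= -1e-12   # eigenvalues stay non-negative (up to roundoff)

C_steady = alpha * np.array([[1.0 + 2.0 * gdot**2, gdot],
                             [gdot, 1.0]])         # analytic steady state of Eq. (72) in this flow
print(np.round(C, 4))
print(C_steady)                                    # for alpha = 0 both are (close to) zero
```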
An even more interesting observation is that, for \(\alpha>0\), the dissipation rate becomes very large whenever the eigenvalues of \(\overleftarrow{C}\) become either large or small. Actually, the rate tends to infinity when one of the eigenvalues tends to zero. This may perhaps provide an explanation for the numerical evidence obtained in Ref. [6], which considered the flow of a (standard, \(\alpha=1\)) Oldroyd-B fluid around an obstacle. The data seem to indicate that a solution beyond a certain critical Weissenberg number \(Wi\) simply does not exist ("high Weissenberg number problem", HWNP). In this context, it should be recalled that \(Wi\) is defined as the dimensionless product of shear rate and molecular relaxation time. In other words, no solution seems to exist at high flow rates. Now, from a physical point of view, we must assume that there is some "pump" that provides a constant power to move the fluid and to balance the dissipative losses, such that a net non-equilibrium steady state is established. If the system provides an infinite energy sink, it is conceivable that we run into a situation where we keep on increasing the pumping power without the flow rate increasing any further, such that higher values of \(Wi\) become inaccessible. _This situation is significantly different if we set \(\alpha=0\)._ Here we have \(f=(\Gamma/2)\mathrm{tr}\overleftarrow{C}\) and \(\partial_{t}f=-(\Gamma/2)\mathrm{tr}\overleftarrow{C}=-f\). We still get large energies and large dissipation rates for large eigenvalues, but the singularity at \(\overleftarrow{C}=0\) is removed. We therefore propose in the present paper to consider the new constitutive equation, which results from setting \(\alpha=0\). It may be that this model has a less severe, or perhaps even non-existent, HWNP. Numerical investigation of the behavior of this system is left for future work. In the present paper, we wish to establish that this model is thermodynamically consistent (this is the content of the paper so far), and that it is _not_ an arbitrary _ad hoc_ choice, but can be derived from the same microscopic picture upon which the standard Oldroyd-B model is based. The key to constructing this new constitutive equation is _to do the statistical mechanics in a different ensemble_. ## VI General considerations on micro-macro coupling ### General remarks The basic idea behind the standard derivation of the Oldroyd-B model from kinetic theory may be sketched as follows. Starting points are the following observations and assumptions: (i) The influence of the polymer degrees of freedom on the flow behavior enters only via the term \(\Gamma\nabla\cdot\overleftarrow{C}\) of the momentum equation. Therefore one needs an equation of motion for \(\overleftarrow{C}\). (ii) Physically, \(\overleftarrow{C}\) is nothing but the (properly normalized) stress coming from the polymers. (iii) On a microscopic level, the polymer chains should be represented by some sort of bead-spring model. (iv) Within such a model, the Kramers (or virial) formula provides a well-defined expression for the stress in terms of bead coordinates. (v) This stress tensor must then be averaged, and the average value enters the macroscopic momentum equation. (vi) At the same time, the underlying microscopic model for the dynamics results in an equation of motion for the averaged stress, which is then the desired constitutive equation.
The Oldroyd-B model then postulates very specific assumptions on (i) the microscopic model, (ii) its dynamics, and (iii) _the averaging procedure_. The combination of these three ingredients then gives rise to the specific formulation of the model. The point that we wish to make here is that the averaging procedure is much less unique and compelling than one might think. As a matter of fact, there are several choices possible, and we wish to present here a new alternative way for the averaging procedure, which then gives rise to the modified constitutive equation with \(\alpha=0\). Conversely, the microscopic model, both in terms of its degrees of freedom, and in terms of its assumed dynamics, will remain untouched. ### Constrained averages The lack of uniqueness becomes most transparent if formulated in a general and abstract language. We assume that the microscopic description is based on a vector of microscopic phase-space coordinates \(\vec{\xi}\), and that the dynamics can be described in terms of the evolution of the phase-space probability density \(P(\vec{\xi},t)\). For an observable \(A\) (i. e. a phase-space function \(A(\vec{\xi})\)), the simple thermal average is then given by \[\langle A\rangle\left(t\right)=\int d\vec{\xi}\,P(\vec{\xi},t)A(\vec{\xi}). \tag{78}\] However, this average is typically _not_ the average whose result should be fed into the macroscopic description. As is well-known from the Mori-Zwanzig formalism Mori and Zwanzig (1961); Zwanzig (1962), the macroscopic description is rather based upon the identification (or better: choice) of a set of macroscopic or "slow" observables \(X_{i}\), i. e. functions \(X_{i}(\vec{\xi})\), which facilitate the coupling. Averaging should then only be done over the remaining "fast" variables. In order to exclude the slow variables from averaging, we introduce Dirac delta functions in the phase space integrals, analogously to the microcanonical ensemble known from standard statistical physics. We therefore define the constrained average of an observable \(A(\vec{\xi})\) as follows: \[\left[A\right](t) = \frac{\int d\vec{\xi}\,\prod_{j}\delta(X_{j}-Y_{j})PA}{\int d\vec {\xi}\,\prod_{j}\delta(X_{j}-Y_{j})P} \tag{79}\] \[= \frac{\left(\prod_{j}\delta(X_{j}-Y_{j})A\right)}{\left(\prod_{j }\delta(X_{j}-Y_{j})\right)},\] where \(Y_{j}\) is the macroscopic value of \(X_{j}\). To determine the \(Y_{j}\), we postulate the reasonable requirement that they should be the unconstrained averages of the \(X_{j}\): \[Y_{j}=\left\langle X_{j}\right\rangle. \tag{80}\] Furthermore, we have \[\left[X_{i}\right] = \frac{\left\langle\prod_{j}\delta(X_{j}-Y_{j})X_{i}\right\rangle }{\left\langle\prod_{j}\delta(X_{j}-Y_{j})\right\rangle} \tag{81}\] \[= \frac{\left\langle\prod_{j}\delta(X_{j}-Y_{j})Y_{i}\right\rangle }{\left\langle\prod_{j}\delta(X_{j}-Y_{j})\right\rangle}=Y_{i},\] as it should be. One thus sees that one gets different averaging procedures depending on the choice of the slow variables \(X_{i}\). Traditional polymer rheology has always assumed that the only reasonable choice for the \(X_{i}\) are the components of the stress tensor, or, directly related to it, the components of the conformation tensor. However, the conformation tensor may as well take the role of an observable \(A\), while for the \(X_{i}\) different variables are being used. What we propose in the present paper is to rather take the components of the end-to-end vector for the \(X_{i}\). 
As a shorthand notation, we combine the set of observables \(X_{i}\) and their averages \(Y_{i}\) as vectors \(\vec{X}\) and \(\vec{Y}\). Similarly, we assume that we are not interested in only a single observable \(A\), but in a whole set, again written as a vector \(\vec{A}\). We thus may write \[\left[\vec{A}\right]=\frac{\left\langle\delta\left(\vec{X}-\vec{Y}\right) \vec{A}\right\rangle}{\left\langle\delta\left(\vec{X}-\vec{Y}\right)\right\rangle} \tag{82}\] and \[\left[\vec{X}\right]=\vec{Y}=\left\langle\vec{X}\right\rangle. \tag{83}\] ### Hamiltonian One should note that the macroscopic Hamiltonian should _not_ be calculated according to that recipe -- it is not an observable but rather a thermodynamic potential. Assuming that the microscopic system is governed by a Hamiltonian \(\mathcal{H}_{0}(\vec{\xi})\), and is studied in the canonical ensemble, where \(\beta=1/(k_{B}T)\) (\(k_{B}\) Boltzmann's constant, \(T\) absolute temperature), then the constrained partition function is \[Z\left(\vec{Y}\right)=\int d\vec{\xi}\,\delta\left(\vec{X}-\vec{Y}\right)\exp \left(-\beta\mathcal{H}_{0}\right), \tag{84}\] and the macroscopic Hamiltonian (or free energy) is \[\mathcal{H}=-\beta^{-1}\ln Z. \tag{85}\] ### Fokker-Planck dynamics We now assume that the underlying dynamics on the microscale is described by an evolution equation of the Fokker-Planck type for the probability density \(P\), \[\partial_{t}P=\mathcal{L}P, \tag{86}\] where \(\mathcal{L}\) is the Fokker-Planck operator \[\mathcal{L}=\frac{\partial}{\partial\bar{\xi}}\cdot\overleftarrow{\mathcal{D} }\cdot\frac{\partial}{\partial\bar{\xi}}-\frac{\partial}{\partial\bar{\xi}} \cdot\tilde{V}; \tag{87}\] here \(\overleftarrow{\mathcal{D}}\) denotes the (symmetric and positive-semidefinite) diffusion tensor, and \(\tilde{V}\) the drift velocity in \(\bar{\xi}\) space. Its adjoint is written as \[\mathcal{L}^{\dagger}=\frac{\partial}{\partial\bar{\xi}}\cdot \overleftarrow{\mathcal{D}}\cdot\frac{\partial}{\partial\bar{\xi}}+\tilde{V} \cdot\frac{\partial}{\partial\bar{\xi}}. \tag{88}\] For the time evolution of the unconstrained average of an observable \(A\) we thus find \[\partial_{t}\left\langle A\right\rangle = \partial_{t}\int d\bar{\xi}A(\bar{\xi})P(\bar{\xi},t) \tag{89}\] \[= \int d\bar{\xi}A(\bar{\xi})\mathcal{L}P(\bar{\xi},t)\] \[= \int d\bar{\xi}\,P(\bar{\xi},t)\mathcal{L}^{\dagger}A(\bar{\xi})\] \[= \left\langle\mathcal{L}^{\dagger}A\right\rangle.\] This, in turn, tells us that the time evolution of the vector of ensemble-defining quantities \(\tilde{Y}\) is given by \[\partial_{t}\tilde{Y}=\partial_{t}\left\langle\vec{X}\right\rangle=\left\langle \mathcal{L}^{\dagger}\vec{X}\right\rangle. \tag{90}\] We thus find that the rheological model is constructed from the combination of the microscopic Fokker-Planck model with a set of chosen ensemble-defining observables \(\tilde{X}\). Equation 90 (or a suitable equivalent formulation) thus turns out to be the _constitutive equation_ of the rheological model. One should realize that one obtains a closed-form constitutive equation only if \(\mathcal{L}^{\dagger}\vec{X}\) is expressable as a linear function of \(\vec{X}\). For the Oldroyd-B models (both standard and modified) discussed in the present paper this is the case; however in general this is more the exception than the rule. If it is not the case, Eq. 90 nevertheless remains not only valid (within the framework of the chosen model), but also practically useful: It is always possible to estimate the RHS of Eq. 
90 by stochastic Brownian Dynamics (BD) simulations. This philosophy has already been put into practice in the so-called CONNFFES-SIT approach [15]. The micro-macro coupling, however, involves not only the vector of ensemble-defining quantities \(\vec{X}\), but also a vector of observables \(\vec{A}\), whose constrained average \(\left[\vec{A}\right]\) appears in the macroscopic dynamic equation. If \(\vec{A}\) happens to coincide with \(\vec{X}\), then matters are easy, since then \[\left[\vec{A}\right]=\left[\vec{X}\right]=\tilde{Y}, \tag{91}\] such that the results from the integration of the constitutive equation may be used directly. This is exactly the route that is taken in the standard variant of the Oldroyd-B model. However, in the general case one rather has to use Eq. 82. The difficulty here is that in many cases it is not possible to construct a closed-form expression for the RHS of Eq. 82. This is directly related to the fact that one typically cannot construct a closed-form analytical solution of the Fokker-Planck equation (FPE), if one allows for arbitrary initial conditions and arbitrary non-equilibrium external driving with unknown time dependence. We presume, however, that it should be possible to construct a suitable sampling algorithm which allows the estimation of the averages on the RHS of Eq. 82 by BD. The details of such a procedure are however not yet clear and are a topic for future research. There is, however, yet one other case where the evaluation of \(\left[\vec{A}\right]\) becomes very easy. This is if the constraints apply to the whole underlying set of dynamic variables, such that we simply have \(\vec{X}=\bar{\xi}\) and no non-trivial averaging remains. Indeed, we then have \[\left\langle\delta\left(\vec{X}-\vec{Y}\right)\vec{A}\right\rangle = \int d\bar{\xi}\delta\left(\bar{\xi}-\tilde{Y}\right)\vec{A}\left( \bar{\xi}\right)P\left(\bar{\xi},t\right) \tag{92}\] \[= \vec{A}\left(\tilde{Y}\right)P\left(\tilde{Y},t\right)\] and \[\left\langle\delta\left(\vec{X}-\tilde{Y}\right)\right\rangle=P\left(\tilde{Y},t\right), \tag{93}\] such that \[\left[\vec{A}\right]=\vec{A}\left(\tilde{Y}\right), \tag{94}\] which allows us to again directly use the results of the integration of the constitutive equation. It is this second route that we use in the present paper to construct the modified Oldroyd-B model. ## VII The Oldroyd-B Hamiltonian from statistical mechanics From now on, we will use conventional units again. Consider a single Hookean dumbbell (this is the simplified model for a polymer molecule) with spring constant \(k\) in \(d\)-dimensional space. In thermal equilibrium, the vector \(\tilde{q}\), which connects the two beads with each other, will have a mean square extension (in tensorial form) of \[\left\langle q_{\alpha}q_{\beta}\right\rangle=\frac{k_{B}T}{k}\delta_{\alpha \beta}, \tag{95}\] as a direct consequence of the equipartition theorem. This motivates the introduction of a non-dimensionalized extension vector \(\tilde{p}\) via \[\tilde{q}=\left(\frac{k_{B}T}{k}\right)^{1/2}\tilde{p}, \tag{96}\] such that \[\left\langle p_{\alpha}p_{\beta}\right\rangle=\delta_{\alpha\beta}. \tag{97}\] Consider now a set of \(N\) such dumbbells with normalized extension vectors \(\bar{p}_{i}\). Assuming that there is no interaction whatsoever, except the intramolecular spring forces, the corresponding Hamiltonian is given by \[\beta\mathcal{H}_{0}=\frac{1}{2}\sum_{i}\bar{p}_{i}^{2}. 
\tag{98}\] ### Conformation tensor ensemble Let us now define a microscopic expression for the conformation tensor via \[\hat{C}_{\alpha\beta}=N^{-1}\sum_{i}p_{i\alpha}p_{i\beta}. \tag{99}\] Consequently, \[\beta\mathcal{H}_{0}=\frac{N}{2}\hat{C}_{\alpha\alpha}=\frac{N}{2}\mathrm{tr} \hat{\overleftarrow{C}}. \tag{100}\] We now wish to evaluate the partition function at fixed conformation tensor \(\overleftarrow{C}\), of which we assume that it is non-degenerate. A fixed conformation tensor means a fixed value for \(\mathcal{H}_{0}\), i. e. \(\beta\mathcal{H}_{0}=(N/2)\mathrm{tr}\overleftarrow{C}\). The condition that the configuration of dumbbells, \(\{\bar{p}_{i}\}\), has to satisfy is therefore \[N^{-1}\sum_{i}\bar{p}_{i}\bar{p}_{i}^{T}=\overleftarrow{C}. \tag{101}\] Introducing the variable transformation \[\bar{p}_{i}=\overleftarrow{C}^{1/2}\bar{\pi}_{i}, \tag{102}\] the condition may also be written as \[\overleftarrow{C}^{1/2}N^{-1}\sum_{i}\bar{\pi}_{i}\bar{\pi}_{i}^{T} \overleftarrow{C}^{1/2}=\overleftarrow{C}, \tag{103}\] or, after multiplication with \(\overleftarrow{C}^{-1/2}\) from both left and right, \[N^{-1}\sum_{i}\bar{\pi}_{i}\bar{\pi}_{i}^{T}=\overleftarrow{1}. \tag{104}\] Furthermore, we notice \[d\bar{p}_{1}d\bar{p}_{2}\ldots d\bar{p}_{N}=\left[\det\overleftarrow{C}^{1/2 }\right]^{N}d\bar{\pi}_{1}d\bar{\pi}_{2}\ldots d\bar{\pi}_{N} \tag{105}\] and re-write \[\left[\det\overleftarrow{C}^{1/2}\right]^{N} = \left[\det\overleftarrow{C}\right]^{N/2}=\exp\left(\frac{N}{2} \ln\det\overleftarrow{C}\right) \tag{106}\] \[= \exp\left(\frac{N}{2}\mathrm{tr}\ln\overleftarrow{C}\right).\] Similarly, we find \[\delta\left(N^{-1}\sum_{i}\bar{p}_{i}\bar{p}_{i}^{T}-\overleftarrow {C}\right) \tag{107}\] \[= \delta\left(\overleftarrow{C}^{1/2}\left(N^{-1}\sum_{i}\bar{\pi }_{i}\bar{\pi}_{i}^{T}-\overleftarrow{1}\right)\overleftarrow{C}^{1/2}\right)\] \[= \left(\det\overleftarrow{C}\right)^{-1}\delta\left(N^{-1}\sum_{i} \bar{\pi}_{i}\bar{\pi}_{i}^{T}-\overleftarrow{1}\right)\] \[= \exp\left(-\mathrm{tr}\ln\overleftarrow{C}\right)\delta\left(N^{- 1}\sum_{i}\bar{\pi}_{i}\bar{\pi}_{i}^{T}-\overleftarrow{1}\right).\] All in all, this results in \[Z=\exp\left(-\frac{N}{2}\mathrm{tr}\overleftarrow{C}+\frac{N-2}{2}\mathrm{tr} \ln\overleftarrow{C}\right)\bar{Z}, \tag{108}\] with \[Z=\int d\bar{\pi}_{1}d\bar{\pi}_{2}\ldots d\bar{\pi}_{N}\delta\left(N^{-1} \sum_{i}\bar{\pi}_{i}\bar{\pi}_{i}^{T}-\overleftarrow{1}\right). \tag{109}\] \(\bar{Z}\) does not depend on \(\overleftarrow{C}\) anymore, which means that it may be set to unity, by means of choosing a proper normalization of the partition function (or, equivalently, a convenient zero for the free energy). We thus find \[\mathcal{H} = k_{B}T\frac{N}{2}\mathrm{tr}\overleftarrow{C}-k_{B}T\frac{N-2}{ 2}\mathrm{tr}\ln\overleftarrow{C} \tag{110}\] \[\approx k_{B}T\frac{N}{2}\left(\mathrm{tr}\overleftarrow{C}-\mathrm{tr} \ln\overleftarrow{C}\right),\] where in the last step we have assumed that \(N\) is large. Assuming that the system is confined to a volume \(V\), such that \(n=N/V\) is the number of dumbbells per unit volume, the free energy per unit volume is \[f\left(\overleftarrow{C}\right)=k_{B}T\frac{n}{2}\left(\mathrm{tr} \overleftarrow{C}-\mathrm{tr}\ln\overleftarrow{C}\right). \tag{111}\] This is indeed the free energy of the standard Oldroyd-B model as identified before, see Eq. 74 with \(\alpha=1\), where we need to identify \(nk_{B}T\), after transformation to natural units, with the parameter \(\Gamma\). 
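As a side remark, the manipulations in Eqs. 106 and 107 rely on the identity \(\det\overleftarrow{C}=\exp(\mathrm{tr}\ln\overleftarrow{C})\) for a positive-definite tensor; a short numerical check (Python with NumPy; the test matrix and \(N\) are arbitrary) is given below.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
C = A @ A.T + np.eye(3)                       # positive-definite test tensor
N = 7                                         # arbitrary number of dumbbells

lam = np.linalg.eigvalsh(C)
lhs = np.linalg.det(C) ** (N / 2)             # [det C^{1/2}]^N, as in Eq. (106)
rhs = np.exp(0.5 * N * np.sum(np.log(lam)))   # exp((N/2) tr ln C)
print(lhs, rhs)                               # agree up to rounding
```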
### End-to-end vector ensemble We now wish to perform the same exercise as in the previous subsection; however this time we do not wish to keep the conformation tensor fixed but rather the normalized end-to-end vector \[\hat{\bar{Q}}=N^{-1}\sum_{i}\bar{p}_{i}. \tag{112}\] This problem can be solved very easily, relying on standard results of Gaussian statistics. We write the partition function with a constraining value \(\bar{Q}\) as \[Z=\frac{\int d\bar{p}_{1}\ldots d\bar{p}_{N}\delta\left(\bar{Q}-\hat{\bar{Q}} \right)\exp\left(-\beta\mathcal{H}_{0}\right)}{\int d\bar{p}_{1}\ldots d\bar{p }_{N}\exp\left(-\beta\mathcal{H}_{0}\right)}, \tag{113}\] where we have introduced the denominator for convenient normalization. Because of \(\int d\bar{Q}Z(\bar{Q})=1\) we may interpret \(Z\) just as the probability density for the end-to-end vector. However, the \(\bar{p}_{i}\) are just Gaussian random variables and \(\hat{\bar{Q}}\) is a linear combination thereof, meaning that it is Gaussian as well. Trivial evaluation yields \[\left\langle\bar{Q}\right\rangle=0 \tag{114}\] \[\left\langle Q_{\alpha}Q_{\beta}\right\rangle=N^{-1}\delta_{\alpha\beta}, \tag{115}\] which means \[Z(\tilde{Q})=\text{const.}\exp\left(-\frac{N}{2}\tilde{Q}^{2}\right). \tag{116}\] After re-adjusting the zero of the free energy, we thus obtain \[\mathcal{H}=k_{B}T\frac{N}{2}\tilde{Q}^{2}. \tag{117}\] Within the framework of this modified theory, we therefore have to define the conformation tensor differently -- instead of \(\hat{\overline{C}}=N^{-1}\sum_{i}\tilde{p}_{i}\tilde{p}_{i}^{T}\) ("average of the square"), we now have to consider \[\hat{\overline{C}}_{Q}=\hat{\tilde{Q}}\hat{\tilde{Q}}^{T}=\left(N^{-1}\sum_{i }\tilde{p}_{i}\right)\left(N^{-1}\sum_{i}\tilde{p}_{i}\right)^{T} \tag{118}\] ("square of the average"). With that re-definition of the conformation tensor, we get \[\mathcal{H}=k_{B}T\frac{N}{2}\text{tr}\hat{\overline{C}}_{Q} \tag{119}\] and for the free energy per unit volume \[f\left(\hat{\overline{C}}_{Q}\right)=k_{B}T\frac{n}{2}\text{tr}\hat{ \overline{C}}_{Q}, \tag{120}\] which is identical to the result derived before (Eq. 74), however this time with \(\alpha=0\). ## VIII Kramers stress tensor The key to couple the Fokker-Planck system to the hydrodynamics of the solution is the Kramers (or virial) expression for the stress tensor [8]. Again we consider our set of \(N\) harmonic dumbbells immersed homogeneously in a volume \(V\). The microscopic expression for the polymer part of the stress tensor is then \[\tau^{(mic)}_{\alpha\beta}=-\frac{N}{V}k_{B}T\delta_{\alpha\beta}-\frac{1}{V} \sum_{i}q_{i\alpha}F_{i\beta}. \tag{121}\] Here \(\tilde{q}_{i}\) is the vector that connects the two beads that together form the dumbbell \(i\) under consideration. Similarly, \(\tilde{F}_{i}\) is the corresponding force, \(\tilde{F}_{i}=-k\tilde{q}_{i}\). Therefore, \[\tau^{(mic)}_{\alpha\beta}=-\frac{N}{V}k_{B}T\delta_{\alpha\beta}+\frac{k}{V} \sum_{i}q_{i\alpha}q_{i\beta}. 
\tag{122}\] Again we introduce normalized connector vectors \(\tilde{p}_{i}\) via \[\tilde{q}_{i}=\left(\frac{k_{B}T}{k}\right)^{1/2}\tilde{p}_{i}, \tag{123}\] which gives rise to \[\tau^{(mic)}_{\alpha\beta} = -\frac{N}{V}k_{B}T\delta_{\alpha\beta}+\frac{k_{B}T}{V}\sum_{i}p_ {i\alpha}p_{i\beta} \tag{124}\] \[= -\frac{N}{V}k_{B}T\delta_{\alpha\beta}+\frac{Nk_{B}T}{V}\hat{C}_ {\alpha\beta}\] \[= nk_{B}T\left(\hat{C}_{\alpha\beta}-\delta_{\alpha\beta}\right).\] On the macroscopic (hydrodynamic) level, an analogous macroscopic version of \(\hat{\overline{\tau}}\) will enter. It is clear that this must be some suitably averaged stress tensor. As we have learned from the previous developments, the averaging depends on the underlying ensemble, or, in other words, on the choice of the "slow" variables. We thus obtain \[\tau_{\alpha\beta}=nk_{B}T\left(\left[\hat{C}_{\alpha\beta}\right]-\delta_{ \alpha\beta}\right). \tag{125}\] As the second term does not contribute to the forcing term in the hydrodynamic equation of motion, it may as well be omitted. This yields \[\tau_{\alpha\beta}=nk_{B}T\left[\hat{C}_{\alpha\beta}\right]. \tag{126}\] We have already learned that in dimensionless units we need to identify \(nk_{B}T\) with \(\Gamma\). In summary, we thus obtain a rheological model which is specified by the momentum equation (in reduced units) \[\nabla\cdot\vec{v} = 0, \tag{127}\] \[D_{i}\vec{v} = -\nabla p+\nabla^{2}\vec{v}+\Gamma\nabla\cdot\left[\hat{\overline {C}}\right], \tag{128}\] augmented by a constitutive equation, i. e. an equation of motion for \(\left[\hat{\overline{C}}\right]\). To make further progress, we need to consider the microscopic dynamics in detail (see the following section), and also the details of the constrained average. As all dumbbells have the same properties, we may write both the constrained and the unconstrained averages of \(\hat{\overline{C}}\) as single-dumbbell averages: \[\left[\hat{C}_{\alpha\beta}\right] = \left[p_{\alpha}p_{\beta}\right], \tag{129}\] \[\left\langle\hat{C}_{\alpha\beta}\right\rangle = \left\langle p_{\alpha}p_{\beta}\right\rangle. \tag{130}\] For the ensemble which fixes the conformation tensor (standard Oldroyd-B model), we can directly take advantage of the fact that for ensemble-defining observables the constrained average coincides with the unconstrained one, see Sec. VI, such that \(\left[\hat{\overline{C}}\right]=\left\langle\hat{\overline{C}}\right\rangle\). In other words, for the standard Oldroyd-B model we have \[\left[\hat{C}_{\alpha\beta}\right]=\left[p_{\alpha}p_{\beta}\right]=\left\langle \hat{C}_{\alpha\beta}\right\rangle=\left\langle p_{\alpha}p_{\beta}\right\rangle. \tag{131}\] This means that we just have to find the equation of motion for \(\left\langle p_{\alpha}p_{\beta}\right\rangle\) within a single-dumbbell picture, and the construction of the model is done. This is an easy and straightforward task, and this is the theoretical reason why the standard version of the Oldroyd-B model is so popular. If instead the average end-to-end vector \(\tilde{Q}\) defines the ensemble, things are slightly more involved. We here have the situation where the observables that are needed for the macroscopic equations differ from those that define the ensemble. As already outlined at the end of Sec. VI, it is in that situation typically impossible to construct a closed constitutive equation of motion, except if the number of constraints is identical to the number of degrees of freedom. 
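Before specializing the ensemble further, a quick equilibrium sanity check of the Kramers expression may be useful. The sketch below (Python with NumPy; the sample size is arbitrary and \(nk_{B}T\) is set to one) draws dumbbells from the equilibrium Gaussian distribution and evaluates Eq. 124: the "average of the square" estimate of the conformation tensor is close to the unit tensor, so the averaged polymer stress essentially vanishes at equilibrium, while the "square of the average" introduced in the previous subsection is close to zero.

```python
import numpy as np

# Reduced units with n k_B T = 1; M is an arbitrary sample size.
n_kBT, M = 1.0, 200000
rng = np.random.default_rng(4)
p = rng.normal(size=(M, 3))                        # equilibrium dumbbells: <p_a p_b> = delta_ab, Eq. (97)

C_hat = p.T @ p / M                                # "average of the square", close to the unit tensor
tau_kramers = n_kBT * (C_hat - np.eye(3))          # Eq. (124): the averaged polymer stress nearly vanishes
C_Q = np.outer(p.mean(axis=0), p.mean(axis=0))     # "square of the average", close to zero

print(np.round(tau_kramers, 3))
print(np.round(C_Q, 6))
```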
We are therefore led to the conclusion that in the present case we should set \(N=1\), such that the constraint acts on the single-dumbbell end-to-end vector. In this case, we trivially obtain \[\left[p_{\alpha}p_{\beta}\right]=Q_{\alpha}Q_{\beta}. \tag{132}\] Since on the other hand \(Q_{\alpha}=\left\langle p_{\alpha}\right\rangle\), we may also write \[\left[p_{\alpha}p_{\beta}\right]=\left\langle p_{\alpha}\right\rangle\left \langle p_{\beta}\right\rangle. \tag{133}\] Again, we are led to consider the square of the average instead of the average of the square (a similar notion already occurred when we calculated the Hamiltonian in Sec. VII.2). The desired constitutive equation therefore is found from a single-dumbbell dynamical model, by finding the equation of motion for \(\left\langle p_{\alpha}\right\rangle\left\langle p_{\beta}\right\rangle\), which again is an easy task. The fact that one should set \(N=1\) is further corroborated by a consideration of the averages _in thermal equilibrium_, which can be calculated exactly, for arbitrary values of \(N\), by straightforward evaluation of Gaussian integrals. One finds (see Appendix A) \[\left[p_{\alpha}p_{\beta}\right]=Q_{\alpha}Q_{\beta}+\left(1-N^{-1}\right) \delta_{\alpha\beta}, \tag{134}\] which may be written in the suggestive form \[\left[p_{\alpha}p_{\beta}\right]-\left[p_{\alpha}\right]\left[p_{ \beta}\right] \tag{135}\] In other words, the constraint reduces the fluctuations by the factor \(1-N^{-1}\). For \(N=1\) there are no fluctuations left (this result was already clear from the considerations above), while for \(N\rightarrow\infty\) there is no reduction in fluctuations whatsoever. This is in perfect accordance with the general notion in statistical physics that in the thermodynamic limit (\(N\rightarrow\infty\)) constraints do not matter, and ensembles become equivalent. It is reasonable to assume (though difficult to prove) that similar behavior also occurs in nonequilibrium situations. We thus see that, in order to bring about a sizable effect of the constraint, one has to consider a small value of \(N\), meaning in practice \(N=1\). We feel that such a constrained ensemble acting on the single-dumbbell distribution is not physically unreasonable. ## IX Fokker-Planck equation As already discussed in the previous section, for our purposes a single-dumbbell picture is sufficient. We thus consider a single Hookean dumbbell with connector vector \(\tilde{q}\) and normalized connector vector \(\tilde{p}\). If \(\tilde{r}\) denotes the center of mass of the dumbbell, then the beads are located at the positions \(\tilde{r}\pm\tilde{q}/2\), and the velocity flow field at the two bead positions is \(\tilde{v}(\tilde{r})\pm\tilde{\kappa}(\tilde{r})\cdot\tilde{q}/2\). Here we neglect higher-than-linear terms in the velocity profile, which is reasonable, given the smallness of polymer molecules. Assuming stick boundary conditions for the beads, i. e. assuming that the beads just move with the flow, we find \(\tilde{q}=\tilde{\kappa}(\tilde{r})\cdot\tilde{q}\). Adding the effect of the spring force, and thermal noise, the corresponding Langevin equation for \(\tilde{q}\) is \[\dot{q}_{\alpha}=\kappa_{\alpha\beta}q_{\beta}-\mu kq_{\alpha}+\eta_{\alpha}, \tag{136}\] where \(\mu\) is a mobility, and \(\eta_{\alpha}\) is a Gaussian white noise with \[\left\langle\eta_{\alpha}(t)\right\rangle = 0, \tag{137}\] \[\left\langle\eta_{\alpha}(t)\eta_{\beta}(t^{\prime})\right\rangle = 2\mu k_{B}T\delta_{\alpha\beta}\delta(t-t^{\prime}). 
\tag{138}\] In terms of \(\bar{p}\), this is written as \[\dot{p}_{\alpha}=\kappa_{\alpha\beta}p_{\beta}-\frac{1}{\tau}p_{\alpha}+\xi_{ \alpha}, \tag{139}\] where the relaxation time is given by \(\tau=(\mu k)^{-1}\) and \[\left\langle\xi_{\alpha}(t)\right\rangle = 0, \tag{140}\] \[\left\langle\xi_{\alpha}(t)\xi_{\beta}(t^{\prime})\right\rangle = 2\frac{1}{\tau}\delta_{\alpha\beta}\delta(t-t^{\prime}). \tag{141}\] If \(P(\tilde{r},\tilde{p},t)\) denotes the probability density in \(\tilde{p}\) space, with \[\int d\bar{p}\,P(\tilde{r},\bar{p},t)=1, \tag{142}\] then \(P\) obeys the Fokker-Planck equation (derived from the above Langevin equation) \[\partial_{t}P=\mathcal{L}P, \tag{143}\] where \(\mathcal{L}\) is the Fokker-Planck operator, which has the explicit form \[\mathcal{L} = -\frac{\partial}{\partial p_{\alpha}}\left(\kappa_{\alpha\beta}p _{\beta}-\frac{1}{\tau}p_{\alpha}\right)+\frac{1}{\tau}\frac{\partial}{ \partial p_{\alpha}}\frac{\partial}{\partial p_{\alpha}} \tag{144}\] \[= -\frac{\partial}{\partial\bar{p}}\cdot\left(\overleftarrow{\kappa }\cdot\bar{p}-\frac{1}{\tau}\bar{p}\right)+\frac{1}{\tau}\frac{\partial}{ \partial\bar{p}}\cdot\frac{\partial}{\partial\bar{p}}.\] In order to take into account the fact that the probability density is advected with the flow, we write the Fokker-Planck equation as \[D_{t}P=\mathcal{L}P. \tag{145}\] For the thermal average of an observable \(A\) we thus have the equation of motion \[D_{t}\left\langle A\right\rangle=\left\langle\mathcal{L}^{\dagger}A\right\rangle, \tag{146}\] where \(\mathcal{L}^{\dagger}\) is the adjoint Fokker-Planck operator, explicitly given as \[\mathcal{L}^{\dagger}=\left(\overleftarrow{\kappa}\cdot\bar{p}-\frac{1}{\tau} \bar{p}\right)\cdot\frac{\partial}{\partial\bar{p}}+\frac{1}{\tau}\frac{ \partial}{\partial\bar{p}}\cdot\frac{\partial}{\partial\bar{p}}. \tag{147}\] In particular, if we choose \(\bar{p}\) as the observable, then we find \[\mathcal{L}^{\dagger}\bar{p}=\overset{\leftrightarrow}{\kappa}\cdot\bar{p}-\frac{ 1}{\tau}\bar{p} \tag{148}\] and hence \[D_{t}\left(\bar{p}\right)=\overset{\leftrightarrow}{\kappa}\cdot\left(\bar{p} \right)-\frac{1}{\tau}\left(\bar{p}\right). \tag{149}\] Similarly, one finds after a few lines of straightforward algebra \[\mathcal{L}^{\dagger}p_{\alpha}p_{\beta}=p_{\alpha}p_{\gamma}\kappa_{\beta \gamma}+p_{\beta}p_{\gamma}\kappa_{\alpha\gamma}-\frac{2}{\tau}\left(p_{\alpha }p_{\beta}-\delta_{\alpha\beta}\right), \tag{150}\] which implies \[D_{t}\left\langle p_{\alpha}p_{\beta}\right\rangle-\left\langle p _{\alpha}p_{\gamma}\right\rangle\kappa_{\beta\gamma}-\left\langle p_{\beta}p_ {\gamma}\right\rangle\kappa_{\alpha\gamma}\] \[= -\frac{2}{\tau}\left(\left\langle p_{\alpha}p_{\beta}\right\rangle -\delta_{\alpha\beta}\right) \tag{151}\] or \[\delta\left\langle p_{\alpha}p_{\beta}\right\rangle=-\frac{2}{\tau}\left( \left\langle p_{\alpha}p_{\beta}\right\rangle-\delta_{\alpha\beta}\right). \tag{152}\] Furthermore, from Eq. 149 we find \[\delta_{t}\left(\left\langle p_{\alpha}\right\rangle\left\langle p_{\beta} \right\rangle\right)=-\frac{2}{\tau}\left\langle p_{\alpha}\right\rangle\left \langle p_{\beta}\right\rangle. \tag{153}\] The last two equations are nothing but the constitutive equations of the standard and the modified Oldroyd-B model, respectively. Obviously, we have to set \(\tau=2\) in our reduced unit system. The two equations differ by the unit tensor on the RHS. 
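These constitutive equations can also be checked against a direct Brownian Dynamics integration of the Langevin equation, Eq. 139. The sketch below (Python with NumPy; shear rate, ensemble size and time step are arbitrary test values, and \(\tau=2\) as in the reduced units) drives an ensemble of dumbbells with a steady simple shear and compares the second moment \(\left\langle p_{\alpha}p_{\beta}\right\rangle\) with the analytic steady state of Eq. 152; the object \(\left\langle p_{\alpha}\right\rangle\left\langle p_{\beta}\right\rangle\) of Eq. 153 relaxes towards zero in the same flow.

```python
import numpy as np

# Arbitrary test values; tau = 2 as required by the reduced units used in the text.
tau, gdot, dt = 2.0, 0.5, 2e-3
kappa = np.array([[0.0, gdot], [0.0, 0.0]])        # velocity gradient of simple shear, v_x = gdot * y
M, nsteps = 20000, 10000                           # ensemble size; ten relaxation times in total

rng = np.random.default_rng(3)
p = rng.normal(size=(M, 2))                        # equilibrium initial condition, <p_a p_b> = delta_ab

for _ in range(nsteps):
    # Euler-Maruyama step for Eq. (139): dp = (kappa p - p / tau) dt + noise, <xi xi> = (2/tau) delta.
    p += (p @ kappa.T - p / tau) * dt + np.sqrt(2.0 * dt / tau) * rng.normal(size=p.shape)

C_tensorial = p.T @ p / M                                   # <p_a p_b>, the object of Eq. (152)
C_vectorial = np.outer(p.mean(axis=0), p.mean(axis=0))      # <p_a><p_b>, the object of Eq. (153)

C_exact = np.array([[1.0 + 0.5 * (gdot * tau) ** 2, 0.5 * gdot * tau],
                    [0.5 * gdot * tau, 1.0]])               # steady state of Eq. (152) in simple shear
print(np.round(C_tensorial, 3))                             # close to C_exact (sampling noise, O(dt) bias)
print(C_exact)
print(np.round(C_vectorial, 5))                             # decays towards zero, as predicted by Eq. (153)
```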
By taking the difference between the two equations, one sees that this term is directly related to thermal fluctuations. Finally, we may now define, for \(\alpha\geq 0\), \[\overset{\leftrightarrow}{C}=\alpha\left(\bar{p}\bar{p}^{T}\right)+\left(1- \alpha\right)\left\langle\bar{p}\right\rangle\left\langle\bar{p}^{T}\right\rangle. \tag{154}\] From the previous equations of motion we then immediately find (recall \(\tau=2\)) \[\delta_{t}\overset{\leftrightarrow}{C}=-\overset{\leftrightarrow}{C}+\alpha \overset{\leftrightarrow}{1}, \tag{155}\] i. e. the equation of motion of the generalized Oldroyd-B model, which thus turns out to be a linear combination of the standard and the modified version. If we restrict the range of \(\alpha\) to \(0\leq\alpha\leq 1\), which seems reasonable, we actually have a convex combination. ## X The generalized Oldroyd-B model in the framework of the Navier-Stokes-Fokker-Planck system We have already seen that the Oldroyd-B model can be shown to be dissipative, and this is true for both the standard version as well as the modified version. It is also true for a suitable linear combination thereof, i. e. the generalized Oldroyd-B model. Similarly, we have seen that both variants can be derived from the same Navier-Stokes-Fokker-Planck (NSFP) system. The only difference is the prescription how to obtain the macroscopic conformation tensor entering the momentum equation from the moments of the Fokker-Planck propagator \(P\). In this section we wish to demonstrate that this underlying NSFP system is dissipative as well, which should hardly be surprising, in view of the results of the previous sections. Let us therefore repeat the equations of motion of the NSFP system, again using dimensionless units (cf. Eqs. 24, 25, 145, 144): \[\nabla\cdot\vec{v} = 0, \tag{156}\] \[D_{t}\bar{v}+\nabla p-\Gamma\nabla\cdot\overset{\leftrightarrow}{C} = \nabla^{2}\bar{v},\] (157) \[D_{t}P\left(\bar{r},\bar{p},t\right) = \mathcal{L}P\left(\bar{r},\bar{p},t\right), \tag{158}\] \[\mathcal{L}=-\frac{\partial}{\partial\bar{p}}\cdot\left(\overset{\leftrightarrow }{\kappa}\cdot\bar{p}-\frac{1}{2}\bar{p}\right)+\frac{1}{2}\frac{\partial}{ \partial\bar{p}}\cdot\frac{\partial}{\partial\bar{p}}. \tag{159}\] The field degrees of freedom for this system are the velocity flow field \(\bar{v}\) and the propagator \(P\). In order to turn this into a closed system, we still need to add a constitutive equation, which is here the prescription how to calculate \(\overset{\leftrightarrow}{C}\) from \(P\). Within the investigations of the present paper, we of course should take the prescription for the generalized Oldroyd-B model, i. e. Eq. 154. ### Analysis of conservative dynamics For the conservative part of the NSFP system, we have the equations of motion \[\nabla\cdot\vec{v} = 0, \tag{160}\] \[D_{t}\bar{v}+\nabla p-\Gamma\nabla\cdot\overset{\leftrightarrow}{C} = 0,\] (161) \[D_{t}P\left(\bar{r},\bar{p},t\right) = -\frac{\partial}{\partial\bar{p}}\cdot\overset{\leftrightarrow}{ \kappa}\cdot\bar{p}P\left(\bar{r},\bar{p},t\right). \tag{162}\] For being conservative, the dynamics must also conserve the underlying Hamiltonian \(\mathcal{H}\), which should be interpreted as the Helmholtz free energy of the system. 
We now assume that the free energy for this system may be written as (in \(d\) spatial dimensions) \[\mathcal{H}_{\text{NSFP}} = \mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}+\mathcal{H}_{4}, \tag{163}\] \[\mathcal{H}_{1} = \frac{1}{2}\int d\bar{r}\bar{v}^{2},\] (164) \[\mathcal{H}_{2} = \frac{\Gamma}{2}\left(1-\alpha\right)\int d\bar{r}\left(\bar{p} \right)^{2},\] (165) \[\mathcal{H}_{3} = \frac{\Gamma}{2}\alpha\int d\bar{r}\left\langle\bar{p}^{2}\right\rangle,\] (166) \[\mathcal{H}_{4} = \int d\bar{r}\int d\bar{p}\,\psi\left(P\right), \tag{167}\] where \(\psi\) is a scalar function of the propagator \(P\). \(\psi(P)\) must be chosen in such a way that the Hamiltonian is conserved, i. e. \[\frac{d\mathcal{H}}{dt}=0. \tag{168}\] We notice that in view of our constitutive equation we may write \[\mathcal{H}_{2}+\mathcal{H}_{3}=\frac{\Gamma}{2}\int d\vec{r}\,\mathrm{tr}\overleftrightarrow{ C}. \tag{169}\] From Secs. IV and V we recall that the Hamiltonian of the generalized Oldroyd-B (GOB) model can therefore be written as \[\mathcal{H}_{\mathrm{GOB}}=\mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}- \frac{\Gamma}{2}\alpha\int d\vec{r}\,\mathrm{tr}\ln\overleftrightarrow{C}. \tag{170}\] We already derived that \(d\mathcal{H}_{\mathrm{GOB}}/dt=0\), and we also found that \(\mathcal{H}_{\mathrm{GOB}}\) is conserved even if we omit the \(\mathrm{tr}\ln\overleftrightarrow{C}\) term. Therefore, we may just refer to the results of Secs. IV and V to immediately conclude \[\frac{d}{dt}\left(\mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}\right)=0. \tag{171}\] For the dynamics of \(\mathcal{H}_{4}\) we only need to study the dynamics of \(P\), which we may write as \[\partial_{t}P = \mathcal{L}_{c}P, \tag{172}\] \[\mathcal{L}_{c} = -v_{\alpha}\partial_{\alpha}-\frac{\partial}{\partial p_{\alpha} }\kappa_{\alpha}p_{\beta}. \tag{173}\] For the operator \(\mathcal{L}_{c}\) we note that incompressibility implies the operator identities \[v_{\alpha}\partial_{\alpha} = \partial_{\alpha}v_{\alpha}, \tag{174}\] \[\frac{\partial}{\partial p_{\alpha}}\kappa_{\alpha\beta}p_{\beta} = \kappa_{\alpha\beta}p_{\beta}\frac{\partial}{\partial p_{\alpha}}. \tag{175}\] With this it is straightforward to show, via integration by parts, that \(\mathcal{L}_{c}\) is skew-adjoint: \[\int d\vec{r}\int d\vec{p}\,f\mathcal{L}_{c}g=-\int d\vec{r}\int d\vec{p}\,g \mathcal{L}_{c}f. \tag{176}\] Furthermore, \(\mathcal{L}_{c}\) satisfies a standard product rule: \[\mathcal{L}_{c}(fg)=f\mathcal{L}_{c}g+g\mathcal{L}_{c}f, \tag{177}\] and of course it also satisfies the chain rule. Now, \[\frac{d}{dt}\mathcal{H}_{4}\] \[= \int d\vec{r}\int d\vec{p}\,\frac{\partial\psi}{\partial P} \mathcal{L}_{c}P\] \[= -\int d\vec{r}\int d\vec{p}\,\left(v_{\alpha}\frac{\partial\psi}{ \partial P}\partial_{\alpha}P+\kappa_{\alpha\beta}p_{\beta}\frac{\partial\psi }{\partial P}\frac{\partial P}{\partial p_{\alpha}}\right)\] \[= -\int d\vec{r}\int d\vec{p}\,\left(v_{\alpha}\partial_{\alpha} \psi+\kappa_{\alpha\beta}p_{\beta}\frac{\partial\psi}{\partial p_{\alpha}}\right)\] \[= -\int d\vec{r}\int d\vec{p}\,\left(\partial_{\alpha}v_{\alpha} \psi+\frac{\partial}{\partial p_{\alpha}}\kappa_{\alpha\beta}p_{\beta}\psi\right)\] \[= 0. \tag{178}\] In other words, any differentiable function \(\psi(P)\) will give rise to a Hamiltonian \(\mathcal{H}_{4}\) that is conserved. Note that this analysis has relied heavily on the properties of \(\mathcal{L}_{c}\), and, in particular, the incompressibility of the flow. 
Let us now assume a more general operator \(\mathcal{L}_{c}\), of which we only know that it is a first-order differential operator that satisfies product rule and chain rule. Denoting the adjoint operator with \(\mathcal{L}_{c}^{\dagger}\) (which is of course also a first-order differential operator satisfying product rule and chain rule), we may then proceed as follows: \[\frac{d}{dt}\mathcal{H}_{4} = \int d\vec{r}\int d\vec{p}\,\frac{\partial\psi}{\partial P} \mathcal{L}_{c}P \tag{179}\] \[= \int d\vec{r}\int d\vec{p}\,P\mathcal{L}_{c}^{\dagger}\frac{ \partial\psi}{\partial P}\] \[= \int d\vec{r}\int d\vec{p}\,P\frac{\partial^{2}\psi}{\partial P^{2 }}\mathcal{L}_{c}^{\dagger}P\] \[= \int d\vec{r}\int d\vec{p}\,P\mathcal{L}_{c}\left(P\frac{ \partial^{2}\psi}{\partial P^{2}}\right)\] \[= \int d\vec{r}\int d\vec{p}\,P\frac{\partial}{\partial P}\left(P \frac{\partial^{2}\psi}{\partial P^{2}}\right)\mathcal{L}_{c}P.\] This offers various possibilities to achieve \(d\mathcal{H}_{4}/dt=0\): Firstly, we may assume \(\partial\psi/\partial P=0\), which would imply a constant Hamiltonian, which may as well be set to zero. In other words, this would simply mean to discard \(\mathcal{H}_{4}\) altogether. The second possibility is \(\partial^{2}\psi/\partial P^{2}=0\), which however would give rise to a function \(\psi\) that varies linearly with \(P\), or, in other words, to a Hamiltonian that is not bounded from below. This possibility must therefore be dismissed. Therefore, the simplest non-trivial solution is provided by the condition \[P\frac{\partial^{2}\psi}{\partial P^{2}}=A, \tag{180}\] where \(A\) is some constant. The solution of this differential equation is \[\psi=AP\ln P, \tag{181}\] i. e. a Boltzmann-like function, which for \(A\geq 0\) is bounded from below. Here we have ignored a linear and a constant term, by setting the corresponding integration constants to zero. ### Analysis of dissipative dynamics The dissipative part of the equations of motion can be written as \[\nabla\cdot\vec{v} = 0, \tag{182}\] \[\partial_{t}\vec{v} = \nabla^{2}\vec{v},\] (183) \[\partial_{t}P = \frac{1}{2}\left(\frac{\partial}{\partial\vec{p}}\cdot\vec{p}+ \frac{\partial^{2}}{\partial\vec{p}^{2}}\right)P=:\mathcal{L}_{d}P. \tag{184}\] For the dissipative dynamics of \(\mathcal{H}_{1}\) we may directly refer to the results of Sec. V, where we showed that \(d\mathcal{H}_{1}/dt\leq 0\) as a result of viscous dissipation. For \(\mathcal{H}_{2}\) we first notice that the dissipative part of the equation of motion for \(\left(\ddot{p}\right)\) is (cf. Eq. 149) \[\partial_{t}\left(\ddot{p}\right)=-\frac{1}{2}\left(\ddot{p}\right), \tag{185}\] resulting in \[\partial\left(\ddot{p}\right)^{2}=-\left(\ddot{p}\right)^{2} \tag{186}\] or \[\frac{d\mathcal{H}_{2}}{dt}=-\frac{\Gamma}{2}(1-\alpha)\int d\bar{r}\left( \ddot{p}\right)^{2}. \tag{187}\] At this point, it becomes clear that indeed we should restrict the range of \(\alpha\) to the interval \(0\leq\alpha\leq 1\), such that indeed the conformation tensor is a _convex_ combination of the second- and first-moment based expressions. If this condition is satisfied, then indeed \(d\mathcal{H}_{2}/dt\leq 0\). For \(\mathcal{H}_{3}\), we consider the dissipative part of Eq. 152, \[\partial_{t}\left(p_{\alpha}p_{\beta}\right)=-\left(p_{\alpha}p_{\beta} \right)+\delta_{\alpha\beta}, \tag{188}\] from which we conclude \[\partial_{t}\left(\ddot{p}^{2}\right)=-\left(\ddot{p}^{2}\right)+d \tag{189}\] (recall \(d\) denotes the spatial dimension). 
Therefore \[\frac{d\mathcal{H}_{3}}{dt}=-\frac{\Gamma}{2}\alpha\int d\bar{r}\left(\dot{p }^{2}-d\right). \tag{190}\] For the dynamics of \(\mathcal{H}_{4}\) we study the properties of the dissipative part of the Fokker-Planck operator, \(\mathcal{L}_{d}\) (cf. Eq. 184). Via integration by parts it is easily shown that its adjoint operator is given by \[\mathcal{L}_{d}^{\dagger}=\frac{1}{2}\left(\frac{\partial^{2}}{\partial\dot{p }^{2}}-\ddot{p}\cdot\frac{\partial}{\partial\ddot{p}}\right). \tag{191}\] Assuming \[\mathcal{H}_{4}=A\int d\bar{r}\int d\bar{p}P\ln P, \tag{192}\] we thus find \[\frac{d\mathcal{H}_{4}}{dt} = A\int d\bar{r}\int d\bar{p}\left(\ln P+1\right)\mathcal{L}_{d}P\] \[= A\int d\bar{r}\int d\bar{p}\,P\mathcal{L}_{d}^{\dagger}(\ln P+1)\] \[= A\int d\bar{r}\left(\mathcal{L}_{d}^{\dagger}(\ln P+1)\right).\] Straightforward evaluation, combined with some regrouping of terms, yields \[\mathcal{L}_{D}^{\dagger}(\ln P+1) \tag{193}\] \[= \frac{1}{2}\left[\frac{1}{P}\frac{\partial^{2}P}{\partial\dot{p }^{2}}+\ddot{p}^{2}+\ddot{p}\cdot\frac{\partial}{\partial\ddot{p}}\ln P-\left( \ddot{p}+\frac{\partial}{\partial\ddot{p}}\ln P\right)^{2}\right].\] Now, \[\left(\frac{1}{P}\frac{\partial^{2}P}{\partial\dot{p}^{2}}\right)=\int d\bar{ p}\,\frac{\partial^{2}P}{\partial\dot{p}^{2}}=0, \tag{194}\] \[\left\langle\ddot{p}\cdot\frac{\partial}{\partial\ddot{p}}\ln P\right\rangle= \int d\bar{p}\,\dot{p}\cdot\frac{\partial P}{\partial\ddot{p}}=-d, \tag{195}\] such that \[\left\langle\mathcal{L}_{D}^{\dagger}(\ln P+1)\right\rangle=\frac{1}{2}\left( \ddot{p}^{2}-d-\left(\ddot{p}+\frac{\partial}{\partial\ddot{p}}\ln P\right)^ {2}\right). \tag{196}\] Therefore, if we set \(A=\Gamma\alpha\), we can combine the results for \(\mathcal{H}_{3}\) and \(\mathcal{H}_{4}\) to yield \[\frac{d\left(\mathcal{H}_{3}+\mathcal{H}_{4}\right)}{dt}=-\frac{\Gamma}{2} \alpha\int d\bar{r}\left(\left(\ddot{p}+\frac{\partial}{\partial\ddot{p}}\ln P \right)^{2}\right)\leq 0. \tag{197}\] Therefore, the dynamics is indeed dissipative, and the Second Law holds. ## XI Summary, discussion and outlook Let us briefly summarize the main results of the present paper, which are also presented in Table 1. We have discussed generalized Oldroyd-B models of the form \[\nabla\cdot\ddot{v} = 0, \tag{198}\] \[D_{t}\ddot{v}+\nabla p-\Gamma\nabla\cdot\overleftarrow{C} = \nabla^{2}\ddot{v},\] (199) \[\delta_{t}^{\ast}\overleftarrow{C} = -\overleftarrow{C}+\alpha\overleftarrow{1}, \tag{200}\] with \(0\leq\alpha\leq 1\), where we focused on the extreme cases \(\alpha=1\) (standard Oldroyd-B model) and \(\alpha=0\) (modified Oldroyd-B model). We have seen that the dynamics conserves the positivity of the conformation tensor, and that the model is thermodynamically consistent for each choice of \(\alpha\), where for the free energy density we have to set \[f(\overleftarrow{C})=\frac{\Gamma}{2}\left[\operatorname{tr}\overleftarrow{C}- \alpha\operatorname{tr}\ln\overleftarrow{C}\right]. \tag{201}\] We have also seen that all these models can be derived from the NSFP system, where the standard model corresponds on a definition of the conformation tensor based on the second moment of the propagator ("tensorial theory"), while the modified model bases the definition on the first moment ("vectorial theory"). The generalized model is then simply the convex combination of the two extreme cases. The NSFP system was shown to be dissipative for each choice of \(\alpha\). 
These results establish, from a formal point of view, that the closure, which prescribes how to obtain the conformation tensor from the NSFP system, is not unique but ambiguous. Importantly, we argued that this is more than a formal mathematical exercise: Rather, the ambiguity can be traced back to an ambiguity in the underlying non-equilibrium ensemble, which in turn is a matter of _choice_ (as a matter of fact: choice of the ensemble-defining slow variable). Here it is important to realize that the conformation tensor that enters the macroscopic momentum conservation equation is a suitably _averaged_ conformation tensor, and the details of the averaging depend on the ensemble. The details of the closure then dictate both the precise form of the resulting constitutive equation, and the precise form of the underlying free energy. It seems to us that these ensemble aspects, and their fairly far-reaching consequences, have up to now not been fully appreciated, and are of significant importance to the whole field of rheology. In the present study, we have identified an ensemble based on the conformation tensor as the underlying statistical-mechanical theory for the standard version of the model, while the modified version builds on an ensemble based on the end-to-end vector. Significantly, in the former case we have to look at the limit \(N\to\infty\), such that the constraining effect on the single dumbbell is negligible, and thermal fluctuations are fully present, while in the latter case we have to study \(N=1\), such that thermal fluctuations are fully suppressed. In practice, this means that in the modified case we base the theory not on the second but on the first moment of the distribution. As far as we know and understand, there is no fundamental and obvious _a priori_ principle which would tell us that one of the two versions is, in some sense, superior to the other one. A vague intuitive feeling tells us that perhaps the version without fluctuations might be more appropriate for the desired macroscopic description, since, after all, the momentum conservation equation does not include any fluctuations either. This is corroborated by the fact that the derivation of Ref. [12], which was not based upon coupling the momentum conservation equation to a Fokker-Planck system, but rather on direct coarse-graining combined with GENERIC, also gives rise to the fluctuation-free version. We feel that the best way to incorporate thermal fluctuations into rheological models is to generalize the equations of motion to field-theoretic stochastic differential equations, where the noise term represents thermal fluctuations. The construction of such equations of motion for viscoelastic models has already been worked out in Ref. [16], and we refer the interested reader to that paper. Again, the GENERIC formalism provides a straightforward guiding principle in this task. It is then reasonable to assume that a model that describes fluctuations explicitly via Langevin noise should _not_ include fluctuations in the corresponding deterministic part. If this is correct, then this consideration provides a strong argument in favor of the modified model. In this context, one should also realize that the modified model (without noise) allows the trivial solution of a conformation tensor (and polymer stress) that simply vanishes identically -- provided this is compatible with the initial and boundary conditions.
If this latter condition holds, then the solution \(\widetilde{C}\simeq 0\) is indeed expected to hold throughout the dynamics, regardless of (possibly even turbulent) flow conditions. Note also that in the special case of vanishing flow, where \(\delta_{t}=\partial_{t}\), one may solve the constitutive equation trivially, with the result of an exponential relaxation towards zero (for the noise-free modified model), or towards the unit tensor (standard model), which indicates that in the modified model the state \(\widetilde{C}=0\) is anything but exotic. We believe that this should _not_ be dismissed as an absurd unphysical result, which would invalidate the modified model. Rather, we think it is likely that this actually describes the real physics of the underlying microscopic model: If we turn off thermal fluctuations completely, then it simply makes sense to assume that all dumbbells will eventually shrink to zero extension, and will then remain in that state, where they are also unable to modify the flow and are rather transported as passive Lagrangian particles. If this is indeed true, then one should expect that the modified model is able to provide non-trivial rheological behavior only for the version with added Langevin noise. We find one property of the modified model quite attractive: The singularity of the Hamiltonian at \(\overleftarrow{C}=0\), and the corresponding singularity in the dissipation rate, are removed. Therefore, the system might exhibit less dissipative resistance against external driving, which might then perhaps lead to an alleviated high Weissenberg number problem. We thus see that these considerations lead to a number of unanswered questions and speculations. It is therefore of high interest to test the new model by numerical simulations, which is however left to future work. \begin{table} \begin{tabular}{|c|c|c|} \hline version & standard & modified \\ \hline \hline momentum equation & \multicolumn{2}{c|}{\(\nabla\cdot\vec{v}=0\), \(D_{t}\vec{v}=-\nabla p+\Gamma\nabla\cdot\overleftarrow{C}+\nabla^{2}\vec{v}\)} \\ \hline constitutive relation & \(\delta\overleftarrow{C}=-\overleftarrow{C}+\overleftarrow{1}\) & \(\delta\overleftarrow{C}=-\overleftarrow{C}\) \\ \hline positivity of \(\overleftarrow{C}\) & strictly positive-definite & positive-semidefinite \\ \hline Hamiltonian & \multicolumn{2}{c|}{\(\mathcal{H}=\int d\vec{r}\left[\vec{v}^{2}/2+f(\overleftarrow{C})\right]\)} \\ \hline free energy density & \(f=(\Gamma/2)\left(\mathrm{tr}\overleftarrow{C}-\mathrm{tr}\ln\overleftarrow{C}\right)\) & \(f=(\Gamma/2)\mathrm{tr}\overleftarrow{C}\) \\ \hline driving force & \(\overleftarrow{\chi}=(\Gamma/2)\left(\overleftarrow{1}-\overleftarrow{C}^{-1}\right)\) & \(\overleftarrow{\chi}=(\Gamma/2)\overleftarrow{1}\) \\ \hline dissipation rate & \multicolumn{2}{c|}{\(\partial_{t}f=-(2/\Gamma)\mathrm{tr}\left(\overleftarrow{\chi}\cdot\overleftarrow{C}\cdot\overleftarrow{\chi}\right)\)} \\ \hline \(\overleftarrow{C}\) from kinetic-theory moments & \(C_{\alpha\beta}=\left\langle p_{\alpha}p_{\beta}\right\rangle\) & \(C_{\alpha\beta}=\left\langle p_{\alpha}\right\rangle\left\langle p_{\beta}\right\rangle\) \\ \hline ensemble-defining quantity & conformation tensor & end-to-end vector \\ \hline \end{tabular} \end{table} Table 1: Summary of results. ###### Acknowledgements. Stimulating discussions with J. Ravi Prakash are gratefully acknowledged. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project No. 233630050-TRR 146. ## Author declarations The authors have no conflicts to disclose.
All authors have contributed equally to the preparation of this manuscript.

## Data availability
Data sharing is not applicable to this article as no new data were created or analyzed in this study.

## Appendix A Proof of Eq. 134
We consider our system of \(N\) dumbbells in thermal equilibrium, and recall
\[\hat{\vec{Q}}=N^{-1}\sum_{i}\vec{p}_{i}. \tag{11}\]
The probability distribution of \(\hat{\vec{Q}}\) is Gaussian, which (in \(d\) spatial dimensions) can be written as
\[\left\langle\delta\left(\hat{\vec{Q}}-\vec{Q}\right)\right\rangle=\left(\frac{N}{2\pi}\right)^{d/2}\exp\left(-\frac{N}{2}\vec{Q}^{2}\right). \tag{12}\]
We wish to calculate the constrained average of the conformation tensor of, say, the first dumbbell, \(\left[p_{1\alpha}p_{1\beta}\right]\). By definition we have
\[\left[p_{1\alpha}p_{1\beta}\right]\left\langle\delta\left(\hat{\vec{Q}}-\vec{Q}\right)\right\rangle=\left\langle\delta\left(\hat{\vec{Q}}-\vec{Q}\right)p_{1\alpha}p_{1\beta}\right\rangle. \tag{13}\]
The RHS may be evaluated by making use of the Fourier representation of the delta function,
\[\left(2\pi\right)^{d}\delta\left(\hat{\vec{Q}}-\vec{Q}\right)=\int d\vec{k}\,\exp(-i\vec{k}\cdot\vec{Q})\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{1}\right)\cdots\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{N}\right). \tag{14}\]
Making use of the fact that the dumbbells are statistically independent, and have identical properties, we find
\[\left(2\pi\right)^{d}\left\langle\delta\left(\hat{\vec{Q}}-\vec{Q}\right)p_{1\alpha}p_{1\beta}\right\rangle=\int d\vec{k}\,\exp(-i\vec{k}\cdot\vec{Q})\left\langle\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{1}\right)p_{1\alpha}p_{1\beta}\right\rangle\left\langle\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{2}\right)\right\rangle^{N-1}. \tag{15}\]
Now,
\[\left\langle\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{2}\right)\right\rangle=\exp\left(-\frac{\vec{k}^{2}}{2N^{2}}\right) \tag{16}\]
and
\[\left\langle\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{1}\right)p_{1\alpha}p_{1\beta}\right\rangle=-N^{2}\frac{\partial}{\partial k_{\alpha}}\frac{\partial}{\partial k_{\beta}}\left\langle\exp\left(i\frac{\vec{k}}{N}\cdot\vec{p}_{1}\right)\right\rangle=-N^{2}\frac{\partial}{\partial k_{\alpha}}\frac{\partial}{\partial k_{\beta}}\exp\left(-\frac{\vec{k}^{2}}{2N^{2}}\right)=\left(\delta_{\alpha\beta}-\frac{k_{\alpha}k_{\beta}}{N^{2}}\right)\exp\left(-\frac{\vec{k}^{2}}{2N^{2}}\right), \tag{17}\]
resulting in
\[\left(2\pi\right)^{d}\left\langle\delta\left(\hat{\vec{Q}}-\vec{Q}\right)p_{1\alpha}p_{1\beta}\right\rangle=\int d\vec{k}\,\exp(-i\vec{k}\cdot\vec{Q})\left(\delta_{\alpha\beta}-\frac{k_{\alpha}k_{\beta}}{N^{2}}\right)\exp\left(-\frac{\vec{k}^{2}}{2N}\right)=\left(\delta_{\alpha\beta}+N^{-2}\frac{\partial}{\partial Q_{\alpha}}\frac{\partial}{\partial Q_{\beta}}\right)\int d\vec{k}\,\exp(-i\vec{k}\cdot\vec{Q})\exp\left(-\frac{\vec{k}^{2}}{2N}\right)=\left(\delta_{\alpha\beta}+N^{-2}\frac{\partial}{\partial Q_{\alpha}}\frac{\partial}{\partial Q_{\beta}}\right)\left(2\pi N\right)^{d/2}\exp\left(-\frac{N}{2}\vec{Q}^{2}\right).\]
Taken together, one thus finds
\[\left[p_{1\alpha}p_{1\beta}\right]=\exp\left(+\frac{N}{2}\vec{Q}^{2}\right)\left(\delta_{\alpha\beta}+N^{-2}\frac{\partial}{\partial Q_{\alpha}}\frac{\partial}{\partial Q_{\beta}}\right)\exp\left(-\frac{N}{2}\vec{Q}^{2}\right)=Q_{\alpha}Q_{\beta}+\left(1-N^{-1}\right)\delta_{\alpha\beta}. \tag{18}\]
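The result of Eq. (18) can also be checked numerically. The following Monte Carlo sketch is purely illustrative (it is not part of the original derivation or the paper): it draws \(N\) independent, unit-variance Gaussian dumbbell vectors in \(d\) dimensions and approximates the delta-function constraint on their mean by a small acceptance window; the values chosen for \(N\), \(d\), \(\vec{Q}\), and the tolerance are assumptions made only for this example.

```python
import numpy as np

# Monte Carlo check (illustrative only): estimate the constrained average
# [p_1a p_1b] = Q_a Q_b + (1 - 1/N) delta_ab of Eq. (18) by conditioning the
# mean of N unit-variance Gaussian connector vectors on a window around Q.
rng = np.random.default_rng(0)
N, d, M = 5, 2, 2_000_000            # dumbbells, dimensions, Monte Carlo samples (assumed)
Q = np.array([0.6, -0.3])            # prescribed end-to-end value (assumed)
tol = 0.05                           # window approximating the delta-function constraint

p = rng.standard_normal((M, N, d))                    # connector vectors of all samples
accept = np.linalg.norm(p.mean(axis=1) - Q, axis=1) < tol
p1 = p[accept, 0, :]                                   # dumbbell 1 in accepted configurations
mc = np.einsum("ma,mb->ab", p1, p1) / len(p1)          # constrained average of p_1a p_1b
theory = np.outer(Q, Q) + (1.0 - 1.0 / N) * np.eye(d)
print("Monte Carlo estimate:\n", mc)
print("Eq. (18) prediction:\n", theory)
```

With these (assumed) parameters the two matrices agree to within the statistical error of the roughly few thousand accepted configurations.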
2304.07482
Documentation Practices in Agile Software Development: A Systematic Literature Review
Context: Agile development methodologies in the software industry have increased significantly over the past decade. Although one of the main aspects of agile software development (ASD) is less documentation, there have always been conflicting opinions about what to document in ASD. Objective: This study aims to systematically identify what to document in ASD, which documentation tools and methods are in use, and how those tools can overcome documentation challenges. Method: We performed a systematic literature review of the studies published between 2010 and June 2021 that discusses agile documentation. Then, we systematically selected a pool of 74 studies using particular inclusion and exclusion criteria. After that, we conducted a quantitative and qualitative analysis using the data extracted from these studies. Results: We found nine primary vital factors to add to agile documentation from our pool of studies. Our analysis shows that agile practitioners have primarily developed their documentation tools and methods focusing on these factors. The results suggest that the tools and techniques in agile documentation are not in sync, and they separately solve different challenges. Conclusions: Based on our results and discussion, researchers and practitioners will better understand how current agile documentation tools and practices perform. In addition, investigation of the synchronization of these tools will be helpful in future research and development.
Md Athikul Islam, Rizbanul Hasan, Nasir U. Eisty
2023-04-15T06:14:00Z
http://arxiv.org/abs/2304.07482v1
# Documentation Practices in Agile Software Development: A Systematic Literature Review

###### Abstract
_Context_: Agile development methodologies in the software industry have increased significantly over the past decade. Although one of the main aspects of agile software development (ASD) is less documentation, there have always been conflicting opinions about what to document in ASD. _Objective_: This study aims to systematically identify what to document in ASD, which documentation tools and methods are in use, and how those tools can overcome documentation challenges. _Method_: We performed a systematic literature review of the studies published between 2010 and June 2021 that discuss agile documentation. Then, we systematically selected a pool of 74 studies using particular inclusion and exclusion criteria. After that, we conducted a quantitative and qualitative analysis using the data extracted from these studies. _Results_: We found nine primary vital factors to add to agile documentation from our pool of studies. Our analysis shows that agile practitioners have primarily developed their documentation tools and methods focusing on these factors. The results suggest that the tools and techniques in agile documentation are not in sync, and they separately solve different challenges. _Conclusions_: Based on our results and discussion, researchers and practitioners will better understand how current agile documentation tools and practices perform. In addition, investigation of the synchronization of these tools will be helpful in future research and development.

Software Engineering; Agile Software Development; Documentation; Systematic Literature Review

## I Introduction

Software documentation is an integral part of software development. It works as a communication medium between developers of a team and is utilized as an information repository by maintenance engineers [76]. Documentation elucidates how the system is structured, its functionalities, and the design rationale [70]. Software documentation should be included as part of software development and is sometimes called "common sense" [26]. Even an outdated document serves a purpose and may be helpful [26]. However, incomplete, wrong, clumsy, abstruse, outdated, or inadequate documents often lead to the unpopularity of software documentation among developers [4]. Regardless of the application type, almost all medium to large software projects produce a certain amount of documentation [76].

ASD is an iterative approach that helps teams deliver value to their customers faster and more efficiently. It is widely applied in both industry and academia [54]. This incremental approach maintains a strong focus on project goals and customer involvement. Over the past decade, numerous software developers have adopted the ASD model [33]. This concept of agility is realised through different agile methodologies such as Scrum, eXtreme Programming (XP), Crystal, Lean, Dynamic Systems Development Method (DSDM), and Feature-Driven Development (FDD). Documentation has a lower priority than working software in Agile practices [27]. Some agile practitioners consider the code itself to be its documentation. As a result, the information that is recorded in documentation (documented information, henceforth) may not be well maintained, resulting in inadequate information for the team to understand development tasks [40]. Another reason for less focus on documentation is that it consumes time that could have been allocated to development [70].
As modern software systems are complicated, developers revisit the system more often. Trivial maintenance work is assigned to junior developers who have little experience with the code. As a result, a lack of documentation hampers inter-team collaboration and leads to knowledge loss [78]. In addition, users of a software product expect quality results [2]. Considering these scenarios, agile documentation plays a significant role in ASD [70, 71]. Moreover, offshore agile development is widespread nowadays, and documentation is one of the key factors in making offshore agile development successful [58]. Therefore, developers should document the features of each iteration to help future developers refactor them into smaller tasks.

The practitioners working on agile documentation have implemented different documentation strategies to overcome the issues raised by the lack of documentation. They have identified different key elements that developers should document, such as user stories, functional requirements, and source code [10, 13, 43]. They have also developed various tools and models, such as wikis, simul loco, and doxygen, to document these elements effectively [9, 55, 66]. In this paper, we focus on understanding the pivotal information to document in agile and the existing techniques and tools that result in document optimization. We conducted a systematic literature review of existing research and analyzed it to answer our research questions, described in Section II. Our findings will help agile practitioners and developers optimize their documentation effort and support researchers in further study.

## II Research Methodology

In our research methodology, we followed the directions proposed by Kitchenham and Charters [45]. We divided our review into three phases, namely planning, conducting, and reporting the review results.

### _Planning_

We planned this review by confirming the need for such a review and proposed our research questions accordingly. Our planning phase includes the search strategy, search string, and inclusion/exclusion criteria.

#### Ii-A1 Research questions

We pose the following research questions to drive this study.

* **RQ1: Which information do agile practitioners document?**
* **RQ2: Which documentation generation tools and methods do agile practitioners use?**
* **RQ3: How can the tools and methods overcome the documentation challenges in agile software development?**

#### Ii-A2 Search strategy

After defining the need for this systematic review and the research questions, we formulated a search strategy based on the guidelines provided by Kitchenham and Charters [45]. In Table I, we broke the questions down into individual facets (i.e., population and intervention) and constructed a search string using Boolean ANDs and ORs. We initially fetched studies from the electronic databases and then explored them through reference searches (snowballing) to seek other meaningful studies. After that, we applied our inclusion and exclusion criteria to the fetched studies involving a different number of researchers, as explained in Section II-A4.

#### Ii-A3 Search criteria

The search criteria used for this review consist of three parts - C1, C2, and C3, defined as follows:

* C1: We constructed the C1 string, which enables the keyword agile either in the title or abstract.
* C2: The C2 string is made up of keywords such as document or document* either in title or abstract.
* C3: We constructed the C3 part, which enables keywords such as tools or document* tools either in the title or abstract.

The boolean expression: C1 _AND_ C2 _OR_ C3. We provided our search string in Table I. Another key thing to note is that we filtered the results fetched from the search query by applying the checkbox feature of the IEEE Xplore database. In this case, we filtered out publication topics such as internet, organizational aspects, computer-aided instruction, DP industry, computer science education, educational courses, knowledge management, mobile computing, security of data, business data processing, and teaching that were not relevant to our research.

#### Ii-A4 Inclusion and exclusion criteria

As per the guidelines of Kitchenham and Charters [45], we set inclusion and exclusion criteria based on our research questions. Here, we only considered papers that are in English, published in conferences and journals, and published within the time frame 2010 - 2021. The published papers should describe the agile documentation approach, tools, or knowledge relevant to our RQs. Therefore, we did not include any opinion, viewpoint, keynote, discussion, editorial, comment, tutorial, preface, anecdote papers, or presentations. In addition, we excluded papers that did not discuss agile documentation or agile documentation tools, even if they discussed agile software development methods as a side topic.

### _Conducting the review_

Once we agreed on the protocol, we started our review properly. This section discusses the findings of our search and the extracted data from relevant databases and sources.

#### Ii-B1 Study search and selection

We searched the IEEE Xplore database against our search query and search criteria and fetched 206 studies. In round 1, the first author thoroughly analyzed the titles and abstracts of the fetched studies based on the inclusion criteria. After the first round, we came out with 77 papers, and most of these studies covered all the inclusion criteria. A critical part was to ensure the papers did not come from opinions, discussions, editorials, comments, tutorials, prefaces, and presentations as per the exclusion criteria. In round 2, we performed a full-text review of the papers based on all inclusion and exclusion criteria. We read the papers fully a few times where there were disagreements and consensus was required. We excluded 23 papers based on the exclusion criteria. Finally, to satisfy the inclusion of relevant primary studies as much as possible, we performed backward and forward snowballing following the guideline provided by Wohlin [84] and included 20 more papers in our list of primary studies. The final pool of selected papers was 74.

#### Ii-B2 Data extraction

We followed the data extraction strategy of Kitchenham and Charters [45] and came up with a data extraction form designed to collect all the information needed to address the review questions. We set a few quality evaluation criteria, such as

* How well was data collection carried out?
* Is the research design defensible?
* How credible are the findings?
* Is the research scope well addressed?

In addition to the RQs and quality evaluation criteria, the form included fields for (i) name of the reviewer, (ii) date of data extraction, (iii) title, (iv) authors, (v) journal, (vi) publication details, (vii) future work, (viii) limitations, (ix) year of publication, (x) methodology, (xi) data analysis, (xii) validation technique, (xiii) relevancy, and (xiv) space for additional notes.
This data extraction was performed independently by the first and second authors.

#### Ii-B3 Data synthesis

After the data extraction, we combined and summarized the results of the included primary studies according to the guidelines of Kitchenham and Charters [45]. Our data synthesis includes both quantitative and descriptive elements. For the descriptive synthesis, we tabulated the data based on the research questions. In this case, we synthesized what type of information and tools are used in agile documentation. On the other hand, for the quantitative synthesis, we again developed a tabular form for the research questions. These tables were structured to highlight similarities and differences between study outcomes. Later, the data in these tables were represented by bar charts and pie charts.

## III Results

In this section, we present our findings. We address each research question from RQ1 to RQ3.

### _(RQ1) Which information do agile practitioners document?_

Table II summarizes our findings on documented information. Our primary pool of studies consisted of interviews, surveys, case studies, experiments, and statistical analysis. Many agile practitioners, graduate students, and software engineers directly participated in these studies. We collected the findings from these studies and grouped them. The following sections briefly describe each element of Table II.

#### Iii-A1 User stories
User stories function as the shortcut for more formal documentation and require more details [29]. On the other hand, they represent the small, concise user-driven features and hence need to be documented [56].

#### Iii-A2 Functional requirements
End-users define these requirements and expect them as facilities when they interact with the system. Therefore, these functionalities focus more on the technical aspects that need to be implemented [57].

#### Iii-A3 Non-functional requirements
The success of an agile project depends on the non-functional requirements, which are also referred to as quality requirements [12]. These requirements are quality constraints that must be satisfied by the system. So, failure to meet these requirements compromises the entire system and makes it useless [23].

#### Iii-A4 Source code
The source code should be documented for traceability and for identifying code that does not perform [10].

#### Iii-A5 UI structure
These are wireframes and are sometimes documented externally. However, they can be the basic layouts of application screens, and the product owner usually provides these wireframes [13].

#### Iii-A6 Technical debt
If a system needs fixes or updates, it is best to document them right away, and the same holds for technical debt. As a result, future developers will be aware of that technical debt. Therefore, all instances of technical debt should be documented [36].

#### Iii-A7 System architecture
Some documentation tools have evolved to document architecture [10, 31]. Architecture works as the backbone of the entire system [10]. Documenting architecture is one of the most complicated and challenging parts of software development [31].

#### Iii-A8 API reference guides
To understand the usage and integration of an API, APIs should have comprehensive documentation. Good API reference guides make the APIs easier to maintain and help onboard new developers to the team [75].

#### Iii-A9 Test specification
It is tough to demonstrate large-scale systems solely using test cases in the industry.
Testing needs more mature documentation to keep track of the test cases, user scenarios, and bugs [10]. ### _(RQ2) Which documentation generation tools and methods do agile practitioners use?_ In order to answer this research question, we first explored the current tools and methods used in ASD from our primary pool of studies and found a total of 23 tools and methods. Next, we categorized them based on their document type and found ten categories. Table III lists categories alongside their tools and Figure 1 shows the percentage of tools under each category. We also listed the studies that reported the tools and methods. Moreover, we mentioned the role of each tool or method in the right-most column of the table. #### Iii-A1 Source code based documentation Source code-based documentation can mitigate certain risks [32]. This category consists of 6 tools, and these tools mainly focus on how software practitioners can generate documentation from source code comments. These tools cover the functionalities of some of the popular documentation generation tools like JavaScriptDoc or Docstring and offer some additional features [44]. #### Iii-A2 Wiki based documentation When developers consider straightforward and flexible documentation options in agile, they consider wiki-based documentation in the first place because the primary goal of the wiki is to minimize the development-documentation gap by making documentation more convenient and attractive to developers [80]. For example, sprintDoc and XSDoc are tools based on wikis and can be integrated with other tools such as the VCS IDE. Fig. 1: Categorisation of documentation tools/methods #### Iii-B3 Scrum Scrum is a lightweight framework for agile development and is very popular. Scrumconix supporting scrum proved to be a valuable and lightweight tool to document and understand a software project [59]. #### Iii-B4 User story The user stories explain how the software will work for the users and provide an essential source for the design of the software according to user needs [1]. Methods such as the COSMIC method [20, 65] can measure the quality of user stories to generate high-quality documentation. #### Iii-B5 Traceability Traceability tools like Trace++ support the transition from traditional to agile methodologies. They offer traceability between documents generated during conventional software development and agile methods [28]. In addition, TraceMan provides traces to critical agile artifacts [10]. #### Iii-B6 API/Web service In this category, we found a tool called Docio which can generate API documents with I/O examples [38]. This tool is more like the popular REST API documentation generation tool Swagger but only supports the C programming language [79]. #### Iii-B7 Flow chart A few tools emphasize providing meaningful graphical diagrams either based on source code or requirements [46, 69]. Flowgen, CLARET, and TCC are some of the tools in this category. #### Iii-B8 Architectural Experts ascertained the lack of documentation and architectural design in agile projects [60]. Abstract specification tool in this category assists the architects in organizing relevant information regarding the architecture while creating design and architecture blueprints, thus reducing the effort of documentation [31]. Also, Active Documentation Software Design (ADSD) is an approach that enables incorporating domain documentation to agile development, while having the processes adaptive [67]. 
#### Iii-B9 View based
View-based software documentation enables different perspectives on the software system and allows the explicit and simultaneous modeling of all of those viewpoints as views in the documentation [11].

#### Iii-B10 NLP-based
Researchers have explored integrating modern NLP-based techniques and tools into projects where documentation is only available in the form of source code comments. As a result, these tools directly contribute to determining the quality of documentation. JavadocMiner is one such NLP-based tool that developers can easily embed in the Eclipse IDE [83].

### _(RQ3) How can the tools and methods overcome the documentation challenges in agile software development?_

Agile methods or tools that have tried to address the challenges in dynamic contexts have gained much interest among practitioners and researchers [11]. Keeping that in mind, different researchers attempted to identify those challenges and built tools that provide on-demand solutions [25]. In Table IV, we list all the agile documentation challenges that the previously mentioned tools and methods attempted to resolve. Figure 2 shows the number of tools/methods that resolve each particular challenge.

Fig. 2: Number of tools/methods resolving a particular challenge

#### Iii-C1 Minimal documentation
One of the primary challenges in agile documentation is to keep the documentation minimal [16, 21]. Many tools that generate documentation from source code, as well as chart-, diagram-, and flowchart-based documentation, evolved to keep documentation minimal and simple. Practitioners must keep documentation minimal to enhance agile software products, and simul loco comments may answer this problem; simul loco documentation is extremely useful [66]. Moreover, GitHub combined with Markdown offers options so that reviewers can give a quick review, and it only takes a few minutes to make a minor update. A document can be improved easily and continuously [48]. The abstract specification tool proposes a considerably shorter abstract specification document, requiring minimal documentation effort and resulting in shorter documentation that is easier to review, update, and communicate [31].

#### Iii-C2 Neglect of non-functional requirements
The effect of requirements changes on the architecture is crucial. It was difficult to trace precisely which architectural decisions had to be reconsidered because of the lack of traceability between the textual requirements specification documents and the architectural models. TraceMan fixes this since the trace links are created during the artifacts' creation. As a result, functional and non-functional requirements can be understood more accurately and consistently using TraceMan [10].

#### Iii-C3 Inadequate Architecture
The primary goals of agile development are flexibility, minimalism, and collaboration. The abstract specification tool achieves these by creating a short and focused architecture document [31].

#### Iii-C4 Lack of traceability
Conventional agile projects entail intensive manual work to generate and maintain traceable links. Consequently, a lack of traceability leaves a weak layer over the software system no matter how flexible the system is [18]. On the other hand, Trace++ generates a large number of traceability relations combining the various artifacts [28].

#### Iii-C5 Mediocre user stories
Although user stories are essential for ASD, people struggle to document high-quality user stories.
Even user stories mentioned in the current dataset are of poor quality [19]. TraceMan produces high-quality user stories by having detailed traces of user story information [10].

#### Iii-C6 Others
Some tools resolve the complex API documentation challenges by creating API documentation with ease [38]. In addition, tools such as Scrumconix [59] and view-based software documentation [11] cover challenges posed in the areas of scrum and view-based documentation.

## IV Threats to Validity

The threats to our systematic literature review are the specification of the candidate pool of papers and primary study selection bias. We selected our primary pool of studies through database searches and used keywords. Our keywords were very precise, and we obtained a good number of papers. However, we may have missed some papers due to our specific search string. We also used a specific period to select our studies, which might have discarded relevant papers. On the other hand, we relied on IEEE Xplore for the primary pool of studies, which threatens the completeness of our set of primary studies. To mitigate this risk, we performed both backward and forward snowballing, which eventually resulted in a collection of papers from other databases like ACM, Springer, etc. We followed the standard inclusion and exclusion criteria, which might still introduce some personal bias.

## V Conclusion

Working software gets priority over detailed documentation in agile software development. Even though documentation is less of a priority in ASD, studies have shown that a minimal level of documentation is essential. This research aimed to identify key elements to record in ASD and locate appropriate tools to aid in documentation. We conducted a systematic literature review to identify essential information in agile documentation and the effectiveness of current methodologies and tools in agile software development. As a result, we have compiled a list of essential elements in agile documentation, together with tools and approaches that can help alleviate the documentation challenges. Our findings will aid in understanding key aspects of agile documentation and how agile documentation tools and approaches function. In the future, we want to map the relationships between these technologies and develop a method that can be used as a one-stop solution for agile documentation. We also intend to conduct a multi-vocal literature review to find more industry concerns and solutions. Finally, our future plan also involves a survey of agile practitioners to assess the usefulness of our findings.
2301.02324
Reasoning about Causality in Games
Causal reasoning and game-theoretic reasoning are fundamental topics in artificial intelligence, among many other disciplines: this paper is concerned with their intersection. Despite their importance, a formal framework that supports both these forms of reasoning has, until now, been lacking. We offer a solution in the form of (structural) causal games, which can be seen as extending Pearl's causal hierarchy to the game-theoretic domain, or as extending Koller and Milch's multi-agent influence diagrams to the causal domain. We then consider three key questions: i) How can the (causal) dependencies in games - either between variables, or between strategies - be modelled in a uniform, principled manner? ii) How may causal queries be computed in causal games, and what assumptions does this require? iii) How do causal games compare to existing formalisms? To address question i), we introduce mechanised games, which encode dependencies between agents' decision rules and the distributions governing the game. In response to question ii), we present definitions of predictions, interventions, and counterfactuals, and discuss the assumptions required for each. Regarding question iii), we describe correspondences between causal games and other formalisms, and explain how causal games can be used to answer queries that other causal or game-theoretic models do not support. Finally, we highlight possible applications of causal games, aided by an extensive open-source Python library.
Lewis Hammond, James Fox, Tom Everitt, Ryan Carey, Alessandro Abate, Michael Wooldridge
2023-01-05T22:47:28Z
http://arxiv.org/abs/2301.02324v2
# Reasoning about Causality in Games+ ###### Abstract Causal reasoning and game-theoretic reasoning are fundamental topics in artificial intelligence, among many other disciplines: this paper is concerned with their intersection. Despite their importance, a formal framework that supports both these forms of reasoning has, until now, been lacking. We offer a solution in the form of _(structural) causal games_, which can be seen as extending Pearl's causal hierarchy to the game-theoretic domain, or as extending Koller and Milch's multi-agent influence diagrams to the causal domain. We then consider three key questions: 1. How can the (causal) dependencies in games - either between variables, or between strategies - be modelled in a uniform, principled manner? 2. How may causal queries be computed in causal games, and what assumptions does this require? 3. How do causal games compare to existing formalisms? To address question i), we introduce _mechanised games_, which encode dependencies between agents' decision rules and the distributions governing the game. In response to question ii), we present definitions of predictions, interventions, and counterfactuals, and discuss the assumptions required for each. Regarding question iii), we describe correspondences between causal games and other formalisms, and explain how causal games can be used to answer queries that other causal or game-theoretic models do not support. Finally, we highlight possible applications of causal games, aided by an extensive open-source Python library. ###### Contents * 1 Introduction * 1.1 Contributions * 1.2 Related Work * 2 Background * 2.1 Causal Models * 2.2 Game-Theoretic Models * 3 Mechanised MAIDs and Relevance * 3.1 Mechanised MAIDs * 3.2 Relevance * 4 Causality in Games * 4.1 Predictions * 4.2 Interventions * 4.3 Counterfactuals * 5 Solution Concepts and Subgames * 5.1 Nash Equilibria * 5.2 Subgames * 5.3 Equilibrium Refinements * 6 Connections to EFGs * 6.1 Transformations * 6.2 Equivalences * 6.3 Causality in EFGs * 7 Applications * 7.1 Case Study: Insurance Pricing * 7.2 Blame, Intent, Incentives, and Fairness * 8 Discussion * 8.1 Advantages and Disadvantages of Causal Games * 8.2 Future Work * A Proofs * A.1 Transformations between Game Representations * A.2 Theoretical Results * B Further Examples * B.1 Counterfactuals Using the Closest Possible World Principle * B.2 Non-Existence Results * B.3 Reasoning about Existing Concepts using Causal Games * C Codebase * C.1 Creating MAIDs * C.2 Computing Equilibria ## Notation \begin{tabular}{l l r} \hline \hline **Symbol** & **Object** & **Page** \\ \hline \(\textbf{Anc}_{V}\) & Ancestors of \(V\) & 7 \\ \(\textsf{c}_{\mathcal{R}}\mathcal{G}\) & Condensed \(\mathcal{R}\)-Relevance Graph & 61 \\ \(\textbf{Ch}_{V}\) & Children of \(V\) & 7 \\ \(\textbf{Desc}_{V}\) & Descendants of \(V\) & 7 \\ do & Do Operator & 8 \\ \(\textit{dom}(V)\) & Domain of \(V\) & 7 \\ \(\textsf{E}_{V}\) & Exogenous Variable for \(V\) & 9 \\ \(\mathscr{E}\) & Edges & 7 \\ \(\mathcal{E}\) & Extensive-Form Game & 10 \\ \(\textbf{Fa}_{V}\) & Family of \(V\) & 7 \\ \(\mathcal{G}\) & Graph & 7 \\ \(\mathcal{I}\) & Intervention & 8 \\ \(J\) & Intervention Set & 34 \\ \(\mathcal{M}\) & Model & 7 \\ \(\mathcal{M}(\zeta_{k})\) & Perturbed Model & 32 \\ \(\textsf{M}_{V}\) & Mechanism Variable for \(V\) & 14 \\ \(\textsf{m}\mathcal{G}\) & Mechanised Graph & 15 \\ \(\textsf{m}\mathcal{M}\) & Mechanised Model & 15 \\ \(N\) & Agents & 10 \\ \(\textbf{Pa}_{V}\) & Parents of \(V\) & 7 \\ Pr or \(P\) & 
Probability Distribution & 7, 10 \\ Pr\({}^{\boldsymbol{\pi}}\) or \(P^{\sigma}\) & Probability Distribution Combining Pr with \(\boldsymbol{\pi}\) or \(P\) with \(\sigma\) & 11, 12 \\ \(\mathcal{Q}\) & Queries & 18 \\ \(\mathcal{R}\) & Rationality Relations & 14 \\ \(\mathcal{R}(\textsf{m}\mathcal{M})\) & \(\mathcal{R}\)-Rational Outcomes of \(\mathcal{M}\) & 16 \\ \(r_{D}\) & Rationality Relation for \(D\) & 14 \\ \(r_{\mathcal{R}}\mathcal{G}\) & \(\mathcal{R}\)-Relevance Graph & 17 \\ \(V\) & Variable & 7 \\ \(v\) & Value of \(V\) & 7 \\ \(\boldsymbol{V}\) & Variables & 7 \\ \(\boldsymbol{v}\) & Values of \(\boldsymbol{V}\) & 7 \\ \(\delta\) & Kronecker Delta Function & 9 \\ \(\Delta(\boldsymbol{A}\mid\boldsymbol{B})\) & Set of all Conditional Probability Distributions over \(\boldsymbol{A}\) given \(\boldsymbol{B}\) & 7 \\ \(\Theta_{V}\) & Parameter Variable for \(V\) & 14 \\ \(\boldsymbol{\theta}\) & Parameters & 7 \\ \(\mu^{i}\) & Mixed Policy for Agent \(i\) & 28 \\ \(\boldsymbol{\pi}^{i}\) & (Behavioural) Policy for Agent \(i\) & 12 \\ \(\dot{\boldsymbol{\pi}}^{i}\) & Pure (Behavioural) Policy for Agent \(i\) & 28 \\ \(\boldsymbol{\pi}\) & (Behavioural) Policy Profile & 12 \\ \(\Pi_{D}\) & Decision Rule Variable for \(D\) & 14 \\ \(\sigma\) & (Behavioural) Strategy Profile & 11 \\ \(\bot_{\mathcal{G}}\) & d-Separated in \(\mathcal{G}\) & 7 \\ \(\bot\) & Independent & 7 \\ \(\prec\) & Topological Ordering & 61 \\ \hline \hline \end{tabular} Introduction Causal reasoning and game-theoretic reasoning are core capabilities for intelligent systems, and as such, they are fundamental research topics in artificial intelligence (AI). Causal reasoning is concerned with identifying causal relationships and estimating the effects of interventions. Game-theoretic reasoning is concerned with strategic behaviour: how rational decision-makers interact, taking into account others' incentives. Whilst formal treatments of causality [69, 71, 78, 87] and the game-theoretic foundations of multi-agent systems [67, 84, 95] have individually led to many recent applications in AI, our present concern is with techniques that _combine_ causal and strategic reasoning. Models that support both these kinds of reasoning offer a wide range of possible applications including analysing incentives [21, 54], fairness [46, 52, 63, 98], and blame and intention [27, 36]. As systems of multi-agent systems become increasingly ubiquitous and sophisticated, the problem of how to formally reason about these notions becomes increasingly acute.A framework that supports causal analysis of systems containing multiple self-interested agents would therefore appear to be of great importance [21, 27, 72]. Causal questions are often more challenging to answer in multi-agent settings; one must consider not only the causal dependencies present in the environment, but also the dependencies between agents' strategies. Similarly, reasoning strategically about what the effects of an action _would be_, or what other agents _would have done_ under different circumstances, naturally leads one to consider both interventions and counterfactual possibilities when analysing games. These causal concepts, however, are typically left implicit in game-theoretic models. The central purpose of this paper is to introduce a unifying framework for modelling games that supports both causal and game-theoretic reasoning. This framework - _(structural) causal games_ - can be interpreted in two different ways. 
In one sense, it lifts Koller and Milch's (henceforth, K&M) multi-agent influence diagrams (MAIDs) [48] from the probabilistic models at level one of Pearl's 'causal hierarchy' [69] to causal models that support both interventions and counterfactuals, corresponding to levels two and three of the hierarchy, respectively. Building on K&M's graphical representation also means we may employ existing game-theoretic concepts such as _strategic relevance_ [48] and _sufficient recall_ [60], which can be elicited purely from the structure of the game. Alternatively, causal games can be interpreted as generalising the models of the causal hierarchy to the game-theoretic domain by introducing decision variables that lack a distribution until chosen by the corresponding agent, and a set of real-valued utility variables representing the payoff of each agent. Causal games thus support both game-theoretic and causal queries, as well as combinations of the two. It is our hope that this framework serves as a foundation for further work at the intersection of causality and game theory.

### Contributions

In answering the three key questions introduced above, we make the following contributions.

i) How can the (causal) dependencies in games - either between variables, or between strategies - be modelled in a uniform, principled manner?

* We introduce _mechanised MAIDs_ in Section 3, which allow us to model the dependencies of decision rules on each other and on parameters in the game.
* We also generalise K&M's notion of _strategic relevance_ to \(\mathcal{R}\)_-relevance_ to enable modelling many different decision-making principles.
* Furthermore, we derive sound and complete graphical criteria for detecting relevant variables when agents are playing best responses to one another.

ii) How may causal queries be computed in such models, and what assumptions does this require?

* We generalise Pearl's causal hierarchy of models to the game-theoretic domain by introducing (structural) causal games in Section 4.
* We then describe how these models can be used to answer conditional, interventional, and counterfactual queries. By quantifying over the equilibria in the game (leading to _first order_ queries such as "is it the case that in every Nash equilibrium, setting variable \(X\) to value \(x\) would increase agent \(i\)'s expected payoff?") and taking into account causal effects due to rational agents who adapt their strategies to changes in the environment, such queries strictly generalise those in other causal models.

iii) How do the models we propose compare to existing formalisms?

* We generalise two important equilibrium refinements - subgame perfectness and trembling hand perfectness - to MAIDs in Section 5, and provide a detailed comparison between MAIDs and EFGs (including several equivalence results) in light of this, in Section 6.
* We also show that often more subgames can be found in a MAID than in the corresponding EFG, and thus more non-credible threats can be ruled out.
* Finally, we discuss a range of applications in which causal games subsume prior work in Section 7, and (dis)advantages with respect to other work in Section 8.

A previous conference paper contained preliminary results on subgames and equilibrium refinements in MAIDs, as well as their connections to EFGs [39], but did not contain any discussion of causality, which is the main focus of this paper.
Similarly, a previous tool paper introduced an earlier version of our codebase [26], which implements many of the concepts in this paper but does not contain the theoretical work that is the emphasis of the present paper. We conclude this section with a review of other related work. Before the primary exposition of our results, we also provide the relevant background material on causal models, EFGs, and MAIDs, in Section 2. Proofs, further examples, and details of our codebase are relegated to Appendices A, B, and C, respectively.

### Related Work

Causal games build on Pearl's hierarchy of causal models [69], and work on influence diagrams (IDs) - a kind of graphical model used to capture single-agent decision-making scenarios [42]. Later works have explicitly considered _causal_ IDs, or CIDs [17, 21]. Indeed, Heckerman and Shachter's proposal to unify causal modelling and decision analysis via CIDs [40] can be viewed as a precursor to our proposal to unify causal modelling and game theory via causal games. None of these theories, however, places emphasis on games or strategic interactions between multiple agents, which is our setting of interest. In contrast, multi-agent IDs (MAIDs) generalise IDs [48, 60], and were originally developed by K&M as a way to efficiently represent and solve games. One useful feature of MAIDs (which causal games inherit) is that their graphical structure encodes what information is, or is not, relevant for making an optimal decision. Thus in order to find an equilibrium, it is often possible to remove some of the edges (i.e. the _ignorable information_) prior to solving the game [60]. Many other graphical models of games, often inspired by MAIDs, have been introduced [45], including networks of influence diagrams (NIDs) [29], interactive dynamic influence diagrams (I-DIDs) [19, 73], and temporal action graph games [44]. Like MAIDs however, most of these works are motivated by computational concerns, and none incorporate rigorous causal reasoning.

Modelling causal relationships in game-theoretic scenarios often leads to cyclic dependencies, such as when one agent's best response depends on the other's, and vice versa. This can result in multiple solutions, a feature that can also be captured by generalised cyclic causal models [7], chain graphs [56], and credal networks [13], among others. In mechanised (structural) causal games, these solutions correspond to different equilibria induced by _many-valued_ functions (i.e., serial relations) representing the decision-making processes of agents in the game. This can be seen as extending generalised structural causal models [35], which use standard structural functions, and allows us to not only characterise the solutions arising from mutually dependent variables, but also those that arise from non-deterministic mechanisms (such as when an agent selects a decision rule using an \(\arg\max\) operation, for example). This is essential for modelling equilibria in games, where agents may be indifferent between decision rules. Perhaps the most similar work to this paper is on _settable systems_, which are partly inspired by generalised structural causal models and can be used to capture optimisation, equilibria, and learning [91, 92]. In order to deal with cycles, settable systems duplicate intervenable variables into a 'response' and a 'setting' variable, so as to indicate which side of the structural function each occurs on.
In these models, the emphasis is on the causal analysis of optimisation procedures at a relatively low level of abstraction, meaning that the algorithms used by agents to select actions, or the procedures used to select one equilibrium from many, are explicitly instantiated. In contrast, causal games represent the causal dependencies arising from the fact that agents select their decision rules rationally and non-deterministically, leading us to ask _first-order_ causal queries. Settable systems concentrate on problems such as how to capture the data and attributes of a machine learning process, whereas we focus on problems such as identifying subgames and equilibrium refinements. Despite these differences, settable systems are nonetheless a useful comparator for causal games.

The concurrency and semantics community is another that has produced work at the intersection of games and causality. Much of the most influential recent work on the foundations of denotational semantics uses distributed games that are based on _event structures_ - a partially ordered model of discrete events [65] - for deterministic [77] and probabilistic [94] concurrent games. Other related approaches to concurrent games use simpler mathematical formalisms [10, 97]. Though containing the same primitive concepts, these works are motivated primarily by the problem of deriving formal, low-level semantics for programs or probabilistic systems, whereas we are interested in more high-level models of strategic interactions that can be applied to a wide range of scenarios and disciplines. Indeed, closely related causal models have been used (often in the context of analysing AI systems) to define notions of blame and intent (both in the single-agent and multi-agent settings) [27, 36], harm [76], incentives to control or respond to certain variables [21, 54], fairness [46, 52, 63, 98], social influence [43], and reasoning patterns such as manipulation and signalling [72]. We show in Sections 4 and 7 that causal games subsume the models in these works and allow for even richer concepts.

Aside from analysing AI systems, one other relevant domain of application is in economic analysis and mechanism design. For example, Toulis and Parkes use a behavioural causal model to determine long-term effects of policy interventions on multi-agent economies [88] - though their emphasis is on dynamical systems and behavioural analysis - and some regulators such as the UK's Financial Conduct Authority include an informal 'causal chain' in their cost-benefit analyses when proposing policy changes [24]. We provide a more formal case study of this second example in Section 7.

## 2 Background

We assume a basic familiarity with both probabilistic graphical models and game theory, though for completeness, we briefly review Pearl's causal hierarchy [69] and two game-theoretic models: extensive form games (EFGs) [51, 90] and multi-agent influence diagrams (MAIDs) [48]. Readers familiar with these models may safely skip these sections. Throughout this paper, we use capital letters \(V\) for random variables, lowercase letters \(v\) for their instantiations, and bold letters \(\boldsymbol{V}\) and \(\boldsymbol{v}\) respectively for sets of variables and their instantiations. We let \(\mathit{dom}(V)\) denote the domain of \(V\) (where by default we assume that \(\mathit{dom}(V)\) is finite) and abuse notation somewhat by writing \(\mathit{dom}(\boldsymbol{V})\coloneqq\bigtimes_{V\in\boldsymbol{V}}\mathit{dom}(V)\).
\(\mathbf{Pa}_{V}\) denotes the parents of a variable \(V\) in a graphical representation and \(\mathbf{pa}_{V}\) the instantiation of \(\mathbf{Pa}_{V}\). We also define \(\mathbf{Ch}_{V}\), \(\mathbf{Anc}_{V}\), \(\mathbf{Desc}_{V}\), and \(\mathbf{Fa}_{V}\coloneqq\mathbf{Pa}_{V}\cup\{V\}\) as the children, ancestors, descendants, and family of \(V\), respectively (where note that neither \(\mathbf{Anc}_{V}\) nor \(\mathbf{Desc}_{V}\) contain \(V\), by convention). As with \(\mathbf{pa}_{V}\), their instantiations are written in lowercase. We use \(\Delta(\boldsymbol{V})\) to denote the set of all probability distributions over the values of \(\boldsymbol{V}\), and therefore write \(\Delta(\boldsymbol{A}\mid\boldsymbol{B})\coloneqq\bigtimes_{\boldsymbol{b} \in\mathit{dom}(\boldsymbol{B})}\Delta(\boldsymbol{A})\) to express the set of all conditional probability distributions (CPDs) over \(\boldsymbol{A}\) given the values of \(\boldsymbol{B}\). Unless otherwise indicated, we use superscripts to indicate an agent \(i\in N=\{1,\ldots,n\}\) and subscripts to index the elements of a set; for example, the decision variables belonging to agent \(i\) are denoted \(\boldsymbol{D}^{i}=\{D_{1}^{i},\ldots,D_{m}^{i}\}\). ### Causal Models Pearl's _causal hierarchy_ consists of three kinds of model: associational, interventional, and counterfactual [69]. Each step up the hierarchy demands stricter assumptions. The lowest level, _association_, pertains to correlations between variables that allow for _predictions_ about a system. For this, it is sufficient to use observational data to construct a joint probability distribution over all of the variables in that system, which can then be represented graphically as a _Bayesian network_ (BN). On the second level, we wish to reason about the effects of _interventions_ - deliberate alterations made to the variables from outside the system. This requires the edges of the BN to reflect causal, not just associational, relationships, giving rise to a _causal Bayesian network_ (CBN). The final level of the hierarchy is concerned with counterfactual questions - asking what would have happened had something been different, given that we made a certain observation - which corresponds to conditioning (as in associational queries), then intervening. Answering such questions requires knowledge of the underlying deterministic relationships between the variables, typically characterised using a _structural causal model_ (SCM). **Association** **Definition 1** (69).: A **Bayesian network (BN)** over a set of random variables \(\boldsymbol{V}\) with joint distribution \(\Pr(\boldsymbol{V})\) is a structure \(\mathcal{M}=(\mathcal{G},\boldsymbol{\theta})\) where \(\mathcal{G}=(\boldsymbol{V},\mathscr{E})\) is a directed acyclic graph (DAG) with vertices \(\boldsymbol{V}\) and edges \(\mathscr{E}\) that is **Markov compatible** with \(\Pr\), meaning that \(\Pr(\boldsymbol{v};\boldsymbol{\theta})=\prod_{V\in\boldsymbol{V}}\Pr(v\mid \mathbf{pa}_{V};\theta_{V})\). We drop the parameters \(\boldsymbol{\theta}=\{\theta_{V}\}_{V\in\boldsymbol{V}}\) of the CPDs from our notation where unambiguous. We can use the _d-separation_ graphical criterion to identify the set of conditional independencies that any Markov compatible joint distribution over a DAG \(\mathcal{G}\) must satisfy [70]. **Definition 2** (69).: A **path**\(p\) in a DAG \(\mathcal{G}=(\boldsymbol{V},\mathscr{E})\) is a sequence of unrepeated adjacent variables in \(\boldsymbol{V}\). 
A path \(p\) is said to be **blocked** by a set of variables \(\boldsymbol{Y}\subset\boldsymbol{V}\) if and only if \(p\) contains either: * A _chain_\(X\to W\to Z\) or \(X\gets W\gets Z\), or a _fork_\(X\gets W\to Z\), and \(W\in\boldsymbol{Y}\); or * A _collider_\(X\to W\gets Z\) and \(W\notin\mathbf{Anc}_{\boldsymbol{Y}}\cup\{Y\}\). For disjoint sets \(\boldsymbol{X},\boldsymbol{Y},\boldsymbol{Z}\), the set \(\boldsymbol{Y}\)**d-separates**\(\boldsymbol{X}\) from \(\boldsymbol{Z}\), denoted \(\boldsymbol{X}\perp_{\mathcal{G}}\boldsymbol{Z}\mid\boldsymbol{Y}\), if every path in \(\mathcal{G}\) from a variable in \(\boldsymbol{X}\) to a variable in \(\boldsymbol{Z}\) is blocked by a variable in \(\boldsymbol{Y}\). Otherwise, \(\boldsymbol{X}\) is said to be **d-connected** to \(\boldsymbol{Z}\) given \(\boldsymbol{Y}\), denoted \(\boldsymbol{X}\not\perp_{\mathcal{G}}\boldsymbol{Z}\mid\boldsymbol{Y}\). For example, in the graph \(\mathcal{G}\) shown in in Figure 0(a), we have that \(A\not\perp_{\mathcal{G}}B\mid C\) due to the active path \(A\gets D\to B\), but that \(A\perp_{\mathcal{G}}B\mid D\), as conditioning on \(D\) blocks the connection along the aforementioned path as well as along the path \(A\gets C\to D\to B\). If \(\boldsymbol{X}\perp_{\mathcal{G}}\boldsymbol{Z}\mid\boldsymbol{Y}\) in \(\mathcal{G}\), then \(\boldsymbol{X}\) and \(\boldsymbol{Z}\) are probabilistically independent conditional on \(\boldsymbol{Y}\) in the sense that \(\Pr(\boldsymbol{x}\mid\boldsymbol{y},\boldsymbol{z})=\Pr(\boldsymbol{x}\mid \boldsymbol{y})\), written \(\boldsymbol{X}\perp\!\!\!\perp\boldsymbol{Z}\mid\boldsymbol{Y}\), in every distribution \(\Pr\) that is Markov compatible with \(\mathcal{G}\) and for which \(\Pr(\boldsymbol{y},\boldsymbol{z})>0\). Conversely, if \(\boldsymbol{X}\not\perp_{\mathcal{G}}\boldsymbol{Z}\mid\boldsymbol{Y}\), then \(\boldsymbol{X}\) and \(\boldsymbol{Z}\) are dependent conditional on \(\boldsymbol{Y}\) in at least one distribution Markov compatible with \(\mathcal{G}\)[89]. A second well-established graphical criterion will also be a useful auxiliary result in Section 3. **Definition 3** (82).: Given a DAG \(\mathcal{G}\), a variable \(V\) is a **requisite probability node** for \(\Pr(\boldsymbol{x}\mid\boldsymbol{y})\) if there exist two parameterisations \(\boldsymbol{\theta}\neq\boldsymbol{\theta}^{\prime}\) of \(\mathcal{G}\) for BNs \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) differing only on \(\theta_{V}\) such that \(\Pr(\boldsymbol{x}\mid\boldsymbol{y};\boldsymbol{\theta})\neq\Pr(\boldsymbol{ x}\mid\boldsymbol{y};\boldsymbol{\theta}^{\prime})\). **Lemma 1** (30).: _Given a BN \(\mathcal{M}\), a variable \(V\) is a requisite probability node for the query \(\Pr(\boldsymbol{X}\mid\boldsymbol{Y})\) if and only if \(\mathsf{M}_{V}\not\perp_{\mathfrak{m}\perp\mathcal{G}}\boldsymbol{X}\mid \boldsymbol{Y}\)._ #### Intervention Associated models such as BNs are, in general, insufficient to answer questions about interventions as they do not describe how the joint distribution changes in response; a causal model, or level two model, such as a causal Bayesian network (CBN), is required. The graph underlying a CBN differs from that of a BN only in its causal interpretation: the directed edges now represent the fact that intervening on a variable cannot affect those causally 'upstream' of it. The simplest form of intervention, a _hard intervention_\(\text{do}(\boldsymbol{Y}=\boldsymbol{y})\), sets the values of variables \(\boldsymbol{Y}\) to some \(\boldsymbol{y}\). 
We denote the resulting joint distribution by \(\Pr_{\boldsymbol{y}}(\boldsymbol{V})\) or, equivalently, \(\Pr(\boldsymbol{V_{y}})\)[69].1 Footnote 1: This is also known as an atomic [12], structural [20], surgical [69], or independent [49] intervention. **Definition 4** (69).: A **causal Bayesian network (CBN)** is a BN \(\mathcal{M}=(\mathcal{G},\boldsymbol{\theta})\) such that \(\mathcal{G}\) is Markov compatible with \(\Pr_{\boldsymbol{y}}\) for every \(\boldsymbol{Y}\subseteq\boldsymbol{V}\) and \(\boldsymbol{y}\in\text{dom}(\boldsymbol{Y})\), and that: \[\Pr_{\boldsymbol{y}}(v\mid\mathbf{pa}_{V})=\begin{cases}1&\text{ when $V\in \boldsymbol{Y}$ and $v$ is consistent with $\boldsymbol{y}$,}\\ \Pr(v\mid\mathbf{pa}_{V})&\text{ when $V\notin\boldsymbol{Y}$ and $\mathbf{pa}_{V}$ is consistent with $ \boldsymbol{y}$.}\end{cases}\] By the Law of Total Probability, \(\Pr_{\boldsymbol{y}}(v\mid\mathbf{pa}_{V})=0\) when \(v\) is inconsistent with \(\boldsymbol{y}\). When \(\mathbf{pa}_{V}\) is inconsistent with \(\boldsymbol{y}\), then conditioning on a zero-probability event means that \(\Pr_{\boldsymbol{y}}(v\mid\mathbf{pa}_{V})\) is undefined. More generally, a _soft intervention_, specified using a partial distribution \(\mathcal{I}\) over variables \(\boldsymbol{Y}\) replaces each CPD \(\Pr(Y\mid\mathbf{Pa}_{Y})\) with a new CPD \(\mathcal{I}(Y\mid\mathbf{Pa}_{Y}^{*};\theta_{Y}^{*})\) for each \(Y\in\boldsymbol{Y}\) where \(\mathbf{Pa}_{Y}^{*}\) may differ from \(\mathbf{Pay}_{Y}\).2 Any intervention \(\mathcal{I}\) on the set of variables \(\mathbf{Y}\) leads to a new joint distribution \(\Pr_{\mathcal{I}}(\mathbf{v})\coloneqq\prod_{Y\in\mathbf{Y}}\mathcal{I}(y\mid\mathbf{ pa}_{Y}^{*})\cdot\prod_{V\in\mathbf{V}\setminus\mathbf{Y}}\Pr(v\mid\mathbf{pa}_{V})\). Footnote 2: A soft intervention is also known as a parametric [20] or dependent [49] intervention, and is referred to as conditional or stochastic when deterministic or stochastic, respectively [12]. Hard interventions are a special case in which each \(\mathcal{I}(Y\mid\mathbf{Pa}_{Y}^{*})=\delta(Y,\!y)\) for some \(y\in\textit{dom}(Y)\), where \(\delta\) is the Kronecker delta function. We represent an intervention \(\mathcal{I}\) on \(\mathbf{Y}\) graphically by outlining each variable \(Y\in\mathbf{Y}\), replacing each variable name \(Y\) with \(Y_{\mathcal{I}}\), and removing or adding edges from parent variables as necessary, if \(\mathbf{Pa}_{Y}^{*}\neq\mathbf{Pa}_{Y}\). More generally, we use \(\Pr(\mathbf{V}_{\mathcal{I}})\) and \(\Pr_{\mathcal{I}}(\mathbf{V})\) interchangeably, and denote the new graph and model as \(\mathcal{G}_{\mathcal{I}}\) and \(\mathcal{M}_{\mathcal{I}}\) respectively. When \(\mathcal{I}\) is a hard intervention, we simply sever all incoming edges to \(\mathbf{Y}\), setting their values to \(\mathbf{y}\), and write \(\mathbf{V}_{\mathbf{y}}\), \(\mathcal{G}_{\mathbf{y}}\), and \(\mathcal{M}_{\mathbf{y}}\) respectively. Examples of hard and soft interventions are shown in Figures 1(a) and 1(b) respectively. #### Counterfactuals In counterfactual queries, evidence about the actual state of the world informs us about a hypothetical scenario in which some variables have been modified. For instance, we might be interested in the probability of \(\mathbf{x}\) in the scenario in which \(\mathbf{y}\), given that (in fact) we observed \(\mathbf{z}\), written \(\Pr(\mathbf{x}_{\mathbf{y}}\mid\mathbf{z})\). 
To answer such questions in general, one must appeal to level three of the causal hierarchy, such as by using a structural causal model (SCM). In SCMs, variables are partitioned into exogenous and endogenous sets \(\mathsf{E}\) and \(\mathbf{V}\) respectively, where each endogenous variable \(V\in\mathbf{V}\) is deterministically related to its parents via a structural function \(f_{V}:\textit{dom}(\mathbf{V}\setminus\{V\})\times\textit{dom}(\mathsf{E})\to \textit{dom}(V)\) that specifies the mechanism governing the values of the variable, and where all stochasticity is relegated to the distribution \(\Pr(\mathsf{E};\mathbf{\theta})\) over the exogenous variables. In this paper, we make the simplifying assumption that all SCMs are _Markovian_, meaning that each variable \(V\) has exactly one exogenous parent \(\mathsf{E}_{V}\) and the exogenous variables are independent. We also depart from convention by describing SCMs as a particular form of CBN (which, in turn, are a particular form of BN), using deterministic distributions \(\Pr(V\mid\mathbf{Pa}_{V})=\delta\big{(}V,\!f_{V}(\mathbf{Pa}_{V})\big{)}\) for each endogenous variable \(V\). This equivalent formulation will prove useful for avoiding unnecessary repetition and notation when introducing causal models of games in Section 4. **Definition 5** (69).: A (Markovian) **structural causal model (SCM)** is a CBN \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\) where \(\mathcal{G}=(\mathsf{E}\cup\mathbf{V},\mathscr{E})\) is a DAG over exogenous variables \(\mathsf{E}=\{\mathsf{E}_{V}\}_{V\in\mathbf{V}}\) and endogenous variables \(\mathbf{V}\), where \(\mathbf{Pa}_{V}\cap\mathsf{E}=\{\mathsf{E}_{V}\}\). The parameters \(\mathbf{\theta}\) assign deterministic distributions \(\Pr(v\mid\mathbf{pa}_{V};\theta_{V})\) to each endogenous variable and a stochastic distribution \(\Pr(\mathsf{E};\mathbf{\theta})=\prod_{\mathsf{E}\in\mathsf{E}}\Pr(\mathsf{E}; \theta_{\mathsf{E}})\) to the exogenous variables. Using such a model, we can evaluate a general counterfactual query \(\Pr(\mathbf{x}_{\mathcal{I}}\mid\mathbf{z})\) by following three steps [69]: 1. Update \(\Pr(\mathsf{e})\) to \(\Pr(\mathsf{e}\mid\mathbf{z})\) by conditioning on observation \(\mathbf{z}\) ('abduction'); 2. Apply the intervention \(\mathcal{I}(\mathbf{Y}\mid\mathbf{Pa}_{\mathbf{Y}}^{*})\) to the variables \(\mathbf{Y}\) ('action'); 3. Return the marginal distribution \(\Pr(\mathbf{x})\) in this modified model ('prediction'). By convention, exogenous variables are viewed as beyond the realm of observation and intervention. Further, we assume that each \(\mathcal{I}(Y\mid\mathbf{Pa}_{Y}^{*})\) is a deterministic function of its parents. Soft, stochastic interventions may be modelled by adding a new exogenous variable \(\mathsf{E}_{Y}^{*}\in\mathbf{Pa}_{Y}^{*}\). Note that by marginalising out the exogenous variables in an SCM, we can form a standard (C)BN with a joint distribution \(\Pr(\mathbf{v})=\sum_{\mathbf{e}}\Pr(\mathbf{v},\mathbf{e})\). This CBN is Markov compatible with the SCM's graph \(\mathcal{G}\) when restricted to variables in \(\mathbf{V}\). Moreover, any CBN is also a BN. Thus, each model in the causal hierarchy can be seen to generate those on lower levels; every query that can be answered in a lower level model can also be answered in a higher level model. ### Game-Theoretic Models We now review two important formalisms for representing sequential strategic decision-making scenarios, extensive form games (EFGs) and multi-agent influence diagram (MAIDs). 
Both of these models are illustrated using the following example of a signalling game [86]. **Example 1** (Job Market).: _A worker, who is either hard-working or lazy, is hoping to be hired by a firm. They have the option of pursuing a university education, but know that they will suffer from three years of studying, especially if they are lazy. The firm prefers hard-workers, but must decide whether to hire the worker without directly observing the worker's temperament, only their education._ #### Extensive Form Games The material on EFGs is required only for Section 6, and so may be safely skipped or referred back to later, depending on the reader's preferences. **Definition 6** (51).: An **extensive form game (EFG)** is a structure \(\mathcal{E}=(N,T,P,A,\lambda,I,U)\), where: * \(N=\{1,\ldots,n\}\) is a set of agents. * \(T=(\mathbf{V},\mathscr{E})\) is a game tree with nodes \(\mathbf{V}\) that are partitioned into sets \(\mathbf{V}^{0},\mathbf{V}^{1},\ldots,\mathbf{V}^{n},\mathbf{L}\) where \(R\in\mathbf{V}\) is the root of \(T\), \(\mathbf{L}\) are the leaves of \(T\), \(\mathbf{V}^{0}\) are chance nodes, and \(\mathbf{V}^{i}\) are the decision nodes controlled by agent \(i\in N\). The nodes are connected by edges \(\mathscr{E}\). Figure 3: (a) An EFG representing Example 1. (b) A MAID representing the same game. * \(P=\{P_{1},\ldots,P_{|\mathbf{V}^{0}|}\}\) is a set of probability distributions \(P_{j}(\mathbf{Ch}_{V^{0}_{j}})\) over the children of each chance node \(V^{0}_{j}\). * \(A\) is a set of actions, where \(A^{i}_{j}\subseteq A\) denotes the set of actions available at \(V^{i}_{j}\in\mathbf{V}^{i}\). * \(\lambda:\mathscr{E}\to A\) is a labelling function mapping each edge \((V^{i}_{j},V^{k}_{l})\) to an action \(a\in A^{i}_{j}\). * \(I=\{I^{1},\ldots,I^{n}\}\) contains a collection of information sets \(I^{i}\subset 2^{\mathbf{V}^{i}}\), which partition the decision nodes controlled by agent \(i\). Each information set \(I^{i}_{j}\in I^{i}\) is defined such that for all \(V^{i}_{k},V^{i}_{l}\in I^{i}_{j}\), the available actions \(A^{i}_{j}\coloneqq A_{V^{i}_{k}}=A_{V^{i}_{l}}\) are the same at both nodes. * \(U:\mathbf{L}\to\mathbb{R}^{n}\) is a utility function mapping each leaf node to a vector that determines the final payoff for each agent. Figure 2(a) shows Example 1's signalling game in extensive form. Nature, as a chance node \(V^{0}\), flips a biased coin at the root of the tree to decide whether the worker is hard-working \(h\) (with probability \(p\)) or lazy \(\neg h\) (with probability \(1-p\)). The worker's decision whether to go \(g\) or avoid \(\neg g\) university is represented at nodes \(V^{1}_{1},V^{1}_{2}\in\mathbf{V}^{1}\) and the firm's decisions are given by \(V^{2}_{1},V^{2}_{2},V^{2}_{3},V^{2}_{4}\in\mathbf{V}^{2}\), each with the option to offer a job or reject the worker (\(j\) or \(\neg j\)). The two non-singleton information sets (marked by dotted black lines) represent the fact that the firm does not know whether the worker is hard-working. The payoffs for the worker and the firm are given by the first and second elements at the leaves of the tree respectively. The worker receives a payoff of 5 if they are given a job offer, but they incur a cost of 1 for going to university if they are hard-working and a cost of 2 for going to university if they are lazy. The firm receives a payoff of 3 if they hire a hard worker. 
If they offer a job to a lazy worker, they incur a cost of 2, and if they reject a hard worker, they incur an opportunity cost of 1. **Definition 7**.: Given an EFG \(\mathcal{E}=(N,T,P,A,\lambda,I,U)\), a (behavioural) **strategy**\(\sigma^{i}\) for agent \(i\) is a set of probability distributions \(\sigma^{i}_{j}(A^{i}_{j})\) over the actions available to the agent at each of their information sets \(I^{i}_{j}\). A strategy is **pure** when each \(\sigma^{i}_{j}(a)\in\{0,1\}\) and **fully stochastic** when \(\sigma^{i}_{j}(a)>0\), for all \(a\in A^{i}_{j}\). A **strategy profile**\(\sigma=(\sigma^{1},\ldots,\sigma^{n})\) is a tuple of strategies, and \(\sigma^{-i}=(\sigma^{1},\ldots,\sigma^{i-1},\sigma^{i+1},\ldots,\sigma^{n})\) denotes the partial strategy profile of all agents other than \(i\), hence \(\sigma=(\sigma^{i},\sigma^{-i})\). Combining a strategy profile \(\sigma\) with the distributions in \(P\) defines a probability distribution \(P^{\sigma}\) over paths \(\rho\) in \(\mathcal{E}\). For each path \(\rho\) beginning from \(R\) and terminating in a leaf node \(\rho[\mathbf{L}]\in\mathbf{L}\), agent \(i\) receives utility \(U(\rho[\mathbf{L}])[i]\) - the \(i^{\text{th}}\) entry in the corresponding payoff vector. Agent \(i\)'s expected utility under a strategy profile \(\sigma\) is therefore given by \(\mathbb{E}_{\sigma}\big{[}U(\rho[\mathbf{L}])[i]\big{]}\). **Definition 8**.: A **subgame** of an EFG, \(\mathcal{E}=(N,T,P,A,\lambda,I,U)\), is the game \(\mathcal{E}\) restricted to a subtree \(T^{\prime}=(\mathbf{V}^{\prime},\mathscr{E}^{\prime})\) of \(T\) such that: for any information set \(I^{i}_{j}\) in \(\mathcal{E}\), if there exists \(V_{k}\in I^{i}_{j}\cap\mathbf{V}^{\prime}\), then \(I^{i}_{j}\subseteq\mathbf{V}^{\prime}\); and for any \(V_{k}\in\mathbf{V}^{\prime}\), if \((V_{k},V_{l})\in\mathscr{E}\), then \(V_{l}\in\mathbf{V}^{\prime}\) and \((V_{k},V_{l})\in\mathscr{E}^{\prime}\). In other words, a subtree of the original game tree forms a subgame if it is closed under information sets and descendants. Any EFG is trivially a subgame of itself, so a subgame on a strictly smaller subtree is called _proper_. In this paper, we denote subgames in EFGs by enclosing them within dashed boxes. For instance, the EFG in Figure 2(a) has no proper subgames, but the EFG given later in Figure 7(c) has two. #### Multi-Agent Influence Diagrams Influence diagrams (IDs) generalise BNs to the decision-theoretic setting by adding decision and utility variables [42, 62], and multi-agent influence diagrams (MAIDs) generalise IDs by introducing multiple agents [48, 60]. MAIDs can therefore be viewed as a BN over a graph without parameters for the decision variables, although technically lie between levels one and two of Pearl's causal hierarchy as the decisions are effectively modelled as causal interventions [40], but paths between non-decision variables need not encode causal relationships [21]. **Definition 9** (48).: **A multi-agent influence diagram (MAID)** is a structure \(\mathcal{M}=(\mathcal{G},\boldsymbol{\theta})\) where \(\mathcal{G}=(N,\boldsymbol{V},\mathcal{E})\) specifies a set of agents \(N=\{1,\ldots,n\}\) and a DAG \((\boldsymbol{V},\mathcal{E})\) where \(\boldsymbol{V}\) is partitioned into chance variables \(\boldsymbol{X}\), decision variables \(\boldsymbol{D}=\bigcup_{i\in N}\boldsymbol{D}^{i}\), and utility variables \(\boldsymbol{U}=\bigcup_{i\in N}\boldsymbol{U}^{i}\). 
The parameters \(\boldsymbol{\theta}=\{\theta_{V}\}_{V\in\boldsymbol{V}\setminus\boldsymbol{D}}\) define the CPDs \(\Pr(V\mid\mathbf{Pa}_{V};\theta_{V})\) for each non-decision variable such that for _any_ parameterisation of the decision variable CPDs, the resulting joint distribution over \(\boldsymbol{V}\) induces a BN. Figure 2(b) shows a MAID representing Example 1. Chance variables, such as whether the worker's temperament is hard-working or lazy (\(T\)), are denoted by white circles. Decision and utility variables are represented using squares and diamonds respectively. The worker's decision (\(D^{1}\)) and utility (\(U^{1}\)) variables are displayed in red, and the firm's (\(D^{2}\) and \(U^{2}\)) in blue. Instead of information sets, the fact that an agent is unaware of the value of a certain variable when making a decision is represented by a missing edge between the two (e.g., the absence of an edge \(T\to D^{2}\)). The parameters \(\boldsymbol{\theta}\) define conditional distributions for variables \(T\), \(U^{1}\), and \(U^{2}\), in accordance with the values shown in Figure 2(a). **Definition 10**.: Given a MAID \(\mathcal{M}=(\mathcal{G},\boldsymbol{\theta})\), a **decision rule**\(\pi_{D}\) for \(D\in\boldsymbol{D}\) is a CPD \(\pi_{D}(D\mid\mathbf{Pa}_{D})\) and a **partial policy profile**\(\pi_{\boldsymbol{D}^{\prime}}\) is a set of decision rules \(\pi_{D}\) for each \(D\in\boldsymbol{D}^{\prime}\subseteq\boldsymbol{D}\), where we write \(\pi_{-\boldsymbol{D}^{\prime}}\) for the set of decision rules for each \(D\in\boldsymbol{D}\setminus\boldsymbol{D}^{\prime}\). A (behavioural) **policy**\(\boldsymbol{\pi}^{i}\) refers to \(\boldsymbol{\pi}_{\boldsymbol{D}^{i}}\), and a (full, behavioural) **policy profile**\(\boldsymbol{\pi}=(\boldsymbol{\pi}^{1},\ldots,\boldsymbol{\pi}^{n})\) is a tuple of policies, where \(\boldsymbol{\pi}^{-i}\coloneqq(\boldsymbol{\pi}^{1},\ldots,\boldsymbol{\pi}^{ i-1},\boldsymbol{\pi}^{i+1},\ldots,\boldsymbol{\pi}^{n})\). A decision rule is **pure** if \(\pi_{D}(d\mid\mathbf{pa}_{D})\in\{0,1\}\) and **fully stochastic** if \(\pi_{D}(d\mid\mathbf{pa}_{D})>0\) for all \(d\in\textit{dom}(D)\) and each **decision context**\(\mathbf{pa}_{D}\in\textit{dom}(\mathbf{Pa}_{D})\); this holds for a policy (profile) if it holds for all decision rules in the policy (profile). By combining \(\boldsymbol{\pi}\) with the partial distribution \(\Pr\) over the chance and utility variables, we obtain a joint distribution \(\Pr^{\boldsymbol{\pi}}(\boldsymbol{x},\boldsymbol{d},\boldsymbol{u})\coloneqq \prod_{V\in\boldsymbol{V}\setminus\boldsymbol{D}}\Pr(v\mid\mathbf{pa}_{V}) \cdot\prod_{D\in\boldsymbol{D}}\pi_{D}(d\mid\mathbf{pa}_{D})\) over all the variables in \(\mathcal{M}\); inducing a BN. The expected utility for an agent \(i\) given a policy profile \(\boldsymbol{\pi}\) is defined as the expected sum of their utility variables in this BN, \(\sum_{U\in\boldsymbol{U}^{i}}\mathbb{E}_{\boldsymbol{\pi}}[U]\). This allows a Nash equilibrium (NE) [64] to be defined, which identifies outcomes of a game where every agent is simultaneously playing a best-response.3 Footnote 3: In Section 5.1, we build upon what’s already known about NE in MAIDs, by explaining the difference between mixed policies and behavioural policies in MAIDs and clarifying when an NE is guaranteed to exist. 
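To make the induced joint distribution and expected utilities concrete, the following is a minimal Python sketch (not part of the original formalism) that evaluates \(\Pr^{\boldsymbol{\pi}}\) and \(\sum_{U\in\boldsymbol{U}^{i}}\mathbb{E}_{\boldsymbol{\pi}}[U]\) for the job market MAID by brute-force enumeration. The payoffs follow Example 1; the prior \(p\), the dictionary encoding of policies, and all identifiers are our own illustrative choices.

```python
import itertools

# Illustrative parameterisation of the job market MAID (Example 1); p is the prior
# probability that the worker is hard-working (T = "h").
p = 0.5

def pr_T(t):
    return p if t == "h" else 1 - p

def U1(t, d1, d2):
    # Worker: +5 if hired, minus the cost of university (1 if hard-working, 2 if lazy).
    cost = (1 if t == "h" else 2) if d1 == "g" else 0
    return (5 if d2 == "j" else 0) - cost

def U2(t, d2):
    # Firm: +3 for hiring a hard worker, -2 for hiring a lazy one, -1 for rejecting a hard worker.
    if d2 == "j":
        return 3 if t == "h" else -2
    return -1 if t == "h" else 0

def expected_utilities(pi1, pi2):
    """Expected utilities (worker, firm) under a behavioural policy profile.

    pi1[t][d1] = Pr(D1 = d1 | T = t) and pi2[d1][d2] = Pr(D2 = d2 | D1 = d1).
    """
    eu1 = eu2 = 0.0
    for t, d1, d2 in itertools.product(["h", "nh"], ["g", "ng"], ["j", "nj"]):
        prob = pr_T(t) * pi1[t][d1] * pi2[d1][d2]  # the joint distribution Pr^pi(t, d1, d2)
        eu1 += prob * U1(t, d1, d2)
        eu2 += prob * U2(t, d2)
    return eu1, eu2

# A pure policy profile: the worker always goes to university; the firm hires iff it sees "g".
always_go = {"h": {"g": 1.0, "ng": 0.0}, "nh": {"g": 1.0, "ng": 0.0}}
hire_iff_g = {"g": {"j": 1.0, "nj": 0.0}, "ng": {"j": 0.0, "nj": 1.0}}
print(expected_utilities(always_go, hire_iff_g))  # (3.5, 0.5) when p = 0.5
```

For instance, the pure policy profile at the end of the sketch (always attend university; hire if and only if university was attended) gives worker and firm expected utilities of \(\frac{7}{2}\) and \(\frac{1}{2}\) when \(p=\frac{1}{2}\).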
**Definition 11** (48).: A policy profile \(\boldsymbol{\pi}\) is a **Nash equilibrium (NE)** in a MAID if, for every agent \(i\in N\), \(\boldsymbol{\pi}^{i}\in\arg\max_{\tilde{\boldsymbol{\pi}}^{i}\in\textit{dom}(\boldsymbol{\Pi}^{i})}\sum_{U\in\boldsymbol{U}^{i}}\mathbb{E}_{(\tilde{\boldsymbol{\pi}}^{i},\boldsymbol{\pi}^{-i})}[U]\).

K&M also define _strategic relevance (\(s\)-relevance)_ [48] as a concept to infer whether the choice of a decision rule can affect the optimality of another decision rule. They further show how \(s\)-relevance can be determined using a graphical criterion (\(s\)_-reachability_). In what follows, we use capital letters for variables \(\Pi_{D}\) and \(\boldsymbol{\Theta}\) which may take different values \(\pi_{D}\) and \(\boldsymbol{\theta}\). A more detailed introduction to this notation is provided in Section 3.1.

**Definition 12** (48).: Let \(D_{k},D_{l}\in\boldsymbol{D}\) be decision nodes in a MAID \(\mathcal{M}\). \(\Pi_{D_{l}}\) is **strategically relevant** (or \(s\)_-relevant_) to \(\Pi_{D_{k}}\) if there exist two joint distributions over \(\boldsymbol{V}\) parameterised by \(\boldsymbol{\Theta}\) and policy profiles \(\boldsymbol{\pi}\) and \(\boldsymbol{\pi}^{\prime}\) respectively, and a decision rule \(\pi_{D_{k}}\), such that:

* \(\pi_{D_{k}}\in\arg\max_{\tilde{\pi}_{D_{k}}\in\textit{dom}(\Pi_{D_{k}})}\sum_{U\in\boldsymbol{U}^{i}}\mathbb{E}_{(\tilde{\pi}_{D_{k}},\boldsymbol{\pi}_{-D_{k}})}[U]\);
* \(\boldsymbol{\pi}\) differs from \(\boldsymbol{\pi}^{\prime}\) only at \(\Pi_{D_{l}}\);
* \(\pi_{D_{k}}\notin\arg\max_{\tilde{\pi}_{D_{k}}\in\textit{dom}(\Pi_{D_{k}})}\sum_{U\in\boldsymbol{U}^{i}}\mathbb{E}_{(\tilde{\pi}_{D_{k}},\boldsymbol{\pi}^{\prime}_{-D_{k}})}[U]\), and neither does any decision rule that agrees with \(\pi_{D_{k}}\) on all \(\mathbf{pa}_{D_{k}}\) such that \(\Pr^{\pi^{\prime}}(\mathbf{pa}_{D_{k}})>0\).4

Footnote 4: This final condition is included in order to ensure that \(\pi_{D_{k}}\) doesn't lead to poor decisions in decision contexts that occur with probability zero.

**Proposition 1** (48).: \(\Pi_{D^{\prime}}\) _is \(s\)-relevant to \(\Pi_{D}\) if and only if \(\Pi_{D^{\prime}}\not\perp_{\mathcal{G}^{\prime}}\mathbf{U}^{i}\cap\mathbf{Desc}_{D}\mid D,\mathbf{Pa}_{D}\) (\(\Pi_{D}\) is \(s\)-reachable from \(\Pi_{D^{\prime}}\)), where \(\mathcal{G}^{\prime}\) is the same as \(\mathcal{G}\) with an additional variable \(\Pi_{D^{\prime}}\) and edge \(\Pi_{D^{\prime}}\to D^{\prime}\)._

Both \(s\)-relevance and \(s\)-reachability only consider which other _decision rules_ matter, under a particular assumption about the _rationality_ of agents (corresponding to a notion of subgame perfectness). In Section 3, we generalise this idea to also consider the parameterisation of non-decision variables, which is key to reasoning about causality in games. We also generalise the concepts to other assumptions about agents' rationality. By considering the directed (but not necessarily acyclic) graph over all variables \(\mathbf{\Pi}_{\mathbf{D}}\coloneqq\{\Pi_{D}\}_{D\in\mathbf{D}}\) such that there is an edge \(\Pi_{D^{\prime}}\to\Pi_{D}\) if and only if \(\Pi_{D^{\prime}}\) is \(s\)-relevant to \(\Pi_{D}\) - called the (\(s\)-)relevance graph - K&M also introduce a weakening of the perfect recall assumption that is sufficient for the existence of an NE.
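Criteria such as \(s\)-reachability reduce to d-separation checks in a suitably augmented graph. Below is a small, self-contained sketch of such a check, implementing the blocked-path conditions given earlier by enumerating paths; it is illustrated on the four-variable example discussed after those conditions (the edge set \(C\to A\), \(C\to D\), \(D\to A\), \(D\to B\) is our reading of that example), not on a game graph.

```python
from itertools import product

def descendants(node, edges):
    # Nodes reachable from `node` via directed edges (excluding the node itself).
    out, frontier = set(), [node]
    while frontier:
        n = frontier.pop()
        for a, b in edges:
            if a == n and b not in out:
                out.add(b)
                frontier.append(b)
    return out

def all_paths(x, z, edges):
    # Simple paths between x and z in the undirected skeleton, as node sequences.
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
    paths, stack = [], [[x]]
    while stack:
        path = stack.pop()
        if path[-1] == z:
            paths.append(path)
            continue
        for n in nbrs.get(path[-1], ()):
            if n not in path:
                stack.append(path + [n])
    return paths

def blocked(path, cond, edges):
    # A path is blocked iff some interior node blocks it.
    for i in range(1, len(path) - 1):
        a, w, b = path[i - 1], path[i], path[i + 1]
        collider = (a, w) in edges and (b, w) in edges  # a -> w <- b on this path
        if collider:
            if w not in cond and not (descendants(w, edges) & set(cond)):
                return True
        elif w in cond:  # chain or fork through a conditioned-on node
            return True
    return False

def d_separated(x, z, cond, edges):
    return all(blocked(p, cond, edges) for p in all_paths(x, z, edges))

# The four-variable example: edges C -> A, C -> D, D -> A, D -> B.
G = {("C", "A"), ("C", "D"), ("D", "A"), ("D", "B")}
print(d_separated("A", "B", {"C"}, G))  # False: A and B are d-connected given C
print(d_separated("A", "B", {"D"}, G))  # True:  conditioning on D blocks both paths
```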
**Definition 13** (48).: Agent \(i\) in a MAID \(\mathcal{M}\) has **perfect recall** if there exists a topological ordering \(D_{1}\prec\cdots\prec D_{m}\) over \(\mathbf{D}^{i}\) such that \(\mathbf{Fa}_{D_{j}}\subset\mathbf{Pa}_{D_{k}}\) for any \(1\leq j<k\leq m\). \(\mathcal{M}\) is said to have perfect recall if all agents in \(\mathcal{M}\) have perfect recall.

**Definition 14** (60).: Agent \(i\) in a MAID \(\mathcal{M}\) has **sufficient recall** if the \(s\)-relevance graph restricted to just agent \(i\)'s decision rules is acyclic. The MAID \(\mathcal{M}\) is said to have sufficient recall if all agents have sufficient recall.5

Footnote 5: Note that although sufficient recall is defined using the \(s\)-relevance graph, a similar criterion could be created for any set of rationality relations \(\mathcal{R}\) and the resulting \(\mathcal{R}\)-relevance graph.

**Proposition 2** (48).: _Any MAID with sufficient recall has at least one NE in behavioural policies._

## 3 Mechanised MAIDs and Relevance

Although MAIDs allow us to elegantly and concisely represent the dependencies between variables in a game, the edges in the graph only tell part of the story. Indeed, game-theoretic models are traditionally presented as objects to be _solved_, with the procedure used to produce solutions (i.e., policy profiles) left extrinsic to the game representation. However, the fact that agents act strategically means that agents' policies may depend on some of the parameters of the game (as well as on other agents' policies) in ways that are crucial for causal reasoning, yet not explicitly represented by MAIDs. In this section, we introduce _mechanised MAIDs_, which allow us to model these dependencies alongside the existing dependencies of the MAID. This representation will be fundamental for causal reasoning in games (in Section 4), as well as for the introduction of subgames in MAIDs (in Section 5.2). In the following subsections, we first define mechanised MAIDs before formally introducing the concept of _relevance_ and corresponding graphical criteria.

### Mechanised MAIDs

In order to explicitly capture the implicit dependencies between the decision rules and CPDs of a MAID \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\), a _mechanised MAID_ \(\mathsf{m}\mathcal{M}\) adds two elements: a (graphical) representation of the decision rules and parameters, and a description of how these depend on one another.

#### Mechanised Graphs

For the first addition, we extend \(\mathcal{G}\) to form a _mechanised graph_ \(\mathsf{m}\mathcal{G}\). For each decision variable \(D\), a new parent \(\Pi_{D}\) representing its decision rule is added, and for each non-decision variable \(V\), a new parent \(\Theta_{V}\) representing the parameters of its CPD. We call these additional parents _mechanism variables_ \(\mathsf{M}\), as they determine the mechanisms by which the values of the variables in the game are set,6 and the variables \(\mathbf{V}\) in the original MAID _object-level variables_. To distinguish between different types of mechanism variable, we often refer to those for decisions \(\mathbf{\Pi}=\mathsf{M}_{\mathbf{D}}\) as _decision rule variables_ and those for non-decisions \(\mathbf{\Theta}=\mathsf{M}_{\mathbf{V}\setminus\mathbf{D}}\) as _parameter variables_.
Footnote 6: This name is partially inspired by the field of mechanism design, in which agents act by reporting their types \(t\in T\) and the mechanism \(f:T\to O\) determines how this type profile (which we might view as \(\mathbf{d}\)) is mapped to an outcome \(o\in O\) of the game (which we might view as \(\mathbf{v}\)). In the mechanised graph \(\mathsf{m}\mathcal{G}\), we also add new edges \(\mathscr{E}^{\prime}\subseteq\bigcup_{D\in\mathbf{D}}\big{(}(\mathsf{M}\setminus \Pi_{D})\times\Pi_{D}\big{)}\) from other mechanism variables into decision rule variables. This represents the fact that agents typically select a decision rule \(\pi_{D}\) (i.e., the value of \(\Pi_{D}\)) based on both the parameterisation of the game (i.e., the values of \(\mathbf{\Theta}\)) and the selection of the other decision rules in the game \(\mathbf{\pi}_{-D}\). For example, the worker and the firm in Example 1 might want to select their decision rules \(\pi_{D^{1}}\) and \(\pi_{D^{2}}\) as a function of the probability \(p\) that a worker has a hard-working temperament \(T=h\). This would imply edges \(\Theta_{T}\to\Pi_{D^{1}}\) and \(\Theta_{T}\to\Pi_{D^{2}}\). In general, an agent might select each of their decision rules based on the value of any other mechanism variable - and so \(\mathscr{E}^{\prime}\) would be maximal (i.e., \(\mathbf{Pa}_{\Pi_{D}}=\mathbf{M}_{\mathbf{V}\setminus\{D\}}\)) - though typically some of these variables will not be relevant and thus \(\mathscr{E}^{\prime}\) will not be maximal, a notion we make precise in Section 3.2. A mechanised graph \(\mathsf{m}\mathcal{G}\) for Example 1 is shown in Figure 3(a), where decision rule and parameter variables are represented using black and white rounded squares respectively. #### Mechanised Games For the second addition to \(\mathcal{M}\) - a description of how the decision rules and parameters depend on one another - we must specify how the values of the new mechanism variables are determined and how each of the CPDs of the original object-level variables are defined as a function of their new mechanism variable parent. Beginning with the latter, note that for any parameterisation \(\mathbf{m}_{\mathbf{V}\setminus\mathbf{D}}=\mathbf{\theta}\in\mathit{dom}(\mathbf{\Theta})\) of the game and any policy profile \(\mathbf{m}_{\mathbf{D}}=\mathbf{\pi}\in\mathit{dom}(\mathbf{\Pi})\), the resulting joint distribution over \(\mathbf{V}\) can be written as: \[\Pr^{\mathbf{\pi}}(\mathbf{v};\mathbf{\theta})=\Pr(\mathbf{v}\mid\mathbf{m})\coloneqq\prod_{ V\in\mathbf{V}}\Pr(v\mid\mathbf{pa}_{V},\mathsf{m}_{V})\] where \(\Pr(v\mid\mathbf{pa}_{V},\mathsf{m}_{V})\) is simply the CPD \(\Pr(v\mid\mathbf{pa}_{V};\theta_{V})\) defined by parameters \(\theta_{V}=\mathsf{m}_{V}\) if \(V\) is a non-decision variable, or the decision rule \(\pi_{D}(d\mid\mathbf{pa}_{D})\) defined by \(\pi_{D}=\mathsf{m}_{V}\) if \(V\) is a decision variable \(D\). As such, given the parameters \(\mathbf{\theta}\) of the MAID \(\mathcal{M}\) and a policy profile \(\mathbf{\pi}\), the distribution over \(\mathbf{V}\) in \(\mathsf{m}\mathcal{M}\) is identical to that in the original MAID. Finally, we must specify how the values of the mechanism variables are determined. For the parameter variables, we simply set the distribution over each \(\Theta_{V}\) to \(\delta(\Theta_{V}\),\(\theta_{V})\), and let its domain be given by \(\Delta(V\mid\mathbf{Pa}_{V})\). 
Intuitively, this may be viewed as the fact that any particular game induced over the graph corresponds to an instantiation of the parameter variables in the mechanised game, the values of which are known by all agents.7

Footnote 7: This simply amounts to a common prior assumption. Though such an assumption can be relaxed, doing so introduces additional complexities that we do not address in this work.

In order to provide values for the decision rule variables, we introduce a set of _rationality relations_ \(\mathcal{R}=\{r_{D}\}_{D\in\mathbf{D}}\) that describe assumptions about how the agents choose decision rules. Concretely, each decision rule \(\Pi_{D}\) is governed by a _many-valued_ function \(r_{D}:\mathit{dom}(\mathbf{Pa}_{\Pi_{D}})\to\mathit{dom}(\Pi_{D})\), which can equivalently be viewed as a serial relation \(r_{D}\subseteq\mathit{dom}(\mathbf{Pa}_{\Pi_{D}})\times\mathit{dom}(\Pi_{D})\).8 This accounts for the fact that an agent may not deterministically choose a _single_ decision rule \(\pi_{D}\) in response to some \(\mathbf{pa}_{\Pi_{D}}\). In the remainder of the paper we abuse notation somewhat by also using \(r_{D}(\mathbf{pa}_{\Pi_{D}})\) to denote the _set_ of all decision rules \(\pi_{D}\) such that \(r_{D}(\mathbf{pa}_{\Pi_{D}})=\pi_{D}\).9

Footnote 9: Whether we refer to some \(r_{D}(\mathbf{pa}_{\Pi_{D}})\in\textit{dom}(\Pi_{D})\) or \(r_{D}(\mathbf{pa}_{\Pi_{D}})\subseteq\textit{dom}(\Pi_{D})\) will typically be unambiguous.

**Definition 15**.: Given a MAID \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\) and a set of rationality relations \(\mathcal{R}\), a **mechanised MAID** is a structure \(\mathsf{m}\mathcal{M}=(\mathsf{m}\mathcal{G},\mathbf{\theta},\mathcal{R})\), such that: the **mechanised graph** \(\mathsf{m}\mathcal{G}=(N,\mathbf{V}\cup\mathsf{M},\mathsf{m}\mathcal{E})\) is a directed (possibly cyclic) graph over \(\mathbf{V}\) and \(\mathsf{M}\) with edges \(\mathsf{m}\mathcal{E}\coloneqq\mathcal{E}\cup\{(\mathsf{M}_{V},V)\}_{V\in\mathbf{V}}\cup\mathcal{E}^{\prime}\), where \(\mathcal{E}^{\prime}\subseteq\bigcup_{D\in\mathbf{D}}\left((\mathsf{M}\setminus\Pi_{D})\times\Pi_{D}\right)\); and \(\mathcal{R}=\{r_{D}\}_{D\in\mathbf{D}}\) is a set of **rationality relations**, where each \(r_{D}:\textit{dom}(\mathbf{Pa}_{\Pi_{D}})\rightarrow\textit{dom}(\Pi_{D})\) is a many-valued function.

As an example, let us suppose that each agent in Example 1 plays a _best response_ with respect to the other. In this case, the values of \(\Pi_{D^{1}}\) and \(\Pi_{D^{2}}\) are determined by relations \(\mathcal{R}^{\text{BR}}=\{r_{D^{1}}^{\text{BR}},r_{D^{2}}^{\text{BR}}\}\) such that:

\[\pi_{D}\in r_{D}^{\text{BR}}(\mathbf{pa}_{\Pi_{D}})\ \ \Leftrightarrow\ \ \pi_{D}\in\operatorname*{arg\,max}_{\hat{\pi}_{D}\in\textit{dom}(\Pi_{D})}\sum_{U\in\mathbf{U}^{i}}\mathbb{E}_{(\hat{\pi}_{D},\mathbf{\pi}_{-D})}[U], \tag{1}\]

for each \(D\in\mathbf{D}^{i}\), where note that the expectation - despite the fact that the notation above includes object-level variables - is defined in terms of the _mechanism variables_ \(\mathbf{pa}_{\Pi_{D}}=\mathbf{\mathsf{m}}_{\mathbf{V}\setminus\{D\}}\) and \(\pi_{D}\). Note that for games in which each agent has only one decision rule, this gives rise to an NE, as defined in Definition 11.
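As a concrete (if crude) illustration of the relation in (1), the sketch below approximates the worker's best-response set in Example 1 by restricting decision rules to a finite grid of probabilities; the prior \(p\), the grid, and the tuple encoding of decision rules are assumptions made purely for this example, and are not how \(r_{D}^{\text{BR}}\) is defined in general.

```python
import itertools

# Decision rules are encoded by the probabilities they assign to "go" / "hire":
#   worker rule w = (Pr(g | h), Pr(g | not h)); firm rule f = (Pr(j | g), Pr(j | not g)).
p = 0.5  # prior Pr(T = h); an assumption for this illustration

def eu_worker(w, f):
    eu = 0.0
    for hard, t_prob in ((True, p), (False, 1 - p)):
        go_prob = w[0] if hard else w[1]
        for go, g_prob in ((True, go_prob), (False, 1 - go_prob)):
            hire_prob = f[0] if go else f[1]
            cost = (1 if hard else 2) if go else 0  # cost of university from Example 1
            eu += t_prob * g_prob * (5 * hire_prob - cost)
    return eu

def best_responses_worker(f, grid_size=5):
    """Approximate r^BR for the worker: argmax over a finite grid of decision rules."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    candidates = list(itertools.product(grid, repeat=2))
    best = max(eu_worker(w, f) for w in candidates)
    return [w for w in candidates if abs(eu_worker(w, f) - best) < 1e-9]

# If the firm hires if and only if it observes "g", the best response (on this grid)
# is to always go to university; if the firm hires unconditionally, never to go.
print(best_responses_worker((1.0, 0.0)))  # [(1.0, 1.0)]
print(best_responses_worker((1.0, 1.0)))  # [(0.0, 0.0)]
```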
In other words, the values of the mechanism variables determine the CPDs of the object-level variables, which in turn define the joint distribution over the object-level variables, and hence any expected quantities in the game. Many other kinds of game-theoretic equilibria can be naturally defined in terms of rationality relations, including subgame perfect equilibria and trembling hand perfect equilibria (introduced in Section 5.3).10 Footnote 10: It would also be possible to define \(\mathcal{R}\) such that, for example, every agent randomises their actions at every decision, or chooses the action that minimises their expected utility. Whilst it would be hard to view such behaviour as ‘rational’ in the game-theoretic sense, one may think of \(\mathcal{R}\) as defining the _degree(s)_ of rationality in a game. It will often be visually useful to restrict a mechanised MAID to just the mechanism variables, as can be seen in Figure 3(b). We can then view a mechanised graph as simply the composition of the original MAID and this graph of mechanism variables, via the addition of edges \(\mathsf{M}_{V}\to V\) for each \(V\in\mathbf{V}\). Figure 4: (a) A mechanised graph \(\mathsf{m}\mathcal{G}\) representing Example 1. Dotted edges represent those that are present in neither the \(\mathcal{R}^{\text{BR}}\)-minimal nor the \(s\)-minimal mechanised graph. Dashed edges represent just those that are not present in the \(s\)-minimal mechanised graph. (b) The \(\mathcal{R}^{\text{BR}}\)-relevance graph is formed by removing the dotted edges, and the \(s\)-relevance graph by further removing the dashed edge. #### Rational Outcomes If all of the rationality relations \(\mathcal{R}\) in a MAID are satisfied by \(\boldsymbol{\pi}\), then we say that \(\boldsymbol{\pi}\) is an _\(\mathcal{R}\)-rational outcome_ of the game. For example, the \(\mathcal{R}^{\text{BR}}\)-rational outcomes of the MAID \(\mathcal{M}\) representing Example 1 are simply the NEs of \(\mathcal{M}\). Note that \(\mathcal{R}\)-rationality is not merely a convenience when reasoning about games, rather, it performs a necessary role by fully characterising the process by which decision rules are generated. As we will see later, the nature of the chosen rationality relations also allows us to deduce various facts about the game, purely from its graphical structure. In essence, such results hinge on the links between how agents choose their decision rules, and how the object-level variables depend on one another. **Definition 16**.: Given a mechanised MAID \(\mathsf{m}\mathcal{M}\), we say that any \(\pi_{D}\in r_{D}(\mathbf{pa}_{\Pi_{D}})\) is an _\(\mathcal{R}\)-rational response_ to \(\mathbf{pa}_{\Pi_{D}}\), and that a (partial) policy profile \(\boldsymbol{\pi}_{\boldsymbol{D}^{\prime}}\) is \(\mathcal{R}\)-rational_ if \(\pi_{D}\in r_{D}(\mathbf{pa}_{\Pi_{D}})\) for every \(D\in\boldsymbol{D}^{\prime}\). The set of (full) \(\mathcal{R}\)-rational policy profiles in \(\mathsf{m}\mathcal{M}\) are the _\(\mathcal{R}\)-rational outcomes_ of the game (where we tend to drop \(\mathcal{R}\) from the terms above when unambiguous) and is denoted by \(\mathcal{R}(\mathsf{m}\mathcal{M})\). 
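When the candidate decision rules at each decision variable form a finite set, the \(\mathcal{R}\)-rational outcomes can, in principle, be found by brute force: check every profile against every relation. The sketch below does exactly this for an arbitrary choice of relations; the dictionary-based encoding and the toy coordination example are ours, for illustration only.

```python
from itertools import product

def rational_outcomes(candidates, relations):
    """Enumerate R-rational outcomes over finite sets of candidate decision rules.

    candidates: dict mapping each decision D to a list of candidate decision rules.
    relations:  dict mapping each D to a function r_D(profile) that returns the set of
                rules that are rational responses; each r_D should depend only on the
                other entries of the profile (its mechanism parents), not on D's own rule.
    """
    decisions = list(candidates)
    outcomes = []
    for combo in product(*(candidates[d] for d in decisions)):
        profile = dict(zip(decisions, combo))
        if all(profile[d] in relations[d](profile) for d in decisions):
            outcomes.append(profile)
    return outcomes

# Toy example: a 2x2 coordination game where each agent's relation is 'match the other'.
cands = {"D1": ["a", "b"], "D2": ["a", "b"]}
rels = {"D1": lambda pr: {pr["D2"]}, "D2": lambda pr: {pr["D1"]}}
print(rational_outcomes(cands, rels))  # [{'D1': 'a', 'D2': 'a'}, {'D1': 'b', 'D2': 'b'}]
```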
In general, the \(\mathcal{R}\)-rational outcomes in \(\mathsf{m}\mathcal{M}\) therefore define a _set_ of distributions \(\{\Pr^{\boldsymbol{\pi}}\}_{\boldsymbol{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{ M})}\) over the variables \(\boldsymbol{V}\) where \(\Pr^{\boldsymbol{\pi}}(\boldsymbol{v};\boldsymbol{\theta})=\Pr(\boldsymbol{v} \mid\boldsymbol{\mathsf{m}})\) for \(\boldsymbol{m}_{\boldsymbol{D}}=\boldsymbol{\pi}\) and \(\boldsymbol{m}_{\boldsymbol{V}\setminus\boldsymbol{D}}=\boldsymbol{\theta}\), which can be extended over the mechanised MAID as \(\Pr(\boldsymbol{v},\boldsymbol{\mathsf{m}})=\Pr(\boldsymbol{v}\mid\boldsymbol {\mathsf{m}})\delta(\mathsf{M}\),\(\boldsymbol{\mathsf{m}}\)). We can thus view a mechanised MAID as a set of BNs induced by the \(\mathcal{R}\)-rational outcomes.11 The key point here is that rationality relations form a principled, formal, and highly general way to model the inherent non-determinism in a game, in order to render the model suitable for causal reasoning. Footnote 11: We continue, however, to highlight different variable types in our diagrams (rather than drawing each variable as a chance variable) for clarity of exposition. Note that Definition 16 is closely related to the notion of a _solution_ in cyclic causal models [7]. In both formalisms, a solution corresponds to a joint probability distribution consistent with all cyclic relationships, and in general a model may have many or no solutions. This, in turn, will impact the answers to the causal queries that we might wish to ask (as we explain further in Section 4). Similarly, we can view the rational responses at a decision variable as a _credal set_[57], and therefore the resultant model as a form of credal network [13], where the imprecise probabilities arise from the fact that there may be more than one rational outcome in a game. **Remark 1**.: _Often, agents might only be boundedly rational [85]. For example, due to computational costs, agents might seek only to satisfice (rather than optimise) their expected utility, leading to \(\epsilon\)-approximate variations of equilibria concepts in which each agent cannot improve their expected utility by more than \(\epsilon>0\) by deviating [15]. These bounded rationality conditions can also be captured using rationality relations._ ### Relevance A natural question to ask is which edges in the mechanised graph \(\mathsf{m}\mathcal{G}\) are necessary. In other words, which elements of \(\mathbf{Pa}_{\Pi_{D}}\) are necessary for computing the rational response \(r_{D}(\mathbf{pa}_{\Pi_{D}})\)? In the mechanised game shown in Figure 3(a), for example, we don't need to know \(\theta_{U^{2}}\) in order to compute the set of best responses \(\pi_{D^{1}}\) if we are given \(\pi_{D^{2}}\). Whenever a mechanism variable \(\mathsf{M}_{V}\) is found to be irrelevant to \(\Pi_{D}\) in this sense, we may remove the edge \(\mathsf{M}_{V}\to\Pi_{D}\) to make this independence explicit. Removing all irrelevant edges results in a subgraph \(\mathsf{m}_{\mathcal{R}}\mathcal{G}\) of \(\mathsf{m}\mathcal{G}\) that is _minimal_ with respect to \(\mathcal{R}\). In the extreme case where all decision rules are chosen independently from all other mechanism variables, then all edges \(\mathscr{E}^{\prime}\) between mechanism variables can be pruned. We call this subgraph of \(\mathsf{m}\mathcal{G}\) the _independent mechanised graph_12\(\mathsf{m}_{\perp}\mathcal{G}\). 
Game-theoretic models thereby break the independent causal mechanism_ assumption, which states that mechanisms should be causally and probabilistically independent of each other [71]. Formally, we have the following definitions, which are similar in spirit to those proposed in earlier work [48, 60, 66], but differ in that they consider the case of non-deterministic rationality relations that encode the solutions of a game. **Definition 17**.: Given a mechanised MAID \(\mathsf{m}\mathcal{M}\) with rationality relations \(\mathcal{R}\), \(\mathsf{M}_{V}\in\mathbf{Pa}_{\Pi_{D}}\) is \(\mathcal{R}\)**-relevant** to \(\Pi_{D}\) if there exists \(\mathbf{pa}_{\Pi_{D}}\neq\mathbf{pa}^{\prime}_{\Pi_{D}}\) such that \(r_{D}(\mathbf{pa}_{\Pi_{D}})\neq r_{D}(\mathbf{pa}^{\prime}_{\Pi_{D}})\), where \(\mathbf{pa}_{\Pi_{D}}\) and \(\mathbf{pa}^{\prime}_{\Pi_{D}}\) differ only on \(\mathsf{M}_{V}\). **Definition 18**.: Given a mechanised MAID \(\mathsf{m}\mathcal{M}\) with rationality relations \(\mathcal{R}\), we say that its mechanised graph \(\mathsf{m}\mathcal{G}\) is \(\mathcal{R}\)**-minimal**, denoted by \(\mathsf{m}_{\mathcal{R}}\mathcal{G}\), when it contains an edge \(\mathsf{M}_{V}\to\Pi_{D}\) if and only if \(\mathsf{M}_{V}\) is \(\mathcal{R}\)-relevant to \(\Pi_{D}\). When \(\mathsf{m}_{\mathcal{R}}\mathcal{G}\) is restricted to the mechanism variables \(\mathsf{M}\), we refer to it as the \(\mathcal{R}\)**-relevance graph**, denoted \(r_{\mathcal{R}}\mathcal{G}\). Definition 17 is a property of a _game_ after mechanising it via rationality relations. Therefore, an immediate follow-up question is whether given some choice of \(\mathcal{R}\), we can identify \(\mathcal{R}\)-relevance (and hence prune edges in order to find the \(\mathcal{R}\)-minimal graph) for any MAID \(\mathcal{M}=(\mathcal{G},\boldsymbol{\theta})\) simply by appealing to its underlying _graph_\(\mathcal{G}\), and the form of \(\mathcal{R}\). For many natural choices of \(\mathcal{R}\) this is indeed the case, and we can derive sound and complete graphical criteria for identifying \(\mathcal{R}\)-relevance. For a given \(\mathcal{R}\), we refer to these criteria as \(\mathcal{R}\)_-reachability_. For example, the graphical criteria defining \(\mathcal{R}^{\text{BR}}\)-reachability in Proposition 3 enable us to remove the dotted edges \(\Theta_{U^{1}}\to\Pi_{D^{2}}\) and \(\Theta_{U^{2}}\to\Pi_{D^{1}}\) in Figure 3(a). **Proposition 3**.: \(\mathsf{M}_{V}\) _is \(\mathcal{R}^{\text{BR}}\)-relevant to \(\Pi_{D}\) if and only if \(\mathsf{M}_{V}\not\perp_{\mathsf{m}\perp\mathcal{G}}\boldsymbol{U}^{i}\cap \boldsymbol{Desc}_{D}\mid D,\boldsymbol{Pa}_{D}\) or \(\mathsf{M}_{V}\not\perp_{\mathsf{m}\perp\mathcal{G}}\boldsymbol{Pa}_{D}\), where if \(D\in\boldsymbol{D}^{i}\), then \(\boldsymbol{U}^{i}\cap\boldsymbol{Desc}_{D}\neq\varnothing\)._ By applying \(\mathcal{R}\)-reachability to \(\mathsf{m}\mathcal{G}\), we can find the \(\mathcal{R}\)-minimal mechanised graph \(\mathsf{m}_{\mathcal{R}}\mathcal{G}\) and thus the \(\mathcal{R}\)-relevance graph \(r_{\mathcal{R}}\mathcal{G}\). This latter object will be one that we make repeated use of, and can be viewed as a generalisation of K&M's concept of a relevance graph simpliciter, to include all mechanism variables (as opposed to just those for decision variables) and for use with any \(\mathcal{R}\).13 To avoid confusion, we refer to K&M's relevance graph as an _\(s\)-relevance_ graph. 
By generalising \(s\)-relevance (Definition 12) and \(s\)-reachability (Proposition 1) to all mechanism variables, we can see in Figure 3(a), for example, that \(\Theta_{T}\) is not \(s\)-relevant to \(\Pi_{D^{1}}\), though it is \(\mathcal{R}^{\text{BR}}\)-relevant. Footnote 13: Note that the direction of the edges of the relevance graphs in this paper are the same as in [48], but are reversed compared with [47] and [39]. K&M also refer to \(V\) being relevant to/reachable from \(D\), whereas we phrase this in terms of \(\mathsf{M}_{V}\) being relevant to/reachable from \(\Pi_{D}\). Our generalisation of the idea of \(s\)-relevance to different rationality relations parallels our development of various equilibrium refinements within MAIDs (introduced in Section 5.3), as opposed to the singular concept introduced by K&M. For instance, we shall see in Section 5.3 that \(s\)-relevance corresponds to the concept of subgame perfectness. Moreover, use of the full mechanised graph as opposed to simply its restriction to the decision rule variables is not only important for reasoning about game-theoretic notions such as equilibria and subgames, but will also be critical for reasoning about causality in games where agents may adapt their policies in response to interventions on any mechanism variable (not just those representing decision rules). #### Generalising the Soundness and Completeness Results We conclude this section by noting that it is possible to generalise the proof procedures for Propositions 3 and 1 to other choices of \(\mathcal{R}\). The key step in doing so is to identify, for any decision variable \(D\), a'sufficient' (sound) but'minimal' (complete) set of queries \(\mathcal{Q}_{D}\subseteq\mathcal{Q}\) whose values, given some \(\mathbf{\mathsf{m}}=(\mathbf{\pi},\mathbf{\theta})\), completely determine each \(r_{D}\). Formally, let \(\mathcal{Q}(\mathbf{\mathsf{m}})\) be the set of probabilistic queries \(\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{y};\mathbf{\theta})\) over \(\mathbf{X},\mathbf{Y}\subseteq\mathbf{V}\) that can be formed in a MAID given \(\mathbf{\mathsf{m}}=(\mathbf{\pi},\mathbf{\theta})\). Note that we can use such queries to express expected utilities, among many other things. For any decision variable \(D\), we are looking for a sufficient but minimal subset of queries \(\mathcal{Q}_{D}\subseteq\mathcal{Q}\) and a truth-valued function \(g_{D}\) such that: \[\pi_{D}\in r_{D}(\mathbf{\mathsf{m}}_{V\setminus\{D\}})\ \ \Leftrightarrow\ \ g_{D}\big{(}\mathcal{Q}_{D}(\mathbf{\mathsf{m}}),\mathit{dom}(\mathbf{V})\big{)}. \tag{2}\] In this setting, asking whether \(\mathsf{M}_{V}\) is \(\mathcal{R}\)-relevant to \(\Pi_{D}\) reduces to asking whether the choice of \(\mathsf{m}_{V}\) may affect some \(\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{y})\in\mathcal{Q}_{D}\), which is, in turn, equivalent to asking whether \(V\) is a _requisite probability node_ for \(\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{y})\)[82], introduced in Definition 3. Whether a variable is a requisite probability node can be determined using a well-established graphical criterion [30], given as Lemma 1. We can use this criterion to establish graphical criteria for \(\mathcal{R}\)-reachability, which, given a judicious choice of \(\mathcal{Q}_{D}\), will be sound and complete. 
For example, the graphical criteria in Propositions 1 (\(s\)-reachability) and 3 (\(\mathcal{R}^{\text{BR}}\)-reachability) correspond to choices \(\mathcal{Q}_{D}^{s}=\{\Pr^{\mathbf{\pi}}(\mathbf{u}^{i}\cap\text{\bf desc}_{D}\mid d, \mathbf{pa}_{D})\}\) and \(\mathcal{Q}_{D}^{\text{BR}}=\{\Pr^{\mathbf{\pi}}(\mathbf{u}^{i}\cap\text{\bf desc}_{D} \mid d,\mathbf{pa}_{D}),\Pr^{\mathbf{\pi}}(\mathbf{pa}_{D})\}\) respectively. With such a soundness and completeness result in hand, we can identify an \(\mathcal{R}\)-relevance graph as follows: \[r_{\mathcal{R}}\mathcal{G}\text{ contains an edge }\mathsf{M}_{V} \rightarrow\Pi_{D}\] \[\Leftrightarrow \mathsf{M}_{V}\text{ is $\mathcal{R}$-relevant to }\Pi_{D}\] by Definition 18 \[\Leftrightarrow r_{D}(\mathbf{pa}_{\Pi_{D}})\text{ may vary with the value of }\mathsf{M}_{V}\] by Definition 17 \[\Leftrightarrow \mathcal{Q}_{D}(\mathbf{pa}_{\Pi_{D}},\pi_{D})\text{ may vary with the value of }\mathsf{M}_{V}\] by (2) \[\Leftrightarrow V\text{ is a requisite probability node for some }\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{y})\in\mathcal{Q}_{D}\] by Definition 3 \[\Leftrightarrow \mathsf{M}_{V}\not\perp_{\mathsf{m}_{\perp}\mathcal{G}}\mathbf{X} \mid\mathbf{Y}\text{ for some }\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{y})\in\mathcal{Q}_{D}\] by Lemma 1 ## 4 Causality in Games Many types of queries may be of interest in game-theoretic scenarios. For instance, recalling Example 1, we could ask questions corresponding to: 1. Predictions, such as a) 'Given that the worker went to university, what is their wellbeing?' or b) 'Given that the worker always decides to go to university, what is their wellbeing?' 2. Interventions, such as a) 'Given that the worker is forced to go to university, what is their wellbeing?' or b) 'Given that the worker goes to university if and only if they are selected via a lottery system, what is their wellbeing?' 3. Counterfactuals, such as a) 'Given that the worker didn't go to university, what would be their wellbeing if they had?' or b) 'Given that the worker never decides to go to university, what would be their wellbeing if they always decided to go to university?' Although these queries are ostensibly similar, they belong to different levels of the causal hierarchy. Predictions can be answered using (mechanised) MAIDs, which, as associational models, reside on level one. However, in order to reason about interventions and counterfactuals in games, we require models on levels two and three respectively. To this end, we introduce _causal games_ (CGs) and _structural causal games_ (SCGs). These can be viewed as generalising MAIDs to the causal setting, or CBNs and SCMs to the game-theoretic setting. The connections between all of these models and their various acronyms are displayed in Figure 5. Importantly, the models in each row are a special case of those in the row below (and the models in each column generalise those in the column to its left). As such, everything defined with respect to MAIDs (such as the mechanised games of Definition 15 and the equilibrium refinements we introduce in Section 5) are also well-defined in both CGs and SCGs. Note that as well as asking queries regarding the object-level variables - such as queries 1a, 2a, and 3a - _after_ some policy profile \(\mathbf{\pi}\) has been chosen, we can also ask queries regarding the mechanism variables - such as queries 1b, 2b, and 3b - _before_ fixing a policy. 
While this distinction between 'post-policy' and 'pre-policy' queries is less significant in the case of predictions, we shall see in Section 4.2 that this difference corresponds to whether agents can adjust their policies in response to an intervention or not. This distinction, which is critical in game-theoretic settings, does not apply to standard causal models, as they do not contain strategic, decision-making agents. Computationally, pre-policy interventions correspond to altering the original game and then calculating the outcomes, whereas post-policy interventions correspond to calculating the outcomes of the original game and then altering them. In a sense, these latter queries are a more natural analogue of existing work; in this work we unify both types of query using the same formalism.

In the remainder of this section, we show how the six types of causal queries represented by the examples above (each of which is written formally in Table 1) can be answered using causal games. Doing so requires us to identify which level of the causal hierarchy a query belongs to, and to assess whether the query is pre-policy or post-policy. It is also simple to combine pre- and post-policy queries using mechanised games, though for clarity and brevity we do not do so here.

|  | 1) Prediction | 2) Intervention | 3) Counterfactual |
| --- | --- | --- | --- |
| a) Post-policy | \(\Pr^{\mathbf{\pi}}(u^{1}\mid g)\) | \(\Pr^{\mathbf{\pi}}(u^{1}_{g})\) | \(\Pr^{\mathbf{\pi}}(u^{1}_{g}\mid\neg g)\) |
| b) Pre-policy | \(\Pr(u^{1}\mid\bar{\pi}_{D^{1}})\) | \(\Pr(u^{1}_{\hat{\pi}_{D^{1}}})\) | \(\Pr(u^{1}_{\bar{\pi}_{D^{1}}}\mid\tilde{\pi}_{D^{1}})\) |

Table 1: Examples of the queries we may ask in causal models of games, corresponding to those listed at the top of this section. Recall that \(\mathit{dom}(D^{1})=\{g,\neg g\}\), indicating whether or not the worker goes to university, and that \(\bar{\pi}_{D^{1}},\hat{\pi}_{D^{1}},\tilde{\pi}_{D^{1}}\) denote possible values of the decision rule variable \(\Pi_{D^{1}}\). For example, \(\bar{\pi}_{D^{1}}\) in query 1b is the decision rule in which the worker always decides to go to university, i.e., \(\bar{\pi}_{D^{1}}(D^{1}\mid T)=\delta(D^{1},g)\). As introduced in Section 2, \(\Pr(\mathbf{x_{y}})\) denotes the probability of \(\mathbf{x}\) given a hard intervention (\(\mathbf{Y}=\mathbf{y}\)) and \(\Pr(\mathbf{x_{y}}\mid\mathbf{z})\) represents the counterfactual probability of \(\mathbf{x}\) had \(\mathbf{y}\) been true, given that (in fact) \(\mathbf{z}\) was true.

Figure 5: In this paper, we introduce CGs and SCGs. The causal hierarchy (associational, interventional, and counterfactual) forms the vertical axis and the number of agents (0, 1, and \(n\)) forms the horizontal axis. Note that all models in this diagram can also be mechanised.

### Predictions

Each policy profile \(\mathbf{\pi}\) in a MAID \(\mathcal{M}\) induces a BN with joint distribution \(\Pr^{\mathbf{\pi}}(\mathbf{V};\mathbf{\theta})\). We can therefore easily compute the probability of \(\mathbf{x}\) given some observation \(\mathbf{z}\), written \(\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{z})\), under a given policy profile \(\mathbf{\pi}\). Note that the distribution \(\Pr^{\mathbf{\pi}}(\mathbf{V};\mathbf{\theta})\) in the MAID can
equivalently be written in the mechanised MAID as \(\Pr(\mathbf{V}\mid\mathbf{\mathsf{m}})\) with \(\mathbf{\mathsf{m}}=(\mathbf{\pi},\mathbf{\theta})\). However, in game-theoretic settings, we typically assume only that a _rational outcome_ of the game will be chosen, not some unique \(\mathbf{\pi}\). Moreover, we may not have any reason to favour one rational outcome over another, implying that we ought to evaluate queries with respect to a _set_ of policy profiles. In the definition below, we therefore consider the distribution \(\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{z})\) induced by each rational outcome that is consistent with the observation \(\mathbf{z}\). Note that, as remarked in Section 3.1, there may be many or no such rational outcomes.

**Definition 19**.: Given a mechanised MAID \(\mathsf{m}\mathcal{M}\) with rationality relations \(\mathcal{R}\), the **answer to a conditional query** of the probability of \(\mathbf{x}\) given observation \(\mathbf{z}\) is given by the set \(\Pr^{\mathcal{R}}(\mathbf{x}\mid\mathbf{z})\coloneqq\{\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{z})\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})}\), where \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\coloneqq\{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}):\Pr^{\mathbf{\pi}}(\mathbf{z})>0\}\) is the set of **conditional rational outcomes**.

In general, \(\mathbf{Z}\subseteq\mathbf{V}\cup\mathbf{\mathsf{M}}\) can include mechanism variables, and so we compute \(\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{z})\) in \(\mathcal{M}\) as \(\Pr(\mathbf{x}\mid\mathbf{z},\mathbf{\mathsf{m}}^{\prime})\) in \(\mathsf{m}\mathcal{M}\), where \(\mathbf{\mathsf{M}}^{\prime}=\mathbf{\mathsf{M}}\setminus\mathbf{Z}\) and \(\mathbf{\mathsf{m}}_{\mathbf{D}}=\mathbf{\pi}\). More generally, we can view queries in games as _first-order_ queries defined over formulae in which \(\mathbf{\pi}\) is a free variable, such as \(\varphi(\mathbf{\pi})\equiv\Pr^{\mathbf{\pi}}(\mathbf{x}\mid\mathbf{z})\). The answers to these queries are therefore only well-defined when this free variable becomes bound, or when considering a set of answers, as in Definition 19. For example, we can bind \(\mathbf{\pi}\) by quantifying over it as in \(\exists\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\,.\big{(}\varphi(\mathbf{\pi})\sim q\big{)}\), where \(\sim\,\in\{<,\leq,=,\geq,>\}\) and \(q\in[0,1]\) (as is often done in first-order logics for reasoning about and verifying multi-agent systems [11, 53, 96]), or by returning bounds such as \(\max_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})}\varphi(\mathbf{\pi})\) (as is often done in credal networks [13]). Above, we quantify over \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\), but it is also possible to quantify over any desired set of policies, or even to posit a prior distribution \(\Pr(\mathbf{\Pi})\) over policies. These queries strictly generalise those in BNs and in models such as settable systems [91, 92], which effectively consider only a single instantiation of \(\mathbf{\Pi}\).

By way of illustration, let us return to Example 1 and query 1a from earlier in this section. We can interpret this question as asking about the expected utility of the worker given the observation that they went to university, written \(\mathbb{E}_{\mathbf{\pi}}[U^{1}\mid g]\) for some policy \(\mathbf{\pi}\).
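Schematically, evaluating Definition 19 amounts to filtering the rational outcomes by \(\Pr^{\mathbf{\pi}}(\mathbf{z})>0\) and collecting the resulting conditional probabilities. The sketch below assumes the rational outcomes have already been enumerated and are given as explicit joint distributions; the toy numbers are illustrative and are not the job market game's.

```python
def conditional_query(rational_outcomes, event_x, event_z):
    """Answer Pr^R(x | z) as a set, in the spirit of Definition 19.

    Each rational outcome is given here as an explicit joint distribution: a dict
    mapping an assignment (tuple of variable values) to its probability.
    event_x and event_z are predicates over assignments.
    """
    answers = set()
    for joint in rational_outcomes:
        pr_z = sum(p for v, p in joint.items() if event_z(v))
        if pr_z > 0:  # keep only the conditional rational outcomes
            pr_xz = sum(p for v, p in joint.items() if event_x(v) and event_z(v))
            answers.add(round(pr_xz / pr_z, 10))
    return answers

# Toy illustration with two 'outcomes' over assignments (t, d1, d2):
out_a = {("h", "g", "j"): 0.5, ("nh", "ng", "j"): 0.5}
out_b = {("h", "ng", "j"): 0.5, ("nh", "ng", "nj"): 0.5}
went = lambda v: v[1] == "g"
hired = lambda v: v[2] == "j"
print(conditional_query([out_a, out_b], hired, went))  # {1.0}: out_b puts no mass on "g"
```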
For the worked examples in this section, we assume that \(p=\Pr(T=h)=\frac{1}{2}\) and that the firm and the worker are playing best responses to one another, i.e., \(\mathcal{R}=\mathcal{R}^{\mathsf{BR}}\). The various rational outcomes induced by this choice (i.e., the NEs of the game) are:

1. The worker always chooses \(\neg g\). The firm chooses \(j\) if the worker chose \(\neg g\), otherwise they choose \(j\) with any probability \(q\in[0,1]\).
2. The worker chooses \(\neg g\) if \(T=\neg h\), and otherwise chooses \(g\) with probability \(\frac{1}{2}\). The firm chooses \(j\) if the worker chose \(g\), otherwise they choose \(j\) with probability \(\frac{4}{5}\).
3. The worker always chooses \(g\). The firm chooses \(j\) if the worker chose \(g\), otherwise they choose \(j\) with any probability \(q\in[0,\frac{3}{5}]\).

The expected utilities for the worker under these rational outcomes are \(5\), \(4\), and \(\frac{7}{2}\), respectively. In query 1a, the conditional rational outcomes \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid g)\) are the NEs consistent with observing \(D^{1}=g\). To answer the query, we must therefore compute \(\Pr^{\mathcal{R}}(u^{1}\mid g)\), which yields \(\{\mathbb{E}_{\mathbf{\pi}}[U^{1}\mid g]\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid g)}=\{\frac{7}{2},4\}\).

As noted above, Definition 19 can also be employed when the observations made are of the _mechanism variables_. For example, in query 1b we condition on the observation that the worker's strategy is given by \(\delta(D^{1},g)\), which we refer to as \(\bar{\pi}_{D^{1}}\), i.e., the worker _always_ decides to go to university. Based on this, we can use the mechanised MAID to compute \(\Pr^{\mathcal{R}}(u^{1}\mid\bar{\pi}_{D^{1}})\) and thus that \(\{\mathbb{E}_{\mathbf{\pi}}[U^{1}\mid\bar{\pi}_{D^{1}}]\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\bar{\pi}_{D^{1}})}=\{\frac{7}{2}\}\). Note that this set is distinct from the answer to query 1a, as observations of mechanism and object-level variables provide us with different information, i.e., \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\bar{\pi}_{D^{1}})\neq\mathcal{R}(\mathsf{m}\mathcal{M}\mid g)\). In particular, observations of mechanism variables serve primarily to rule out certain rational outcomes by conditioning on decision rules (conditioning on _parameter_ variables tells us nothing, as they have deterministic distributions and no parents).

One advantage of computing predictions in MAIDs (as opposed to in EFGs, for instance) is that we may exploit the conditional independencies in the graph. For example, if we were interested in how likely a worker is to be hard-working given that they went to university and were hired, then \(T\perp_{\mathcal{G}}D^{2}\mid D^{1}\) implies that \(\Pr^{\pi}(h\mid g,j)=\Pr^{\pi}(h\mid g)\) for any policy profile \(\pi\). When answering queries over object-level variables using mechanised MAIDs, we implicitly condition on the values of the mechanism variables to represent the fact that the game and policy under consideration are fixed. For example, the query \(\Pr^{\pi}(h\mid g,j)\) in \(\mathcal{M}\) is given by \(\Pr(h\mid g,j,\mathbf{\mathsf{m}})\) in \(\mathsf{m}\mathcal{M}\), where \(\mathbf{\mathsf{m}}_{\mathbf{D}}=\pi\). Hence, although \(T\not\perp_{\mathsf{m}\mathcal{G}}D^{2}\mid D^{1}\), we do have \(T\perp_{\mathsf{m}\mathcal{G}}D^{2}\mid D^{1},\mathbf{\mathsf{m}}\), as expected.

### Interventions

Interventional queries concern the effect of causal influences from outside a system.
This becomes especially interesting in the case of games, when interventions affect not only the environment but also how the self-interested agents adapt their policies in response. In order to answer such queries, the edges in a MAID must reflect the causal structure of the world. This gives rise to the following definition, which can be viewed simply as a CBN without parameters for the decision variables. Note that as causal games are a form of MAID, they also support the associational queries introduced in the preceding section (just as CBNs may also be used to compute both interventional and associational queries).

**Definition 20**.: A **causal game (CG)** \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\) is a MAID such that for _any_ parameterisation of the decision variable CPDs \(\pi\), the induced model with joint distribution \(\Pr^{\pi}(\mathbf{V})\) is a CBN.

Unlike CBNs, CGs let us ask about the effect of an intervention _before_ or _after_ a policy profile has been selected, which we refer to as _pre-_ and _post-policy_ queries respectively. Asking about the effect of an intervention after a particular policy profile \(\pi\) has been selected (as in query 2a) is simply the same as performing an interventional query on the CBN with joint distribution \(\Pr^{\pi}\). Asking about the effect of an intervention _before_ a policy profile has been selected (as in query 2b) means that agents are made aware of the intervention before selecting their decision rules, and thus they may react to its effects. In other words, the intervention can be viewed as producing a slightly different game that the agents then (knowingly) play.

Our key observation is that pre-policy interventions can be modelled as interventions on the _mechanism variables_ in the mechanised CG, which ensures that the effects are propagated through the processes via which agents select their decision rules. This is because the additional mechanism variables and their outgoing edges in a mechanised CG represent causal (though potentially non-deterministic) processes via which parameterisations for the object-level variables are selected. Post-policy interventions, in turn, can be modelled as standard interventions on object-level variables. We write \(\mathsf{m}\mathcal{M}_{\mathcal{I}}\) for the result of an intervention \(\mathcal{I}\), which may contain both pre-policy and post-policy interventions. This unification of pre- and post-policy interventions is one of the key benefits of mechanised models. Indeed, post-policy interventions, and pre-policy interventions on parameter variables, are defined exactly as in CBNs, while a pre-policy intervention on a decision rule variable \(\Pi_{D}\) corresponds to replacing \(r_{D}:\textit{dom}(\mathbf{Pa}_{\Pi_{D}})\rightarrow\textit{dom}(\Pi_{D})\) by some new \(r_{D}^{*}:\textit{dom}(\mathbf{Pa}_{\Pi_{D}}^{*})\rightarrow\textit{dom}(\Pi_{D})\), where we may have \(\mathbf{Pa}_{\Pi_{D}}^{*}\neq\mathbf{Pa}_{\Pi_{D}}\). As for conditional queries, in our definition (which mirrors that introduced for cyclic causal models [7], but where the cyclic dependencies are governed by relations instead of functions) we quantify over the set of rational outcomes that are consistent with the given intervention.
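The computational difference described above can be made concrete with a small sketch: for a post-policy intervention we hold the already-chosen policies fixed and simply force the intervened value, whereas for a pre-policy intervention we fix the new decision rule and let the other agent re-optimise. The encoding of decision rules as probability pairs, the prior \(p=\frac{1}{2}\), and the simple posterior-based firm response are all illustrative assumptions, and the printed values correspond to single rational outcomes rather than the full answer sets defined next.

```python
p = 0.5  # prior Pr(T = h), as in the worked examples

def worker_eu(worker_rule, firm_rule):
    # worker_rule = (Pr(g|h), Pr(g|not h)); firm_rule = (Pr(j|g), Pr(j|not g)).
    eu = 0.0
    for hard, tp in ((True, p), (False, 1 - p)):
        go_p = worker_rule[0] if hard else worker_rule[1]
        for go, gp in ((True, go_p), (False, 1 - go_p)):
            hire_p = firm_rule[0] if go else firm_rule[1]
            eu += tp * gp * (5 * hire_p - ((1 if hard else 2) if go else 0))
    return eu

def firm_best_response(worker_rule):
    # The firm hires after observing d1 iff hiring has higher expected utility than
    # rejecting under the posterior over T that worker_rule induces.
    response = []
    for go in (True, False):
        pr_obs_h = p * (worker_rule[0] if go else 1 - worker_rule[0])
        pr_obs_l = (1 - p) * (worker_rule[1] if go else 1 - worker_rule[1])
        total = pr_obs_h + pr_obs_l
        post_h = pr_obs_h / total if total > 0 else 0.5
        hire_eu, reject_eu = 3 * post_h - 2 * (1 - post_h), -1 * post_h
        response.append(1.0 if hire_eu >= reject_eu else 0.0)
    return tuple(response)

# Post-policy: do(D1 = g) with the firm's policy held fixed (here: hire iff "g" observed).
# This recovers 7/2, one element of the post-policy answer set.
print(worker_eu((1.0, 1.0), (1.0, 0.0)))                # 3.5

# Pre-policy: do(Pi_D1 = lottery); the firm re-optimises against the lottery rule.
lottery = (0.5, 0.5)
print(worker_eu(lottery, firm_best_response(lottery)))  # 4.25 = 17/4
```

The post-policy value \(\frac{7}{2}\) assumes the firm's equilibrium rule of hiring if and only if it observes \(g\), while the pre-policy value \(\frac{17}{4}\) matches the answer to the lottery-system query derived below.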
**Definition 21**.: Given a mechanised CG \(\mathsf{m}\mathcal{M}\) with rationality relations \(\mathcal{R}\), the **answer to an interventional query** of the probability of \(\mathbf{x}\) given intervention \(\mathcal{I}\) on variables \(\mathbf{Y}\) is given by the set \(\Pr^{\mathcal{R}}(\mathbf{x}_{\mathcal{I}})\coloneqq\big\{\Pr^{\mathbf{\pi}}(\mathbf{x}_{\mathcal{I}})\big\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})}\) where \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})\) is the set of **interventional rational outcomes** in the mechanised MAID \(\mathsf{m}\mathcal{M}_{\mathcal{I}}\) with rationality relations \(\mathcal{R}^{*}\coloneqq\{r_{D}^{*}\}_{\Pi_{D}\in\mathbf{Y}}\cup\{r_{D}\}_{\Pi_{D}\notin\mathbf{Y}}\) and parameters determined by \(\Pr(\mathbf{\Theta}_{\mathcal{I}})\).

Note that if \(\mathcal{I}\) is fully post-policy, then the rational outcomes remain the same, i.e., \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})=\mathcal{R}(\mathsf{m}\mathcal{M})\) when \(\mathbf{Y}\subseteq\mathbf{V}\). To illustrate these ideas, let us return to Example 1 and queries 2(a) and 2(b). Query 2(a) concerns a _post-policy_ intervention since the worker is forced to go to university unbeknownst to the firm; in other words, the firm does not observe this intervention before selecting their policy. To compute the worker's expected utility, we must calculate \(\Pr^{\mathcal{R}}(u_{g}^{1})\) and thus perform a hard intervention \(\text{do}(D^{1}=g)\) in the mechanised game (shown in Figure 6(a)). As the set of rational outcomes does not change under a post-policy intervention, we have that \(\Pr^{\mathcal{R}}(u_{g}^{1})=\big\{\Pr^{\mathbf{\pi}}(u_{g}^{1})\big\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M})}\), which results in \(\big\{\mathbb{E}_{\mathbf{\pi}}[U_{g}^{1}]\big\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M})}=\{-\frac{3}{2},\frac{7}{2}\}\). Note that unlike query 1(a), the fact that \(D^{1}=g\) tells us nothing about the value of \(T\), as it is causally upstream of the intervention on \(D^{1}\). Therefore, the wellbeing of the worker may decrease because they may be sent to university even when they are lazy. To answer query 2(b), concerning a _pre-policy_ intervention, we must compute \(\Pr^{\mathcal{R}}(u_{\hat{\pi}_{D^{1}}}^{1})\) where \(\hat{\pi}_{D^{1}}(g\mid h)=\hat{\pi}_{D^{1}}(g\mid\neg h)=\frac{1}{2}\) represents the aforementioned lottery system, which selects students to attend university randomly with probability \(\frac{1}{2}\). This time, the firm observes the intervention before they decide on their policy and so, under such an intervention, denoted by \(\mathcal{I}\) and shown in Figure 6(b), the new set of rational outcomes is given by \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})=\{(\hat{\pi}_{D^{1}},\pi_{D^{2}}):\pi_{D^{2}}\in r_{D^{2}}(\text{\bf pa}_{\Pi_{D^{2}}})\}\). In other words, we set \(\Pi_{D^{1}}=\hat{\pi}_{D^{1}}\) using a hard intervention and then allow the firm to best respond to this decision rule using \(r_{D^{2}}\). Note that \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})\neq\mathcal{R}(\mathsf{m}\mathcal{M}\mid\hat{\pi}_{D^{1}})=\varnothing\), as there is no NE in the game that contains decision rule \(\hat{\pi}_{D^{1}}\).
The lottery system removes any signalling effect of going to university, resulting in an optimal policy for the firm of always offering a job to the worker, and expected utility \(\big\{\mathbb{E}_{\mathbf{\pi}}[U_{\hat{\pi}_{D^{1}}}^{1}]\big\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})}=\{\frac{17}{4}\}\).

**Remark 2**.: _In previous work, soft interventions on an object-level variable \(V\) have been modelled as hard interventions on its mechanism variable \(\mathsf{M}_{V}\) [17]. While these can be viewed as essentially equivalent in the single-agent case,14 possible dependencies between mechanism variables in mechanised games mean that these two types of intervention may have markedly different effects in the multi-agent setting. The difference between pre- and post-policy interventions is thus due to a difference in the _information_ that is available to agents when they make their decisions, rather than to the chronology of play in a game (as is also the case for the structure of EFGs)._

Figure 6: (a) A mechanised game showing the hard post-policy intervention \(\text{do}(D^{1}=g)\), where incoming edges to \(D^{1}\) are severed. (b) A mechanised game showing the hard pre-policy intervention \(\text{do}(\Pi_{D^{1}}=\hat{\pi}^{1})\), where incoming edges to \(\Pi_{D^{1}}\) are severed.

### Counterfactuals

The final type of question we investigate arises when we combine predictions and interventions, as in query 3a: 'Given that the worker didn't go to university, what would be their wellbeing if they had?'. Such questions are _counterfactual_, as they combine observations made in the actual world (in which the worker didn't go to university) with questions pertaining to a counterfactual world (where they did go to university). Answering these queries in games is significantly more nuanced and complex than answering those of the preceding subsections. To do so, we must first consign all stochasticity to a set of exogenous variables, one for each variable in the causal game. Just as in an SCM, each variable \(V\) is thus associated with an exogenous variable \(\mathsf{E}_{V}\), and is governed by a deterministic CPD \(\Pr^{\mathbf{\pi}}(V\mid\mathbf{Pa}_{V})\), where \(\mathsf{E}_{V}\in\mathbf{Pa}_{V}\).

**Definition 22**.: A (Markovian) **structural causal game (SCG)** \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\) is a causal game over exogenous and endogenous variables \(\mathsf{E}\cup\mathbf{V}\) such that for _any_ (deterministic) parameterisation of the decision variable CPDs \(\hat{\mathbf{\pi}}\), the induced model with joint distribution \(\Pr^{\hat{\mathbf{\pi}}}(\mathbf{V},\mathsf{E})\) is an SCM.

An SCG can be seen as an SCM without parameters for the decision variables. Given a policy \(\mathbf{\pi}\), we recover an SCM, as we explain in more detail below. Meanwhile, mechanised SCGs can be viewed as (a special case of) cyclic causal models [7], if we were to generalise such models to use _many-valued_ structural functions. When we mechanise SCGs, although we introduce mechanism variables for the exogenous variables (as can be seen in Figure 6(b)), we view them and their object-level exogenous children as beyond the realm of observation and intervention, just as in SCMs. As such, mechanism variables of exogenous variables can largely be ignored. As in SCMs, interventional distributions \(\mathcal{I}(V\mid\mathbf{Pa}_{V}^{*})\) must be deterministic, and soft interventions may be defined by introducing a new exogenous variable \(\mathsf{E}_{V}^{*}\) to \(\mathbf{Pa}_{V}^{*}\) [12].
With these caveats in place, both pre-policy and post-policy predictions and interventions may be defined in this model as described in Sections 4.1 and 4.2 respectively. While computing predictions and interventions in SCGs is therefore relatively straightforward, there are two main difficulties that arise when computing counterfactuals. The first is the choice of how to represent stochastic decision rules using structural functions and exogenous variables, and the second is the problem of updating our beliefs about the policy profile played in the counterfactual world given our evidence about the policy profile played in the actual world. We resolve each of these difficulties in turn.

Figure 7: (a) A (Markovian) SCG representing Example 1. Note that we have included exogenous variables for \(U^{1}\) and \(U^{2}\), although as neither is stochastic, this is not strictly necessary. (b) The \(\mathcal{R}^{\text{BR}}\)-relevance graph of this game.

#### Decision Rules as Structural Functions

We assume that agents play the same kind of game regardless of the level in the causal hierarchy at which we model them. In these games, (behavioural) play equates to selecting decision rules which stochastically sample a decision conditional on the value of some (non-exogenous) parents. In _structural_ causal games, we represent these decision rules using structural functions and exogenous variables. One proposal would therefore be to view each agent as choosing both a 'structural decision rule' \(\dot{\pi}_{D}(D\mid\mathbf{Pa}_{D})\) and a distribution \(\Pr(\mathsf{E}_{D})\), with a shared mechanism parent \(\Pi_{D}\) for both \(D\) and \(\mathsf{E}_{D}\). This, however, leads to a different type signature for decision rules, and moreover leads to a formalism in which (pre-policy) interventions can be made upstream of stochastic variables, which are ruled out in SCMs. We therefore propose an equivalent formulation in which each agent controls only their decision variables and not their exogenous parents. Unfortunately, while our assumptions about the rationality of the agents tell us what CPDs are assigned to their decision variables, they are insufficient for telling us what precise deterministic mechanisms the agents use to implement these CPDs (as a function of some stochastic exogenous variable). In fact, unless we choose to explicitly restrict the form of the mechanism, such as by stipulating that it belongs to some parametric class, there will typically be _infinitely_ many deterministic functions that induce a particular distribution over a decision variable [5]. Without specifying such functions, it will not (in general) be possible to answer counterfactual queries in games, and yet the precise form of these functions may impact the answers to these queries [17]. In essence, the choice of how to represent a decision rule \(\pi_{D}\in\Delta(D\mid\mathbf{Pa}_{D}\setminus\{\mathsf{E}_{D}\})\) using a stochastic exogenous variable \(\mathsf{E}_{D}\) and a deterministic mechanism \(\dot{\pi}_{D}\in\Delta(D\mid\mathbf{Pa}_{D})\) is the choice of what part of the decision rule we assume remains fixed across counterfactual worlds (\(\mathsf{E}_{D}\)) and what part may vary (\(\dot{\pi}_{D}\)).
Assuming that we have no pre-existing knowledge about this representation, we propose, in order to stay true to the spirit of behavioural policies, to view each agent's randomisation as independent between both:

* _Decision rules_, in the sense that learning about an agent's random choice under one decision rule \(\pi_{D}\) is uninformative in settings where the agent is using a different decision rule \(\pi_{D}^{\prime}\);
* _Decision contexts_, in the sense that an agent's decision rule \(\pi_{D}\) can naturally be interpreted as independently sampling an action \(d\) _after_ seeing an assignment \(\overline{\mathbf{pa}}_{D}\) of the non-exogenous parents \(\overline{\mathbf{Pa}}_{D}\coloneqq\mathbf{Pa}_{D}\setminus\{\mathsf{E}_{D}\}\).

We formalise this assumption by representing each exogenous variable \(\mathsf{E}_{D}\) as a set containing a variable \(\mathsf{E}_{D}^{\pi_{D},\overline{\mathbf{pa}}_{D}}\) for each \(\pi_{D}\in\Delta(D\mid\overline{\mathbf{Pa}}_{D})\) and \(\overline{\mathbf{pa}}_{D}\in\textit{dom}(\overline{\mathbf{Pa}}_{D})\), where \(\textit{dom}(\mathsf{E}_{D}^{\pi_{D},\overline{\mathbf{pa}}_{D}})=\textit{dom}(D)\) (i.e., \(\mathsf{E}_{D}\) is a random field with independently distributed elements). Given a stochastic decision rule \(\pi_{D}(D\mid\overline{\mathbf{Pa}}_{D})\), we may then define a canonical structural representation by setting:

\[\Pr(\mathsf{E}_{D}^{\pi_{D},\overline{\mathbf{pa}}_{D}}=d)\coloneqq\pi_{D}(d\mid\overline{\mathbf{pa}}_{D}),\tag{3}\]
\[\dot{\pi}_{D}(D=d\mid\overline{\mathbf{pa}}_{D},\mathsf{e}_{D})\coloneqq\delta(D,\mathsf{e}_{D}^{\pi_{D},\overline{\mathbf{pa}}_{D}}),\]

where note that \(\dot{\pi}_{D}\) is effectively parameterised by \(\pi_{D}\), i.e., we have \(\dot{\pi}_{D}(D\mid\overline{\mathbf{Pa}}_{D},\mathsf{E}_{D};\pi_{D})\). The joint distribution over \(\mathsf{E}_{D}\) is simply the _product of probability distributions_ over \(\mathsf{E}_{D}^{\pi_{D},\overline{\mathbf{pa}}_{D}}\) [6]. Proposition 4 below then follows immediately,15 and means that we may continue to interpret decision rules, policy profiles, and rationality relations as we do in MAIDs and CGs, where each agent plays some \(\pi_{D}\in r_{D}(\mathbf{pa}_{\Pi_{D}})\subseteq\Delta(D\mid\overline{\mathbf{Pa}}_{D})\) and each \(\pi_{D}\) parameterises \(\dot{\pi}_{D}\). Moreover, this additional structure allows us to generalise the definition of counterfactuals in SCMs to counterfactuals in games.

Footnote 15: Though note that there are many constructions that would result in these properties; we merely present one particularly simple example.

**Proposition 4**.: _For distributions over \(\mathsf{E}_{D}\) and \(D\) as governed by equations (3), there is a one-to-one correspondence between the set of stochastic decision rules \(\Delta(D\mid\overline{\mathbf{Pa}}_{D})\) and the set of deterministic decision rules \(\text{dom}(\Pi_{D})\subset\Delta(D\mid\mathbf{Pa}_{D})\). Moreover, given two such corresponding decision rules \(\pi_{D}\) and \(\dot{\pi}_{D}\), we have \(\int_{\text{dom}(\mathsf{E}_{D})}\dot{\pi}_{D}(d\mid\overline{\mathbf{pa}}_{D},\mathsf{e}_{D})\Pr(\mathsf{e}_{D})\,d\mathsf{e}_{D}=\pi_{D}(d\mid\overline{\mathbf{pa}}_{D})\)._
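The canonical representation in equations (3) can be made concrete with a short sketch. The Python code below is a minimal illustration under the assumptions of this subsection (it is not an implementation from the paper): it stores one exogenous sample for every (decision rule, decision context) pair, so that the structural decision rule is deterministic given the exogenous value, while marginalising over the exogenous variable recovers the original stochastic rule.

```python
import random

def sample_exogenous(decision_rules, contexts, rng=None):
    """Draw e_D: one realisation of E_D^{(pi_D, pa_D)} for every rule/context pair.
    Each decision rule maps a context to a distribution {action: probability}."""
    rng = rng or random.Random()
    e_D = {}
    for name, rule in decision_rules.items():
        for ctx in contexts:
            actions, probs = zip(*rule[ctx].items())
            e_D[(name, ctx)] = rng.choices(actions, weights=probs)[0]
    return e_D

def structural_decision_rule(rule_name, ctx, e_D):
    """The deterministic mechanism of equations (3): read off the stored sample."""
    return e_D[(rule_name, ctx)]

# Two hypothetical decision rules over actions {"g", "ng"} and contexts {"h", "nh"}.
rules = {
    "always_g": {"h": {"g": 1.0}, "nh": {"g": 1.0}},
    "lottery":  {"h": {"g": 0.5, "ng": 0.5}, "nh": {"g": 0.5, "ng": 0.5}},
}

# Marginalising the deterministic rule over E_D recovers the stochastic rule:
# repeated sampling of E_D reproduces the lottery's 50/50 behaviour (cf. Proposition 4).
draws = [structural_decision_rule("lottery", "h",
                                  sample_exogenous(rules, ["h", "nh"], random.Random(i)))
         for i in range(2000)]
print(draws.count("g") / len(draws))  # close to 0.5
```

Sampling independently for each rule and each context is exactly the independence across decision rules and decision contexts assumed in the two bullet points above.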
#### Counterfactual Rational Outcomes

The second difficulty of answering counterfactual queries in games arises due to the possible existence of multiple rational outcomes. If we have evidence that the equilibrium \(\boldsymbol{\pi}\) was played in the actual world, how and to what extent should that inform us of the equilibrium \(\boldsymbol{\pi}^{\prime}\) played in the counterfactual world, where the values of some mechanism variables may have changed? To answer this question, we begin by introducing our approach to answering counterfactual queries in SCGs, which mirrors Pearl's approach for SCMs (described in Section 2.1). That is, we condition, intervene, and then compute the resulting distribution, though we generalise this to observations and interventions on both object-level and mechanism variables. In particular, we compute the set \(\Pr^{\mathcal{R}}(\boldsymbol{x}_{\mathcal{I}}\mid\boldsymbol{z})\) as follows:

1. For every _actual_ rational outcome \(\boldsymbol{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\boldsymbol{z})\), update \(\Pr(\mathsf{e})\) to \(\Pr^{\boldsymbol{\pi}}(\mathsf{e}\mid\boldsymbol{z})\) ('abduction');
2. Apply the intervention \(\mathcal{I}\), on variables \(\boldsymbol{Y}\), recomputing any rational responses to form \(\boldsymbol{\pi}^{\prime}\) and adding new exogenous variables \(\mathsf{E}^{*}\) to form \(\mathsf{E}^{\prime}=\mathsf{E}\cup\mathsf{E}^{*}\) where required ('action');
3. Return each marginal distribution \(\int_{\text{dom}(\mathsf{E}^{\prime})}\Pr^{\boldsymbol{\pi}^{\prime}}(\boldsymbol{x}_{\mathcal{I}}\mid\mathsf{e}^{\prime})\Pr(\mathsf{e}^{\prime})\,d\mathsf{e}^{\prime}\) in this modified model for each new _counterfactual_ rational outcome \(\boldsymbol{\pi}^{\prime}\) ('prediction').

In the first step, we update our beliefs about the exogenous and decision rule variables in the actual world under \(\boldsymbol{\pi}\); in the second, we apply an intervention \(\mathcal{I}\) to form the counterfactual world (and recompute the rational outcomes based on this change); and in the third, we return the new set of distributions that are consistent with our beliefs from the first step and the results of the intervention made in the second step. By reading these steps carefully, one notices a difficulty in SCGs that does not arise in an SCM: when we condition on \(\boldsymbol{z}\) in the first step we obtain a set of policies \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\boldsymbol{z})\) that are rational in the _actual_ world. Then in step two, we compute new rational responses \(\boldsymbol{\pi}^{\prime}\), and it is \(\boldsymbol{\pi}^{\prime}\) rather than \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\boldsymbol{z})\) that features in the _counterfactual_ world of the final step. This raises the question of the extent to which knowledge of the rational policies \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\boldsymbol{z})\) should be used to compute \(\boldsymbol{\pi}^{\prime}\). Let us first note that if \(\mathcal{I}\) is a _post_-policy intervention, then this has no impact on the rational outcomes of the counterfactual world - they are simply the same as the actual world, given by \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\boldsymbol{z})\) - and hence there is no difficulty. The issue only arises in _pre_-policy counterfactuals, such as query 3b ('Given that the worker never decides to go to university, what would be their wellbeing if they always decided to go to university?'), where the intervention (in query 3b, on the worker's decision rule) means that the set of rational outcomes in the counterfactual world will be different from those in the actual world.
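The abduction-action-prediction loop just described can be summarised schematically. In the sketch below, `actual_outcomes`, `abduce`, `intervene`, `solve_counterfactual_game`, and `predict` are hypothetical placeholders rather than functions from the paper or any library, and the `invariant` argument anticipates the question, taken up next, of which decision rules are held fixed between the actual and counterfactual worlds.

```python
def counterfactual_query(game, evidence, intervention, actual_outcomes, abduce,
                         intervene, solve_counterfactual_game, predict, invariant):
    """Schematic three-step procedure for Pr^R(x_I | z) in an SCG."""
    answers = []
    for pi in actual_outcomes(game, evidence):            # rational outcomes R(mM | z)
        posterior_e = abduce(game, pi, evidence)           # step 1: Pr(e) -> Pr^pi(e | z)
        cf_game = intervene(game, intervention)            # step 2: apply I ...
        for pi_cf in solve_counterfactual_game(cf_game):   # ... and recompute rational responses
            if invariant(pi, pi_cf, intervention):         # keep pairs agreeing on Pi(I)
                answers.append(predict(cf_game, pi_cf, posterior_e))  # step 3: marginalise over E'
    return answers
```

Under the simplicity principle discussed below, `invariant` accepts every pair when the intervention is pre-policy; under the closest possible world principle it instead checks agreement on as many decision rules as possible.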
Because each policy profile \(\mathbf{\pi}\) is made up of decision rules \(\pi_{D}\), we can formalise this question by asking which decision rule variables \(\mathbf{\Pi}(\mathcal{I})\subseteq\mathbf{\Pi}\) are _invariant_ to the intervention \(\mathcal{I}\). Written in terms of the three-step process above, we must have \(\mathbf{\pi}(\mathcal{I})=\mathbf{\pi}^{\prime}(\mathcal{I})\), i.e., for any invariant decision rule variable \(\Pi\in\mathbf{\Pi}(\mathcal{I})\) then the counterfactual decision rule \(\pi^{\prime}_{D}\) is equal to the actual decision rule \(\pi_{D}\). Those that are not invariant must have their values recomputed in the new counterfactual model. For instance, as argued above, when \(\mathcal{I}\) is post-policy, \(\mathbf{\Pi}(\mathcal{I})=\mathbf{\Pi}\). That is, none of the values of the decision rule variables need to be recomputed. How should we choose \(\mathbf{\Pi}(\mathcal{I})\) when \(\mathcal{I}\) is not (fully) post-policy? There are multiple principles that could be invoked in order to make this choice. The simplest - let us call it the _simplicity principle_ - is to recompute the values of _all_ decision rule variables, i.e., \(\mathbf{\Pi}(\mathcal{I})=\varnothing\). In other words, the intervention means that 'all bets are off' after the intervention is made, and the actual rational outcomes \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\) have no bearing on the counterfactual rational outcomes. Under this principle, computing pre-policy counterfactuals is reminiscent of the approach taken in existing work on cyclic causal models [7], where two sets of solutions are induced by two halves of a twin graph. The problem with this principle is that it may require us to ignore information gathered from our observation \(\mathbf{z}\). For example, if \(\mathbf{Z}=\mathbf{z}\) implies that \(\Pi_{D}=\pi_{D}\) in the actual world and \(\mathcal{I}\) is _causally downstream_ of \(\Pi_{D}\) (and hence can have no effect on the value of \(\Pi_{D}\)), then it appears we know that \(\Pi_{D}=\pi_{D}\) in the counterfactual world, i.e., \(\Pi_{D}\in\mathbf{\Pi}(\mathcal{I})\)? To solve this problem, we can instead invoke the _closest possible world principle_, where we retain as much information as possible from our knowledge of the rational policies \(\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\). While the values of some decision rules may still need to be recomputed, by keeping \(\mathbf{\Pi}(\mathcal{I})\) as large as possible, we avoid the need for re-solving the entire game, and can provide a more accurate answer to counterfactual queries. The process of computing \(\mathbf{\Pi}(\mathcal{I})\) under this principle is slightly more complicated, however, as it involves propagating the effects of an intervention through models that contain both cycles and non-determinism. In the remainder of the paper we therefore employ the simplicity principle above, which is also more in keeping with prior work.16 An algorithm for computing \(\mathbf{\Pi}(\mathcal{I})\) according to the closest possible world principle is, however, provided in Appendix B.1 for reference. Footnote 16: In practice, for queries 3a and 3b, it happens to be the case that both principles lead to the same answer. #### Defining Counterfactuals in Games Given a set of invariant decision rule variables \(\mathbf{\Pi}(\mathcal{I})\), the answer produced by the three-step process given above can be written as follows. 
**Definition 23**.: Given a mechanised SCG \(\mathsf{m}\mathcal{M}\) with rationality relations \(\mathcal{R}\), the **answer to a counterfactual query** of the probability of \(\mathbf{x}\) given observation \(\mathbf{z}\) and intervention \(\mathcal{I}\) on variables \(\mathbf{Y}\) is given by the set:

\[\Pr^{\mathcal{R}}(\mathbf{x}_{\mathcal{I}}\mid\mathbf{z})\coloneqq\left\{\int_{\textit{dom}(\mathbf{\mathsf{E}}^{\prime})}\Pr^{\mathbf{\pi}^{\prime}}(\mathbf{x}_{\mathcal{I}}\mid\mathbf{\mathsf{e}},\mathbf{\mathsf{e}}^{*})\Pr(\mathbf{\mathsf{e}}^{*})\Pr^{\mathbf{\pi}}(\mathbf{\mathsf{e}}\mid\mathbf{z})\,d\mathbf{\mathsf{e}}^{\prime}\right\}_{(\mathbf{\pi},\mathbf{\pi}^{\prime})\in\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}}|\mathbf{z})},\]

where \(\mathbf{\mathsf{E}}^{*}=\mathbf{\mathsf{E}}^{\prime}\setminus\mathbf{\mathsf{E}}\) are any newly added exogenous variables as a result of a soft intervention,

\[\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}}\mid\mathbf{z})\coloneqq\left\{(\mathbf{\pi},\mathbf{\pi}^{\prime})\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\times\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}}):\mathbf{\pi}(\mathcal{I})=\mathbf{\pi}^{\prime}(\mathcal{I})\right\}\]

is the set of **actual-counterfactual rational outcomes**, and \(\mathbf{\Pi}(\mathcal{I})\) is the set of **invariant decision rule variables**.

Note that if \(\mathcal{I}\) is fully post-policy, then the rational outcomes remain the same, i.e., \(\mathbf{\Pi}(\mathcal{I})=\mathbf{\Pi}\) and \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}}\mid\mathbf{z})=\left\{(\mathbf{\pi},\mathbf{\pi}):\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\right\}\) when \(\mathbf{Y}\subseteq\mathbf{V}\). In the definition of \(\Pr^{\mathcal{R}}(\mathbf{x}_{\mathcal{I}}\mid\mathbf{z})\), for every actual policy \(\mathbf{\pi}\) and counterfactual policy \(\mathbf{\pi}^{\prime}\), we compute the product of three quantities. \(\Pr^{\mathbf{\pi}}(\mathbf{\mathsf{e}}\mid\mathbf{z})\) is the updated distribution over the exogenous variables under \(\mathbf{\pi}\), and corresponds to the first step. If \(\mathcal{I}\) is a soft intervention, then we add further variables \(\mathbf{\mathsf{E}}^{*}=\mathbf{\mathsf{E}}^{\prime}\setminus\mathbf{\mathsf{E}}\) to capture the stochasticity of \(\mathcal{I}\), which leads to the term \(\Pr(\mathbf{\mathsf{e}}^{*})\).17 We then condition on both \(\mathbf{\mathsf{e}}\) and \(\mathbf{\mathsf{e}}^{*}\) and compute the value of \(\mathbf{x}_{\mathcal{I}}\) under \(\mathbf{\pi}^{\prime}\), hence the term \(\Pr^{\mathbf{\pi}^{\prime}}(\mathbf{x}_{\mathcal{I}}\mid\mathbf{\mathsf{e}},\mathbf{\mathsf{e}}^{*})\). Finally, in the third step, we marginalise over all exogenous variables \(\mathbf{\mathsf{E}}^{\prime}\). The set \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}}\mid\mathbf{z})\) defines the pairs of policies that we must consider. Namely, actual rational outcomes \(\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\mathbf{z})\) and counterfactual rational outcomes \(\mathbf{\pi}^{\prime}\in\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})\) such that the decision rules invariant to \(\mathcal{I}\), denoted \(\mathbf{\Pi}(\mathcal{I})\), remain the same: \(\mathbf{\pi}(\mathcal{I})=\mathbf{\pi}^{\prime}(\mathcal{I})\).

Footnote 17: Note that the distribution over these 'fresh' exogenous variables does not depend on the policy, actual or counterfactual.

We briefly demonstrate the three-step process above by returning to queries 3(a) and 3(b). To answer 3(a) we must compute \(\Pr^{\mathcal{R}}(u_{g}^{1}\mid\neg g)\).
First, we note that as this involves a post-policy intervention, we only need to consider the _actual_ rational outcomes, as \(\mathbf{\Pi}(\mathcal{I})=\mathbf{\Pi}\). Thus, for each \(\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\neg g)\) we begin by updating \(\Pr(\mathbf{\mathsf{e}})\) to \(\Pr^{\mathbf{\pi}}(\mathbf{\mathsf{e}}\mid\neg g)\). In this case, such an update amounts to changing \(\Pr^{\mathbf{\pi}}(\mathbf{\mathsf{e}}_{T},\mathbf{\mathsf{e}}_{D^{1}},\mathbf{\mathsf{e}}_{D^{2}})\) to \(\Pr^{\mathbf{\pi}}(\mathbf{\mathsf{e}}_{T},\mathbf{\mathsf{e}}_{D^{1}},\mathbf{\mathsf{e}}_{D^{2}}\mid\neg g)\), where note that we independently update each \(\mathsf{E}_{D}^{\pi_{D},\overline{\mathbf{pa}}_{D}}\) for every such \(\mathbf{\pi}\) and endogenous decision context \(\overline{\mathbf{pa}}_{D}\). Following this, we apply the intervention \(\mathrm{do}(D^{1}=g)\). The final answer to the query is therefore rather simple in this case, and is given by \(\Pr^{\mathcal{R}}(u_{g}^{1}\mid\neg g)=\big\{\Pr^{\mathbf{\pi}}(u_{g}^{1}\mid\neg g)\big\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\neg g)}\). We thus have \(\big\{\mathbb{E}_{\mathbf{\pi}}[U_{g}^{1}\mid\neg g]\big\}_{\mathbf{\pi}\in\mathcal{R}(\mathsf{m}\mathcal{M}\mid\neg g)}=\{\frac{\mathcal{I}}{2}\}\). Query 3(b) involves a hard pre-policy intervention \(\mathcal{I}\) that sets \(\Pi_{D^{1}}\) to \(\bar{\pi}_{D^{1}}\). As \(\mathbf{\Pi}(\mathcal{I})=\varnothing\), the answers to the query are given under the interventional outcomes, i.e., \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}}\mid\bar{\pi}_{D^{1}})=\big\{(\mathbf{\pi}^{\prime},\mathbf{\pi}^{\prime}):\mathbf{\pi}^{\prime}\in\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})\big\}\). In this particular game, \(\mathcal{R}(\mathsf{m}\mathcal{M}_{\mathcal{I}})=\mathcal{R}(\mathsf{m}\mathcal{M}\mid\bar{\pi}_{D^{1}})\) and so the answer is the same as the answer to query 1(b).

**Remark 3**.: _The reasons for choosing between a CG or an SCG to model a given problem are analogous to the respective reasons for choosing a CBN or an SCM. Using the latter, one can reason about counterfactuals, path-specific effects, and questions of identifiability [4]. The former, however, is a simpler formalism, and requires less knowledge about the precise functions holding between the variables, making CGs a more attractive choice when one does not require any of the features listed above._

## 5 Solution Concepts and Subgames

In Section 3, we explained how a set of rationality relations can be used to capture the process by which agents choose their decision rules, and thus which mechanisms agents need to consider when doing so. In this section, we build on these ideas in three subsections. Firstly, we detail the distinction between mixed and behavioural policies and their relation to NEs in MAIDs. Secondly, we introduce the concept of _subgames_ within MAIDs, which, analogously to their EFG counterparts, allow us to analyse and solve parts of the game independently. Finally, we introduce several _equilibrium refinements_ for MAIDs, which are discussed in relation to their EFG counterparts in Section 6.2. With these contributions, we aim to place causal games on a more equal footing with EFGs as a tool for game-theoretic analysis. Note that as (S)CGs are refinements of MAIDs, the results in this section also apply to these models.
Concepts in this section will be explained with the help of the following example, shown in Figure 8(a).

**Example 2** (Warehouse Robots).: _Two robots are working together in a warehouse. Robot one is responsible for fulfilling orders; it can decide to move quickly or slowly. It will not break anything if it moves slowly, but might break something if it moves quickly. Robot two is responsible for keeping the warehouse tidy and so must decide whether to patrol or not, but it can only observe what robot one does. If it patrols, robot two can repair broken items, but by doing so it might obstruct robot one and prevent it from completing its order. Robot one is rewarded for fulfilling orders, the quicker the better, and penalised for breaking things. Robot two is rewarded for everything in the warehouse ending up in a state of repair, but incurs a small energy cost for patrolling._

To parameterise this game, let us suppose that robot one breaks something if it moves quickly (\(D^{1}=q\)) with probability \(\frac{1}{3}\) but will not break anything otherwise, and that robot two obstructs robot one with probability \(\frac{1}{2}\) if it patrols (\(D^{2}=p\)) and probability zero otherwise. Finally, we define the utility functions such that robot one receives a reward of \(5\) or \(2\) for completing an order quickly or slowly respectively, but it also incurs a penalty of \(-3\) for breakages. Robot two receives a reward of \(6\) for everything being in a state of repair, but incurs a penalty of \(-1\) for patrolling. Given this parameterisation, we can easily calculate the expected payoff of each agent for the four possible action combinations. For reference, we show these in Figure 8(c) using an EFG that only bifurcates on the two robots' decisions; the chance variable \(B\) has been marginalised out to create a reduced EFG (as detailed in Section 6.1).

### Nash Equilibria

A solution concept aims to identify a subset of the possible outcomes of a game that may occur if agents act rationally. In non-cooperative games, the most fundamental solution concept is a Nash equilibrium (NE) [64], a policy profile such that no agent may benefit by unilaterally deviating. In Example 2, for instance, the policy profile \(\mathbf{\pi}^{\text{NE}}\) in which robot one chooses \(D_{1}=q\) and robot two chooses \(D_{2}=p\) whatever the value of \(D_{1}\) is an NE. Previous work introduced this concept to MAIDs, as recalled in Definition 11 [48], but did not fully characterise when an NE is guaranteed to exist in a MAID. This existence depends on which class of policies agents are permitted to choose from. So far in this paper, we have viewed agents as employing _behavioural policies_, where each agent selects decision rules for each of their decisions independently. In contrast, a _mixed policy_ allows an agent to coordinate their choice of decision rules at different decisions; it is a distribution over pure policies. In what follows, we use a dot (as in \(\dot{\pi}\)) to denote the determinism of pure policies and \(\mu\) to denote mixed policies.

**Definition 24**.: Let \(\text{\it dom}(\dot{\Pi}_{D})\) be the set of all possible pure decision rules for \(D\), and recall that we write \(\text{\it dom}(\mathbf{V})=\bigtimes_{V\in\mathbf{V}}\text{\it dom}(V)\).
A **mixed policy** for agent \(i\) is some \(\mu^{i}\in\Delta(\dot{\mathbf{\Pi}}_{\mathbf{D}^{i}})\), a **behavioural policy** is some \(\mathbf{\pi}^{i}\in\text{\it dom}(\mathbf{\Pi}_{\mathbf{D}^{i}})\), and a **pure policy** is some \(\dot{\mathbf{\pi}}^{i}\in\text{\it dom}(\dot{\mathbf{\Pi}}_{\mathbf{D}^{i}})\).

Figure 8: (a) A MAID \(\mathcal{M}=(\mathcal{G},\theta)\) representing Example 2 and its \(s\)-relevance graph. (b) The smallest proper \(s\)-subdiagram \(\mathcal{G}^{\prime}\) of \(\mathcal{G}\) and its \(s\)-relevance graph. (c) A reduced EFG where each of the proper subgames corresponds to the two possible \(s\)-subgames for \(\mathcal{G}^{\prime}\), where \(D^{1}=q\) or \(D^{1}=\neg q\) respectively.

**Proposition 5**.: _A (behavioural policy) NE is not guaranteed to exist in a MAID._

Nash's theorem establishes that any finite game is guaranteed to have an NE, as long as all agents are allowed to choose mixed policies [64]. However, in general, there is no such guarantee when agents are only permitted to use behavioural policies (for an example, see Appendix B.2). Despite this negative result, behavioural policies are often regarded as more 'natural' due to their representation of agents randomising at each decision point instead of once at the beginning of the game [68]. Moreover, behavioural policies respect the conditional independencies of the MAID's graph. As such, an interesting question is when an NE in behavioural policies is guaranteed to exist. As a first step, it is relatively straightforward to prove an analogue of Kuhn's theorem: if all agents in the game have perfect recall (i.e., agents never forget their past moves nor any of the information they knew previously, as introduced in Definition 13), then an NE in behavioural policies is guaranteed to exist [51].

**Lemma 2**.: _Let \(\mathbf{\pi}^{-i}\in\Delta(\mathbf{D}^{-i}\mid\mathbf{Pa}_{\mathbf{D}^{-i}})\) be a partial (behavioural or mixed) policy profile for agents \(N\setminus\{i\}\) in a MAID \(\mathcal{M}\). If agent \(i\) has perfect recall in \(\mathcal{M}\), then for every mixed policy \(\mu^{i}\) there exists a behavioural policy \(\mathbf{\pi}^{i}\) such that \(\Pr^{(\mu^{i},\mathbf{\pi}^{-i})}(\mathbf{v})=\Pr^{(\mathbf{\pi}^{i},\mathbf{\pi}^{-i})}(\mathbf{v})\)._

**Proposition 6**.: _In any MAID \(\mathcal{M}\) with perfect recall, there exists a (behavioural) policy profile \(\mathbf{\pi}\) that is an NE._

Going further, because MAIDs reveal conditional independencies between variables, the weaker criterion of _sufficient recall_ suffices for the existence of an NE in behavioural policies. K&M implicitly prove this result - included as Proposition 2 - when proving that their algorithm for finding an NE always converges under certain conditions (as these correspond with sufficient recall) [48]. Appendix B.2 provides an example of a MAID in which all agents have sufficient but imperfect recall. In this example, there exists a mixed policy profile for which no behavioural policy profile results in the same distribution over outcomes. There is still, however, an NE in behavioural policies. On the other hand, a MAID without sufficient recall may _not_ have an NE in behavioural policies (an example is again given in Appendix B.2).

**Proposition 7**.: _If an agent \(i\) in a MAID \(\mathcal{M}\) has perfect recall, then they also have sufficient recall.
However, if an agent has sufficient recall, then they do not always have perfect recall._

### Subgames

Subgames in EFGs represent parts of the game that can be solved independently from the rest; we now introduce the analogous notion of \(\mathcal{R}\)_-subgames_ in MAIDs. These subgames have three uses: they allow us to introduce further equilibrium refinements (in Section 5.3); they can reduce the cost of computing equilibria [39]; and they allow us to analyse agents' decision-making in isolation from other parts of the game, which may be useful for other forms of analysis (such as those discussed in Section 7). In Appendix C.2, we demonstrate this second fact by empirically showing how subgames in MAIDs can be used to compute NEs much more efficiently than in EFGs. However, there are two key differences between subgames in EFGs and those in MAIDs. Firstly, because MAIDs explicitly represent conditional independencies between variables, we can often find more subgames in a MAID than in its corresponding EFG. Secondly, because the ability to solve part of a game independently varies with the solution concept, the \(\mathcal{R}\)-subgames vary with \(\mathcal{R}\). Given a set of graphical criteria, \(\mathcal{R}\)-reachability, such as those in Section 3.2, one can identify the structure of any \(\mathcal{R}\)-subgame - which we refer to as an \(\mathcal{R}\)-subdiagram - using only the underlying graph.

**Definition 25**.: Given a mechanised MAID \(\mathsf{m}\mathcal{M}=(\mathsf{m}\mathcal{G},\mathbf{\theta},\mathcal{R})\) and a set of sound and complete graphical criteria for \(\mathcal{R}\)-relevance - i.e., \(\mathcal{R}\)-reachability - we refer to the subgraph \((\mathbf{V}^{\prime},\mathscr{E}^{\prime})\) of \(\mathcal{G}\), along with the set of agents \(N^{\prime}\subseteq N\) possessing decision variables in that subgraph, as an \(\mathcal{R}\)-**subdiagram** \(\mathcal{G}^{\prime}=(N^{\prime},\mathbf{V}^{\prime},\mathscr{E}^{\prime})\) if:

* \(\mathbf{V}^{\prime}\) contains every variable \(Z\) such that \(\mathsf{M}_{Z}\) is \(\mathcal{R}\)-reachable from some \(\Pi_{D}\) with \(D\in\mathbf{V}^{\prime}\);
* \(\mathbf{V}^{\prime}\) contains, for all \(X,Y\in\mathbf{V}^{\prime}\), every variable that lies on a directed path \(X\dashrightarrow Y\) in \(\mathcal{G}\).

As before, we drop \(\mathcal{R}\) from our notation where unimportant or unambiguous. The first condition on \(\mathbf{V}^{\prime}\) ensures that for any decision variable \(D\) in the subdiagram, any variable whose mechanism may impact the rational response for \(D\) is also included in the graph. This means that the rational responses for the decision rules in this part of the game are independent of what happens elsewhere. The second condition says that additional variables may also be included in the subdiagram as long as mediators are included too. This ensures that the CPDs for all the variables in the subgame remain consistent, and allows us to define a correspondence between subgames in MAIDs and subgames in EFGs (in Section 6.2). To find the subgames for each subdiagram, we must update the parameterisation of the remaining variables to be consistent with the original game and the structure of the graph.
**Definition 26**.: Given a mechanised MAID \(\mathsf{m}\mathcal{M}=(\mathsf{m}\mathcal{G},\mathbf{\theta},\mathcal{R})\), an \(\mathcal{R}\)-**subgame** of \(\mathcal{M}\) is a new MAID \(\mathcal{M}^{\prime}=(\mathcal{G}^{\prime},\mathbf{\theta}^{\prime})\) where \(\mathcal{G}^{\prime}\) is an \(\mathcal{R}\)-subdiagram of \(\mathcal{G}\) and \(\mathbf{\theta}^{\prime}\) is defined by \(\Pr(\mathbf{v}^{\prime};\mathbf{\theta}^{\prime})\coloneqq\Pr(\mathbf{v}^{\prime}\mid\mathbf{z};\mathbf{\theta})\), where \(\mathbf{z}\) is some instantiation of the variables \(\mathbf{Z}=\mathbf{V}\setminus\mathbf{V}^{\prime}\).18 An \(\mathcal{R}\)-subgame is **feasible** if there exists a policy profile \(\mathbf{\pi}\) where \(\Pr^{\mathbf{\pi}}(\mathbf{z})>0\).

Footnote 18: In fact, it can easily be appreciated that only the setting \(\mathbf{z}\) of the variables that have a child in \(\mathbf{V}^{\prime}\) will matter.

\(\mathcal{R}\)-subgames can be found for any \(\mathcal{R}\) using only \(\mathcal{R}\)-reachability. In particular, \(s\)-reachability produces \(s\)-subgames, which in many ways are the most natural form of subgame in MAIDs (because of their connection to subgames in EFGs, as shown formally in Section 6.2). In the remainder of the paper, unless otherwise specified, we therefore focus our attention on this case. Note also that any MAID is an \(\mathcal{R}\)-subgame of itself, and so an \(\mathcal{R}\)-subgame on a strictly smaller set of variables is called a _proper_ \(\mathcal{R}\)-subgame. For example, the MAID for Example 1 (shown in Figure 2(b)) has no proper \(\mathcal{R}^{\text{BR}}\)-subgames because \(\Pi_{D^{1}}\) and \(\Pi_{D^{2}}\) are both \(\mathcal{R}^{\text{BR}}\)-reachable from one another.

Figure 9: (a) A MAID \(\mathcal{M}=(\mathcal{G},\theta)\) for the modified version of Example 1 – in which the firm’s profits are also a function of the worker’s decision but not of their temperament – and resulting \(s\)-relevance graph. The graph \(\mathcal{G}\) is also an (improper) \(s\)-subdiagram and the full game an (improper) \(s\)-subgame. Figures (b), (c), and (d) illustrate the remaining (proper) \(s\)-subdiagrams of \(\mathcal{G}\) and their \(s\)-relevance graphs.

#### Identifying More Subgames in MAIDs

Before continuing, we note that the conditional dependencies captured in MAIDs allow for a richer and stronger notion of subgames than in EFGs. Not only can different notions of \(\mathcal{R}\)-subgame be introduced for different rationality relations, but it is often possible to identify more subgames (and hence rule out more non-credible threats) in a MAID than in the corresponding EFG. This can be seen by considering a minor variation on Example 1. Suppose that the firm has a new vacancy in which what is important is not whether the worker is hard-working or lazy, but whether they have studied at university. In other words, \(U^{2}\) is a function of \(D^{1}\) and \(D^{2}\) but not \(T\). The game graph and \(s\)-relevance graph for this example are shown in Figure 9(a). Note that the _structure_ of the EFG (shown in Figure 2(a)) does not change, only the payoffs for the firm. As such, there are no proper subgames in the EFG because when the firm is making their decision (\(D^{2}\)), they cannot observe the value of \(T\) and so no proper subtree is closed under both descendants and information sets. In contrast, we can recognise three proper \(s\)-subdiagrams (shown in Figures 9(b), 9(c), and 9(d)) of the equivalent MAID.
Each of these \(s\)-subdiagrams has two \(s\)-subgames associated with it owing to the two values that \(T\) can take (for the \(s\)-subdiagram in Figure 9(b)) and the two values that \(D^{1}\) can take (for the \(s\)-subdiagrams in Figures 9(c) and 9(d)).

### Equilibrium Refinements

When more than one NE exists, it is useful to specify additional criteria to rule out less plausible outcomes. This corresponds to making additional assumptions about the rationality of agents, which can be encoded as stricter rationality relations. Below, we provide definitions of two of the most important equilibrium refinements - subgame perfect equilibria [81] and trembling hand perfect equilibria [80] - within MAIDs. Later, in Section 6.2, we provide proofs regarding various equivalences between these definitions and those in EFGs.

#### Subgame Perfect Equilibrium

The concept of a subgame perfect equilibrium (SPE) was introduced into EFGs in order to eliminate NEs containing _non-credible threats_ - choices made by an agent in a sequential game that would not be in their best interest to carry out if given the opportunity [80, 81]. In MAIDs, we can rule out non-credible threats by ensuring that each agent plays a best response in every feasible \(s\)-subgame. In games with sufficient recall, an SPE (in behavioural policies) can always be constructed by performing backwards induction over the \(s\)-subgames and finding an NE in each; it is this technique in Appendix C.2 that allows an NE to be computed much more efficiently than in a corresponding EFG. However, if even one agent in the game has insufficient recall, an SPE may not exist even when allowing for mixed policies (see Appendix B.2 for an example).

**Definition 27**.: A policy profile \(\mathbf{\pi}\) in a MAID \(\mathcal{M}\) is a **subgame perfect equilibrium (SPE)** if \(\mathbf{\pi}\) is an NE in every feasible \(s\)-subgame of \(\mathcal{M}\), when restricted to that subgame.19

Footnote 19: Note that this notion of subgame perfectness can be generalised to other choices of rationality relations.

**Proposition 8**.: _Any MAID \(\mathcal{M}\) with sufficient recall has at least one SPE in behavioural policies._

Recall \(\mathbf{\pi}^{\text{NE}}\) in Example 2, introduced in Section 5.1, in which robot one chooses \(D_{1}=q\) and robot two chooses \(D_{2}=p\) whatever the value of \(D_{1}\). We can immediately see that in the feasible \(s\)-subgame where \(D_{1}=\neg q\) - the \(s\)-subdiagram for the smallest such \(s\)-subgame is shown in Figure 8(b) - choosing \(D_{2}=p\) is a non-credible threat, resulting in expected utility \(5\) instead of \(6\) for robot two. Instead, an example of an SPE is the policy profile \(\mathbf{\pi}^{\text{SPE}}\) in which robot one chooses \(D_{1}=\neg q\), and robot two chooses \(D_{2}=p\) if and only if \(D_{1}=q\). Under such a policy profile, robot two achieves its optimal expected utility in any of the feasible \(s\)-subgames, and given that robot two is following this policy, robot one receives expected utility \(2\) regardless of whether it moves quickly or not. Before continuing, we note that rationality relations allow us to capture arbitrary sets of policy profiles as rational outcomes, including the equilibrium refinements in this section. For example, (when each agent has at most one decision) the rational outcomes \(\mathcal{R}^{\text{BR}}(\mathsf{m}\mathcal{M})\) are simply the NEs of \(\mathcal{M}\), as can easily be seen via inspection of Equation (1) and Definition 11.
Similarly, for a MAID \(\mathcal{M}\) let us denote by \(\mathcal{M}(D)\) the set of all feasible \(s\)-subgames containing \(D\), and define: \[\pi_{D}\in r_{D}^{\text{SP}}(\mathbf{pa}_{\Pi_{D}}) \Leftrightarrow \pi_{D}\in\operatorname*{arg\,max}_{\hat{\pi}_{D}\in\text{dom}( \Pi_{D})}\sum_{U\in\mathbf{U}^{i}\cap\mathbf{V}^{\prime}}\mathbb{E}_{(\hat{\pi }_{D},\boldsymbol{\pi}^{\prime}_{-D})}[U]\ \forall\ \mathcal{M}^{\prime}\in\mathcal{M}(D),\] expressing that each agent plays a best response for their decision rule in every \(s\)-subgame containing that decision. In other words, \(\boldsymbol{\pi}\in\mathcal{R}^{\text{SP}}(\mathsf{m}\mathcal{M})\) if \(\boldsymbol{\pi}^{\prime}\in\mathcal{R}^{\text{BR}}(\mathsf{m}\mathcal{M}^{ \prime})\) for each \(s\)-subgame \(\mathcal{M}^{\prime}\) of \(\mathcal{M}\), where \(\boldsymbol{\pi}^{\prime}\) is \(\boldsymbol{\pi}\) restricted to the decision variables in \(\mathcal{M}^{\prime}\). If \(\mathcal{M}\) has sufficient recall, \(\mathcal{R}^{\text{SP}}(\mathsf{m}\mathcal{M})\) are the SPEs of the game, as stated formally below. While such representations may sometimes be slightly more cumbersome, encoding equilibria via rationality relations facilitates the use of \(\mathcal{R}\)-relevance and hence \(\mathcal{R}\)-reachability. This offers a principled way to identify independencies that can be useful both for causal and game-theoretic reasoning. **Proposition 9**.: _Suppose that a MAID \(\mathcal{M}\) has sufficient recall. Then, the set of SPEs of \(\mathcal{M}\) is equal to the set of rational outcomes \(\mathcal{R}^{\text{SP}}(\mathsf{m}\mathcal{M})\)._ #### Trembling Hand Perfect Equilibrium In an SPE, agents make decisions on the assumption that an SPE will be played in all proper subgames. As a result, however, the optimality of their strategies may not be robust to events in which other agents make mistakes, or 'tremble', with some small probability. To solve this problem, we can stipulate that each agent must play a best response (leading to an NE) in each _perturbed game_[80]. Let \(\zeta_{k}\) be a perturbation vector containing, for every \(D\in\boldsymbol{D}\), \(d\in\text{dom}(D)\), and decision context \(\mathbf{pa}_{D}\), a value \(\epsilon_{d}^{\mathbf{pa}_{D}}\in(0,1)\) such that \(\sum_{d\in\text{dom}(D)}\epsilon_{d}^{\mathbf{pa}_{D}}\leq 1\). Then, given a game \(\mathcal{M}\), the perturbed game \(\mathcal{M}(\zeta_{k})\) is defined such that each decision rule \(\pi_{D}\) is forced to have \(\pi_{D}(d\mid\mathbf{pa}_{D})\geq\epsilon_{d}^{\mathbf{pa}_{D}}\). **Definition 28**.: A policy profile \(\boldsymbol{\pi}\) is a **trembling hand perfect equilibrium (THPE)** in a MAID \(\mathcal{M}\) if there is a sequence of perturbation vectors \(\{\zeta_{k}\}_{k\in\mathbb{N}}\) such that \(\lim_{k\to\infty}\|\zeta_{k}\|_{\infty}=0\) and for each perturbed MAID \(\mathcal{M}(\zeta_{k})\) there is an NE \(\boldsymbol{\pi}_{k}\) such that \(\lim_{k\to\infty}\boldsymbol{\pi}_{k}=\boldsymbol{\pi}\). For example, the policy profile \(\boldsymbol{\pi}^{\text{SPE}}\) is _not_ a THPE. To see this, suppose that robot two trembles with probability \(\epsilon>0\) when \(D_{1}=q\) and \(\epsilon^{\prime}>0\) when \(D_{1}=\neg q\). Then, robot one's expected utility when playing \(q\) is \(2+2\epsilon\) and when playing \(\neg q\) is \(2-\epsilon^{\prime}\). Therefore in any NE of the perturbed game robot one will play \(q\) with probability one. 
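This dominance argument can be checked numerically. The Python sketch below uses the expected payoffs for robot one that are implied by the worked numbers in this section (an inference on our part; the full payoff matrix is only shown in the reduced EFG of Figure 8(c)): \(2\) for \((q,p)\), \(4\) for \((q,\neg p)\), \(1\) for \((\neg q,p)\), and \(2\) for \((\neg q,\neg p)\).

```python
# Expected payoffs for robot one, inferred from the worked numbers in the text
# (assumption: these match the reduced EFG in Figure 8(c)).
U1 = {("q", "p"): 2.0, ("q", "np"): 4.0, ("nq", "p"): 1.0, ("nq", "np"): 2.0}

def u1_against_trembling_spe(d1, eps, eps_prime):
    """Robot one's expected utility against robot two's SPE policy (patrol iff q)
    perturbed by trembles eps (after q) and eps_prime (after nq)."""
    if d1 == "q":
        return (1 - eps) * U1[("q", "p")] + eps * U1[("q", "np")]
    return (1 - eps_prime) * U1[("nq", "np")] + eps_prime * U1[("nq", "p")]

for eps, eps_prime in [(0.1, 0.1), (0.01, 0.05), (1e-6, 1e-6)]:
    fast = u1_against_trembling_spe("q", eps, eps_prime)    # 2 + 2*eps
    slow = u1_against_trembling_spe("nq", eps, eps_prime)   # 2 - eps_prime
    assert fast > slow  # moving quickly is strictly better in every perturbed game
    print(f"eps={eps}, eps'={eps_prime}: u1(q)={fast:.6f} > u1(nq)={slow:.6f}")
```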
One THPE is given by the policy profile \(\boldsymbol{\pi}^{\text{THPE}}\) in which robot one chooses \(D_{1}=q\) and robot two chooses \(D_{2}=p\) if and only if \(D_{1}=q\). Note that in any two-agent game, THPEs rule out all weakly dominated policies. In this example, robot one's policy of always choosing \(D^{1}=\neg q\) is weakly dominated by the policy of always choosing \(D^{1}=q\) (which itself is not weakly dominated by any other policy).

## 6 Connections to EFGs

EFGs are perhaps the most widely studied model of dynamic strategic decision-making. Despite their intuitive appeal, these tree-based models can be less concise and reveal less of the underlying structure of a game than the DAG-based models we use in this paper. With that said, a natural question is whether the game-theoretic definitions for MAIDs in Section 5 successfully capture the familiar concepts of their EFG counterparts, or whether we might lose something by working with MAIDs - and hence (S)CGs - instead. In this section, we show that such concerns are unwarranted: these game-theoretic concepts are preserved when we convert between representations. We begin by briefly describing two algorithms, maid2efg and efg2maid, that implement these conversions. Using these procedures, we then provide equivalence results for the definitions from Section 5. Following this, we briefly discuss how tree-based models can also be used for causal reasoning in certain cases.

### Transformations

We briefly summarise two procedures for converting between MAIDs and EFGs. A more formal treatment of both can be found in Appendix A.1.

#### From MAID to EFG

There are many ways to convert a MAID \(\mathcal{M}=(\mathcal{G},\mathbf{\theta})\) into an EFG \(\mathcal{E}\), but these differ in their computational costs [48, 70]. The basic idea, as employed by K&M, is to use a topological ordering \(\prec\) over the variables of \(\mathcal{G}\) to construct \(\mathcal{E}\)'s game tree by splitting on each of the variables in order. This splitting is required due to the fact that a variable in \(\mathcal{G}\) defines what happens given any context (a setting of the parent variables), whereas a node in \(\mathcal{E}\) defines what happens only in one context (the path taken to reach that node). As such, we may end up with exponentially more nodes in \(\mathcal{E}\) than variables in \(\mathcal{G}\). By querying the CPDs of each variable, branches from chance nodes are labelled with probabilities based on the path taken from the root of the tree to that node, and similarly for the utilities assigned to each leaf; branches from decision nodes are labelled with the possible actions available. Given two nodes corresponding to the same decision variable \(D\) in \(\mathcal{G}\), we assign them to the same information set if and only if the values taken by their ancestors along the paths from the root of the tree to each node agree on all those variables in \(\mathbf{Pa}_{D}\). Note that because there can be more than one topological ordering \(\prec\), we regard the output of maid2efg as a _set_ of EFGs - one for each possible ordering.

**Remark 4**.: _This procedure can be made more efficient by marginalising out all variables not in \(\mathbf{U}\cup\mathbf{Fa}_{D}\), such as in the reduced EFG for Example 2 (shown in Figure 8(c)), which only has \(2^{2}\) leaves, as opposed to the \(2^{3}\) leaves that would have resulted had we retained all the variables in the original MAID (in Figure 8(a)).
Importantly, the information in this reduced EFG is sufficient for computing its equilibria, and can thus offer significant efficiency gains (since the cost of solving an EFG depends on its size, which is exponential in the length of \(\prec\)). In our codebase [26], we therefore implement this more efficient transformation (originally described by K&M), which can be used in conjunction with Gambit, a popular tool for solving EFGs [59]._

#### From EFG to MAID

By encoding the CPDs for each variable in the MAID using trees as opposed to tables, MAIDs can represent any game using at most the same (but often exponentially less) space than an EFG [48]. In general, there are many MAIDs that can represent a given EFG. For instance, upon converting the EFG representation (in Figure 2(a)) of Example 1 to a MAID, we could naively create a decision variable for each information set. Alternatively, we could recognise, for example, that \(V_{1}^{1}\) and \(V_{2}^{1}\) correspond to the same real-world variable - the worker's choice to go to university or not - and thus combine them (as shown in Figure 2(b), in which the two information sets for the second agent have also been combined). Whether two nodes represent the same variable in an EFG is a matter of domain knowledge external to the EFG, but this knowledge typically exists when creating a model of a game and is then lost by viewing the interaction as an EFG. By formalising this notion as a question of whether nodes belong to the same _intervention set_, we provide a procedure \(\mathtt{efg2maid}\) (described fully in Appendix A.1), which maps an EFG to a _unique_, canonical MAID.

**Definition 29**.: An **intervention set** \(J\) in an EFG \(\mathcal{E}\) is either a set of chance nodes, information sets belonging to the same agent, or leaves, such that:

* Every node \(V\in J\) or \(V\in I_{J}^{i}\in J\) has the same number of children;
* No path from the root of \(\mathcal{E}\) to one of its leaves passes through \(J\) more than once.

Moreover, for a valid partition of information sets and chance variables into intervention sets, we require that any path to a node \(V\in J\) or \(V\in I_{J}^{i}\in J\) passes through the same intervention sets before reaching \(J\). In \(\mathtt{efg2maid}\), we assume that intervention sets are given; in the absence of such knowledge, one can simply choose each intervention set to be a singleton. The resulting MAID will remain equivalent to the EFG (in the sense described in the following subsection), but may not be as compact as it otherwise could be. The basic idea behind the conversion is to first view the game tree as a MAID, then to add missing edges to each node from its ancestors whilst merging nodes that are in the same intervention set into single variables. Incoming edges from any variable \(V\) whose value is not observed by a decision \(D\) are then removed from \(\mathbf{Pa}_{D}\), along with any duplicate edges produced by merging nodes. Each leaf intervention set is split into one utility variable for each agent. The distributions over each variable \(V\) are then formed for each context \(\mathbf{pa}_{V}\) by summing the relevant probabilities of sets of merged edges.

### Equivalences

Using these transformations, we derive equivalence results between EFGs and MAIDs to demonstrate that the fundamental game-theoretic notions of subgames and various equilibrium refinements in Section 5 are _preserved_ when converting between representations.
We begin by defining what we mean for two game representations to be 'equivalent'; the underlying idea is that the (behavioural) policy and strategy spaces in each game should be the same and each corresponding (behavioural) policy and strategy profile should lead to the same expected utility for each agent in both games. An immediate consequence of this is that NEs are also preserved. **Definition 30**.: We say that a MAID \(\mathcal{M}\) is **equivalent** to an EFG \(\mathcal{E}\) (and vice versa) if there is a bijection for each agent \(f^{i}:\Sigma^{i}\to\mathit{dom}(\mathbf{\Pi}^{i})/\sim\) between their strategies in \(\mathcal{E}\) and a partition of their policies in \(\mathcal{M}\) (the quotient set of \(\mathit{dom}(\mathbf{\Pi}^{i})\) by an equivalence relation \(\sim\) where \(\boldsymbol{\pi}^{i}\sim\hat{\boldsymbol{\pi}}^{i}\) if and only if \(\boldsymbol{\pi}^{i}\) and \(\hat{\boldsymbol{\pi}}^{i}\) differ only on infeasible decision contexts) such that for every \(\boldsymbol{\pi}\in f(\sigma)\) and every agent \(i\), we have \(\mathbb{E}_{\sigma}\big{[}U(\rho[\boldsymbol{L}])[i]\big{]}=\sum_{U\in \boldsymbol{U}^{i}}\mathbb{E}_{\boldsymbol{\pi}}[U]\), where \(f(\sigma)\coloneqq\bigtimes_{i\in N}f^{i}(\sigma^{i})\). We refer to any such \(f\) as a **natural mapping** between \(\mathcal{E}\) and \(\mathcal{M}\). **Lemma 3**.: _Let \(f\) be a natural mapping between \(\mathcal{E}\) and \(\mathcal{M}\). Then \(\sigma\) is an NE in \(\mathcal{E}\) if and only if every \(\boldsymbol{\pi}\in f(\sigma)\) is an NE in \(\mathcal{M}\)._ The reason we use an equivalence relation on the space of policies is that \(\mathtt{efg2maid}\) can introduce additional infeasible decision contexts: those \(\mathbf{pa}_{D}\) such that \(\Pr^{\boldsymbol{\pi}}(\mathbf{pa}_{D})=0\) for all \(\boldsymbol{\pi}\), corresponding to paths in the EFG that do not exist. An agent's choice in such decision contexts has no bearing on the outcome of the game, meaning we may safely abstract away from them. Any natural mapping is therefore sufficient for preserving the essential game-theoretic features of each representation, as we show below, though it may not be unique. We begin with a supporting lemma that justifies the correctness of the procedures \(\mathtt{maid2efg}\) and \(\mathtt{efg2maid}\), and forms the basis of our other results. **Lemma 4**.: _If \(\mathcal{E}\in\mathtt{maid2efg}(\mathcal{M})\) or \(\mathcal{M}=\mathtt{efg2maid}(\mathcal{E})\), then \(\mathcal{E}\) and \(\mathcal{M}\) are equivalent._ This lemma follows directly from the construction of a natural mapping \(f\) using the two procedures, maid2efg and efg2maid respectively. The intuition is that the information sets in an EFG correspond to the feasible decision contexts in a MAID, and thus a behavioural strategy profile \(\sigma\) in the EFG corresponds to a behavioural policy profile \(\mathbf{\pi}\) in the MAID, and vice versa. The following result is a direct corollary of Lemma 3 and Lemma 4. **Corollary 1**.: _If \(\mathcal{E}\in\mathtt{maid2efg}(\mathcal{M})\) or \(\mathcal{M}=\mathtt{efg2maid}(\mathcal{E})\), then there is a natural mapping \(f\) between \(\mathcal{E}\) and \(\mathcal{M}\) such that \(\sigma\) is an NE in \(\mathcal{E}\) if and only if every \(\mathbf{\pi}\in f(\sigma)\) is an NE in \(\mathcal{M}\)._ For a subgame \(\mathcal{E}^{\prime}\) in an EFG, it can be shown that the variables outside \(\mathcal{E}^{\prime}\) are not \(s\)-relevant to those in the corresponding \(s\)-subgame \(\mathcal{M}^{\prime}\). 
This means that subgames in EFGs have equivalent counterparts in their equivalent MAID, as established by the following proposition. A minor caveat here is that some utility variables may not occur in \(\mathcal{M}^{\prime}\), and so we must add their value (under the setting of the variables outside \(\mathcal{G}^{\prime}\) that defines \(\mathcal{M}^{\prime}\)) to the payoff for the agents in order to equate their expected utilities with that of \(\mathcal{E}^{\prime}\). For any subgame \(\mathcal{M}^{\prime}\), however, this change in value is constant for each agent under any policy profile, and so has no effect on their decisions. **Proposition 10**.: _If \(\mathcal{E}\in\mathtt{maid2efg}(\mathcal{M})\) or \(\mathcal{M}=\mathtt{efg2maid}(\mathcal{E})\), then there is a natural mapping \(f\) between \(\mathcal{E}\) and \(\mathcal{M}\) such that, for every subgame \(\mathcal{E}^{\prime}\) in \(\mathcal{E}\) there is an \(s\)-subgame \(\mathcal{M}^{\prime}\) in \(\mathcal{M}\) that is equivalent (modulo a constant difference between the utilities for each agent under any policy in \(\mathcal{M}^{\prime}\)) to \(\mathcal{E}^{\prime}\) under the natural mapping \(f\) restricted to the strategies of \(\mathcal{E}^{\prime}\)._ This restriction of \(f\) to the strategies in \(\mathcal{E}^{\prime}\) can be made precise by considering only those feasible decision contexts that correspond to the information sets contained in \(\mathcal{E}^{\prime}\). By first applying Proposition 10 and then Lemma 3 to each of the resulting subgames, we see that any SPE in \(\mathcal{M}\) is carried over to \(\mathcal{E}\). We note, however, that as there may be more \(s\)-subgames in a MAID than in its equivalent EFG, the criterion of subgame perfectness may be slightly stronger in the MAID. In other words, not all SPEs in an EFG may be SPEs in the equivalent MAID. This additional strength can be useful in ruling out non-credible threats even when they do not fall under a particular subgame in the EFG. **Corollary 2**.: _If \(\mathcal{E}\in\mathtt{maid2efg}(\mathcal{M})\) or \(\mathcal{M}=\mathtt{efg2maid}(\mathcal{E})\), then there is a natural mapping \(f\) between \(\mathcal{E}\) and \(\mathcal{M}\) such that if every \(\mathbf{\pi}\in f(\sigma)\) is an SPE in \(\mathcal{M}\), then \(\sigma\) is an SPE in \(\mathcal{E}\)._ Finally, we derive an equivalence between the THPEs in EFGs and those in MAIDs. In order to do so, it suffices to prove an equivalence between perturbed versions of the corresponding games \(\mathcal{E}(\zeta_{k})\) and \(\mathcal{M}(\zeta_{k})\), which can easily be done by construction using maid2efg and efg2maid, and then by applying Lemma 4 and Lemma 3 to each of these perturbed games. **Proposition 11**.: _If \(\mathcal{E}\in\mathtt{maid2efg}(\mathcal{M})\) or \(\mathcal{M}=\mathtt{efg2maid}(\mathcal{E})\), then there is a natural mapping \(f\) between \(\mathcal{E}\) and \(\mathcal{M}\) such that \(\sigma\) is a THPE in \(\mathcal{E}\) if and only if every \(\mathbf{\pi}\in f(\sigma)\) is a THPE in \(\mathcal{M}\)._ This series of equivalence results serves to justify MAIDs as an appropriate choice of game representation. Not only do they provide computational advantages over EFGs, they preserve some of the most fundamental game-theoretic concepts commonly employed in EFGs. ### Causality in EFGs One alternative to causal games would be to define causal concepts directly via EFGs. 
In this section, we explain how this can be done for a limited set of causal queries by extending methods designed for probability trees [31], and explain some shortcomings of this approach relative to the models we focus on in this paper. ### Interventions Intuitively, interventions in EFGs correspond to replacing the probability distribution(s) governing some node(s) in the game tree. More formally, consider an EFG \(\mathcal{E}\) with intervention sets (if each intervention set is a singleton, we recover the case where each EFG variable is treated separately). As in CGs, we can apply both pre- and post-strategy interventions, and both operations have the same form. We make an intervention \(\mathcal{I}\) on an intervention set \(J\) by replacing \(P_{j}\) or \(\sigma_{j}^{i}\) with an arbitrary probability distribution \(P_{j}^{*}(V_{j}\mid\cap_{V_{j}\in J}\mathbf{Anc}_{V_{j}})\) or \({\sigma_{j}^{i}}^{*}(V_{j}\mid\cap_{V_{j}\in J}\mathbf{Anc}_{V_{j}})\), for each \(V_{j}\in J\). Hard interventions represent the special case when this distribution is given by \(\delta(V_{j},v_{j})\), where \(v_{j}\) corresponds to an outgoing edge from each node \(V_{j}\in J\). Pre-strategy interventions result from performing an intervention on the game tree _before_ agents choose their strategies - in essence, forming a new game \(\mathcal{E}_{\mathcal{I}}\) - whereas post-strategy interventions result from intervening on the probability tree that results _after_ agents have chosen their strategies - in other words, forming a new distribution \(P_{\mathcal{I}}^{\sigma}\) over paths in the tree. While the graph of an EFG is typically viewed as encoding informational constraints upon the decisions of agents, as opposed to temporal or causal structure, it can also be given a causal interpretation in much the same way that a MAID or BN can, by restricting the EFG to be consistent with the set of all possible (hard) interventional distributions. This restriction produces a level two model in which the interventional queries described in the paragraph above are semantically meaningful. Due to the total ordering imposed by EFGs over the variables in a game, however, only a restricted subset of interventions are available (those whose distributions \(\mathcal{I}\) condition only on _ancestors_ of the node in question). Similarly, some queries (be they probabilistic or causal) over EFGs are not well-defined, due to the fact that, in general, not all variables are assigned a value on a path \(\rho\) from the root to a leaf. ### Counterfactuals Answering counterfactual queries is slightly more complicated. We briefly describe an existing procedure for probability trees, and refer the interested reader to this work for further details [31]. Suppose that we wish to calculate \(P^{\sigma}(\boldsymbol{x}_{\mathcal{I}}\mid z)\) - the value that \(\boldsymbol{X}=\boldsymbol{V}\setminus\{Y,Z\}\) would have taken if \(Y\) were distributed according to \(\mathcal{I}\), given that in fact \(z\) is true. The first step is to condition on \(z\) by setting \(P_{Z}(z)=1\) or \(\sigma_{Z}^{i}(z)=1\), which is equivalent to first performing a hard intervention \(\text{do}(Z=z)\) and then renormalising the probability distributions over the branches _upstream_ of this intervention. The second step is to perform the soft intervention \(\mathcal{I}\) on \(Y\), without any renormalising. 
Thirdly and finally, we restore the probability distributions over the branches _downstream_ of this second intervention to the original settings given by \(P^{\sigma}\). The resulting probability tree describes the _post-strategy_ counterfactual distribution \(P^{\sigma}(\boldsymbol{x}_{\mathcal{I}}\mid z)\). By modifying this procedure such that the normalisation upstream of \(Z\) and the restoring of distributions downstream of \(Y\) takes place according to the actual strategy profile \(\sigma\), then allowing agents to play a new counterfactual strategy profile \(\sigma^{\prime}\), we produce a _pre-strategy_ counterfactual distribution (though if conditioning is performed downstream of differences between \(\sigma\) and \(\sigma^{\prime}\), the resulting distribution will not, in general, be correct). Counterfactual queries in EFGs are _not_ invariant to the choice of tree representation, even if the distributions \(P^{\sigma}\) are the same for any strategy profile \(\sigma\). This difficulty arises because standard EFGs do not reside at level three of the causal hierarchy, but may be overcome (as in CBNs and CGs) by relegating all stochasticity to a set of exogenous chance nodes that appear at the top of the game tree, and allowing agents only to select deterministic strategies (over endogenous decision nodes) as a function of some exogenous noise. This also resolves the difficulties of pre-policy counterfactuals in EFGs, as updates made due to conditioning on \(z\) will now be _upstream_ of differences between \(\sigma\) and \(\sigma^{\prime}\). Existing work on probability trees [31], on the other hand, treats stochasticity as endogenous, and only possible to learn about (in the counterfactual world) for those variables that are _upstream_ of the intervention \(\mathcal{I}\). The ordering of variables in the tree therefore implicitly represents a substantive modelling assumption that resolves the subtle complexities surrounding counterfactuals in models with intrinsic stochasticity [16, 69]. ## 7 Applications Having described and analysed the theoretical features of causal games, we now consider potential domains of application. In the first subsection, we provide a case study based on the UK home insurance market. In the second, we discuss how several existing concepts for the analysis of (artificial) agents can be embedded and further developed within our framework. We provide worked examples of these embeddings in Appendix B.3. More substantive results, however, are beyond the scope of this paper, whose fundamental aim is to lay the theoretical foundations for causal games, rather than to explore specific applications. ### Case Study: Insurance Pricing While our primary motivation is to use causal games to analyse AI systems, another natural domain of application is in economics, where both causal and game-theoretic modelling are common. For example, when a regulator in the UK proposes a policy intervention, they are required to conduct a cost-benefit analysis to justify it. This entails making a comparison between the consequences of the intervention and the consequences of no intervention and/or alternative interventions. Some regulators, such as the Financial Conduct Authority (FCA), include a 'causal chain' in their analyses, though these are only heuristic diagrams rather than formal causal models. 
In this case study, we demonstrate the usefulness of causal games for economic analysis with an example based on the UK home insurance market, worth \(\pounds 6.31\) billion in 2021 [32], which was the focus of a recent FCA report [24]. #### The Model In this setting: some customers are inert and do not change their insurance provider from year to year; firms charge a low price to attract customers in the first year, then increase their prices upon renewal; inert customers, therefore, end up paying more, while savvy customers spend time negotiating or switching provider each year. This allows firms to increase profits at customers' expense, either from increased prices (if inert), or switching costs (if savvy). These practices are especially troubling because it is often more vulnerable customers (such as older or less educated individuals) who are likely to be exploited [58]. A simplified model of this setting is as follows. **Example 3** (Insurance Market).: _A customer in a duopolistic insurance market must choose an insurance provider, and aims to minimise their costs. They already have a contract with one of the two providers, and so must decide whether to renew or switch. The customer is either savvy (in which case they are modelled as having a low switching cost) or inert (in which case they are modelled as having a high switching cost). The two firms set their prices simultaneously, with the knowledge of which firm the customer has an existing contract with, but without the knowledge of the customer's type (savvy or inert), in order to maximise their profits._ We represent this setting as a causal game \(\mathcal{M}\) with three agents (in Figure 10(a)) and parameterise the model using real-world data on pricing strategies and consumer tendencies in the UK home insurance market within the period 2013-18 [24, 25, 58]. The two firms (agents one and two) decide prices \(d^{1},d^{2}\in\mathbb{N}\) for their insurance premiums based on which firm the customer (agent three) currently has a contract with, \(c\in\{1,2\}\). The customer uses this information, along with their type \(T\) - savvy (\(s\)) or inert (\(\neg s\)) - to choose a firm \(d^{3}\in\{1,2\}\) to purchase insurance from, or to exit the market (\(\bot\)). Firm \(i\)'s utility (their profit) is assumed to be given by their policy price \(d^{i}\) minus the marginal cost of providing the policy, if selected by the customer, and zero otherwise. The customer's utility is defined as their valuation of the policy, minus the price they pay and switching costs, with an extra \(-\infty\) term if the customer is inert and switches (i.e., to model the searching and switching cost being prohibitively high). The customer's utility is zero if they exit the market. The chance variables \(C\) (the customer's current firm) and \(T\) (the customer's type) are parameterised following the values in Figure 10(b). #### The Proposed Intervention In the aforementioned report, the FCA proposes to ban insurance companies from setting different prices for new and existing customers [24]. Because firms are made aware of the intervention before setting their pricing strategies, we represent this as a pre-policy intervention on \(\Pi_{D^{1}}\) and \(\Pi_{D^{2}}\).
More concretely, this corresponds to changing the rationality relations \(r_{D^{1}}\) and \(r_{D^{2}}\) by restricting their codomains - _dom_(\(\Pi_{D^{1}}\)) and _dom_(\(\Pi_{D^{2}}\)) - to be such that \(\pi_{D^{1}}(D^{1}\mid C)=\pi_{D^{1}}(D^{1})\) and \(\pi_{D^{2}}(D^{2}\mid C)=\pi_{D^{2}}(D^{2})\), effectively removing the edges \(C\to D^{1}\) and \(C\to D^{2}\) (shown as dashed in Figure 10(a)). In what follows we denote this intervention by \(\mathcal{I}\). The FCA offers a number of hypotheses about the likely effects of this intervention, namely that: 1. The percentage of customers switching will reduce; 2. Switching customers will end up paying higher prices, and renewing customers will end up paying lower prices; 3. Overall, customers will end up better off on average. Representing the intervention within a causal game \(\mathcal{M}\) allows us to express these hypotheses formally by measuring and quantifying the causal effects of \(\mathcal{I}\), which cannot, in general, be done without a formal causal model. For the purpose of this analysis, we set \(p=0.6\) and assume that agents play a pure THPE, which, while not truly realistic, illustrates the real-world applications of causal games (which can also be used to capture boundedly rational agents). In what follows, we denote the rationality relations describing this assumption as \(\mathcal{R}\). Before addressing the three hypotheses, it is a simple exercise to see that for each \(\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M})\), firms do indeed set minimal prices (i.e., equal to their marginal cost) for new customers and higher prices for renewing customers, up to the point of driving away sales due to switching costs from customers. Figure 10: (a) A causal game representing Example 3. (b) Data from the UK home insurance market within the period 2013-18 [24, 58, 25], used to parameterise the game. In other words, we have that \(\pi_{D^{i}}(D^{i}\mid C=3-i)=\delta(D^{i},206)\) and \(\pi_{D^{i}}(D^{i}\mid C=i)=\delta(D^{i},244)\) for each \(i\in\{1,2\}\). For their part, customers always seek the lowest price, taking into account switching costs, and are indifferent when the price at an alternative firm is the same as the price at their current firm plus the switching cost. After the intervention \(\mathcal{I}\), the customer's policy stays the same, but we have \(\pi_{D^{1}}(D^{1}\mid C)=\delta(D^{1},288)\) and \(\pi_{D^{2}}(D^{2}\mid C)=\delta(D^{2},250)\) for the two firms, respectively, for any \(\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M}_{\mathcal{I}})\). The rational outcomes of the original game correspond to whether savvy customers (at each of the two firms) switch to the other firm or not. Depending on whether the savvy customers of neither firm, only the first firm, only the second firm, or both firms switch, we have 0%, 43.2%, 28.8%, or 72% of customers switching, respectively. After intervention \(\mathcal{I}\), customers already with the second firm have no incentive to switch, and so the percentage of customers switching is either 0% or 43.2%.
If we were to assume a uniform prior over rational outcomes, for instance, then we would see that the causal effect of \(\mathcal{I}\) on the percentage of customers switching insurance provider is given by: \[\frac{1}{|\mathcal{R}(\mathcal{M}_{\mathcal{I}})|}\sum_{\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M}_{\mathcal{I}})}\mathrm{switch}(\boldsymbol{\pi})-\frac{1}{|\mathcal{R}(\mathcal{M})|}\sum_{\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M})}\mathrm{switch}(\boldsymbol{\pi})=21.6\%-36\%=-14.4\%,\] where \(\mathrm{switch}(\boldsymbol{\pi})\coloneqq\sum_{i}\mathrm{Pr}^{\boldsymbol{\pi}}(D^{3}=i,C=3-i)\) is the probability that a customer switches provider. Thus, hypothesis i) is confirmed. For the first part of hypothesis ii), we wish to measure the causal effect of \(\mathcal{I}\) on the quantity \(\mathbb{E}_{\boldsymbol{\pi}}[D^{i}\mid C=3-i,D^{3}=i]\) - the expected price paid by a customer who switches to firm \(i\) - for each \(i\in\{1,2\}\), in settings such that \(\mathrm{Pr}^{\boldsymbol{\pi}}(C=3-i,D^{3}=i)>0\). For \(\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M})\), this quantity is equal to 206, which may occur when either \(i=1\) or \(i=2\). For \(\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M}_{\mathcal{I}})\) this quantity is equal to 250, which occurs when \(i=2\); the customer never switches to firm one. This confirms the first half of the hypothesis. For the second part, a similar analysis shows that renewing customers pay \(\pounds 244\) before the intervention and either \(\pounds 250\) or \(\pounds 288\) after the intervention, which is _contrary_ to the hypothesis. We discuss the reasons for this in the following paragraph, as well as how another proposal can be employed to achieve the desired result. Hypothesis iii) concerns simply the causal effect of \(\mathcal{I}\) on \(\mathbb{E}_{\boldsymbol{\pi}}[U^{3}]\). The customer's expected utility is the same under any of the rational outcomes and interventional rational outcomes respectively, so we can write simply: \[\mathbb{E}_{\boldsymbol{\pi}}[U^{3}_{\mathcal{I}}]-\mathbb{E}_{\boldsymbol{\pi}}[U^{3}]=23.12-52=-28.88.\] As in the case for the second half of hypothesis ii), this _disconfirms_ the hypothesis (in our model). The reason for both of these results is that the relatively high switching costs for customers (\(\pounds 38\)) and the relatively high percentage of inert customers (28%) mean that firms are better off sticking to higher prices to exploit customers who will not switch, rather than lower prices in order to attract customers away from other firms. To achieve the desired effects, we can instead consider another intervention proposed by the FCA, which is to reduce switching costs for customers (for example, by allowing them to stop their insurance policy from auto-renewing more easily) [24]. If we therefore consider a second intervention \(\mathcal{I}^{\prime}\) that makes the same change as \(\mathcal{I}\) but also modifies \(\Theta_{U^{3}}\) to set the switching costs of customers to \(\pounds 0\), for example, then we have that: \[\mathbb{E}_{\boldsymbol{\pi}}[U^{3}_{\mathcal{I}^{\prime}}]-\mathbb{E}_{\boldsymbol{\pi}}[U^{3}]=72-52=20,\] as \(\pi_{D^{i}}(D^{i}\mid C)=\delta(D^{i},224)\) for \(i\in\{1,2\}\) in any \(\boldsymbol{\pi}\in\mathcal{R}(\mathcal{M}_{\mathcal{I}^{\prime}})\). Returning to hypothesis ii), this implies that all customers now pay \(\pounds 224\) when renewing as opposed to \(\pounds 244\), thus completing the set of all desired outcomes.
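As a sanity check on the switching-rate arithmetic above, the causal effect of \(\mathcal{I}\) can be reproduced directly from the parameterisation. The short Python sketch below is purely illustrative (the helper function and variable names are ours); it assumes that \(p=0.6\) is the probability that the customer's existing contract is with firm one, and uses the 28% share of inert customers quoted above.

```python
# Probability the customer is currently with firm one, and probability they are savvy.
p_firm1, p_savvy = 0.6, 1 - 0.28

def switch_rate(firm1_savvy_switch, firm2_savvy_switch):
    """Fraction of customers switching, given which savvy customers switch (inert customers never do)."""
    return p_savvy * (p_firm1 * firm1_savvy_switch + (1 - p_firm1) * firm2_savvy_switch)

# Rational outcomes of the original game M: savvy customers at neither firm,
# firm one only, firm two only, or both firms switch.
pre = [switch_rate(a, b) for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]]
# After the pre-policy intervention I, customers already with firm two never switch.
post = [switch_rate(a, 0) for a in (0, 1)]

effect = sum(post) / len(post) - sum(pre) / len(pre)
print([round(100 * x, 1) for x in pre])  # [0.0, 43.2, 28.8, 72.0]
print(round(100 * effect, 1))            # -14.4, i.e. a 14.4 percentage point drop in switching
```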
### Extensions As shown above, not only can our analysis be used to investigate the qualitative hypotheses put forward by the FCA, it provides quantitative results too. Nevertheless, we emphasise that the model we present is highly simplified, and considers only a very simple intervention. More complex models and queries are beyond the scope of the present paper, but represent a natural avenue for further applied work. Before continuing, we briefly remark on possible extensions. First, we could extend the model to make it more realistic. This might include: increasing the number of firms; different marginal costs for different firms; different customer valuations of insurance policies from different firms; small costs to firms from customers switching; including intermediary firms; not making savviness a binary variable; and modelling the market over time instead of as a one-shot game. With these added complexities to the model come greater opportunities to exploit the richness of causal games. In particular, in the simple model above, the lack of more intricate causal dependencies between the agents' actions means that applying a pre-policy intervention essentially reduces to re-solving a slightly different game (where the dashed edges in Figure 10(a) are removed). In larger games, it is both easier and more important to avoid this cost, which cannot (in general) be avoided in models such as EFGs that do not explicitly represent the causal structure of a game. We could also use the model for more advanced analysis, such as answering counterfactuals, identifying subgames, and computing mixtures of pre- and post-policy queries. ### Blame, Intent, Incentives, and Fairness Many of the most important concepts for reasoning about the safe and ethical use of AI systems are implicitly causal in nature. Moreover, in recent years, substantial progress has been made on formalising these concepts using causal models [21, 36, 52, 61, 76]. Because causal games are the first causal framework to explicitly capture multi-agent and game-theoretic reasoning, they open up the possibility of further work in these directions. In what follows, we explain how causal games might be applied to existing concepts in order to arrive at richer and more general results. Concrete examples of how these concepts can be modelled using causal games are provided in Appendix B.3. #### Blame and Intent SCMs have been fruitfully employed in order to formally define the notion of _actual causation_ (i.e., what it means for some event \(c\) to, in fact, cause some event \(e\)) [34, 37]. Such ideas are not only of philosophical interest, but have been argued as being crucial for building AI systems that can automatically generate _explanations_, either of their own workings or those of other agents [38, 61]. In recent years, this underlying theory has been used to formalise the concepts of blameworthiness and intention [27, 36]. As these works have highlighted, issues of blame and intent become particularly acute in multi-agent settings. While these concepts can be accommodated within (structural) causal games (see Appendix B.3), they also allow for an even richer formalism of blame and intent in multi-agent settings. More specifically, there are at least four generalisations we might make. Firstly, the use of sets of utility variables that define _multiple_ utility functions, one for each agent, would mean that costs (and hence both blame and intent) can be viewed from the perspective of multiple agents.
Secondly, we could consider blame and intent with respect to _strategies_ rather than single actions, which corresponds to considering soft (post-policy) interventions as opposed to hard (post-policy) interventions. Thirdly, we might wish to consider _pre-policy_ interventions by using mechanised games. For example, if a pre-policy change to one agent's decision means that the only rational response for the second agent is something that causes a bad outcome, we might think that the first agent is at least somewhat to blame for this turn of events (although these concepts become more nuanced when there are cyclic dependencies between decision rule variables). Finally, all of these extensions can also be viewed in terms of _coalitions_ of agents and their strategies, which has already begun to be explored in prior work [27]. #### Incentives In order to build intelligent agents that do not behave in undesirable ways, it is useful to be able to reason about their _incentives_[54, 18, 21] and _reasoning patterns_[2, 72]. For example, given a (possibly multi-agent) decision-making scenario, we might be interested in asking whether an agent has an incentive to base its policy on a protected attribute, or to influence a variable that we would prefer to be left alone. Similarly, notions of fairness are often naturally expressed in terms of whether a different outcome would result in some counterfactual situation. Recent work has sought to formalise incentives by modelling such scenarios as single-agent causal games (i.e., causal influence diagrams). This work has developed a number of sound and complete graphical criteria for identifying Value of Information (VoI) [41] and Value of Control (VoC) [83]. Sound and complete graphical criteria have also been introduced for two novel concepts: _response incentives_, which indicate when a decision-maker can benefit from responding to a variable, and _instrumental control incentives_, which establish whether an agent can benefit from manipulating a variable [21]. Existing work, however, has been limited to the single-agent, single-decision setting; it is our hope that causal games will provide the basis for generalising this work to the multi-agent, multi-decision setting and that the use of mechanised games may lead to additional insights. For instance, for some incentive concept \(C\) we may wish to ask questions such as 'is there any \(\mathcal{R}\)-rational outcome in which agent \(i\in N\) has a \(C\) incentive with respect to a particular variable \(X\) in the game?'. Given the fact that many proposals for safe AI systems are multi-agent [22], this generalisation to the multi-agent setting marks an important next step in analysing agent incentives. We provide an example of such a proposal - cooperative inverse reinforcement learning [33] - modelled as a causal game, in Appendix B.3. #### Fairness Another important and useful application of incentives is for reasoning about fairness, a number of popular and influential definitions of which are based explicitly on causal frameworks [3, 46, 52, 63, 98]. Indeed, it can be shown that all optimal policies \(\boldsymbol{\pi}^{*}\) in a single-decision SCIM are _counterfactually unfair_[52] with respect to a protected attribute \(A\) (meaning that a change to the protected attribute would change the decision made) if and only if there is an RI on \(A\)[21]. 
The question of fairness arguably becomes even more important and interesting in the multi-agent setting, in which one not only has to ask whether or not a process is fair, but fair to _whom_, and whether we might be forced to trade off fairness with respect to different agents. Interestingly, despite the large number of existing works investigating fairness in games (see, e.g., the seminal papers of Rabin [75] or Fehr and Schmidt [23]) and the recent insights gained from using causal definitions of fairness, to the best of our knowledge there has been no application of these definitions to the multi-agent setting. One possible (partial) explanation for this is that researchers have, until now, been lacking precisely the kind of models that we introduce. ## 8 Discussion In this paper, we have introduced a framework that we argue provides a unifying formalism for reasoning about causality in games. As mentioned in Section 1, combinations of causal and game-theoretic reasoning have long been considered by various research communities, and so we conclude with a brief summary of the relative advantages and disadvantages of causal games compared to other models, before offering some final thoughts about future directions. ### Advantages and Disadvantages of Causal Games The primary benefit of causal games compared to prior work is that no previous framework captured both game-theoretic and causal features in a general, principled way. Alongside our discussion of related work in Section 1.1, we expand briefly upon this claim by considering other causal and game-theoretic models in turn. #### Causal Models Standard causal models do not take into account the presence of rational, self-interested agents. Doing so requires more than a simple re-labelling of some of the variables as decisions and utilities, as the presence of strategic, decision-making agents violates the standard assumption of independent causal mechanisms [71], represented by the edges between mechanism variables in mechanised games. Alternative causal models that do include agents, such as CIDs [17, 21, 42], typically only consider a single agent. To the best of our knowledge, the only exception to this rule is settable systems [91, 92], though these models do not capture the multiplicity of equilibria that arise in game-theoretic analyses, and thus do not naturally support the analysis we seek to do in this paper. The focus in these works is instead on capturing lower-level algorithmic details, which may make them more appropriate when reasoning about the data and attributes of, say, an optimisation or machine learning process. Our introduction of _non-deterministic_ mechanisms (arising whenever agents are indifferent between alternatives) also serves to generalise existing work on cyclic causal models [7, 35]. However, the cyclic dependencies in causal games are of a specific form due to the fact that they represent game-theoretic equilibria, which means that causal games are not necessarily an appropriate model for analysing the more arbitrary dynamical systems and sets of simultaneous equations studied in these works. The fact that we build on top of MAIDs means that causal games inherit some of the existing game-theoretic concepts introduced in these models, such as NEs, while being consistent with the more standard probabilistic graphical models upon which MAIDs are based. 
This further allows us to derive notions of subgames and other equilibrium refinements, which are critical for game-theoretic reasoning and yet are not supported by other causal models. Such concepts have been historically underexplored in the context of graphical games when compared to, say, EFGs or strategic-form games. #### Game-Theoretic Models The most natural game-theoretic model with which to compare causal games is EFGs, which are tree-based models as opposed to DAG-based. For our purposes, the most important benefit of DAG-based models is that they can be more readily used to natively support a wide range of causal queries. This is bolstered by the use of \(\mathcal{R}\)-relevance graphs, which form an explicit representation of the causal dependencies between agents' decision rules; there is no such representation for these dependencies in EFGs.23 Mechanised games can be used to define a wide range of both pre- and post-policy queries in games. In contrast, as explained in Section 6.3, there are many causal queries that _cannot_ be natively answered in EFGs. DAG-based models also more compactly and explicitly represent dependencies between variables, which can often only be understood in an EFG through inspecting the parameterisation of the game. Moreover, a DAG-based representation of a game need never be bigger than the corresponding EFG and can be exponentially smaller. On the other hand, if a game is highly asymmetric in its play paths (such as when, if play proceeds down one path, the game stops immediately, while on the other path it continues for several more moves), then this structure is not immediately observable from a causal game, and may effectively make many valuations of the variables irrelevant due to their inconsistency with the shorter game path. In addition to their transparency, MAIDs allow us to further exploit the conditional independencies between variables using d-separation. Using \(\mathcal{R}\)-reachability, we may construct the \(\mathcal{R}\)-relevance graph for any game and find more subgames in a MAID than in its equivalent EFG. This can significantly reduce the computational complexity of solving a game (as shown by the example in Appendix C.2), offers analytical benefits, and provides a way to define a stronger subgame perfectness condition. On the other hand, in the case of _context-specific independencies_ - such as when \(A\) sometimes depends on \(B\), and \(B\) sometimes depends on \(A\) - it is well-known that DAG-based models are a less natural choice than tree-based models [9].24 Footnote 24: Though it is possible to support said independencies in DAGs via tree-based representations of the CPDs, which can graphically capture different independencies on different branches. Finally, though we have significantly extended the number of standard game-theoretic concepts for MAIDs (and hence causal games) and proved their equivalence to their EFG counterparts, EFGs remain a more well-investigated representation. Thus, if one is interested in more exotic equilibrium refinements, for example, EFGs are likely to be a more suitable model. It is our hope that further research on MAIDs and causal games will reduce this last difference. ### Future Work Our priority is to use causal games to further analyse incentives in multi-agent systems, which has important applications in ensuring that we build AI systems that are safe and fair. As mentioned in Section 7.2, existing work has already characterised some of these incentives using CIDs.
Therefore, a natural next step is to extend these definitions to multi-agent (and multi-decision) scenarios, though this is by no means trivial. For example, in single-agent settings VoI is always non-negative, whereas in multi-agent settings this need not be the case (if other agents are aware of the information gain) [74]. Further, we might wish to rule certain incentives in or out based on whether or not they occur under all or some policy profiles satisfying a particular equilibrium refinement, or more generally, falling within the set of rational outcomes. Other specific applications for which causal games may prove fruitful are, for example: designing mechanisms for auctions and other multi-agent systems, or analysing possible interventions on those mechanisms; generalising counterfactual fairness to multi-agent settings; providing artificial agents with the means to more easily provide explanations and reason about qualitative concepts (such as blame and intent or reasoning patterns) that can be defined using causal models of games; and deriving new definitions for similar concepts. We might also hope to extend the framework presented here with: model variations that can more easily capture dynamic settings, fine-grained subjective beliefs, or optimisation; definitions capturing other classic equilibrium refinements such as perfect Bayesian equilibrium [28] or sequential equilibrium [50]; and methods of causal discovery for games. Given that we propose this paper and our accompanying codebase as a robust foundation for reasoning about causality in games, we believe our work presents many other interesting avenues for further research. We hope that the advantages causal games confer based on their generality, explainability, and succinctness (not to mention their compatibility with existing mainstream models) make them an attractive choice for researchers and practitioners alike who are interested in the intersection of causality and game theory. ## Acknowledgements This paper is a significantly expanded version of a previous publication [39]. We thank Zac Kenton, Jon Richens, Ilya Shpitser, Colin Rowat, Chris van Merwijk, Patrick Forre, David Reber, Joe Halpern, Paul Harrenstein, Will Lee, Vincent Conitzer, and several anonymous reviewers for their helpful comments and discussions while completing this work. Hammond was supported by an EPSRC Doctoral Training Partnership studentship (Reference: 2218880), Fox was supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (Reference: EP/S024050/1), and Wooldridge was supported by a UKRI Turing AI World Leading Researcher Fellowship (Reference: EP/W002949/1).
2310.02465
Disk Cooling and Wind Lines As Seen In the Spectral Line Evolution of V960 Mon
We follow up our photometric study of the post-outburst evolution of the FU Ori object V960 Mon with a complementary spectroscopic study at high dispersion that uses time series spectra from Keck/HIRES. Consistent with the photometric results reported in Carvalho et al. 2023, we find that the spectral evolution of V960 Mon corresponds to a decrease in the temperature of the inner disk, driven by a combination of decreasing accretion rate and increasing inner disk radius. We also find that although the majority of the absorption lines are well-matched by our accretion disk model spectrum, there are several strong absorption line families and a few emission lines that are not captured by the model. By subtracting the accretion disk model from the data at each epoch, we isolate the wind/outflow components of the system. The residuals show both broad and highly blueshifted profiles, as well as narrow and only slightly blueshifted profiles, with some lines displaying both types of features.
Adolfo S. Carvalho, Lynne A. Hillenbrand, Jerome Seebeck
2023-10-03T22:20:33Z
http://arxiv.org/abs/2310.02465v2
# Disk Cooling and Wind Lines as Seen in the Spectral Line Evolution of V960 Mon ###### Abstract We follow up our photometric study of the postoutburst evolution of the FU Ori object V960 Mon with a complementary spectroscopic study at high dispersion that uses time series spectra from the Keck\(/\)HIgh Resolution Echelle Spectrograph. Consistent with the photometric results reported in Carvalho et al., we find that the spectral evolution of V960 Mon corresponds to a decrease in the temperature of the inner disk, driven by a combination of a decreasing accretion rate and an increasing inner disk radius. We also find that although the majority of the absorption lines are well matched by our accretion disk model spectrum, there are several strong absorption line families and a few emission lines that are not captured by the model. By subtracting the accretion disk model from the data at each epoch, we isolate the wind and outflow components of the system. The residuals show both broad and highly blueshifted profiles, as well as narrow and only slightly blueshifted profiles, with some lines displaying both types of features. Department of Astronomy, California Institute of Technology, Pasadena, CA 91125, USA; Department of Astronomy, University of Maryland, College Park, MD 20742, USA. _Received 2023 August 3; revised 2023 September 29; accepted 2023 September 30; published 2023 November 20_ ## 1 Introduction FU Ori outbursts produce photometric brightenings that reach optical amplitudes of \(\Delta V\)\(\sim\) 4-6 mag, and are associated with episodes of significantly enhanced accretion (Hartmann & Kenyon, 1996) in young stellar objects (YSOs). The accretion rates during these outbursts may increase by factors \(10^{2}\)-\(10^{4}\), leading to proposals that YSOs may accrete a significant fraction of their mass during the events (see Fischer et al., 2023 for a review). In 2014 December, the relatively unknown YSO V960 Mon underwent a large outburst, which was initially reported as a suspected FU Ori object by Maehara et al. (2014). V960 Mon has several other YSOs nearby, is surrounded by diffuse dust emission, and its status as an FU Ori object was quickly confirmed by follow-up observations (Hackstein et al., 2014; Hillenbrand, 2014; Reipurth & Connelley, 2015). The outburst peaked at \(V\)\(\sim\) 11.2 mag and had a relatively flat outburst amplitude across the spectrum, with \(\Delta B\)\(\sim\) 3 reported by Kóspál (2015) and a \(\Delta\)W2 \(\sim\) 2.2 from the Wide-field Infrared Survey Explorer (Mainzer et al., 2011). Figure 1 of Carvalho et al. (2023; hereafter Paper I) provides the full multiband lightcurve to date. In the months and years following the outburst, the target faded approximately exponentially, eventually reaching a plateau of \(B\)\(\sim\) 14 (\(V\)\(\sim\) 13) after 2018, as can be seen in Figure 1. Though the target faded rapidly postoutburst, since reaching the plateau in 2018 its brightness has remained essentially unchanged and is still 1.2 mag brighter in the \(B\) band than preoutburst. In Paper I, we presented a disk model that successfully reproduces the color-magnitude evolution of the target during its exponential fade. We used photometry gathered near the outburst epoch and a single high-dispersion spectrum from the same time to determine the best-fit system parameters in the pure-accretion scenario.
The stellar and disk parameters that best explain the outburst peak are \(M_{\ast}\) = 0.59 \(M_{\odot}\), \(R_{\rm inner}\) = 2.11 \(R_{\odot}\), and \(\dot{M}\) = \(10^{-4.59}\)\(M_{\odot}\) yr\({}^{-1}\), corresponding to \(T_{\rm max}\)= 8240 K, \(L_{\rm acc}\) = 113 \(L_{\odot}\), and \(v_{\rm max}\) = 60 km s\({}^{-1}\). Our analysis in Paper I assumed a distance to the target of 1120 pc (Kuhn & Hillenbrand, 2019). The model described in detail in Paper I is used here to study the spectral evolution of the V960 Mon system at high dispersion. We followed the exponential decline over several years, from the initial outburst to the beginning of the plateau, as indicated in Figure 1. In this paper, we present the spectra, gathered with the Keck\(/\)HIgh Resolution Echelle Spectrograph (HIRES), and demonstrate that our accretion disk model is able to reproduce the spectral evolution accurately over a broad range of optical wavelengths during the fade. We then isolate the excess absorption and emission in the spectrum by subtracting our high-resolution model from the data. In this way, we are able to analyze a spectrum of the nondisk components in the system. We begin by discussing our reduction and continuum normalization of the HIRES spectra in Section 2. We then give a brief summary of our disk model from Paper I, followed by a discussion of the spectral line evolution in the system in Section 3. We show evidence for a cooling inner disk and how that is traced by the absorption lines in the HIRES spectra (and reproduced by our model) in Section 4. Once we have subtracted the model spectra from the data, we analyze the excess absorption and emission spectrum, which is presented in Section 5. We identify several forbidden emission lines in the spectrum, which grow as the target fades, and we show those in Section 6. We discuss our results in the context of existing FU Ori object and other young star literature in Section 7 and summarize our conclusions in Section 8. ## 2 Data We obtained visible range high-dispersion spectra from the Keck Observatory's HIRES (Vogt et al., 1994), covering 4780-9220 Å. Table 1 gives the epochs, instruments, and signal-to-noise ratios (S\(/\)Ns) for the spectra. The spectra were processed with the 2008 version of the MAKEE pipeline reduction package written by Tom Barlow.3 Footnote 3: [https://sites.astro.caltech.edu/~tb/makee/](https://sites.astro.caltech.edu/~tb/makee/) We normalize the spectra by fitting the continuum using a regularized asymmetric least-squares technique (Eilers & Boelens, 2005). The regularization parameter allows for more or less flexible continuum fits and the technique is more robust to the edges of the spectrum than polynomial fitting. Orders with emission lines (e.g., H\(\alpha\), weak forbidden emission lines, and the Ca ii infrared triplet (IRT)) need special treatment. We mask emission lines in the spectrum and use the linear interpolation from the redward continuum point to the blueward continuum point on either side of the lines as the continuum under those lines. Several orders from the continuum-normalized HIRES outburst epoch spectrum are illustrated in Figure 2. As is mentioned in Paper I, we compute the half-width at half-depth (HWHD) of several absorption lines in the outburst spectrum (taken 2014 December 9) across the optical range, and find no correlation between wavelength of the line and HWHD. The measurements are discussed and shown in Section 3.2.
The mean and standard deviation of the measurements are \(44\pm 5\) km s\({}^{-1}\), consistent with the line width measurements reported by Park et al. (2020). We compute the equivalent widths (EWs) of select lines via direct integration, taking the continuum to be 1.0 following our normalization described above. The procedure for the EW measurement and uncertainty estimation is described in detail in Carvalho & Hillenbrand (2022) and our results for V960 Mon are presented in Section 4. We measure a heliocentric systemic velocity of \(+43.0\) km s\({}^{-1}\), which is roughly consistent with the \(v_{\rm{LSR}}\) = 23.81 km s\({}^{-1}\) (\(v_{\rm{helio}}\) \(\sim\) 40 km s\({}^{-1}\)) measured by Cruz-Saenz de Miera et al. (2023). The seven epochs of HIRES data, along with the residuals from the high-dispersion disk model described below, are shown in Figure 2. ## 3 Modeling the High-dispersion Spectra We use the disk model described in Paper I to model the high-dispersion data. We briefly summarize the model below, as well as the technique we adopt to model the evolution of the system from outburst to later epochs. We describe how temperature-sensitive lines behave in the model in terms of their presence and broadening. \begin{table} \begin{tabular}{l c c c} \hline \hline Date & Instrument & S\(/\)N at 7100 Å & Exposure Time (s) \\ \hline 2014-12-09 & HIRES & 170 & 600 \\ 2014-12-10 & HIRES & 117 & 315 \\ 2015-02-09 & HIRES & 98 & 245 \\ 2015-10-27 & HIRES & 97 & 300 \\ 2016-02-02 & HIRES & 147 & 600 \\ 2016-10-14 & HIRES & 80 & 180 \\ 2017-01-13 & HIRES & 52 & 180 \\ \hline \hline \end{tabular} \end{table} Table 1. Spectroscopic Observations Log Figure 1: The epochs of our HIRES spectra are shown (vertical solid lines) relative to the AAVSO B (black circles) and Gaia BP (gray connected points) lightcurves of V960 Mon. The lightcurve illustrates the rapid postoutburst fading and eventual plateauing in later epochs. The quiescent \(B\) magnitude reported in Kóspál (2015) is shown as the black horizontal line for reference. The 2014 December 10 HIRES epoch is marked with a dotted–dashed line due to overlap with the 2014 December 9 epoch. We also discuss the effect of
The change does not affect the spectral energy distribution (SED) fits from Paper I. Footnote 4: Downloadable at [http://svo2.cab.inta-csic.es/theory/newov2/index.php](http://svo2.cab.inta-csic.es/theory/newov2/index.php). To account for turbulence in the disk like that seen in the simulation of FU Ori presented by Zhu et al. (2020), we apply 20 km s\({}^{-1}\) of spherical broadening to the atmospheres using the direct integration method in Carvalho & Johns-Krull (2023). The broadening is similar to stellar rotational broadening, but without limb darkening. This initial broadening step helps to match the individual line profiles better, which do not show the narrow double peaks typical of Keplerian rotation in a thin disk, but instead have a flat-bottomed, box-like profile. We then apply the disk Keplerian broadening as described in Paper I. In Paper I, we found that as the target fades, the color-magnitude evolution is well matched by varying both \(\dot{M}\) and \(R_{\rm inner}\). The exact scaling between the two quantities is that given by the canonical truncation radius equation (Equation (6) of Paper I), which yields \(R_{\rm inner}\propto\dot{M}^{-2/7}\). We use the color-temperature calculated in Paper I at each HIRES epoch from the AAVSO photometry to estimate the appropriate \(\dot{M}\), assuming the scaling \(T\propto\dot{M}^{13/28}\). We compute a \(T_{\rm max}\) at each epoch ranging from 8300 to 6600 K. See Figure 11 of Paper I for the resulting temperature profiles. The high-dispersion disk models computed from the light-curve evolution are generally very good fits to the HIRES spectra. The models reproduce the absorption lines across the entire spectral range well, with the exception of certain features we believe trace nondisk absorption and emission components (including "wind" lines like H\(\alpha\) and Ca ii, and high-excitation potential (EP) lines like Si ii and C i, see Section 5). The typical rms of the residuals, excluding these and other key excess features that are not accounted for in the disk model, is \(<\)3%. ### Temperature-sensitive Lines: Differential Broadening In the canonical FU Ori disk model (Kenyon et al., 1988; Calvet et al., 1993), the spectral line broadening is given by the Keplerian rotation of the gas disk. Therefore, for a given Figure 2: The HIRES spectra shown in a time series, with bluer to redder spectra indicating earlier to later epochs. The lower set of curves in each panel are the residuals after subtracting the disk model appropriate to each epoch. See Figures 2–4 of Paper I for a direct comparison between the outburst epoch spectrum (blues line here) and the disk model. Strong excess absorption in certain lines (marked) is attributed to wind contributions in the spectra; see Section 5. Outside of these lines, the typical rms value of the residuals is \(<\)3%. The remaining 28 orders are shown in Appendix C. spectral line, the broadening should be proportional to \(\sqrt{GM_{\rm s}/r_{\rm line}}\), where \(r_{\rm line}\) is the radius in the disk where the line is expected to form. For lines with higher EP, one might expect \(r_{\rm line}\sim R_{\rm inner}\), whereas lower-EP lines are expected to form further out in the disk, on average. While this may be the case, we also find that the \(\dot{M}\) in the model plays a role in determining the final observed line broadening. 
We find that for lower-EP lines belonging to neutral species such as Fe i and Ca i, lower values of \(\dot{M}\) produce broader line profiles and higher \(\dot{M}\) values produce narrower line profiles. In fact, the effect is so strong in these lines that it overwhelms the effect of varying \(R_{\rm inner}\), as shown in Figure 3. This is because for higher values of \(\dot{M}\), the \(T_{\rm eff}\) in the fastest-moving annuli is high enough that low-EP lines from Ca i (e.g., Ca i \(\lambda\)6439) and Fe i (e.g., Fe i \(\lambda\)6393) are extremely weak or totally absent. The effect is especially pronounced in the \(T_{\rm eff}>7000\) K annuli. In this case, the low-EP lines will not be broadened as significantly as lines that still appear in those hottest annuli, such as Si ii \(\lambda\)6347 and \(\lambda\)6371. As \(\dot{M}\) decreases, we see in Figure 3 that the Ca i \(\lambda\)6439 line grows broader because the fastest-moving, closest-in annuli are cool enough to show lower-EP lines in absorption. The higher-EP lines like Si ii \(\lambda\)6347 remain the same width throughout, because the innermost annuli do not get hot enough for the lines to disappear. Decreasing \(\dot{M}\) only decreases the depth of the lines. This is clear in the lower left panel of Figure 3. We see then, that this dependence on \(\dot{M}\) in the line broadening is in fact a dependence on \(T_{\rm max}\). This implies that changes to \(R_{\rm inner}\) should elicit the same effect. Varying \(T_{\rm max}\) via \(R_{\rm inner}\), as we propose in Paper I, however, is expected to affect the rotational broadening of all lines directly by changing the maximum Keplerian velocity in the disk. The question becomes: are changes in the rotational broadening of lines due to changes in the maximum Keplerian velocity distinguishable from those due to changes in \(T_{\rm max}\)? The two panels in the right column of Figure 3 show our investigation of this in the Ca i \(\lambda\)6439 and the Si ii \(\lambda\)6347 lines. In the Ca i line, the \(T_{\rm max}\) effect dominates, working to broaden the line as \(R_{\rm inner}\) increases due to the overall decrease in \(T_{\rm max}\). In the Si ii line, the decreasing maximum Keplerian velocity dominates and we see the line narrows as we increase \(R_{\rm inner}\). We find this is consistent with the broadening we see in the high-dispersion spectra. Figure 4 shows the Si ii and Ca i lines over time as observed in the HIRES spectra at different epochs. The Ca i line remains at a relatively constant width, tending toward being slightly broader at later epochs. This is what we expect from a decreasing \(\dot{M}\) and increasing \(R_{\rm inner}\), as seen in Figure 3. The Si ii line narrows rapidly toward later epochs, as we might expect from the discussion above and as is shown in Figure 3. As we discuss in Section 5.2, the Si ii line does not arise entirely from the disk, so its rapid narrowing is not only due to the increase in \(R_{\rm inner}\). ### Line Broadening as a Function of Wavelength HWHD measurements have been used by many authors to argue for the presence (Welty et al., 1990; Park et al., 2020) or absence (Herbig et al., 2003) of disk-broadened spectral lines in FU Ori stars. However, due to the differential broadening of different spectral lines based on their location of formation in the disk, as just discussed (Section 3.2), it is not straightforward to connect an HWHD versus wavelength (or even EP) relation to the physics of the disk. 
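For reference, the HWHD values used throughout this section are simple profile half-widths measured from the continuum-normalized spectra. A minimal sketch of such a measurement is given below (our own illustration: it approximates the half-depth crossings by the span of pixels below the half-depth level and omits the profile-fit-based uncertainty estimate described in Section 2).

```python
import numpy as np

def hwhd(wave, flux_norm, lam0):
    """Half-width at half-depth, in km/s, of an absorption line centred near lam0.
    `wave` and `flux_norm` should cover a small window containing only this line,
    with the continuum normalized to 1.0."""
    c_kms = 2.998e5
    velocity = c_kms * (wave - lam0) / lam0
    depth = 1.0 - flux_norm.min()       # maximum line depth below the continuum
    half_level = 1.0 - 0.5 * depth      # flux level at half depth
    in_core = velocity[flux_norm <= half_level]
    return 0.5 * (in_core.max() - in_core.min())

# e.g., for a hypothetical window around the Ca i 6439 line:
# width_kms = hwhd(wave[window], flux[window], 6439.07)
```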
We measured the HWHD values for several lines in the observed outburst spectrum and in the outburst epoch model spectrum of V960 Mon (shown in Figure 5). To estimate the uncertainty of the HWHD measurements, we fit disk profiles to the lines and multiplied the fractional \(1\sigma\) uncertainty in the width parameter of the fit by our HWHD value. While there is a generally decreasing (though not statistically significant) trend in HWHD versus wavelength over broad regions of the spectrum, there is significant scatter in the measurements of even neighboring lines. This is in agreement with the results presented in Zhu et al. (2009), which also show that for FU Ori, the disk model predicts almost no wavelength dependence in the HWHD measurements made in the optical. Both the slightly decreasing trend with wavelength and the large scatter are consistent with the high-resolution disk model, as shown in Figure 5. The HWHD measurements from our high-resolution model have a slightly lower median than those in the HIRES data. However, attempting a model with slightly greater median broadening gives a worse fit to the data, with greater chi-squared values. For a more direct comparison between the two data sets, we scale the model HWHD values to have the same median as the HIRES values. Linear fits to the two HWHD versus wavelength relations give slopes of \(-3.8\times 10^{-4}\) km s\({}^{-1}\) Å\({}^{-1}\) and \(-9.3\times 10^{-4}\) km s\({}^{-1}\) Å\({}^{-1}\) with low-significance Pearson test \(p\) values of 0.33 and 0.40 for the data and model, respectively. The scatter of 2.1 km s\({}^{-1}\) in the HIRES HWHD measurements is greater than any wavelength-dependent change in width predicted by the disk model over the 5000-9000 Å wavelength range. Figure 4: The Ca i \(\lambda\)6439 (upper profiles) and Si ii \(\lambda\)6347 (lower profiles) absorption lines shown for the different epochs of the HIRES spectra. Notice the Si ii line becomes narrower, in addition to weaker, consistent with an increase in \(R_{\rm inner}\), while the broadening of the Ca i line remains relatively unchanged. Figure 3: The effect of varying \(\dot{M}\) and \(R_{\rm inner}\) on the width and depth of absorption lines with different EPs. Left: models of the Ca i \(\lambda\)6439 (upper panel) and Si ii \(\lambda\)6347 (lower panel) lines for different values of \(\dot{M}\). Notice the Ca i \(\lambda\)6439 line broadens as \(\dot{M}\) decreases, whereas the Si ii \(\lambda\)6347 line only changes in depth. Right: models of the same two lines for different values of \(R_{\rm inner}\). Notice here that the \(T_{\rm max}\)-dependent broadening still dominates in the Ca i line as \(T_{\rm max}\) decreases, making the line counterintuitively broader. However, in Si ii, the broadening decreases with increasing \(R_{\rm inner}\) as expected due to the decrease in the maximum Keplerian velocity. ## 4 Evidence of Disk Cooling in the High-dispersion Spectra We look to the behavior of the temperature-sensitive absorption lines in the high-dispersion spectra to confirm the temperature evolution of the disk that is seen in the photometric models presented in Paper I. We will focus on two sets of lines: those with very high EP values (\(>\)7 eV) and those with relatively lower-EP values (\(<\)3 eV). In general, we find that the high-EP lines weaken rapidly as the target fades, while the low-EP lines become deeper over the same time.
This is expected for a target that is cooling, as the higher energy levels depopulate and fill the lower energy levels. One challenge to interpreting the behavior of the highest-EP lines in the HIRES spectra is that some lines are significantly stronger than the models predict them to be. We believe this is evidence that the highest-EP lines, such as Si ii \(\lambda\)6347 and \(\lambda\)6371, O i \(\lambda\)8446, Ca ii \(\lambda\lambda\)8912, 8927, and C i \(\lambda\)9111, trace a nondisk (potentially outflowing) component in the system. We will discuss these lines in detail in Section 5.2. The high-EP lines we will focus on in this section are those that do not appear to be influenced by wind.

To illustrate the general time evolution of the lines in our HIRES spectra of V960 Mon, we have chosen four lines to highlight: two high-EP and two low-EP lines. The high-EP regime is represented by the Paschen series lines of H i (hereafter HP), with the HP \(\lambda\)8862 line serving as a specific example, and the Fe ii \(\lambda\)5316 line. These are both isolated features that show dramatic decreases in line strength as the target fades. The low-EP lines are represented by the Fe i \(\lambda\)5328 line and the Ca i \(\lambda\)6439 line. The Fe i \(\lambda\)5328 feature (which is in reality a blend of low-EP Fe i lines) is especially temperature sensitive, increasing in strength dramatically as the target fades (see Figure 2). The Ca i \(\lambda\)6439 line has the advantage of being isolated, so it clearly shows the flat-bottomed line profile characteristic of FU Ori objects (though the profile develops a rest-velocity excess absorption feature over time, as discussed in Section 5.3). It also grows significantly in time, as seen in the profiles shown in Figure 4.

We quantify the weakening or strengthening of the lines we discuss in this section by computing their EWs at each HIRES epoch. The measurements are shown in Figure 6 with the \(V\)-band lightcurve from Paper I plotted alongside the measurements as a reference for the brightness evolution of the target. In the figure, we see that the shape of the EW curves qualitatively either follows or mirrors that of the \(V\)-band lightcurve, depending on the line. The high-EP HP line and the Fe ii \(\lambda\)5316 line both show decreasing EW measurements over time. The HP line closely follows the lightcurve, with a greater slope matching the initial rapid fade of the target, then plateauing at later epochs. The lower-EP lines, Fe i \(\lambda\)5328 and Ca i \(\lambda\)6439, both mirror the lightcurve, growing rapidly during the early epochs and plateauing at larger values later.

### The Time Evolution of the HP Series

As can be seen in Figure 7, our disk model is a good match to the line strength of the HP \(\lambda\)8862 line at outburst and its subsequent time evolution. We interpret this to mean three things: the HP lines are good tracers of disk temperature; our outburst model \(T_{\rm max}\sim 8300\) K is sufficiently high to describe the initial HP depth; and the line evolution supports our proposed temperature evolution of the disk. We note that the HP lines blueward of 8350 A are not detected, which is consistent with our model predictions, due to blending with other features and minimal time evolution. Of the lines blueward of 8350 A, only the 8345.5 A line varies by more than \(\sim\)1%.
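The EW measurements used throughout this section can be made with a simple numerical integral of the continuum-normalized profile. The sketch below shows one way to do this; the wavelength window and the synthetic Gaussian line are only illustrative, not our actual measurements.

```python
# Sketch of a direct equivalent-width (EW) measurement on a continuum-normalized
# spectrum, as used to track line strength from epoch to epoch.
import numpy as np

def equivalent_width(wave, norm_flux, lo, hi):
    """EW in the same units as `wave`, integrating (1 - F/Fc) over [lo, hi]."""
    m = (wave >= lo) & (wave <= hi)
    depth = 1.0 - norm_flux[m]
    # trapezoidal integration, written out to avoid numpy version differences
    return float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wave[m])))

# Example: a synthetic Gaussian line with depth 0.4 and sigma 0.5 A,
# whose analytic EW is 0.4 * sqrt(2*pi) * 0.5 ~ 0.50 A.
wave = np.linspace(6430.0, 6448.0, 2000)
norm_flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 6439.0) / 0.5) ** 2)
print(f"EW = {equivalent_width(wave, norm_flux, 6436.0, 6442.0):.3f} A")
```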
The exponential decrease of the HP \(\lambda\)8862 line is a strong indication that the line is closely tracing the decrease of \(T_{\rm max}\), which is driving the brightness decrease in V960 Mon. We note that this differs somewhat from the behavior of the Fe ii \(\lambda\)5316 line, which also decreases in strength, but shows a more linear decline in time. The difference in evolution can be explained by the relative temperature sensitivities of the two lines. The HP lines are strongest in very hot atmospheres and will weaken quickly as the hottest components of the disk cool. This makes the HP lines very sensitive to the \(T_{\rm max}\) in the disk. The Fe ii lines span a broader range of temperatures and are relatively temperature insensitive in our disk models. Therefore, as the disk cools, the EW of the Fe ii lines may decrease but the annuli in which the line largely forms will simply shift radially inward, slowing the weakening of the line.

Figure 5: The HWHD measurements of several isolated absorption lines in the HIRES outburst (2014 December 9) spectrum (black) and the spectral model (red). Empty symbols show lines that are isolated and easily measured in the data but appear severely blended in the models. Although there are some line-by-line inconsistencies (largely due to imperfections in the PHOENIX models), neither set of measurements is significantly correlated with wavelength. The data and the disk model also have similar standard deviations and the scatter in the HWHD measurements does not vary with wavelength. The error bars shown reflect the uncertainty in the width parameter fit used to compute the HWHD measurements.

### The Time Evolution of the Low-EP Metal Lines

We also see evidence of disk cooling in the behavior of temperature-sensitive low-EP lines, such as Fe i \(\lambda\)5328 and Ca i \(\lambda\)6439. The region of the disk from which the optical continuum arises is at 5000-7000 K, which is hot enough that the majority of the Ca and Fe is ionized. This makes the Ca i and Fe i lines particularly sensitive temperature tracers (Gray, 2008). One supporting argument for the cooling disk was presented in Section 3.2, where we attribute the slight broadening of the Ca i \(\lambda\)6439 line in time to cooling of the innermost annuli. Another argument is that the Fe i and Ca i lines increase in strength significantly as the target fades. The line strength increase is seen clearly in the EW measurements of the Fe i \(\lambda\)5328 and Ca i \(\lambda\)6439 lines in Figure 6.

In order to constrain the disk properties from the EW measurements of the low-EP lines more directly, we compute the EW ratios of these lines to neighboring lines. Using EW ratios allows us to investigate the local gravity and temperature from the regions where the lines were emitted without worrying about contributions from continuum opacity. We focus our analysis on two line ratios in particular: the \(\lambda\)6439/\(\lambda\)6456 ratio, between the Ca i \(\lambda\)6439 line and the Fe ii \(\lambda\)6456 line, and the \(\lambda\)5328/\(\lambda\)5316 ratio, between the Fe i \(\lambda\)5328 line and the Fe ii \(\lambda\)5316 line. The denominators of the line ratios are chosen for two reasons: in the case of Fe ii \(\lambda\)6456, the line remains unchanged in the spectra, allowing a good comparison with the evolution of the Ca i \(\lambda\)6439 line, whereas for Fe ii \(\lambda\)5316, its evolution is opposite that of Fe i \(\lambda\)5328, potentially making the ratio more temperature sensitive.

Figure 6: The EW evolution (black squares) of two higher-EP and two lower-EP lines in the spectrum of V960 Mon.
The AAVSO V-band lightcurve is plotted (green points) for reference, showing the evolution of the continuum brightness over the spectral epochs. Left column: the evolution of the \(\lambda\)5316 Fe ii line and \(\lambda\)8862 HP EWs, which both closely follow the lightcurve of the target in their decreasing strength. The EW measurement for the \(\lambda\)8862 HP line is blended with the Fe i \(\lambda\)8866 line, but the Fe i line does not vary in time. Right column: the evolution of the \(\lambda\)5328 Fe i line and \(\lambda\)6439 Ca i line, where both follow an inverse trend to the lightcurve in their growth, similarly strengthening as the target fades and plateaus at later times.

Figure 7: The 8750 and 8862 Å HP lines in the HIRES spectra (left column) and the residual spectrum (right column). Notice the lines have been removed to within a few percent in every epoch, indicating the modeled \(T_{\rm max}\) change reproduces the evolution of the HP series accurately. The 8750 Å line is blended with the \(\lambda\)8751 Si i line and the 8862 Å line is blended with an Fe i line at 8866 Å, but both are well modeled, do not show any evolution, and do not appear in the residuals. The emission residual in the lower right panel at \(-200\) km s\({}^{-1}\) is due to TiO absorption in the model that is not present in the data. We discuss this discrepancy in Appendix B.

We study these two line ratios by comparing to computed expected line ratios in the PHOENIX grid, for a range of temperatures (5000-9000 K) and range of gravities (\(1.5\leq\log g\leq 4.0\)). This gives us an EW ratio surface, on which we can plot the EW ratios we compute in the HIRES spectra. Placing the EW measurements on their appropriate contours shows the corresponding temperature and gravity for that ratio in the PHOENIX grid. The resulting contour plots are shown in Figures 8 and 9, along with the time series of the ratios for reference. As predicted from our model, both sets of ratios are initially consistent with relatively high temperatures and evolve toward cooler ones. In fact, the \(\lambda\)5328/\(\lambda\)5316 ratio is a good match to the predicted \(T_{\rm max}\) evolution of the system.

Ultimately, both the high-EP HP lines and the low-EP neutral atomic lines show good agreement with our model of a cooling disk. Both sets of lines also support our \(T_{\rm max}\) estimate for the outburst epoch, particularly in the fact that the HP line depths are well matched for that epoch and those that follow.

The classic wind lines discussed above persist in the residuals, indicating they are indeed tracing outflow components. The especially fast absorption in the H lines is absent in the other wind tracers. The slower component, however, is visible in both H lines and the Ca ii IRT. At the outburst epoch, there is a distinct absorption minimum at \(-\)30 km s\({}^{-1}\), which slows over time to \(-\)10 km s\({}^{-1}\). The change in velocity of this absorption line is most noticeable in the Ca ii IRT profiles. We note also that the Ca ii IRT profiles appear quite different from one another in the HIRES spectra, despite being a triplet (Figure 11, left column). We find that this is due to the differing levels of HP blending each line experiences, and when the disk (and therefore HP) contribution is subtracted, the lines show similar profiles, as expected (Figure 11, right column).
The Na D lines are remarkably featureless compared with the other wind lines. They are saturated from \(-\)10 to 0 km s\({}^{-1}\), indicating the presence of a slow wind that covers the entire optical continuum emission region. The blue wings of the lines extend to \(-\)100 km s\({}^{-1}\), tracing a much slower component than the H\(\alpha\) and H\(\beta\) lines. There is also a component at \(-\)60 km s\({}^{-1}\) that is slightly deeper at earlier epochs, but the change is relatively small compared to that in the other wind lines.

H\(\alpha\) shows consistent redshifted and blueshifted emission components at velocities of \(\pm\)60 km s\({}^{-1}\). The blueshifted emission appears in the earliest epochs despite the significant wind absorption, and toward later epochs rises to equal the strength of the redshifted component (see also the 2019 epochs of Park et al., 2020). The red- and blueshifted emission features are also apparent in the profiles of the Ca ii IRT lines in later epochs. On closer inspection, the feature may be discernible as an inflection point in the profiles of the Ca ii IRT lines in early epochs, as well as the H\(\beta\) line profiles. Strangely, the peaks of the emission in the Ca ii IRT lines are consistent with the locations of the peaks in H\(\alpha\) in the data, but after removing the disk contribution, the peaks shift to lower velocities (\(\pm\)40 km s\({}^{-1}\)). The locations of the peaks of these emission components are approximately consistent with the Keplerian velocity at the innermost radii of the accretion disk. We interpret this to mean that the emission arises from a hot boundary layer, which may be shocking against the stellar photosphere.

The deep, narrow, low-velocity absorption component may trace an outflow with a wide opening angle, as has been observed in T Tauri systems (e.g., Whelan et al., 2021). These low-velocity absorption features are typically attributed to disk winds in T Tauri systems, which may indicate we are also seeing a slower disk wind.

In summary, we have demonstrated in this subsection that several of the strong lines traditionally used to diagnose inflowing and outflowing gas in YSOs can have contributions from disk absorption in FU Ori-type systems. Subtraction of the disk component reveals both possible inflowing boundary-layer material and outflowing slow and fast winds.

### Broad Central Absorption

The evolution of the especially high-EP lines mentioned briefly in Section 4 (due to Si ii, Si i, O i, Ca ii, and C i) is shown in Figure 12. The lines follow a similar evolution to that of the HP lines, indicating the lines all trace a rapidly cooling hot component of the system. However, unlike the HP lines, these lines are significantly stronger in our spectra than in the disk model (especially in the earliest epochs) and can be clearly seen in the residuals in Figure 2. The profile shapes are also very similar in the residuals after we subtract the disk model. The significant differences in the depths and profile shapes for these lines from the predictions in our disk model indicate the majority of the absorption traced by them is not accounted for in our model. We note that the residual depths are much smaller in the later epochs, indicating the absorption excess relative to the disk model has almost fully disappeared by the 2017 epoch. The component traced by the high-EP lines appears to be somewhat dynamically different from the disk.
The lines show a largely round-bottomed profile, almost consistent with profiles of high \(v\) sin \(i\) stellar features. The wings of the lines show relatively consistent 60 km s\({}^{-1}\) broadening, which may be attributed to the disk broadening seen in other features. However, the line core shows an initial 40-50 km s\({}^{-1}\) broadening which decreases as the line weakens, narrowing to only 20-30 km s\({}^{-1}\). The lines appear centered at the systemic rest velocity, implying they are tracing a particularly slow, albeit hot, outflow.

Figure 11: The Na D lines (top row) and Ca ii IRT lines (lower three rows), shown in the data (left column) and residuals (right column). The Ca ii IRT lines show a weakening fast blueshifted component and a \(\pm\)60 km s\({}^{-1}\) emission component similar to those seen in the H lines. Note that the \(+\)100 km s\({}^{-1}\) absorption is due to HP line blending with the IRT. This is well matched by our disk model and mostly removed in the residuals column. The Na D lines are saturated from \(-\)10 to 0 km s\({}^{-1}\), which matches the narrow absorption seen in the latest epoch, indicating there may be a constant slow wind covering almost the entire visible emission region of the disk.

To quantify the evolution of these high-EP lines better and to try to understand the implications for the temperature conditions in this additional component, we again use EW ratios like those described in Section 4. For our investigation, we choose two lines: Si ii \(\lambda\)6347 and O i \(\lambda\)8446. These lines are high EP (8.1 eV and 9.5 eV, respectively), show a dramatic decrease in strength over time, and are well isolated from other photospheric lines and telluric lines. For the denominators in the line ratios, we choose the Fe i \(\lambda\)6400 and \(\lambda\)8688 lines, both of which show almost no time variability. The time series of the \(\lambda\)6347\(/\lambda\)6400 and \(\lambda\)8446\(/\lambda\)8688 ratios are shown in Figures 13 and 14. Both sets of ratios show an initial rapid fade, similar to that of the HP lines, followed by a plateau, closely following the structure of the \(V\)-band lightcurve.

The \(T_{\rm eff}\) versus log \(g\) contour plot for the \(\lambda\)6347\(/\lambda\)6400 ratio shows that the line ratio is initially consistent with a \(\sim\)7000 K temperature atmosphere and as the target fades the ratio is more consistent with that seen in \(\sim\)5800 K atmospheres. The contour plot for the \(\lambda\)8446\(/\lambda\)8688 ratios also shows a similar evolution, where the ratio is initially consistent with \(\sim\)5700 K atmospheres and evolves to be more consistent with \(\sim\)5200-5300 K atmospheres. Both sets of ratios indicate, then, that this excess component is cooling as the target fades. They also show, however, that different wavelengths are indeed tracing regions with different temperatures. This is consistent with the \(T_{\rm eff}(\lambda)\) relation expected for accretion disks and indicates that the continuum at 6400 A arises from a warmer region of the disk than that at 8400 A. We can use the temperatures described above and the wavelengths of the line ratios to find that at outburst, \(dT/d\lambda\)\(\sim\) 0.65 K A\({}^{-1}\) whereas at the 2017 epoch, \(dT/d\lambda\)\(\sim\) 0.45 K A\({}^{-1}\).
Taking the temperature from the \(\lambda\)6347\(/\lambda\)6400 ratio at outburst to be \(\sim\)7000 K and the temperature from the \(\lambda\)8446\(/\lambda\)8688 ratio to be \(\sim\)5700 K, we can also estimate that the lines arise from the \(r\)\(\sim\) 2 \(R_{\rm s}\) and \(r\)\(\sim\) 3 \(R_{\rm s}\) annuli, respectively. That would in turn mean an average \(dT/d\lambda\)\(\sim\) 0.62 K A\({}^{-1}\) across the red range of the optical spectrum.

We now turn our attention to some cooler lines, namely moderate-EP lines of Fe ii that also show broad excess absorption relative to the disk model. Similarly to the high-EP lines, the Fe ii lines (shown in Figure 15) also decrease in strength significantly as V960 Mon fades. The effect is not as dramatic as that seen in the high-EP lines such as Si ii \(\lambda\)6347, but it is notable. The Fe ii lines are also much stronger than predicted in the disk models, as can be seen for the Fe ii \(\lambda\)5316 and \(\lambda\)5362 lines in Figure 2. These two facts imply the Fe ii lines may trace the same excess component as the high-EP lines. However, there are also Fe ii lines, especially those blueward of 5200 A, that show almost no time evolution. The Fe ii lines which most closely follow the evolution of the high-EP lines in Figure 12 are those between 5200 and 7000 A having EPs between 3.0 and 4.0 eV. We note that these Fe ii lines are also typically gravity sensitive in single-atmosphere models, but when varying the log \(g\) of our disk model, we do not see significant gravity sensitivity.

Figure 12: High-EP lines in the HIRES spectra of V960 Mon, smoothed with a \(\sigma=5\) pixel Gaussian for clarity. Epochs are shown in different colors, where redder indicates later epochs when the target is dimmer and, according to our disk model, cooler. The black vertical lines mark \(\pm\)60 km s\({}^{-1}\), the estimated \(v_{\rm max}\) at outburst, as a reference for the line widths. The C i \(\lambda\)9095 line is significantly contaminated by telluric absorption, but its similar evolution to the other lines is still apparent.

Figure 13: The EW ratio of the Si ii \(\lambda\)6347 line to the Fe i \(\lambda\)6400 line for the HIRES spectra, showing that the ratio decreases in time and is consistent with a decreasing temperature. The color schemes in the left and right panels both show lighter red for larger EW ratios. Left: the time series of the EW ratios. Right: the EW ratios of the same lines as measured in the PHOENIX grid (blue contoured background) and in the HIRES spectra (red contours).

Figure 14: The EW ratio of the O i \(\lambda\)8446 line to the Fe i \(\lambda\)8688 line for the HIRES spectra, showing that the ratio decreases in time and is consistent with a decreasing temperature. The color schemes in the left and right panels both show lighter red for larger EW ratios. Left: the time series of the EW ratios. Right: the EW ratios of the same lines as measured in the PHOENIX grid (blue contoured background) and in the HIRES spectra (red contours).

In summary, in this subsection we have demonstrated that there is a family of lines with deep broad profiles in V960 Mon that have distinctly deeper depths, and very different broadening shapes, relative to the disk model. These lines come exclusively from high- and intermediate-EP species. We speculate that they originate in the hot components of the wind, near its point of origin in the disk.
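As a concrete illustration of the EW-ratio method used in this section and in Section 4, the sketch below locates a measured ratio on a precomputed grid of model ratios over \(T_{\rm eff}\) and log \(g\). The grid values here are placeholders; in practice they would be measured from the PHOENIX model spectra.

```python
# Sketch of locating a measured EW ratio on a model (Teff, log g) grid, in the
# spirit of the contour comparison described above. The ratio grid is a
# placeholder; real values would come from EWs measured on PHOENIX spectra.
import numpy as np

teff_grid = np.arange(5000.0, 9001.0, 500.0)   # K
logg_grid = np.arange(1.5, 4.01, 0.5)
# ratio_grid[i, j] = model EW ratio at (teff_grid[i], logg_grid[j]); placeholder values
ratio_grid = np.linspace(2.0, 0.2, teff_grid.size)[:, None] * np.ones((1, logg_grid.size))

def best_match(measured_ratio, ratio_grid, teff_grid, logg_grid):
    """Return the (Teff, log g) grid point whose model ratio is closest to the measurement."""
    i, j = np.unravel_index(np.argmin(np.abs(ratio_grid - measured_ratio)), ratio_grid.shape)
    return teff_grid[i], logg_grid[j]

print(best_match(1.1, ratio_grid, teff_grid, logg_grid))
```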
### Narrow Central Absorption

Several lines in the spectra at later epochs show an excess narrow absorption feature centered at the systemic velocity. The feature deepens over time, but does not broaden, maintaining a relatively consistent \(\sim\)20 km s\({}^{-1}\) half-width measured from the intersection of the narrow component with the disk component. The lines exhibiting this feature most prominently are shown in Figure 16. Narrow central absorption lines span most of the optical range covered by the HIRES spectra but seem especially prominent between 4900 A \(<\) \(\lambda\) \(<\) 8400 A. The lines with excess narrow absorption components predominantly occupy a narrow range of somewhat high EPs: 2.5-3.5 eV, such as Ca i (shown in detail in Figure 4) and Fe i, though there are a few at lower EPs, including the Ba ii and Li i features (all shown in Figure 16). A comparison with the PHOENIX model spectra shows the depths of many of these features are consistent with \(T_{\rm eff}\) \(\sim\) 5000-7000 K atmospheres. The lines are also generally temperature sensitive and grow deeper in the PHOENIX atmosphere models as \(T_{\rm eff}\) is lowered from 7000 to 5000 K.

The somewhat high temperatures at which we expect these features, the fact that they appear at the systemic velocity, and their growth as the high-EP lines discussed in Section 5.2 shrink, imply a connection between the two families of lines. As mentioned in Section 5.2, the high-EP lines shown in Figure 12 are all initially much deeper than predicted by our disk model, but over time their strength decreases to be more consistent with the models at later epochs. We have also shown that the evolution of the lines is consistent with this hot excess component cooling over time. The low velocity of the absorption may be consistent with absorption by a slow outflow at a distance of \(r\) \(\sim\) 4-5 \(R_{\rm inner}\). This region would be consistent with broadening \(<\)35 km s\({}^{-1}\) as we see in this excess. If the outflow cools as the disk cools, the highest-EP levels may depopulate and fill the 2.5-3.5 eV levels, contributing to increased absorption in those species.

When we isolate the high- and low-EP excess absorption by subtracting the disk model from the data, we can get a spectrum of this nondisk component. Parts of the residual spectrum are shown in Figure 2, but we reproduce it in better detail in Figure 17. In Figure 17, the residual spectrum is shown compared to two log \(g=1.5\) PHOENIX atmospheres, one with \(T_{\rm eff}\) \(=\) 9000 K and another with \(T_{\rm eff}\) \(=\) 7000 K. For better comparison with the line widths we see in the residuals, we broadened both models to \(v\) sin \(i\) \(=\) 30 km s\({}^{-1}\). The outburst epoch residual spectrum possesses features that are quite similar to those seen in the 9000 K spectrum (e.g., Si ii \(\lambda\)6347 and \(\lambda\)6371, Fe ii \(\lambda\)6456, and O i \(\lambda\)8446). As the target fades, features that are similar to those in the 7000 K spectrum (e.g., Ca i \(\lambda\)6439, \(\lambda\)6449, and \(\lambda\)6463) grow stronger. The upper left panel of Figure 17 shows this well, where there are several lines in the 4750-4850 A range that are not in the 9000 K atmosphere but can be seen in the residual spectrum. The lower left panel of Figure 17 shows the same fact in another wavelength range.
In this panel, the Fe ii \(\lambda\)6456 line appears in both the 9000 and 7000 K atmospheres, whereas the Ca i \(\lambda\)6439 and \(\lambda\)6449 lines are not expected in the hotter atmosphere. They are indeed generally weaker initially. However, as the target fades, the Ca i features in the residual spectrum deepen until they are consistent in depth with those seen in the 7000 K spectrum. The inverse case is shown in the upper right panel of the figure, where the Si ii features weaken over time, which is again what we may expect from a residual component cooling.

In summary, in this subsection we have demonstrated that there is a family of narrow absorption lines in V960 Mon that grows in strength over time against the "continuum" of the disk photosphere, which produces disk-broadened profiles in many of the same lines. We speculate that the lines may trace a slow-moving, cooler component of the wind that was initially traced by the high-EP excess absorption discussed in Section 5.2.

Figure 15: Fe ii lines in the HIRES spectra. Notice their behavior is similar to the high-EP lines shown in Figure 12, rapidly shrinking over time. This effect is strongest in the lines with higher EP, indicating sensitivity to the hot wind component. The Fe ii lines here seem to trace the disk behavior at lower EPs and bluer wavelengths, and trace the wind at higher EPs and redder wavelengths.

## 6 Forbidden Emission

The [S ii] \(\lambda\)6731 emission feature was reported in the spectrum of V960 Mon by Park et al. (2020), and they discuss the growth of the feature at later epochs. We recover this feature and its growth in our HIRES spectra. We also find the [O i] \(\lambda\)6300 and [N ii] \(\lambda\)6583 emission features. The maximum normalized flux of the [O i] \(\lambda\)6300 line is a bit greater than that of the [S ii] \(\lambda\)6731 line. The [O i] and [N ii] fluxes also increase relative to the continuum at later epochs. The features as seen in the HIRES spectra are shown in Figure 18. The weaker components of these two doublets, the [S ii] line at 6716 A and [O i] line at 6363 A, are tentatively identifiable in the latest epoch but are barely 1% above the continuum. The doublet of the [N ii] line falls in between orders in the HIRES spectra.

Detecting the 6716 and 6363 A emission in the data is complicated by line blending with nearby photospheric lines, namely the Fe i \(\lambda\)6715 and \(\lambda\)6362 absorption lines, which are sufficiently broadened to blend with the neighboring forbidden emission features. Fortunately, the Fe i lines can be removed using our high-dispersion models of the HIRES spectra. Looking at the residuals, the features appear clearly and show similar structure to the 6731 and 6300 A features. The stronger 6731 and 6300 A lines are also clearly seen in the residuals and retain the amplitude we see in the data.

The line profiles show a velocity structure different from that we see in the absorption features. The emission is predominantly blueshifted in all three main lines, indicating they are tracing outflows from the FU Ori object. They also show multiple velocity components, the velocities of which differ slightly from line to line. To measure the widths and central velocities of the components, we use the optimize toolkit in scipy to do least-squares fitting of a sum of two Gaussian functions to the line profiles. The fits are quite noisy because the detections of the lines are weak and not all epochs are well described by a sum of two Gaussians.
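A minimal version of the two-Gaussian decomposition described above is sketched below with scipy.optimize.curve_fit. The synthetic profile (components near 0 and \(-30\) km s\({}^{-1}\)) stands in for the continuum-subtracted forbidden-line profiles; it only demonstrates the fitting step, not the actual measurements.

```python
# Sketch of the two-Gaussian decomposition of a forbidden-emission profile.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

rng = np.random.default_rng(0)
v = np.linspace(-120.0, 80.0, 400)                       # velocity grid, km/s
profile = two_gaussians(v, 0.04, 0.0, 13.0, 0.03, -30.0, 13.0) + rng.normal(0, 0.003, v.size)

p0 = [0.03, 0.0, 15.0, 0.03, -30.0, 15.0]                # initial guesses
popt, pcov = curve_fit(two_gaussians, v, profile, p0=p0)
hwhm = np.sqrt(2.0 * np.log(2.0)) * popt[[2, 5]]         # convert sigma to HWHM
print("centers [km/s]:", popt[[1, 4]], " HWHM [km/s]:", hwhm)
```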
However, the best-fit parameters provide a general picture of the composition of the forbidden emission. The [S ii] \(\lambda\)6731 line has two distinct peaks, one at essentially systemic rest velocity and one at \(-30\) km s\({}^{-1}\). Both peaks have an HWHM of \(\sim\)15 km s\({}^{-1}\). The [O i] line also has a peak at systemic rest velocity but its blueshifted component is closer to \(-50\) km s\({}^{-1}\). Its components are also both broader, with HWHM values of \(\sim\)20 km s\({}^{-1}\). The [N ii] line seems dominated by a single component at \(-30\) km s\({}^{-1}\), which has an HWHM of \(\sim\)25 km s\({}^{-1}\), though there may also be a weak rest-velocity component. The \(-30\) km s\({}^{-1}\) peak is consistent with the velocity of the narrow absorption component in the wind lines described in Section 5.1.

Figure 16: Lines in the HIRES spectra which show the central absorption feature, shown as a time series progressing from blue to red color. Notice that across the range of lines and EPs, the narrow feature shows a consistent width \(\sim\)20 km s\({}^{-1}\) and is centered at 0 km s\({}^{-1}\). The feature grows as the target fades, absorbing against the disk continuum to additional depth that is almost equivalent to the depth of the disk contribution of many of the lines.

## 7 Discussion

The accretion disk model presented in Paper I successfully reproduces most of the variability seen in the spectra, as can be seen in the residuals presented in Figure 2. There are also many features in the residuals that are not captured by the disk model, either because they are classic wind tracers, such as H\(\alpha\), the Ca ii triplet, and the Na D lines, or because they trace some other, unmodeled component of the system (see Section 5).

### Disk Absorption and the Gravity and Temperature Dependencies

The observed high-dispersion spectra are reasonably well reproduced by models of an accretion disk photosphere. However, great care is needed in understanding and disentangling the effects of temperature and surface gravity in the disk from the wind components. The lines that are typically used as gravity indicators cannot be used to determine the gravity of FU Ori objects because they are often sensitive to outflows and are therefore dominated by outflow absorption. This is the case for the Na D lines and many other highly gravity-sensitive features (see Section 5.1 for a discussion of these and other wind lines). We therefore must rely on other gravity indicators in the spectrum of V960 Mon, in the weaker optical atomic lines that are known to trace the disk.

To explore the gravity sensitivity of the absorption lines in the spectrum, we computed a grid of high-dispersion models using the outburst epoch best-fit parameters, and in each we fixed log \(g(r)\) to be one of 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, or 3.5. We then compared the change in the spectra due to the variation of log \(g\) with the time evolution of the HIRES spectra. Although we do see some evolution in the high-EP lines discussed in Section 5.2, they remain at least a factor of two deeper in the data than in the models, indicating once again that they are likely tracing excess absorption not accounted for in our disk model. The Fe ii lines are also much deeper in the data than in any of the models, similarly indicating that they may also in part trace this hot excess absorption component.
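The fixed-log \(g\) grid comparison just described can be summarized by a simple goodness-of-fit loop over the grid, as in the sketch below. The model and data arrays here are placeholders for the integrated disk-model spectra and the continuum-normalized HIRES orders, not the actual products of our pipeline.

```python
# Sketch of comparing a grid of fixed-log g disk models to an observed epoch
# with a simple chi-squared statistic. All spectra here are placeholders.
import numpy as np

logg_values = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
wave = np.linspace(8400.0, 8470.0, 500)

def chi2(model, data, sigma):
    return np.sum(((data - model) / sigma) ** 2)

# Placeholder spectra: a single gravity-sensitive line whose depth varies with log g.
data = 1.0 - 0.3 * np.exp(-0.5 * ((wave - 8435.0) / 0.8) ** 2)
models = {g: 1.0 - (0.2 + 0.03 * g) * np.exp(-0.5 * ((wave - 8435.0) / 0.8) ** 2)
          for g in logg_values}

best = min(logg_values, key=lambda g: chi2(models[g], data, sigma=0.01))
print("best-fitting log g (placeholder grid):", best)
```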
Hartmann & Calvet (1995) show that the Fe ii lines in other FU Ori objects, such as FU Ori itself or V1515 Cyg, are consistent with wind absorption models. We do not see blueshifted absorption in these profiles, but they may trace a slower outflowing component.

Figure 17: Selected regions of the residuals of our fits to the HIRES spectra, compared with log \(g=1.5\), \(T_{\rm eff}=7000\) K (green) and \(T_{\rm eff}=9000\) K (magenta) PHOENIX models. Notice the residual features are consistent with the features seen in these hot model atmospheres, with a tendency for later epochs to more closely resemble the 7000 K atmosphere. This may indicate that the component of the system probed by the residuals is cooling as the disk cools. The HP lines in the lower right panel are strongly affected by Stark broadening in the PHOENIX atmospheres. That we do not see this in the disk model or the data indicates that the contribution from the narrower and weaker HP lines in cooler atmospheres is necessary to match the data.

For the other gravity-sensitive lines, such as Ti i \(\lambda\)8435, explaining the line evolution seen in the HIRES spectra with only a change in the disk gravity would require a very large log \(g\) increase (at least \(\sim\)2 dex). In general, we find that the overall spectrum is more consistent with the lower gravity models (log \(g\)\(\sim\) 1.5) than models that use a \(g(r)\) profile. However, this should also be tested with the gravity-sensitive features in the near-infrared (NIR), such as the CO (2-0) and (3-1) band heads. Ultimately, we find that it is critical to look at the disk-integrated model spectrum to study the gravity sensitivity of different features, rather than looking at them atmosphere by atmosphere. The gravity sensitivity of a given feature is a function of the effective temperature of the atmosphere and may not extend across broad temperature ranges. Therefore, the temperature blending in the full integrated disk spectrum may counteract or weaken the gravity sensitivity of lines like Ti i \(\lambda\)8435.

The evolution of the Fe i and Ca i lines, on the other hand, is insensitive to log \(g\) but consistent with a \(\sim\)1000-2000 K decrease in \(T_{\rm eff}\). The lower-EP Fe i lines are especially temperature sensitive and grow rapidly as the target fades. The higher-EP Fe i lines do not vary as significantly initially, but consistently show a growing narrow central absorption feature at later times. We also see a similar pattern in the Ca i features. The Ca i \(\lambda\)6439 and \(\lambda\)6462 features show little evolution except for growth of the narrow central absorption. The relative lack of gravity sensitivity and the significant temperature sensitivity of these two species in the temperature ranges we expect (5000-7000 K) indicate that the narrow central absorption is not tracing a higher-gravity component of the system but rather one that is cooling.

### Wind Features

The wind lines shown in Section 5.1 show evidence of a multicomponent outflow that evolves over the course of the postoutburst fading. The primary components we identify are a fast-moving (\(-\)200 km s\({}^{-1}\)) component traced by the H\(\alpha\) and H\(\beta\) lines, a slower component (\(-\)50 to \(-\)30 km s\({}^{-1}\)) traced initially by H\(\alpha\), H\(\beta\), and the Ca ii IRT, and a very slow component (\(-\)10 to 0 km s\({}^{-1}\)), traced by all lines (the Na i D doublet in all epochs and the other lines at later epochs).
The velocities probed by the emission components in the [O i], [N ii], and [S ii] forbidden emission lines discussed in Section 6 are also consistent with some of the slower components seen in the wind lines.

The fastest component of the wind that is visible in the \(-\)200 km s\({}^{-1}\) absorption of the H\(\alpha\) and H\(\beta\) lines is similar in shape to that seen in the Na i D and H\(\beta\) lines of other FU Ori objects like V1057 Cyg and FU Ori (Hartmann & Calvet, 1995). This absorption is consistent with a high mass-outflow rate, \(\dot{M}_{\rm out}\), during the outburst, as demonstrated in models of disk winds during FU Ori outbursts (Calvet et al., 1993; Milliner et al., 2019). As the target fades, however, we see this high-velocity component disappear, indicating the \(\dot{M}_{\rm out}\) is much lower. Following the estimated \(\dot{M}_{\rm out}\)\(\sim\) 0.1 \(\dot{M}_{\rm acc}\) described in Calvet et al. (1993), we would expect that the high-velocity component would remain relatively deep, because we only predict a 40% decrease in \(\dot{M}_{\rm acc}\). To preserve the \(L_{\rm ov}/L_{\rm acc}\) they describe (which is also supported by Zhu et al., 2020), we would require a much more massive wind corresponding to the apparent velocity decrease, which would likely make it optically thick.

The slow component of the outflow, visible in H\(\alpha\), H\(\beta\), the Na D lines and the Ca ii IRT, ranges from \(-\)10 to \(-\)30 km s\({}^{-1}\). In the Na i D lines in particular, this component is saturated, indicating it may be a very massive outflow and absorbs against a large area of the disk from which the visible continuum arises. In the other lines, the component is initially more blueshifted (up to \(-\)50 km s\({}^{-1}\)) but slows to \(-\)10 km s\({}^{-1}\) over time. The persistence of the component in Na i, and its appearance in the H\(\alpha\), H\(\beta\), and Ca ii IRT lines at later epochs, suggests the outflow is present during the course of the initial outburst, throughout the fade and into the plateau. It may be that the faster outflow traced by the other lines obscures the slower component we see in Na i. This absorption is also consistent in velocity with the \(-\)30 km s\({}^{-1}\) component we see in the [S ii], [O i], and [N ii] forbidden emission (Section 6). This may be evidence of a disk wind similar to that seen in T Tauri stars (Whelan et al., 2021), though it may be much more massive.

Figure 18: The [O i] \(\lambda\)6300, [N ii] \(\lambda\)6583, and [S ii] \(\lambda\)6731 emission lines in the HIRES spectra. All three features are blueshifted, with emission components ranging from \(-\)60 km s\({}^{-1}\) to \(-\)10 km s\({}^{-1}\). The emission in the [O i] \(\lambda\)6300 and [N ii] \(\lambda\)6583 lines is mostly at higher velocities, whereas the [S ii] \(\lambda\)6731 line shows both a \(-\)40 km s\({}^{-1}\) component and a nearly 0 km s\({}^{-1}\) component.

The forbidden emission lines shown in Section 6 may allow us to differentiate between different outflow components. The [O i] \(\lambda\)6300 emission line has the highest critical density of the three emission features (\(n_{c}\sim 2\times 10^{6}\) cm\({}^{-3}\)) and also shows the highest-velocity emission feature (\(-50\) km s\({}^{-1}\)). This may be emission from a region closer to \(R_{\rm inner}\), where the wind density and escape velocity may be higher.
The [S ii] emission, which has the lowest critical density (\(n_{c}\sim 2\times 10^{4}\) cm\({}^{-3}\)), may in turn probe a region further from \(R_{\rm inner}\), and therefore has a lower velocity. Both the [S ii] and [O i] features also have \(v\sim 0\) km s\({}^{-1}\) emission features at similar strengths to their blueshifted emission. This very-low-velocity emission is consistent with the low-velocity components of the forbidden emission observed in T Tauri disks, which are believed to trace slow-moving disk winds (as opposed to the "high-velocity components" that are usually attributed to jets; see Pascucci et al. 2023 for a review). It is possible that due to the increase in \(T_{\rm eff}\) and \(L_{\rm bol}\) of the source irradiating the outer disk, the wind that is launched may be more massive than those in T Tauri systems. This would be consistent with the finding by Cruz-Saenz de Miera et al. (2023) that FU Ori outflows observed in cooler molecular emission are indeed more massive than those of T Tauri systems.

Using the dereddened fluxes of our SED models at each HIRES epoch, we are able to estimate the total flux in the emission lines. Though the target fades significantly over time, we find that the fluxes of the [O i] \(\lambda\)6300 and [S ii] \(\lambda\)6731 lines are relatively constant, perhaps increasing slightly over time. The measured median fluxes for the lines are \(\log\lambda F_{\lambda}(6300)=-14.2\pm 0.34\) and \(\log\lambda F_{\lambda}(6731)=-13.9\pm 0.11\) erg s\({}^{-1}\) cm\({}^{-2}\). Using the distance to the target of 1120 pc, we estimate line luminosities of \(\log(L_{6300}/L_{\odot})=-3.66\pm 0.34\) and \(\log(L_{6731}/L_{\odot})=-3.36\pm 0.11\). For the [O i] \(\lambda\)6300 line, these values are almost 1.5 dex greater than the \(\log(L_{6300}/L_{\odot})\sim-5\) reported for several classical T Tauri stars by Fang et al. (2018).

The H\(\alpha\) and Ca ii IRT lines show a strong redshifted emission component, centered at \(+60\) km s\({}^{-1}\), which grows stronger over time. Toward later epochs, the strong absorption features in H\(\alpha\) and Ca ii IRT also become dominated by blueshifted emission at \(-60\) km s\({}^{-1}\). The H\(\alpha\) profiles shown in Park et al. (2020) for their 2018 epochs continue to show the double-peaked H\(\alpha\) emission, and the peaks remain at \(v\sim\pm 60\) km s\({}^{-1}\). The velocities of the emission peaks are consistent with the \(v_{\rm kep}\) sin \(i\) of \(R_{\rm inner}\). This may be evidence of emission from the innermost radius of the disk, tracing the accretion boundary with the star.

We believe the broad/shrinking and narrow/growing low-velocity excess absorption components can be attributed to some sort of excess, outflowing material above the disk atmosphere. The rest-velocity absorption profile is consistent with observations of disk winds through [O i] emission profiles in face-on circumstellar disks (Fang et al. 2023). The notion of a hot excess component cooling and a cooler component appearing is also consistent with the behavior in the coronal X-ray emission of the system (Kuhn & Hillenbrand 2019).

## 8 Conclusion

In Paper I, we presented a means of modeling the SED of the V960 Mon system at outburst using information from our HIRES spectrum at outburst to constrain the SED fit and break some existing degeneracies between the physical parameters in the model.
We also presented a means of estimating the \(\dot{M}\) and \(R_{\rm inner}\) of the system for subsequent epochs as the system faded postoutburst. In this work, we used the disk parameters in Paper I to construct high-spectral-resolution models at each observational epoch in our high-spectral-resolution time series data set, in order to better understand the evolution of both disk and nondisk components of the V960 Mon system. We have shown:

1. Our high-resolution model disk spectrum accurately reproduces the evolution of disk absorption features across the 4000-9000 A range of the HIRES spectra during the postoutburst fading of the V960 Mon system.
2. The HIRES spectra show evidence of temperature evolution that is consistent with our predicted SED evolution of the system.
3. We are able to isolate absorption and emission from nondisk components of the V960 Mon system, including a strong multicomponent outflow, by subtracting our model disk spectrum from the HIRES spectra.
   1. We detect [O i], [N ii], and [S ii] emission that is consistent with the multicomponent forbidden emission from classical T Tauri disk systems, though these likely trace much more massive outflows because they tend to be as bright as shock emission from jets.
   2. At outburst, the spectra show a very massive, high-velocity outflowing wind that weakens and largely disappears as the target fades.
   3. Several high-EP lines in the spectra show strong and broad rest-velocity excess absorption, which weakens as the target fades. Correspondingly, the lower-EP lines show a narrow rest-velocity excess that strengthens as the target fades. We interpret this as evidence of a slow-moving, cooling outflow in the system.

Further high-resolution follow-up of the system will be crucial for understanding how it compares at later epochs to the older, more mature FU Ori outbursts that have been well studied.

## Acknowledgments

We thank the American Association of Variable Star Observers (AAVSO) for their dedicated high-cadence sampling of the postoutburst lightcurve of this target, which we have reproduced in Figure 1, and discussed for multiple bands in Paper I. We thank the anonymous referee for the detailed comments which helped improve the manuscript.

## Appendix A Optical HWHD Measurements and the Keplerian Disk Model

In past studies of FU Ori objects, HWHD or FWHD measurements of spectral lines have been used to estimate the location in the disk from which the spectral lines arise (Herbig et al. 2003; Park et al. 2020). However, repeated investigations of line widths in FU Ori objects have found little variation in them as a function of wavelength, particularly in the optical. Historically, this observation has been raised as evidence against the canonical \(T\propto R^{-3/4}\) and \(v\sim v_{K}\) models used. This is because the line width measurements are assumed to follow a velocity profile similar to that shown in Figure A1. The velocity profile is derived by taking the \(v_{\rm kep}\) at the luminosity-weighted mean radius of each wavelength bin in our outburst SED model. Contrary to these expectations, however, we find that measurements of the HWHD in our Keplerian, thin-disk, \(T\propto R^{-3/4}\) model are relatively consistent with those we measure in the HIRES spectra (see Section 3.3). Though there is some slope in the measurements, tending toward narrower lines at redder wavelengths, the correlation is not significant.
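For reference, a Figure A1-style velocity profile can be constructed as in the sketch below: for each wavelength, the luminosity-weighted mean radius of a \(T\propto R^{-3/4}\), blackbody-annulus disk is computed and the Keplerian velocity is evaluated there. The stellar mass, inner radius, and \(T_{\rm max}\) in the sketch are placeholders rather than the values fitted in Paper I.

```python
# Sketch of the expected v_kep versus wavelength curve for a T ~ R^(-3/4) disk:
# weight each annulus by its blackbody emission at the given wavelength, take the
# luminosity-weighted mean radius, and evaluate the Keplerian velocity there.
import numpy as np

G = 6.674e-8                        # cgs
M_sun, R_sun = 1.989e33, 6.957e10   # cgs
M_star = 0.6 * M_sun                # placeholder stellar mass
R_inner = 2.0 * R_sun               # placeholder inner radius
T_max = 8300.0                      # K, placeholder maximum disk temperature

def planck(wav_cm, T):
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16
    return 2 * h * c**2 / wav_cm**5 / np.expm1(h * c / (wav_cm * k * T))

r = np.linspace(1.0, 50.0, 2000) * R_inner     # annulus radii
T = T_max * (r / R_inner) ** (-0.75)           # simple T ~ R^(-3/4) profile
area = 2 * np.pi * r * np.gradient(r)          # annulus areas

for wav_A in (5000.0, 7000.0, 9000.0):
    weight = planck(wav_A * 1e-8, T) * area    # luminosity weight per annulus
    r_mean = np.sum(weight * r) / np.sum(weight)
    v_kep = np.sqrt(G * M_star / r_mean) / 1e5 # km/s
    print(f"{wav_A:.0f} A: <r> = {r_mean / R_inner:.1f} R_inner, v_kep = {v_kep:.0f} km/s")
```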
Furthermore, the widths of individual spectral lines are not sufficient to estimate the radial locations of the annuli from which the lines arise, due to the differential broadening effect described in Section 3.2. The limited utility of HWHD measurements of spectral lines in the visible range is also pointed out in Zhu et al. (2009), where the authors argue that much broader wavelength ranges must be used. Zhu et al. (2009) find that lines around 5 \(\mu\)m do indeed have significantly narrower profiles than those in the visible.

## Appendix B Attempts to Fit the TiO Band Heads

We generally find some inconsistency between the predictions of the disk model in the NIR and the data, particularly in its ability to match molecular features. In Paper I, we show that our model is capable of reproducing the spectrophotometry of V960 Mon in the NIR near the outburst peak. However, the model fails at later epochs when the H\({}_{2}\)O \(H\)-band features are significantly deeper. The difficulty in matching molecular absorption consistently across our disk model begins near the 8600 A region, where the model predicts multiple strong TiO bands that should be visible between \(I\) and \(Y\) band, but they do not appear in the data.

The problem with the red/NIR TiO band heads in FU Ori objects is also mentioned by Herbig et al. (2003) in their attempts to use a disk model to model the spectra of FU Ori and V1057 Cyg. Their model uses only three stellar templates, chosen to represent three different temperatures in the disk, but they find the same problem, indicating that the issue does not arise from our choice of spectral model grid. Herbig et al. (2003) find that in both objects, the model predicts a very strong TiO \(\lambda\)8860 band head, though in FU Ori and V1057 Cyg (as in V960 Mon), only the HP line at \(\lambda\)8860 appears in the object spectra.

In our attempts to fit the TiO band head at 8860 A, we find that the band head is best replicated by having a disk that truncates at \(R_{\rm outer}=12~R_{\odot}\), which is very compact, extending only to 6 \(R_{\rm inner}\). A comparison between the band head in the data and the models with varying \(R_{\rm outer}\) values is shown in Figure 11. An active disk component truncated at such a short radius is problematic, however, because it would have an outermost temperature of 4500 K. With this as the outermost (minimum) temperature of the active disk, no region of the active component would be cool enough to produce any of the NIR molecular features.

## Appendix C All Orders of the HIRES Spectra and Residuals

Figure C1 shows all of the HIRES spectral orders from our seven observations, including those shown in Figure 2.

## ORCID IDs

Adolfo Carvalho [https://orcid.org/0000-0002-9540-853X](https://orcid.org/0000-0002-9540-853X)
2303.15021
Propositional superposition logic
We extend classical Propositional Logic (PL) by adding a new primitive binary connective $\varphi|\psi$, intended to represent the "superposition" of sentences $\varphi$ and $\psi$, an operation motivated by the corresponding notion of quantum mechanics, but not intended to capture all aspects of the latter as they appear in physics. To interpret the new connective, we extend the classical Boolean semantics by employing models of the form $\langle M,f\rangle$, where $M$ is an ordinary two-valued assignment for the sentences of PL and $f$ is a choice function for all pairs of classical sentences. In the new semantics $\varphi|\psi$ is strictly interpolated between $\varphi\wedge\psi$ and $\varphi\vee\psi$. By imposing several constraints on the choice functions we obtain corresponding notions of logical consequence relations and corresponding systems of tautologies, with respect to which $|$ satisfies some natural algebraic properties such as associativity, closedness under logical equivalence and distributivity over its dual connective. Thus various systems of Propositional Superposition Logic (PLS) arise as extensions of PL. Axiomatizations for these systems of tautologies are presented and soundness is shown for all of them. Completeness is proved for the weakest of these systems. For the other systems completeness holds if and only if every consistent set of sentences is extendible to a consistent and complete one, a condition whose truth is closely related to the validity of the deduction theorem.
Athanassios Tzouvaras
2023-03-27T09:13:46Z
http://arxiv.org/abs/2303.15021v1
# Propositional superposition logic

###### Abstract

We extend classical Propositional Logic (PL) by adding a new primitive binary connective \(\varphi|\psi\), intended to represent the "superposition" of sentences \(\varphi\) and \(\psi\), an operation motivated by the corresponding notion of quantum mechanics, but not intended to capture all aspects of the latter as they appear in physics. To interpret the new connective, we extend the classical Boolean semantics by employing models of the form \(\langle M,f\rangle\), where \(M\) is an ordinary two-valued assignment for the sentences of PL and \(f\) is a choice function for all pairs of classical sentences. In the new semantics \(\varphi|\psi\) is strictly interpolated between \(\varphi\wedge\psi\) and \(\varphi\vee\psi\). By imposing several constraints on the choice functions we obtain corresponding notions of logical consequence relations and corresponding systems of tautologies, with respect to which \(|\) satisfies some natural algebraic properties such as associativity, closedness under logical equivalence and distributivity over its dual connective. Thus various systems of Propositional Superposition Logic (PLS) arise as extensions of PL. Axiomatizations for these systems of tautologies are presented and soundness is shown for all of them. Completeness is proved for the weakest of these systems. For the other systems completeness holds if and only if every consistent set of sentences is extendible to a consistent and complete one, a condition whose truth is closely related to the validity of the deduction theorem.

Department of Mathematics, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece

e-mail: [email protected]

_Mathematics Subject Classification (2010)_: 03B60, 03G12

_Keywords:_ The logical connective for superposition. Choice function for pairs of sentences. Associative, regular, \(\neg\)-decreasing choice functions/orderings. Propositional superposition logics (PLS).

## Introduction

In this paper we present an extension of classical Propositional Logic (PL) (more precisely, an array of extensions of increasing strength), obtained by adding a new binary logical operation, called "superposition", together with a new semantics extending the standard one, inspired and motivated by the corresponding notion of quantum mechanics.

That the notion of superposition is central in quantum mechanics is rather well-known. For the sake of completeness let us outline briefly the core of the idea. A quantum system \(A\), for example an electron or a photon, can be only in finitely many possible "states" (or rather "pure states", see [3]) with respect to a certain physical magnitude \(Q\) such as spin, charge, etc. Suppose for simplicity that \(A\) can be only in two possible states, say "spin up" and "spin down". We know that whenever the spin of \(A\) is measured, the outcome will necessarily be either "spin up" or "spin down" but one cannot predict it in advance precisely, except only with a certain degree of probability. While unobserved, \(A\) is thought to be at some kind of a mixture or composition of these states, called _superposition of states._ But as soon as the system \(A\) is scanned and measured, the superposition breaks down, or "collapses" according to the jargon of quantum mechanics, to one of the constituent states. So in a sense, the states "spin up" and "spin down" co-exist and at the same time exclude each other.
From its very beginning quantum mechanics had developed an effective and flexible formalism to represent the states of a system (see [3] for a brief overview of the subject), namely as vectors of a Hilbert space. For example the pure states "spin up" and "spin down" are represented by vectors \(\vec{u}_{0}\), \(\vec{u}_{1}\), respectively. Then the "principle of superposition" says that for any complex numbers \(c_{0}\), \(c_{1}\) such that \(|c_{0}|^{2}+|c_{1}|^{2}=1\), the linear combination \(c_{0}\vec{u}_{0}+c_{1}\vec{u}_{1}\) is also a legitimate state of the system \(A\). Moreover, \(|c_{0}|^{2}\), \(|c_{1}|^{2}\) represent the probabilities for \(A\) to be in state \(\vec{u}_{0}\) or \(\vec{u}_{1}\), respectively, when measured. This treatment of superposition as a linear combination of vectors is mainly due to P.A.M. Dirac1, who considered the principle of superposition as one of the most fundamental properties of quantum mechanics. Footnote 1: P.A.M. Dirac, _The Principles of Quantum Mechanics,_ Oxford U.P., 1958. Later on a new approach to quantum mechanics through _quantum logics_ was developed by the work of G. Birkhoff- J. von Neumann2, G. Mackey3 and others. Here the emphasis was in the formalization of _non-distributivity,_ another characteristic phenomenon of quantum mechanics, and it was not clear whether and how non-distributivity and superposition were related to each other. As S. Gudder says4, the problem arose to find a formulation of the principle of superposition in the quantum logic approach, roughly equivalent to Dirac's formulation in the vector-space approach. Various such formulations of superposition can be found in the literature.5 Note that all versions of quantum logic are weaker than classical logic, since they lack the distributivity law. Footnote 4: S.P. Gudder, A superposition principle in physics, _J. Math. Phys._**11** (1970), no. 3, 1037-1040. Footnote 5: See for example S.P.Gudder above and S. Pulmannová, A superposition principle in quantum logics, _Commun. Math. Phys._**49** (1976), no. 3, 47-51. An important source of inspiration for the present work has been E. Schrodinger's 1935 paper [11] containing the "cat paradox", in which the author shows, by his famous thought experiment, how superposition of quantum states might (in principle) be transformed into superposition of _macroscopic situations._ Although Schrodinger refers to the experiment with ridicule, as a "serious misgiving arising if one notices that the uncertainty affects macroscopically tangible and visible things", it is perhaps the first hint towards thinking that the phenomenon could be conceived in a much broader sense, even in contexts different from the original one. And it is this general, abstract and purely _logical content_ of superposition that we are interested in and deal with in this paper.6 Footnote 6: As already said earlier, whether the logical content of superposition, as isolated here, bears actual _connections_ with and/or _applications_ to the existing systems of quantum mechanics and quantum logic is not known at present. Some further comments on this issue are made in the last section. 
In particular, the purpose of the paper is to offer a simple interpretation of superposition not by means of a variant of quantum logic, but rather by an _extension of classical logic._ The interpretation is absolutely within classical reasoning and common sense, since we do not drop any law of classical logic, but only augment them by new ones concerning the superposition operation. The ingredient that makes it possible to go beyond classical tautologies is the use at each truth evaluation of a _choice function_ acting upon pairs of sentences, a tool originating in set-theory rather than logic. Let \(\varphi_{0}\), \(\varphi_{1}\) denote the statements "\(A\) is at state \(\vec{u}_{0}\)" and "\(A\) is at state \(\vec{u}_{1}\)", respectively, and let \(\varphi_{0}|\varphi_{1}\) denote the statement "\(A\) is at the superposition of states \(\vec{u}_{0}\) and \(\vec{u}_{1}\)". \(\varphi_{0}\), \(\varphi_{1}\) are ordinary statements, so they can be assigned ordinary truth values. But what about the truth values of \(\varphi_{0}|\varphi_{1}\)? Clearly the operation \(\varphi_{0}|\varphi_{1}\) cannot be expressed in classical logic, that is, \(\varphi_{0}|\varphi_{1}\) cannot be logically equivalent to a Boolean combination \(S(\varphi_{0},\varphi_{1})\) of \(\varphi_{0}\), \(\varphi_{1}\).7 However, an intriguing feature of \(\varphi_{0}|\varphi_{1}\) is that it has points in common _both_ with classical conjunction and classical disjunction. In a sense it is a "mixture" of \(\varphi_{0}\wedge\varphi_{1}\) and \(\varphi_{0}\vee\varphi_{1}\), or a property between them, since it bears a conjunctive as well as a disjunctive component. Indeed, \(\varphi_{0}|\varphi_{1}\) means on the one hand that the properties \(\varphi_{0}\) and \(\varphi_{1}\) hold _simultaneously_ (at least partly) during the non-measurement phase, which is clearly a conjunctive component of \(\varphi_{0}|\varphi_{1}\), and on the other, at any particular collapse of the superposed states during a measurement, \(\varphi_{0}|\varphi_{1}\) reduces to either \(\varphi_{0}\)_or_\(\varphi_{1}\), which is a disjunctive component of the operation. The interpretation of \(\varphi_{0}|\varphi_{1}\) given below justifies in fact this meaning of \(\varphi_{0}|\varphi_{1}\) as "something between \(\varphi_{0}\wedge\varphi_{1}\) and \(\varphi_{0}\vee\varphi_{1}\)". Footnote 7: As is well-known there exist precisely 16 classical binary operations \(S(\varphi_{0},\varphi_{1})\), definable in terms of \(\wedge\), \(\vee\) and \(\neg\) (including \(\wedge\), \(\vee\) themselves, and also \(\rightarrow\), \(\leftrightarrow\), their negations, as well as other trivial ones), none of which can express the logical content of \(|\). Let us consider a propositional language \(L=\{p_{0},p_{1},\ldots\}\cup\{\wedge,\vee,\neg\}\), where \(p_{i}\) are symbols of atomic propositions, whose interpretations are usual two-valued truth assignments \(M:Sen(L)\rightarrow\{0,1\}\) respecting the classical truth tables. Let us extend \(L\) to \(L_{s}=L\cup\{|\}\), where \(|\) is a new primitive binary connective. For any sentences \(\varphi,\psi\) of \(L_{s}\), \(\varphi|\psi\) denotes the superposition of \(\varphi\) and \(\psi\). Then an interpretation for the sentences of \(L_{s}\) can be given by the help of a truth assignment \(M\) for the sentences of \(L\), together with a _collapsing mapping_\(\mathsf{c}\) from the sentences of \(L_{s}\) into those of \(L\). 
The mapping \(\mathsf{c}\) is intended to represent the collapsing of the superposed \(\varphi|\psi\) to one of its components. The basic idea is that the collapse of the composite state \(c_{0}\vec{u}_{0}+c_{1}\vec{u}_{1}\) to one of the states \(\vec{u}_{0}\), \(\vec{u}_{1}\) can be seen, _from the point of view of pure logic,_ just as a (more or less random) choice from the set of possible outcomes \(\{\vec{u}_{0},\vec{u}_{1}\}\). This is because from the point of view of pure logic probabilities are irrelevant or, which amounts to the same thing, the states \(\vec{u}_{0}\) and \(\vec{u}_{1}\) are considered equiprobable. In such a case the superposition of \(\vec{u}_{0}\) and \(\vec{u}_{1}\) is unique and the outcome of the collapse can be decided by a coin toss or, more strictly, by a _choice function_ acting on pairs of observable states, which in our case coincide with pairs of sentences of \(L\). This of course constitutes a major deviation from the standard treatment of superposition, according to which there is not just one superposition of \(\vec{u}_{0}\) and \(\vec{u}_{1}\) but infinitely many, actually as many as the number of linear combinations \(c_{0}\vec{u}_{0}+c_{1}\vec{u}_{1}\), for \(|c_{0}|^{2}+|c_{1}|^{2}=1\). So the logic presented here is hardly the logic of superposition as this concept is currently used and understood in physics today. It is rather the logic of superposition, when the latter is understood as the "logical extract" of the corresponding physics concept. Whether it could eventually have applications to the field of quantum mechanics we don't know. The elementary requirements for a collapsing map \(\mathsf{c}\) are the following: (a) it must be the identity on classical sentences, that is, \(\mathsf{c}(\varphi)=\varphi\) for every \(L\)-sentence \(\varphi\). (b) It must commute with the standard connectives \(\wedge\), \(\vee\) and \(\neg\), that is, \(\mathsf{c}(\varphi\wedge\psi)=\mathsf{c}(\varphi)\wedge\mathsf{c}(\psi)\), \(\mathsf{c}(\varphi\vee\psi)=\mathsf{c}(\varphi)\vee\mathsf{c}(\psi)\) and \(\mathsf{c}(\neg\varphi)=\neg\mathsf{c}(\varphi)\). (c) \(\mathsf{c}(\varphi|\psi)\) must be _one_ of the sentences \(\mathsf{c}(\varphi)\), \(\mathsf{c}(\psi)\), which is chosen by the help of a choice function \(f\) for pairs of classical sentences, that is, \[\mathsf{c}(\varphi|\psi)=f(\{\mathsf{c}(\varphi),\mathsf{c}(\psi)\}).\] Since every sentence of \(L_{s}\) is built from atomic sentences all of which belong to the initial classical language \(L\), it follows that \(\mathsf{c}\) is _fully determined_ by the choice function \(f\), and below we shall write \(\mathsf{c}=\overline{f}\). Therefore choice functions \(f\) for pairs of sentences of \(L\) are the cornerstone of the new semantics. Given a truth assignment \(M\) for \(L\) and a choice function \(f\) for \(L\), a sentence \(\varphi\) of \(L_{s}\) is _true in \(M\) under the choice function \(f\),_ denoted \(\langle M,f\rangle\models_{s}\varphi\), if and only if \(\overline{f}(\varphi)\) is (classically) true in \(M\). That is: \[\langle M,f\rangle\models_{s}\varphi\text{ iff }M\models\ \overline{f}(\varphi).\] Since \(\overline{f}\) is generated by \(f\), special conditions on \(f\) induce special properties for \(\overline{f}\) that in turn affect the properties of \(\models_{s}\). Such a condition is needed, for instance, in order for \(|\) to be associative. 
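Before proceeding, it may help to see how small the computational content of requirements (a)-(c) and of the above truth definition really is. The following Python fragment is only an informal illustration of the mechanism just described, not part of the formal development of Section 2; the representation of sentences as nested tuples and all function names are conveniences introduced here, not notation of the paper.

```python
# Informal sketch of the collapsing map c (written here as `collapse`) and of
# the truth relation |=_s.  Sentences of L_s are nested tuples: an atom is a
# string such as 'p'; ('not', a), ('and', a, b), ('or', a, b) are the classical
# connectives; ('sup', a, b) stands for the superposition a|b.  A choice
# function f maps a frozenset {a, b} of classical sentences to one of a, b.

def collapse(phi, f):
    """Clause (a): identity on atoms; clause (b): commutes with the classical
    connectives; clause (c): resolves 'sup' by applying f to the pair of the
    already collapsed components."""
    if isinstance(phi, str):                        # atomic sentence
        return phi
    op, *args = phi
    if op == 'not':
        return ('not', collapse(args[0], f))
    if op in ('and', 'or'):
        return (op, collapse(args[0], f), collapse(args[1], f))
    if op == 'sup':
        a, b = collapse(args[0], f), collapse(args[1], f)
        return f(frozenset({a, b}))                 # a choice from {a, b}
    raise ValueError(f'unknown connective: {op}')

def classical_truth(M, alpha):
    """Ordinary two-valued evaluation of an L-sentence under the assignment M,
    given as a dict from atoms to booleans."""
    if isinstance(alpha, str):
        return M[alpha]
    op, *args = alpha
    if op == 'not':
        return not classical_truth(M, args[0])
    if op == 'and':
        return classical_truth(M, args[0]) and classical_truth(M, args[1])
    if op == 'or':
        return classical_truth(M, args[0]) or classical_truth(M, args[1])
    raise ValueError(f'unknown connective: {op}')

def sat(M, f, phi):
    """<M, f> |=_s phi  iff  M |= collapse(phi, f)."""
    return classical_truth(M, collapse(phi, f))
```

Evaluating the same sentence under different choice functions already exhibits the basic phenomenon exploited below: the truth value of a superposed sentence in a fixed \(M\) may depend on \(f\).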
The above truth concept \(\models_{s}\) extends the classical one and induces the notions of s-logical consequence, \(\varphi\models_{s}\psi\), and s-logical equivalence \(\sim_{s}\), which generalize the corresponding standard relations \(\models\) and \(\sim\). A nice feature of the new semantics is that for all sentences \(\varphi\), \(\psi\), \[\varphi\wedge\psi\models_{s}\varphi|\psi\models_{s}\varphi\vee\psi, \tag{1}\] where the relations \(\models_{s}\) in both places are strict, that is, they cannot in general be reversed (see Theorem 2.8 below). It means that \(\varphi|\psi\) is _strictly interpolated_ between \(\varphi\wedge\psi\) and \(\varphi\vee\psi\), a fact that in some sense makes precise the intuition expressed above that \(\varphi|\psi\) is a "mixture" of \(\varphi\wedge\psi\) and \(\varphi\vee\psi\). In particular, \[\varphi\wedge\neg\varphi\models_{s}\varphi|\neg\varphi\models_{s}\varphi\lor \neg\varphi,\] which means that the superposition of two contradictory situations, like those in Schrödinger's cat experiment [11] mentioned above, is neither a contradiction nor a paradox at all (see Corollary 2.9 below). Another nice feature of the semantics is that in order for \(|\) to be associative with respect to a structure \(\langle M,f\rangle\), that is, \(\langle M,f\rangle\models_{s}\varphi|(\psi|\sigma)\leftrightarrow(\varphi| \psi)|\sigma\), it is necessary and sufficient for \(f\) to coincide with the function \(\min_{<}\) induced by a total ordering \(<\) of the set of sentences of \(L\). Such an \(f=\min_{<}\) picks from each pair \(\{\alpha,\beta\}\) not a "random" element but the _least_ one with respect to \(<\). This kind of choice function will be the dominant one throughout the paper. No knowledge of quantum mechanics or quantum logic is required for reading this paper. The only prerequisite is knowledge of basic Propositional Logic (PL), namely its semantics, axiomatization and soundness and completeness theorems, as well as some elementary set-theoretic facts concerning choice functions for sets of finite sets, total orderings etc. For example [4] is one of the many logic texts that contain the necessary material. Nevertheless, some familiarity with non-classical logics, their axiomatization and their semantics, would be highly helpful. Also for the subject of choice functions and choice principles the reader may consult [7]. Finally, I should mention some other current treatments of superposition from a logical point of view, although one can hardly find points of overlap or convergence between them and the present one. Such logical approaches are contained in [2], [8] and [1], to mention the most recent ones. The main difference of these approaches from the present one is that they are all based on some non-classical logical system, while our point of departure is the solid ground of classical propositional logic. For instance [2] relies heavily on paraconsistent logic, which allows one to accommodate contradictions without collapsing the system. In fact superposition is captured in [2] as a "contradictory situation": if a quantum system \(S\) is in the state of superposition of the states \(s_{1}\) and \(s_{2}\), this is expressed by the help of a two-place predicate \(K\) and the conjunction of axioms \(K(S,s_{1})\), \(\neg K(S,s_{1})\), \(K(S,s_{2})\) and \(\neg K(S,s_{2})\). (Here the negation \(\neg\) is "weak" and the conjunction of these claims is not catastrophic.) 
Analogously, [8] uses a version of modal logic in an enriched language that, besides \(\neg\), \(\wedge\), \(\vee\) and \(\diamond\) (possibility operator), contains a binary connective \(\star\) for the superposition operation and a unary connective \(M\) for "measurement has been done". Also a Kripke semantics is used, and the basic idea, as I understood it, is to avoid the contradiction arising e.g. from Schrödinger's cat, by "splitting" it, after the measurement, between two different possible worlds, one containing the cat alive and one containing the cat dead. Finally [1] is more syntactically oriented. It treats superposition syntactically by employing a version of sequent calculus called "basic logic" (developed in [10]), which encompasses aspects of linear logic and quantum logic.

**Summary of Contents.** Section 2 contains the semantics of \(|\) based on choice functions for pairs of sentences of \(L\). More specifically, in subsection 2.1 we give the basic definitions of the new semantics and the corresponding notions of logical consequence \(\models_{s}\) and logical equivalence \(\sim_{s}\). The models for the sentences of \(L_{s}\) are structures of the form \(\langle M,f\rangle\), where \(M\) is a truth assignment to sentences of \(L\) and \(f\) is an arbitrary choice function for \(L\). We prove the basic facts, among which the fact that \(\varphi\wedge\psi\models_{s}\varphi|\psi\models_{s}\varphi\vee\psi\). The properties of \(|\) supported by such general structures are only \(\varphi|\varphi\leftrightarrow\varphi\) (idempotence) and \(\varphi|\psi\leftrightarrow\psi|\varphi\) (commutativity). In order to obtain further properties for \(|\) we need to impose additional conditions on the choice functions employed, which entail more and more refined truth notions. In general if \(\mathcal{F}\) is the set of all choice functions for \(L\), for any nonempty \(X\subseteq\mathcal{F}\) the relations \(\models_{X}\), of \(X\)-logical consequence, and \(\sim_{X}\), of \(X\)-logical equivalence, are naturally defined by employing models \(\langle M,f\rangle\) with \(f\in X\) (rather than \(f\in\mathcal{F}\)). For each such \(X\subseteq\mathcal{F}\) the set of \(X\)-tautologies \(Taut(X)=\{\varphi:\models_{X}\varphi\}\) is defined. In the next subsections of §2 we focus on certain natural such subclasses \(X\subseteq\mathcal{F}\) and the corresponding truth notions. In subsection 2.2 we introduce the class _Asso_ of _associative_ choice functions and a simple and elegant characterization of them is given: they are precisely the functions \(\min_{<}\) with respect to total orderings \(<\) of the set of \(L\)-sentences. The term comes from the fact that if \(f\in Asso\), then \(|\) is associative with respect to every structure \(\langle M,f\rangle\). A kind of converse also holds: If \(|\) is associative with respect to \(\langle M,f\rangle\), then \(f\) is "essentially associative". In subsection 2.3 we introduce the class \(Reg\) of _regular_ choice functions, as well as the finer class \(Reg^{*}=Reg\cap Asso\). Regularity guarantees that the truth relation \(\models_{Reg}\), as well as \(\models_{Reg^{*}}\), is "logically closed", that is, for any subsentence \(\sigma\) of \(\varphi\) and any \(\sigma^{\prime}\sim_{Reg}\sigma\), \(\varphi[\sigma^{\prime}/\sigma]\) and \(\varphi\) are equivalent in \(\langle M,f\rangle\), with \(f\in Reg\). In subsection 2.4 we introduce the even finer class \(Dec\) of \(\neg\)_-decreasing_ regular associative functions, that is, \(Dec\subset Reg^{*}\). 
A total ordering \(<\) of \(Sen(L)\) is \(\neg\)-decreasing if and only if for all \(\alpha\), \(\beta\), \(\alpha<\beta\Leftrightarrow\neg\beta<\neg\alpha\). \(f\) is \(\neg\)-decreasing if and only if \(f=\min_{<}\) for some \(\neg\)-decreasing total ordering \(<\). The existence of \(\neg\)-decreasing regular total orderings of \(Sen(L)\) is shown and a syntactic characterization of \(\neg\)-decreasingness is given. In subsection 2.5 we consider the dual connective \(\varphi\circ\psi:=\neg(\neg\varphi|\neg\psi)\) of \(|\) and show that it commutes with \(|\) if and only if the choice functions involved are \(\neg\)-decreasing. Section 3 is devoted to the axiomatization of Propositional Superposition Logic(s) (PLS). In the general part of the section we give axiomatizations for the logics based on the sets of choice functions \(\mathcal{F}\), \(Reg\), \(Reg^{*}\) and \(Dec\). In general for every set \(X\subseteq\mathcal{F}\) of choice functions and every set \(K\subseteq Taut(X)\) of tautologies with respect to the truth notion \(\models_{X}\), a logic PLS\((X,K)\) is defined, whose axioms are those of PL plus \(K\) and whose semantics is the relation \(\models_{X}\). Within PLS\((X,K)\), \(K\)-consistency is defined and the Soundness Theorem is proved for every logic PLS\((X,K)\) with \(K\subseteq Taut(X)\). Next we introduce specific axiomatizations (by finitely many schemes of axioms) \(K_{0}\), \(K_{1}\), \(K_{2}\), \(K_{3}\) for the truth relations defined by the classes \(\mathcal{F}\), \(Reg\), \(Reg^{*}\) and \(Dec\), respectively. The logics PLS\((\mathcal{F},K_{0})\), PLS\((Reg,K_{1})\), PLS\((Reg^{*},K_{2})\), PLS\((Dec,K_{3})\) are sound as a consequence of the previous general fact. There exists an essential difference between the axiomatization of \(\mathcal{F}\) and those of the remaining systems \(Reg\), \(Reg^{*}\) and \(Dec\). The difference consists in the fact that \(K_{1}\)-\(K_{3}\) contain an extra inference rule (besides Modus Ponens), because of which it is open whether the Deduction Theorem (DT) holds. This has serious effects on the completeness of the systems based on \(K_{1}\)-\(K_{3}\). So we split the examination of completeness for PLS\((\mathcal{F},K_{0})\) on the one hand and for the remaining systems on the other. In subsection 3.1 we prove the (unconditional) completeness of the system PLS\((\mathcal{F},K_{0})\). In subsection 3.2 we examine completeness for the logics PLS\((Reg,K_{1})\), PLS\((Reg^{*},K_{2})\) and PLS\((Dec,K_{3})\). The possible failure of DT makes it necessary to distinguish between two forms of completeness, CT1 and CT2, which in the absence of DT need not be equivalent. CT1 implies CT2 but the converse is open. Concerning the systems \(K_{1}\)-\(K_{3}\), we are seeking to prove CT2 rather than CT1. We show that these systems are _conditionally complete_ in the sense that each of these systems is CT2-complete if and only if the corresponding \(K_{i}\) satisfies a certain extendibility property \(cext(K_{i})\) saying that every \(K_{i}\)-consistent set of sentences can be extended to a \(K_{i}\)-consistent and complete set. This property is trivial for formal systems \(K\) satisfying DT, but is open for systems for which DT is open. Assuming that \(cext(K_{i})\) is true, the proofs of CT2-completeness for the above logics are all variations of the proof of completeness of PLS\((\mathcal{F},K_{0})\). On the other hand, failure of \(cext(K_{i})\) implies the failure of CT2-completeness of the corresponding system. 
In general, the proof of (CT2-)completeness of a logic PLS\((X,K)\), with \(K\subset Taut(X)\), goes roughly as follows: start with a \(K\)-consistent and complete set \(\Sigma\subset Sen(L_{s})\). To prove it is \(X\)-verifiable, pick \(\Sigma_{1}=\Sigma\cap Sen(L)\). Then \(\Sigma_{1}\) is a consistent and complete set in the sense of PL. So by completeness of the latter there exists a two-valued assignment \(M\) such that \(M\models\Sigma_{1}\). Then in order to prove the \(X\)-verifiability of \(\Sigma\), it suffices to define a choice function \(f\) such that \(f\in X\) and \(\langle M,f\rangle\models\Sigma\). Finally in section 4 we describe briefly two goals for future research, namely, (1) the goal to find alternative semantics for the logics PLS, and (2) to develop a superposition extension of first-order logic (FOL) with an appropriate semantics and complete axiomatization. ## 2 Semantics of superposition propositional logic based on choice functions ### Definitions and basic facts Let us fix a propositional language \(L=\{p_{0},p_{1},\ldots\}\cup\{\neg,\wedge\}\), where \(p_{i}\) are symbols of atomic sentences. The other connectives \(\vee\), \(\rightarrow\), \(\leftrightarrow\) are defined as usual in terms of the previous ones.8 Let \(Sen(L)\) denote the set of sentences of \(L\). Throughout \(M\) will denote some _truth assignment_ for the sentences of \(L\), that is, a mapping \(M:Sen(L)\rightarrow\{0,1\}\) that is defined according to the standard truth tables. For a given \(\alpha\in Sen(L)\) we shall use the notation \(M\models\alpha\), \(M\models\neg\alpha\) instead of \(M(\alpha)=1\) and \(M(\alpha)=0\), respectively, for practical reasons. Namely, below we shall frequently refer to the truth of sentences denoted \(\overline{f}(\varphi)\), so it would be more convenient to write \(M\models\overline{f}(\varphi)\) than \(M(\overline{f}(\varphi))=1\). Footnote 8: The functions \(\overline{f}\) defined below are going to respect classical connectives, and hence classical equivalences, so it makes no difference if, e.g., we define \(\varphi\rightarrow\psi\) as \(\neg(\varphi\wedge\neg\psi)\) or \(\neg(\neg\psi\wedge\phi)\). Let \(L_{s}=L\cup\{|\}\), where \(|\) is a new primitive binary logical connective. The set of atomic sentences of \(L_{s}\) are identical to those of \(L\), while the set of sentences of \(L_{s}\), \(Sen(L_{s})\), is recursively defined along the obvious steps: If \(\varphi,\psi\in Sen(L_{s})\), then \(\varphi\wedge\psi\), \(\varphi|\psi\), \(\neg\varphi\) belong to \(Sen(L_{s})\). **Basic notational convention.** To keep track of whether we refer, at each particular moment, to sentences of \(L\) or \(L_{s}\), throughout the letters \(\varphi\), \(\psi\), \(\sigma\) will denote general sentences of \(L_{s}\), while the letters \(\alpha\), \(\beta\), \(\gamma\) will denote sentences of \(L\) only. Also we often refer to sentences of \(L\) as "classical". Throughout given a set \(A\) we let \[[A]^{2}=\{\{a,b\}:a,b\in A\}\] denote the set of all 2-element and 1-element subsets of \(A\). We refer to the elements of \([A]^{2}\) as _pairs_ of elements of \(A\). A _choice function_ for \([A]^{2}\) is as usual a mapping \(f:[A]^{2}\to A\) such that \(f(\{a,b\})\in\{a,b\}\) for every \(\{a,b\}\in[A]^{2}\). To save brackets we write \(f(a,b)\) instead of \(f(\{a,b\})\). 
So in particular \(f(a,b)=f(b,a)\) and \(f(a,a)=a\).9 Footnote 9: The claim of the existence of a choice function for the set \([A]^{2}\), for every set \(A\), is a weak form of the axiom of choice \((AC)\), denoted C\({}_{2}\) in [7]. In general for every \(n\in\mathbb{N}\), C\({}_{n}\) denotes the principle that every set of \(n\)-element sets has a choice function. The interested reader may consult [7, section 7.4] for various interrelations between such principles, as well as with the Axiom of Choice for Finite Sets (saying that every nonempty set of nonempty finite sets has a choice function). See also Remark 2.16 below. **Definition 2.1**: Given the language \(L\), a choice function for \([Sen(L)]^{2}\), the set of pairs of sentences of \(L\), will be referred to as a _choice function for \(L\)._ Let \[{\cal F}(L)=\{f:\ f\ \mbox{is a choice function for }L\}.\] Throughout we shall write more simply \({\cal F}\) instead of \({\cal F}(L)\). Below the letters \(f,g\) will range over elements of \({\cal F}\) unless otherwise stated. In particular for all \(\alpha,\beta\in Sen(L)\), we write \(f(\alpha,\beta)\) instead of \(f(\{\alpha,\beta\})\), so \(f(\alpha,\beta)=f(\beta,\alpha)\) and \(f(\alpha,\alpha)=\alpha\). **Definition 2.2**: Let \(f\) be a choice function for \(L\). Then \(f\) generates a _collapsing function_\(\overline{f}:Sen(L_{s})\to Sen(L)\) defined inductively as follows: (i) \(\overline{f}(\alpha)=\alpha\) for every \(\alpha\in Sen(L)\). (ii) \(\overline{f}(\varphi\wedge\psi)=\overline{f}(\varphi)\wedge\overline{f}(\psi)\). (iii) \(\overline{f}(\neg\varphi)=\neg\ \overline{f}(\varphi)\). (iv) \(\overline{f}(\varphi|\psi)=f(\overline{f}(\varphi),\overline{f}(\psi))\). **Remarks 2.3**: (i) Since the connectives \(\vee\) and \(\to\) are defined in terms of \(\neg\) and \(\wedge\), \(\overline{f}\) commutes also with respect to them, that is, \(\overline{f}(\varphi\vee\psi)=\overline{f}(\varphi)\vee\overline{f}(\psi)\) and \(\overline{f}(\varphi\to\psi)=\overline{f}(\varphi)\to\overline{f}(\psi)\). (ii) The crucial clause of the definition is of course (iv). It says that for any sentences \(\varphi\), \(\psi\), \(\overline{f}(\varphi|\psi)\) is a _choice_ from the set \(\{\overline{f}(\varphi),\overline{f}(\psi)\}\). In particular, for classical sentences \(\alpha\), \(\beta\) we have \[\overline{f}(\alpha|\beta)=f(\alpha,\beta). \tag{2}\] **Definition 2.4**: (Main Truth definition). Let \(M\) be a truth assignment for \(L\), \(f\) a choice function for \(L\) and \(\overline{f}:Sen(L_{s})\to Sen(L)\) be the corresponding collapsing function. The _truth relation \(\models_{s}\) between the pair \(\langle M,f\rangle\) and a sentence \(\varphi\) of \(L_{s}\)_ is defined as follows: \[\langle M,f\rangle\models_{s}\varphi\mbox{ iff }M\models\ \overline{f}(\varphi).\] More generally, for a set \(\Sigma\subset Sen(L_{s})\) we write \(\langle M,f\rangle\models_{s}\Sigma\), if \(\langle M,f\rangle\models_{s}\varphi\) for every \(\varphi\in\Sigma\). The following facts are easy consequences of the preceding definitions. 
**Fact 2.5**: _(i) The truth relation \(\models_{s}\) extends the Boolean one \(\models\), that is for every \(\alpha\in Sen(L)\), and every \(\langle M,f\rangle\), \(\langle M,f\rangle\models_{s}\alpha\Leftrightarrow\ M\models\alpha\)._ _(ii) \(\models_{s}\) is a bivalent notion of truth, that is for every \(\langle M,f\rangle\) and every sentence \(\varphi\), either \(\langle M,f\rangle\models_{s}\varphi\) or \(\langle M,f\rangle\models_{s}\neg\varphi\)._ _(iii) For every sentence \(\varphi\) of \(L_{s}\), every structure \(M\) and every collapsing function \(\overline{f}\), \(\langle M,f\rangle\models_{s}\varphi|\varphi\) if and only if \(\langle M,f\rangle\models_{s}\varphi\)._ _(iv) For all \(\varphi,\psi\in Sen(L_{s})\), \(M\) and \(f\), \(\langle M,f\rangle\models_{s}\varphi|\psi\) if and only if \(\langle M,f\rangle\models_{s}\psi|\varphi\)._ _Proof._ (i) Immediate from the fact that by clause (i) of 2.2, \(\overline{f}(\alpha)=\alpha\) for every sentence \(\alpha\in Sen(L)\). Thus \(\langle M,f\rangle\models_{s}\alpha\) if and only if \(M\models\alpha\). (ii) Let \(\langle M,f\rangle\not\models_{s}\varphi\). Then \(M\not\models\overline{f}(\varphi)\), that is, \(M\models\neg\overline{f}(\varphi)\). By clause (iii) of 2.2, \(\neg\overline{f}(\varphi)=\overline{f}(\neg\varphi)\), so \(\langle M,f\rangle\not\models_{s}\varphi\) implies \(M\models\overline{f}(\neg\varphi)\), or \(\langle M,f\rangle\models_{s}\neg\varphi\). (iii): By definition 2.1, \(f(\alpha,\alpha)=\alpha\), for every \(\alpha\). Therefore \(\langle M,f\rangle\models_{s}\varphi|\varphi\Leftrightarrow\mathcal{M}\models f (\overline{f}(\varphi),\overline{f}(\varphi))\Leftrightarrow\mathcal{M}\models \overline{f}(\varphi)\Leftrightarrow\langle M,f\rangle\models_{s}\varphi\). (iv): By 2.1 again \(f(\alpha,\beta)=f(\beta,\alpha)\). So \(\langle M,f\rangle\models_{s}\varphi|\psi\Leftrightarrow M\models f( \overline{f}(\varphi),\overline{f}(\psi))\Leftrightarrow M\models f( \overline{f}(\psi),\overline{f}(\varphi))\Leftrightarrow\langle M,f\rangle \models_{s}\psi|\varphi\). \(\dashv\) Let \(\Sigma\models\alpha\), \(\alpha\models\beta\), (for \(\Sigma\subset Sen(L)\)), and \(\alpha\sim\beta\) denote the classical logical consequence and logical equivalence relations, respectively, for classical sentences. These are extended to the relations \(\varphi\models_{s}\psi\), \(\Sigma\models_{s}\varphi\) (for \(\Sigma\subset Sen(L_{s})\)), and \(\varphi\sim_{s}\psi\) for \(L_{s}\)-sentences as follows. **Definition 2.6**: Let \(\Sigma\subset Sen(L)\), \(\varphi,\psi\in Sen(L_{s})\). We say that \(\varphi\) is an _s-logical consequence_ of \(\Sigma\), denoted \(\Sigma\models_{s}\varphi\), if for every structure \(\langle M,f\rangle\), \(\langle M,f\rangle\models_{s}\Sigma\) implies \(\langle M,f\rangle\models_{s}\varphi\). In particular we write \(\varphi\models_{s}\psi\) instead of \(\{\varphi\}\models_{s}\psi\). We say that \(\varphi\) and \(\psi\) are _s-logically equivalent,_ denoted \(\varphi\sim_{s}\psi\), if for every \(\langle M,f\rangle\), \[\langle M,f\rangle\models_{s}\varphi\ \Leftrightarrow\ \langle M,f\rangle\models_{s}\psi.\] Finally, \(\varphi\) is an _s-tautology,_ denoted \(\models_{s}\varphi\), if \(\langle M,f\rangle\models_{s}\varphi\) for every \(\langle M,f\rangle\). 
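To make these definitions concrete, here is a small check in the style of the informal Python sketch given in the introduction (the representation and the two particular choice functions below are hypothetical conveniences, not part of the formal development): in a fixed assignment the truth value of a superposed sentence genuinely depends on the choice function employed.

```python
# Illustration (continuing the sketch from the introduction; all names are
# hypothetical): in a fixed assignment M the truth value of p | not-p depends
# on the choice function.

p, not_p = 'p', ('not', 'p')
M = {'p': True}                                    # M |= p

def f_prefer_negation(pair):
    """A choice function that picks the negated sentence whenever it occurs."""
    pair = set(pair)
    return not_p if not_p in pair else pair.pop()

def f_prefer_atom(pair):
    """A choice function that picks the plain atom whenever it occurs."""
    pair = set(pair)
    return p if p in pair else pair.pop()

phi = ('sup', p, not_p)                            # the sentence  p | not-p
print(sat(M, f_prefer_negation, phi))              # False: the collapse is not-p
print(sat(M, f_prefer_atom, phi))                  # True:  the collapse is p
print(sat(M, f_prefer_atom, ('or', p, not_p)))     # True:  classical, f plays no role
```

In particular \(\alpha|\neg\alpha\) is neither an \(s\)-tautology nor an \(s\)-contradiction when \(\alpha\) is an atom, a point made precise in Theorem 2.8 and Corollary 2.9 below.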
**Fact 2.7**: _(i) \(\varphi\models_{s}\psi\) if and only if \(\models_{s}\varphi\to\psi\)._ _(ii) \(\varphi\sim_{s}\psi\) if and only if \(\models_{s}\varphi\leftrightarrow\psi\)._ _(iii) For \(\alpha,\beta\in Sen(L)\), \(\alpha\models_{s}\beta\) if and only if \(\alpha\models\beta\) and \(\alpha\sim_{s}\beta\) if and only if \(\alpha\sim\beta\)._ _(iv) \(\varphi\sim_{s}\psi\) if and only if for all choice functions \(f\), \(\overline{f}(\varphi)\sim\overline{f}(\psi)\)._ _(v) Let \(\alpha(p_{1},\ldots,p_{n})\) be a sentence of \(L\), made up by the atomic sentences \(p_{1},\ldots,p_{n}\), let \(\psi_{1},\ldots,\psi_{n}\) be any sentences of \(L_{s}\) and let \(\alpha(\psi_{1},\ldots,\psi_{n})\) be the sentence resulting from \(\alpha\) if we replace each \(p_{i}\) by \(\psi_{i}\). Then:_ \[\models\alpha(p_{1},\ldots,p_{n})\ \Rightarrow\models_{s}\alpha(\psi_{1}, \ldots,\psi_{n}).\] _(vi) For all \(\varphi,\psi\), \(\varphi|\varphi\sim_{s}\varphi\) and \(\varphi|\psi\sim_{s}\psi|\varphi\)._ _(vii) If \(\varphi\sim_{s}\psi\), then \(\varphi|\psi\sim_{s}\varphi\)._ _Proof._ (i): Let \(\varphi\models_{s}\psi\). It means that for every \(\langle M,f\rangle\), \(\langle M,f\rangle\models_{s}\varphi\) implies \(\langle M,f\rangle\models_{s}\psi\). Equivalently, \(M\models\overline{f}(\varphi)\) implies \(M\models\overline{f}(\psi)\), or \(M\models(\overline{f}(\varphi)\to\overline{f}(\psi))\), or \(M\models\overline{f}(\varphi\to\psi)\). It means that for every \(M\) and every \(\overline{f}\), \(\langle M,f\rangle\models_{s}\varphi\to\psi\). Thus \(\models_{s}\varphi\to\psi\). The converse is similar. (ii) and (iii) follow from (i). (iv): Note that \(\varphi\sim_{s}\psi\) holds if and only if for all \(\langle M,f\rangle\), \(\langle M,f\rangle\models_{s}\varphi\) if and only if \(\langle M,f\rangle\models_{s}\psi\), or equivalently, \(M\models\overline{f}(\varphi)\) if and only if \(M\models\overline{f}(\psi)\). But this means that for every \(f\), \(\overline{f}(\varphi)\sim\overline{f}(\psi)\). (v): Suppose \(\models\alpha(p_{1},\ldots,p_{n})\). For any choice function \(f\), clearly \[\overline{f}(\alpha(\psi_{1},\ldots,\psi_{n}))=\alpha(\overline{f}(\psi_{1} ),\ldots,\overline{f}(\psi_{n})),\] since \(\alpha\) is classical and \(\overline{f}\) commutes with standard connectives. Moreover \(\models\alpha(\overline{f}(\psi_{1}),\ldots,\overline{f}\psi_{n})\), since by assumption \(\models\alpha(p_{1},\ldots,p_{n})\) and \(\overline{f}(\psi_{i})\) are standard sentences. Thus \(M\models\alpha(\overline{f}(\psi_{1}),\ldots,\overline{f}(\psi_{n}))\), for every \(M\), or \(M\models\overline{f}(\alpha(\psi_{1},\ldots,\psi_{n}))\). It means that \(\langle M,f\rangle\models_{s}\alpha(\psi_{1},\ldots,\psi_{n})\) for every structure \(\langle M,f\rangle\), so \(\models_{s}\alpha(\psi_{1},\ldots,\psi_{n})\). (vi) This follows immediately from clauses (iii) and (iv) of Fact 2.5. (vii) Let \(\varphi\sim_{s}\psi\) and let \(\langle M,f\rangle\models\varphi|\psi\). Then \(M\models f(\overline{f}(\varphi),\overline{f}(\psi))\). By clause (iv) above, \(\overline{f}(\varphi)\sim\overline{f}(\psi)\) since \(\varphi\sim_{s}\psi\). Therefore whatever the choice of \(f\) would be between \(\overline{f}(\varphi)\) and \(\overline{f}(\psi)\), we shall have \(M\models\overline{f}(\varphi)\). Thus \(\langle M,f\rangle\models_{s}\varphi\). \(\dashv\) The following _interpolation property_ of the new semantics is perhaps the most striking one. 
Notice that it holds for the _general choice functions,_ not requiring any of the additional conditions to be considered in the subsequent sections. **Theorem 2.8**: _For all \(\varphi,\psi\in Sen(L_{s})\),_ \[\varphi\wedge\psi\models_{s}\varphi|\psi\models_{s}\varphi\vee\psi,\] _while in general_ \[\varphi\vee\psi\not\models_{s}\varphi|\psi\not\models_{s}\varphi\wedge\psi.\] _Proof._ Assume \(\langle M,f\rangle\models_{s}\varphi\wedge\psi\). Then \(M\models\overline{f}(\varphi)\wedge\overline{f}(\psi)\), that is, \(M\models\overline{f}(\varphi)\) and \(M\models\overline{f}(\psi)\). But then, whatever \(f\) would choose from \(\{\overline{f}(\varphi),\overline{f}(\psi)\}\), it would be true in \(M\), that is, \(M\models f(\overline{f}(\varphi),\overline{f}(\psi))\). This exactly means that \(\langle M,f\rangle\models_{s}\varphi|\psi\). Therefore \(\varphi\wedge\psi\models_{s}\varphi|\psi\). On the other hand, if \(\langle M,f\rangle\models_{s}\varphi|\psi\) then \(M\models f(\overline{f}(\varphi),\overline{f}(\psi))\). If \(f(\overline{f}(\varphi),\overline{f}(\psi))=\overline{f}(\varphi)\), then \(M\models\overline{f}(\varphi)\). If \(f(\overline{f}(\varphi),\overline{f}(\psi))=\overline{f}(\psi)\), then \(M\models\overline{f}(\psi)\). So either \(M\models\overline{f}(\varphi)\) or \(M\models\overline{f}(\psi)\). Therefore \(M\models\overline{f}(\varphi)\vee\overline{f}(\psi)\). But clearly \(\overline{f}(\varphi)\vee\overline{f}(\psi)=\overline{f}(\varphi\vee\psi)\), since \(\overline{f}\) commutes with all standard connectives. Thus \(M\models\overline{f}(\varphi\vee\psi)\), or equivalently, \(\langle M,f\rangle\models_{s}\varphi\vee\psi\). Therefore \(\varphi|\psi\models_{s}\varphi\vee\psi\). To see that the converse relations are false, pick \(\alpha\in Sen(L)\) and a truth assignment \(M\) such that \(M\models\alpha\). Pick also a choice function for \(L\) such that \(f(\alpha,\neg\alpha)=\neg\alpha\). Since \(\overline{f}(\alpha\vee\neg\alpha)=\alpha\vee\neg\alpha\), \(\langle M,f\rangle\models_{s}\alpha\vee\neg\alpha\). On the other hand, \(M\not\models\neg\alpha\) implies \(M\not\models f(\alpha,\neg\alpha)\), thus \(\langle M,f\rangle\not\models_{s}\alpha|\neg\alpha\). Therefore \(\alpha\vee\neg\alpha\not\models_{s}\alpha|\neg\alpha\). Similarly, if \(M\), \(\alpha\) are as before, but we take a choice function \(g\) such that \(g(\alpha,\neg\alpha)=\alpha\), then \(\langle M,g\rangle\models_{s}\alpha|\neg\alpha\), while \(\langle M,g\rangle\not\models_{s}\alpha\wedge\neg\alpha\). So \(\alpha|\neg\alpha\not\models_{s}\alpha\wedge\neg\alpha\). \(\dashv\) **Corollary 2.9**: _If \(\alpha\) is neither a tautology nor a contradiction, then \(\alpha|\neg\alpha\) is neither an \(s\)-tautology nor an \(s\)-contradiction._ _Proof._ If \(\alpha\) is as stated, then by the proof of Theorem 2.8\(\alpha|\neg\alpha\) is strictly interpolated between \(\alpha\wedge\neg\alpha\) and \(\alpha\vee\neg\alpha\). \(\dashv\) In the semantics \(\models_{s}\) used above, arbitrary choice functions for \(L\) are allowed to participate. This practically means that for any pair \(\{\alpha,\beta\}\), \(f\) may pick an element from \(\{\alpha,\beta\}\) quite randomly, e.g. by tossing a coin. However, if we want \(\models_{s}\) to support additional properties of \(|\), we must refine \(\models_{s}\) by imposing extra conditions to the choice functions. Such a refinement can be defined in a general manner as follows. 
**Definition 2.10**: For every \(\emptyset\neq X\subseteq{\cal F}\), define the \(X\)_-logical consequence relation_\(\models_{X}\) and the \(X\)_-logical equivalence relation_\(\sim_{X}\) as follows: \(\Sigma\models_{X}\varphi\) if and only if for every truth assignment \(M\) for \(L\) and every \(f\in X\), \[\langle M,f\rangle\models_{s}\Sigma\ \Rightarrow\langle M,f\rangle\models_{s}\varphi.\] Also \(\varphi\sim_{X}\psi\) if and only if \(\varphi\models_{X}\psi\) and \(\psi\models_{X}\varphi\). [The purpose of condition \(X\neq\emptyset\) is to block trivialities. For if \(X=\emptyset\), we vacuously have \(\varphi\models_{\emptyset}\psi\) and \(\varphi\sim_{\emptyset}\psi\) for all \(\varphi,\psi\in Sen(L_{s})\). So all sets \(X\), \(Y\subseteq{\cal F}\) referred to below are assumed to be \(\neq\emptyset\).] Using the above notation, the relations \(\models_{s}\) and \(\sim_{s}\) are alternatively written \(\models_{\cal F}\), \(\sim_{\cal F}\), respectively. The following simple fact reduces \(\sim_{X}\) to the standard \(\sim\). **Lemma 2.11**: _For every \(X\subseteq{\cal F}\), and any \(\varphi,\psi\in Sen(L_{s})\),_ \[\varphi\sim_{X}\psi\ \Leftrightarrow(\forall f\in X)(\overline{f}(\varphi) \sim\overline{f}(\psi)).\] _Proof._ By definition, \(\varphi\sim_{X}\psi\) if for every \(M\) and every \(f\in X\), \[\langle M,f\rangle\models_{s}\varphi\ \Leftrightarrow\ \langle M,f\rangle \models_{s}\psi,\] or, equivalently, if for all \(M\) and \(f\in X\), \[M\models\overline{f}(\varphi)\ \Leftrightarrow\ M\models\overline{f}(\psi).\] The latter is true if and only if for all \(f\in X\), \(\overline{f}(\varphi)\sim\overline{f}(\psi)\). \(\dashv\) The next properties are easy to verify. **Fact 2.12**: _For every \(X,Y\subseteq{\cal F}\):_ _(i) \(\varphi\models_{X}\psi\) if and only if \(\models_{X}\varphi\to\psi\)._ _(ii) \(\varphi\sim_{X}\psi\) if and only if \(\models_{X}\varphi\leftrightarrow\psi\)._ _(iii) If \(X\subseteq Y\), then \(\models_{Y}\subseteq\models_{X}\) and \(\sim_{Y}\subseteq\sim_{X}\)._ _(iv) The restriction of \(\sim_{X}\) to classical sentences coincides with \(\sim\), that is, for all \(\alpha,\beta\in Sen(L)\),_ \[\alpha\sim_{X}\beta\ \Leftrightarrow\ \alpha\sim\beta.\] Before closing this section I should give credit to [6] for some notions introduced above. It was not until one of the referees drew my attention to [6] that I learned (with surprise) that the notion of choice function for pairs of formulas, and, essentially, the germ of the satisfaction relation defined in 2.4 above, were not entirely new but had already been defined independently with some striking similarities in the style of presentation. In fact in Example 3.24.14, p. 479, of [6] we read: "By a _pair selection function_ on a set \(U\) we mean a function \(f\) such that for all \(a,b\in U\), \(f(\{a,b\})\in\{a,b\}\). We write \(f(a,b)\) for '\(f(\{a,b\})\)' and include the possibility that \(a=b\) in which case \(f(a,b)=a=b\). (...) A pair selection function is accordingly a commutative idempotent binary operation which is in addition a _quasi-projection_ or a _conservative_ operation, meaning that its value for a given pair of arguments is always one of those arguments. For the current application consider \(f\) as a pair selection function on the set of formulas of the language generated from the actual stock of propositional variables with the aid of the binary connective \(\circ\). 
Consider the gcr (=generalized consequence relation) determined by the class of all valuations \(v\) satisfying the condition that for some pair selection function \(f\) we have: For all formulas \(A,B\), \(v(A\circ B)=v(f(A,B))\). Then, if \(\succ\) denotes this gcr, it satisfies the rules: (I) \(A,B\succ A\circ B\), (II) \(A\circ B\succ A,B\) and (IV) \(A\circ B\succ B\circ A\)." Note that rules (I) and (II) are essentially the "interpolation property" of Theorem 2.8, while rule (IV) is the commutativity property (vi) of Fact 2.7. In the next subsection we consider a first natural subclass \(X\subset\mathcal{F}\), the class of _associative_ choice functions. These are precisely the functions that support the associativity property of the connective \(|\). Clearly associativity is a highly desirable property from an _algebraic_ point of view. However, as one of the referees interestingly observed at this point, we must distinguish between what is algebraically desirable and what is quantum mechanically desirable, i.e., close to the real behavior of a quantum system. In his view, classes of choice functions with not very attractive and smooth properties might also deserve to be isolated and scrutinized.

### Associative choice functions

By clause (vi) of Fact 2.7, \(\varphi|\varphi\sim_{s}\varphi\) and \(\varphi|\psi\sim_{s}\psi|\varphi\). These two properties, idempotence and commutativity up to logical equivalence, are in accord with the intended intuitive meaning of the operation \(|\). Another desirable property that is in accord with the meaning of \(|\) is _associativity_, that is, the logical equivalence of \((\varphi|\psi)|\sigma\) and \(\varphi|(\psi|\sigma)\). Is it true with respect to \(\sim_{s}\)? The answer is: not in general. In order to ensure it we need to impose a certain condition on the choice functions. The specific condition does not depend on the nature of elements of \(Sen(L)\), so we prove it below in a general setting. Let \(A\) be an infinite set and let \(f:[A]^{2}\to A\) be a choice function for pairs of elements of \(A\). One might extend it to the set \([A]^{3}\), of nonempty sets with at most 3 elements, by setting, say, \[f(a,b,c):=f(\{a,b,c\})=f(f(a,b),c).\] But this does not guarantee that \(f(a,b,c)=f(b,c,a)=f(c,b,a)\), etc., as would obviously be required, unless \(f(f(a,b),c)=f(a,f(b,c))\) for all \(a,b,c\). This is exactly the required condition. **Definition 2.13**: Let \(f\) be a choice function for \([A]^{2}\). \(f\) is said to be _associative_ if for all \(a,b,c\in A\), \[f(f(a,b),c)=f(a,f(b,c)).\] (If we write \(a\star b\) instead of \(f(a,b)\), then the condition \(f(f(a,b),c)=f(a,f(b,c))\) is rewritten \((a\star b)\star c=a\star(b\star c)\), which justifies the term "associative".) We show below that associative choice functions on \([A]^{2}\) are, essentially, the functions \(\min_{<}\), where \(<\) is a total ordering of \(A\). I do not know if the next theorem is new or a known result. In any case I could not find a proof in the literature. **Theorem 2.14**: _(i) If \(<\) is a total ordering on \(A\), then the mapping \(\min_{<}(a,b)\) from \([A]^{2}\) into \(A\) is associative._ _(ii) Conversely, if \(f:[A]^{2}\to A\) is an associative choice function, then it defines a total ordering \(<\) on \(A\) such that for all \(a,b\in A\), \(f(a,b)=\min_{<}(a,b)\)._ _Proof._ (i) Let \(<\) be a total ordering of \(A\). 
Let \(Fin(A)\) denote the set of all nonempty finite subsets of \(A\) and let \(\min_{<}\) be the function picking the \(<\)-least element of \(x\) for every \(x\in Fin(A)\). Let us write \(\min\) instead of \(\min_{<}\). Obviously \(\min\) is a choice function for \(Fin(A)\). In particular, for all \(a,b,c\in A\), \[\min(a,b,c)=\min(\min(a,b),c)=\min(a,\min(b,c)).\] Thus \(\min\) restricted to \([A]^{2}\) is associative. (ii) Let \(f:[A]^{2}\to A\) be an associative choice function. Define the relation \(<\) on \(A\) as follows: For any \(a,b\in A\), let \(a<b\) if and only if \(a\neq b\) and \(f(a,b)=a\). Obviously \(<\) is total and anti-reflexive (that is, \(a\not<a\) for every \(a\in A\)). Thus in order for \(<\) to be a total ordering it suffices to be also transitive. Let \(a<b\) and \(b<c\). We show that \(a<c\). By the assumptions, we have \(a\neq b\), \(f(a,b)=a\), \(b\neq c\) and \(f(b,c)=b\). It follows from them that \(a\neq c\), for otherwise \(b=f(b,c)=f(b,a)=f(a,b)=a\), a contradiction. It remains to show that \(f(a,c)=a\). By associativity and commutativity of \(f\), \(f(a,f(b,c))=f(b,f(a,c))\). Since \(f(a,f(b,c))=f(a,b)=a\), it follows that \(f(b,f(a,c))=a\) too. If \(f(a,c)=c\), then we would have \(f(b,f(a,c))=f(b,c)=b\neq a\), a contradiction. Therefore \(f(a,c)=a\) and we are done. Thus \(<\) is a total ordering of \(A\), and by definition \(f(a,b)=\min_{<}(a,b)\), for all \(a,b\in A\). \(\dashv\) As an immediate corollary of Theorem 2.14 we obtain the following. **Corollary 2.15**: _If \(f:[A]^{2}\to A\) is an associative choice function, then it defines uniquely a total ordering \(<\) of \(A\) such that \(f(a,b)=\min_{<}(a,b)\). Therefore \(f\) extends uniquely to the choice function \(f^{+}:Fin(A)\to A\), such that for every \(x\in Fin(A)\), \(f^{+}(x)=\min_{<}(x)\). Thus \(f=f^{+}\!\upharpoonright\![A]^{2}\)._ In view of the preceding Corollary, we can without serious loss of precision identify an associative choice function \(f\) on \([A]^{2}\) with the generated choice function \(f^{+}\) on the entire \(Fin(A)\), and write \(f=\min_{<}\) instead of \(f^{+}=\min_{<}\), where \(<\) is the ordering defined by \(f\). **Remark 2.16**: From a set-theoretical point of view, the existence of an associative function is a much stronger statement than the existence of a simple choice function for \([A]^{2}\). As noticed in footnote 9 the latter is identical to the principle \(\mathrm{C}_{2}\). On the other hand, it follows from 2.14 and 2.15 that the existence of an associative choice function for \([A]^{2}\), for every set \(A\), is equivalent to the existence of a total ordering on \(A\), i.e., to the _Ordering Principle_ saying that "Every set can be totally ordered", which is much stronger than \(\mathrm{C}_{2}\). Specifically, it was shown in [9] that the Ordering Principle is strictly stronger than the Axiom of Choice for Finite Sets. The latter is in turn strictly stronger than the conjunction of all axioms \(\mathrm{C}_{n}\), for \(n\geq 2\) (see [7, Theorem 7.11]). Let us now return to the set \(Sen(L)\) of sentences of \(L\). In particular a choice function \(f\) for \(L\) is said to be _associative_ if for all \(\alpha,\beta,\gamma\in Sen(L)\), \[f(f(\alpha,\beta),\gamma)=f(\alpha,f(\beta,\gamma)). \tag{3}\] We often call also the pair \(\langle M,f\rangle\)_associative_ if \(f\) is associative. As an immediate consequence of Theorem 2.14, Corollary 2.15 and the comments following the latter we have the following. 
**Corollary 2.17**: _A choice function \(f\) for \(L\) is associative if and only if there is a total ordering \(<\) of \(Sen(L)\) such that \(f=\min_{<}\)._ Let \[Asso=\{f\in{\cal F}:f\mbox{ is associative }\}\] denote the set of all associative choice functions for \(L\). Following Definition 2.10 of the previous subsection, \(Asso\) induces the logical consequence relation \(\models_{Asso}\) and the logical equivalence relation \(\sim_{Asso}\). From the general Fact 2.12 (iii) we immediately obtain the following. **Fact 2.18**: _For all \(\varphi,\psi,\Sigma\),_ _(i) \(\Sigma\models_{s}\varphi\ \Rightarrow\ \Sigma\models_{Asso}\varphi\)._ _(ii) \(\varphi\sim_{s}\psi\ \Rightarrow\ \varphi\sim_{Asso}\psi\)._ It is easy to verify that the arrows in the preceding Fact cannot in general be reversed. The main consequence of associativity is the following. **Theorem 2.19**: _Let \(X\subseteq{\cal F}\) be a class of choice functions. If \(X\subseteq Asso\), then \(|\) is associative with respect to the truth notion \(\models_{X}\), that is, for all \(\varphi\), \(\psi\), \(\sigma\), \(\varphi|(\psi|\sigma)\sim_{X}(\varphi|\psi)|\sigma\)._ _Proof._ Let \(X\subseteq Asso\). It suffices to show that for every \(M\) and every \(f\in X\), and any sentences \(\varphi\), \(\psi\), \(\sigma\) of \(L_{s}\), \[\langle M,f\rangle\models_{s}(\varphi|\psi)|\sigma\mbox{ iff }\langle M,f \rangle\models_{s}\varphi|(\psi|\sigma).\] Fix some \(M\) and some \(f\in X\). By definition we have: \[\langle M,f\rangle\models_{s}(\varphi|\psi)|\sigma\Leftrightarrow M\models f( \overline{f}(\varphi|\psi),\overline{f}(\sigma))\Leftrightarrow M\models f( f(\overline{f}(\varphi),\overline{f}(\psi)),\overline{f}(\sigma)).\] By assumption \(f\in Asso\), so by the associativity property (3) \[f(f(\overline{f}(\varphi),\overline{f}(\psi)),\overline{f}(\sigma))=f( \overline{f}(\varphi),f(\overline{f}(\psi),\overline{f}(\sigma))).\] Thus, \[\langle M,f\rangle\models_{s}(\varphi|\psi)|\sigma\Leftrightarrow M\models f (\overline{f}(\varphi),\overline{f}(\psi|\sigma))\Leftrightarrow\langle M,f \rangle\models_{s}\varphi|(\psi|\sigma).\] \(\dashv\) If we slightly weaken the property of associativity, the converse of 2.19 holds too. **Definition 2.20**: Let us call a choice function for \(L\) _essentially associative,_ if (3) holds with \(\sim\) in place of \(=\), that is, for all \(\alpha,\beta,\gamma\in Sen(L)\), \[f(f(\alpha,\beta),\gamma)\sim f(\alpha,f(\beta,\gamma)). \tag{4}\] Let \(Asso^{\prime}\) denote the class of essentially associative choice functions. **Theorem 2.21**: _If \(X\subseteq{\cal F}\) and \(|\) is associative with respect to \(\models_{X}\), then \(X\subseteq Asso^{\prime}\)._ _Proof._ Let \(X\not\subseteq Asso^{\prime}\). We have to show that \(|\) is not associative with respect to \(\models_{X}\). Pick \(f\in X-Asso^{\prime}\). It suffices to find \(M\) and \(\alpha\), \(\beta\), \(\gamma\) in \(Sen(L)\) such that \[\langle M,f\rangle\models_{s}(\alpha|\beta)|\gamma\not\Leftrightarrow\langle M,f\rangle\models_{s}\alpha|(\beta|\gamma).\] Since \(f\) is not essentially associative, there are \(\alpha\), \(\beta\), \(\gamma\) in \(Sen(L)\), such that \(f(f(\alpha,\beta),\gamma)\not\sim f(\alpha,f(\beta,\gamma))\). 
Without loss of generality we may assume that \[f(f(\alpha,\beta),\gamma)=\gamma\not\sim\alpha=f(\alpha,f(\beta,\gamma)).\] Since \(\alpha\not\sim\gamma\) there is \(M\) such that \(M\models\alpha\wedge\neg\gamma\) or \(M\models\neg\alpha\wedge\gamma\). Without loss of generality assume that the first is the case. Then \(M\models\alpha=f(\alpha,f(\beta,\gamma))\), so \(M\models\overline{f}(\alpha|(\beta|\gamma))\), therefore \(\langle M,f\rangle\models_{s}\alpha|(\beta|\gamma)\). On the other hand \(M\not\models\gamma=f(f(\alpha,\beta),\gamma)\), which implies \(\langle M,f\rangle\not\models_{s}(\alpha|\beta)|\gamma\). This proves the theorem. \(\dashv\) Obviously \(Asso\subseteq Asso^{\prime}\). Are the two classes distinct? The answer is yes, but the functions in \(Asso^{\prime}-Asso\) behave _non-associatively_ only on sentences \(\alpha,\beta,\gamma\) such that \(\alpha\sim\beta\sim\gamma\). To be precise, let us say that a triple of sentences \(\alpha,\beta,\gamma\)_witnesses non-associativity_ of \(f\), if \(f(f(\alpha,\beta),\gamma)\neq f(\alpha,f(\beta,\gamma))\), or \(f(f(\alpha,\beta),\gamma)\neq f(\beta,f(\alpha,\gamma))\), or \(f(f(\alpha,\gamma),\beta)\neq f(\alpha,f(\beta,\gamma))\). Then the following holds. **Lemma 2.22**: _(i) \(Asso\varsubsetneq\)\(Asso^{\prime}\)._ _(ii) If \(f\in Asso^{\prime}\) and \(\alpha,\beta,\gamma\) are sentences such that \(\alpha\not\sim\beta\), \(\beta\not\sim\gamma\) and \(\alpha\not\sim\gamma\), then \(f\) is associative on \(\alpha,\beta,\gamma\), i.e., \(f(f(\alpha,\beta),\gamma)=f(\alpha,f(\beta,\gamma))=f(f(\alpha,\gamma),\beta)\)._ _(iii) If \(f\in Asso^{\prime}-Asso\), and \(\alpha,\beta,\gamma\) witness the non-associativity of \(f\), then \(\alpha,\beta,\gamma\) are all distinct, and besides \(f(\alpha,\beta)\), \(f(\alpha,\gamma)\), \(f(\beta,\gamma)\) are all distinct._ _(iv) Therefore, if \(f\in Asso^{\prime}-Asso\) and \(\alpha,\beta,\gamma\) witness the non-associativity of \(f\), then \(\alpha\sim\beta\sim\gamma\)._ _(v) Further, if \(f\in Asso^{\prime}-Asso\), then \(f\) is associative on every triple \(\alpha\), \(\beta\), \(\gamma\) such that \(\alpha\sim\beta\not\sim\gamma\)._ _Proof._ (i) Let \(\alpha\sim\beta\sim\gamma\), while all \(\alpha,\beta,\gamma\) are distinct. Let \(f\in{\cal F}\) be such that \(f(\alpha,\beta)=\beta\), \(f(\alpha,\gamma)=\alpha\), \(f(\beta,\gamma)=\gamma\). Then obviously all \(f(f(\alpha,\beta),\gamma)\), \(f(f(\alpha,\gamma),\beta)\), \(f(f(\beta,\gamma),\alpha)\) are equivalent, so \(f\in Asso^{\prime}\). On the other hand, for example, \(f(f(\alpha,\beta),\gamma)\neq f(\alpha,f(\beta,\gamma))\), so \(f\notin Asso\). (ii) If \(\alpha\not\sim\beta\), \(\beta\not\sim\gamma\), \(\alpha\not\sim\gamma\), then it cannot be \(f(f(\alpha,\beta),\gamma)\sim f(\alpha,f(\beta,\gamma))\) unless \(f(f(\alpha,\beta),\gamma)=f(\alpha,f(\beta,\gamma))\). (iii) It is easy to see that for every \(f\in{\cal F}\) and every \(\alpha,\beta\), \(f(f(\alpha,\beta),\alpha)=f(f(\alpha,\beta),\beta)\). This shows that if any two elements of a triple \(\alpha,\beta,\gamma\) are equal, this triple cannot witness the non-associativity of any function. Let \(f\in Asso^{\prime}-Asso\) and suppose \(\alpha,\beta,\gamma\) witness the non-associativity of \(f\). We have just seen that they are all distinct. We show that \(f(\alpha,\beta)\), \(f(\alpha,\gamma)\), \(f(\beta,\gamma)\) are distinct too. Indeed assume that two of the values \(f(\alpha,\beta)\), \(f(\alpha,\gamma)\), \(f(\beta,\gamma)\) are identical. 
It will follow that \[f(f(\alpha,\beta),\gamma)=f(\alpha,f(\beta,\gamma))=f(f(\alpha,\gamma),\beta), \tag{5}\] which contradicts the fact that \(\alpha,\beta,\gamma\) witness the non-associativity of \(f\). Assume without loss of generality that \(f(\alpha,\beta)=f(\beta,\gamma)\). Since \(f\) is a choice function and \(\alpha\), \(\beta\), \(\gamma\) are distinct, necessarily \(f(\alpha,\beta)=f(\beta,\gamma)=\beta\). Therefore \[f(f(\alpha,\beta),\gamma)=f(\beta,\gamma)=f(\alpha,\beta)=f(\alpha,f(\beta, \gamma))=\beta.\] So two members of (5) are equal. As to the third one, observe that \(f(\alpha,\gamma)\) is either \(\alpha\) or \(\gamma\). In both cases \(f(f(\alpha,\gamma),\beta)=\beta\), as required. (iv) Let \(f\in Asso^{\prime}-Asso\) and let \(\alpha,\beta,\gamma\) witness the non-associativity of \(f\). By (iii) above, \(f(\alpha,\beta)\), \(f(\alpha,\gamma)\), \(f(\beta,\gamma)\) take up all the values \(\alpha,\beta,\gamma\), and therefore so do \(f(f(\alpha,\beta),\gamma)\), \(f(\alpha,f(\beta,\gamma))\), \(f(f(\alpha,\gamma),\beta)\). But since \(f\in Asso^{\prime}\), the latter are all logically equivalent. Therefore \(\alpha\sim\beta\sim\gamma\). (v) If \(f\in Asso^{\prime}-Asso\), \(\alpha\sim\beta\not\sim\gamma\) and \(f\) were not associative on \(\alpha,\beta,\gamma\), the latter triple would witness the non-associativity of \(f\), so, by (iv), \(\alpha\sim\beta\sim\gamma\). A contradiction. \(\dashv\) It follows from the preceding Lemma that every \(f\in Asso^{\prime}\) defines essentially an associative choice function (and hence a total ordering) for the set of pairs of elements of \(Sen(L)/{\sim}=\{[\alpha]:\alpha\in Sen(L)\}\) rather than \(Sen(L)\). By Facts 2.5, 2.18 and Theorem 2.19, we obtain the following. **Corollary 2.23**: _The operation \(|\) is idempotent, commutative and associative with respect to \({\sim}_{Asso}\). That is:_ _(i) \(\varphi|\varphi\sim_{Asso}\varphi\)._ _(ii) \(\varphi|\psi\sim_{Asso}\psi|\varphi\)._ _(iii) \(\varphi|(\psi|\sigma)\sim_{Asso}(\varphi|\psi)|\sigma\)._ It follows that _when confined to truth in associative structures,_ one can drop parentheses from \((\varphi|\psi)|\sigma\) and write simply \(\varphi|\psi|\sigma\) (as in the case of \(\wedge\) and \(\vee\) in classical PL), and more generally \(\varphi_{1}|\cdots|\varphi_{n}\) for any sentences \(\varphi_{i}\) of \(L_{s}\). Moreover, in view of Theorem 2.14 and Corollary 2.23, when the choice function \(f\) is associative, then \(f=\min_{<}\) for a total ordering \(<\) of \(Sen(L)\). Namely, the following is proved by an easy induction: **Corollary 2.24**: _Let \(\langle M,f\rangle\) be associative, with \(f=\min_{<}\) for a total ordering \(<\) of \(Sen(L)\). Then for every \(n\in\mathbb{N}\) and any \(\{\varphi_{1},\ldots,\varphi_{n}\}\subset Sen(L_{s})\),_ \[\langle M,f\rangle\models_{s}\varphi_{1}|\cdots|\varphi_{n}\mbox{ iff }M\models f( \overline{f}(\varphi_{1}),\ldots,\overline{f}(\varphi_{n}))\mbox{ iff }M\models\min_{<}(\overline{f}(\varphi_{1}),\ldots, \overline{f}(\varphi_{n})),\] _where \(f(\sigma_{1},\ldots,\sigma_{n})\) abbreviates \(f(\{\sigma_{1},\ldots,\sigma_{n}\})\). In particular, for classical sentences \(\alpha_{1},\ldots,\alpha_{n}\),_ \[\langle M,f\rangle\models_{s}\alpha_{1}|\cdots|\alpha_{n}\ \mbox{ iff }M\models \min_{<}(\alpha_{1},\ldots,\alpha_{n}).\]

### Regularity

For every \(\varphi\in Sen(L)\), let \(Sub(\varphi)\) denote the set of sub-sentences of \(\varphi\). 
Given \(\varphi\), \(\sigma\in Sub(\varphi)\) and any \(\sigma^{\prime}\), let \(\varphi[\sigma^{\prime}/\sigma]\) denote the result of replacing \(\sigma\) by \(\sigma^{\prime}\) throughout \(\varphi\). **Definition 2.25**: For \(X\subseteq\mathcal{F}\), \(\sim_{X}\) is said to be _logically closed_ if for all \(\varphi\), \(\sigma\in Sub(\varphi)\) and \(\sigma^{\prime}\), \[\sigma\sim_{X}\sigma^{\prime}\ \Rightarrow\ \varphi\sim_{X}\varphi[\sigma^{ \prime}/\sigma].\] Classical logical equivalence \(\sim\) is logically closed of course, but \(\sim_{s}\) and \(\sim_{Asso}\) are not in general. The question is what further condition on \(X\) is required in order for \(\sim_{X}\) to be logically closed. This is the condition of _regularity_ introduced below. Regularity is a condition independent of associativity, yet compatible with it. So it is reasonable to introduce it independently of associativity. **Definition 2.26**: A choice function \(f\) for \(L\) is said to be _regular_ if for all \(\alpha\), \(\alpha^{\prime}\), \(\beta\), \[\alpha\sim\alpha^{\prime}\ \Rightarrow\ \ f(\alpha,\beta)\sim f(\alpha^{ \prime},\beta).\] The following properties are immediate consequences of the definition. **Fact 2.27**: _Let \(f\) be regular. Then for all \(\alpha\), \(\alpha^{\prime}\), \(\beta\), \(\beta^{\prime}\):_ _(i) If \(\alpha\sim\alpha^{\prime}\not\sim\beta\sim\beta^{\prime}\) and \(f(\alpha,\beta)=\alpha\) then \(f(\alpha^{\prime},\beta^{\prime})=\alpha^{\prime}\), while if \(f(\alpha,\beta)=\beta\) then \(f(\alpha^{\prime},\beta^{\prime})=\beta^{\prime}\)._ _(ii) If \(\alpha\sim\alpha^{\prime}\sim\beta\sim\beta^{\prime}\), \(f(\alpha,\beta)\) and \(f(\alpha^{\prime},\beta^{\prime})\) can be any element of the sets \(\{\alpha,\beta\}\), \(\{\alpha^{\prime},\beta^{\prime}\}\), respectively._ Let \[Reg=\{f\in{\cal F}:\ f\ \mbox{is regular}\}\] denote the set of regular choice functions; following Definition 2.10, \(Reg\) induces the relations \(\models_{Reg}\) of logical consequence and \(\sim_{Reg}\) of logical equivalence. Not only does regularity guarantee that \(\sim_{Reg}\) is logically closed; the converse is also true. **Theorem 2.28**: \(\sim_{X}\) _is logically closed if and only if \(X\subseteq Reg\)._ _Proof._ "\(\Leftarrow\)" : We assume \(X\subseteq Reg\) and show that \(\sim_{X}\) is logically closed. For \(\sigma\in Sub(\varphi)\), \(\varphi[\sigma^{\prime}/\sigma]\) is defined by induction on the length of \(\varphi\) as usual, so we prove \[\sigma\sim_{X}\sigma^{\prime}\ \Rightarrow\ \varphi[\sigma^{\prime}/\sigma] \sim_{X}\varphi \tag{6}\] along the steps of the definition of \(\varphi[\sigma^{\prime}/\sigma]\). Actually regularity is needed only for the treatment of the case \(\varphi=\varphi_{1}|\varphi_{2}\), since \(\overline{f}\) commutes with standard connectives, so let us treat this step of the induction only. That is, let \(\varphi=\varphi_{1}|\varphi_{2}\), so \[\varphi[\sigma^{\prime}/\sigma]=\varphi_{1}[\sigma^{\prime}/\sigma]|\varphi_ {2}[\sigma^{\prime}/\sigma],\] and assume the claim holds for \(\varphi_{1}\), \(\varphi_{2}\). Let us set, for readability, \(\varphi^{\prime}=\varphi[\sigma^{\prime}/\sigma]\), \(\varphi_{1}^{\prime}=\varphi_{1}[\sigma^{\prime}/\sigma]\), \(\varphi_{2}^{\prime}=\varphi_{2}[\sigma^{\prime}/\sigma]\). Then \(\varphi^{\prime}=\varphi_{1}^{\prime}|\varphi_{2}^{\prime}\), and by the induction assumption \(\varphi_{1}^{\prime}\sim_{X}\varphi_{1}\), \(\varphi_{2}^{\prime}\sim_{X}\varphi_{2}\). 
We have to show that \(\varphi_{1}^{\prime}|\varphi_{2}^{\prime}\sim_{X}\varphi_{1}|\varphi_{2}\). Pick \(f\in X\). By our assumption \(f\in Reg\). Our induction assumptions become \[\overline{f}(\varphi_{1}^{\prime})\sim\overline{f}(\varphi_{1})\ \mbox{and}\ \overline{f}(\varphi_{2}^{\prime})\sim \overline{f}(\varphi_{2}), \tag{7}\] and it suffices to show that\(\overline{f}(\varphi_{1}^{\prime}|\varphi_{2}^{\prime})\sim\overline{f}(\varphi_ {1}|\varphi_{2})\), or equivalently, \[f(\overline{f}(\varphi_{1}^{\prime}),\overline{f}(\varphi_{2}^{\prime}))\sim f (\overline{f}(\varphi_{1}),\overline{f}(\varphi_{2})). \tag{8}\] But since \(f\in Reg\), in view of Fact 2.27, (8) follows immediately from (7). This completes the proof of direction \(\Leftarrow\). "\(\Rightarrow\)": Suppose \(X\not\subseteq Reg\). We have to show that \(\sim_{X}\) is not logically closed. Pick \(f\in X-Reg\). Since \(f\) is not regular, there exist \(\alpha\), \(\alpha^{\prime}\) and \(\beta\) in \(Sen(L)\) such that \(\alpha\sim\alpha^{\prime}\) and \(f(\alpha,\beta)\not\sim f(\alpha^{\prime},\beta)\). In particular this implies that \(\alpha\not\sim\beta\). Moreover, either \(f(\alpha,\beta)=\alpha\) and \(f(\alpha^{\prime},\beta)=\beta\), or \(f(\alpha,\beta)=\beta\) and \(f(\alpha^{\prime},\beta)=\alpha^{\prime}\). Assume \(f(\alpha,\beta)=\alpha\) and \(f(\alpha^{\prime},\beta)=\beta\), the other case being treated similarly. Since \(\alpha\not\sim\beta\) there exists a truth assignment \(M\) for \(L\) such that \(M\models\alpha\wedge\neg\beta\) or \(M\models\neg\alpha\wedge\beta\). Without loss of generality assume \(M\models\alpha\wedge\neg\beta\). Then \(M\models f(\alpha,\beta)\) which means \(\langle M,f\rangle\models\alpha|\beta\). On the other hand, since \(f(\alpha^{\prime},\beta)=\beta\) and \(M\models\neg\beta\), we have \(M\models\neg f(\alpha^{\prime},\beta)\), that is, \(M\not\models f(\alpha^{\prime},\beta)\), which means \(\langle M,f\rangle\not\models\alpha^{\prime}|\beta\). Therefore for some \(M\) and some \(f\in X\), \(\langle M,f\rangle\models\alpha|\beta\) and \(\langle M,f\rangle\not\models\alpha^{\prime}|\beta\). Thus \(\alpha^{\prime}|\beta\not\sim_{X}\alpha|\beta\), while \(\alpha^{\prime}\sim\alpha\), and hence \(\alpha^{\prime}\sim_{X}\alpha\). It follows that \(\sim_{X}\) is not logically closed. \(\dashv\) In general if \(X\subseteq Y\subseteq{\cal F}\) and \(\sim_{X}\) is logically closed, it doesn't seem likely that one can infer that \(\sim_{Y}\) is logically closed (or vice-versa). Yet we have the following generalization of 2.28. **Corollary 2.29**: _If \(X\subseteq Reg\subseteq Y\subseteq{\cal F}\), then for all \(\varphi\), \(\sigma\in Sub(\varphi)\) and \(\sigma^{\prime}\),_ \[\sigma\sim_{Y}\sigma^{\prime}\ \Rightarrow\ \varphi[\sigma^{\prime}/\sigma] \sim_{X}\varphi.\] _Proof._ By Fact 2.12 (iii), \(X\subseteq Y\) implies \(\sim_{Y}\subseteq\sim_{X}\). So given \(X\subseteq Reg\subseteq Y\), we have \(\sim_{Y}\subseteq\sim_{Reg}\subseteq\sim_{X}\). Thus if \(\sigma\sim_{Y}\sigma^{\prime}\), then \(\sigma\sim_{Reg}\sigma^{\prime}\). By theorem 2.28, this implies \(\varphi[\sigma^{\prime}/\sigma]\sim_{Reg}\varphi\), and therefore \(\varphi[\sigma^{\prime}/\sigma]\sim_{X}\varphi\). \(\dashv\) Using the axiom of choice one can easily construct regular choice functions for \(L\). 
For every \(\alpha\in Sen(L)\), let \([\alpha]\) denote the \(\sim\)-equivalence class of \(\alpha\), i.e., \[[\alpha]=\{\beta:\beta\sim\alpha\}.\] **Proposition 2.30**: (AC) _There exist regular choice functions for \(L\)._ _Proof._ Using \(AC\), pick a representative \(\xi_{\alpha}\) from each equivalence class \([\alpha]\) and let \(D=\{\xi_{\alpha}:\alpha\in Sen(L)\}\). Every \(\alpha\in Sen(L)\) is logically equivalent to \(\xi_{\alpha}\in D\). Let \(f_{0}:[D]^{2}\to D\) be an arbitrary choice function for all pairs of elements of \(D\). Then \(f_{0}\) extends to a regular choice function \(f\) for \(L\), defined as follows: \(f(\alpha,\beta)=\alpha\), if \(\alpha\not\sim\beta\) and \(f_{0}(\xi_{\alpha},\xi_{\beta})=\xi_{\alpha}\). If \(\alpha\sim\beta\), we define \(f(\alpha,\beta)\) arbitrarily (to be precise, by setting \(f(\alpha,\beta)=g(\alpha,\beta)\), where \(g\) is a choice function on all pairs \(\{\alpha,\beta\}\) of sentences such that \(\alpha\sim\beta\)). \(\dashv\) Next we come to _associative regular_ choice functions. **Definition 2.31**: A total ordering \(<\) of \(Sen(L)\) is said to be _regular_ if for all \(\alpha,\beta\), \[\alpha\not\sim\beta\ \&\ \alpha<\beta\ \Rightarrow\ [\alpha]<[\beta],\] (where \([\alpha]<[\beta]\) means that for all \(\alpha^{\prime}\in[\alpha]\) and \(\beta^{\prime}\in[\beta]\), \(\alpha^{\prime}<\beta^{\prime}\)). The following is an immediate consequence of the preceding definitions. **Fact 2.32**: _Let \(<\) be a total ordering of \(Sen(L)\). Then \(<\) is regular if and only if the corresponding associative choice function \(f=\min_{<}\) is regular._ Thus the following simple construction of regular total orderings for \(Sen(L)\) supplements Proposition 2.30 above. **Proposition 2.33**: (AC) _(i) There exist regular total orderings of \(Sen(L)\)._ _(ii) Moreover, for any set \(A\subset Sen(L)\) of pairwise inequivalent sentences, and any partial ordering \(R\) of \(A\), there is a regular total ordering \(<\) of \(Sen(L)\) such that \(R\subseteq<\)._ _Proof._ (i) Let \(Sen(L)/\)\(\sim\) be the set of equivalence classes \([\alpha]\), \(\alpha\in Sen(L)\). For each \([\alpha]\in Sen(L)/\)\(\sim\) pick by \(AC\) a total ordering \(<_{[\alpha]}\) of \([\alpha]\). Pick also a total ordering \(<_{1}\) of \(Sen(L)/\)\(\sim\). These orderings generate a regular total ordering \(<\) of \(Sen(L)\) defined by: \(\alpha<\beta\) if and only if \(\alpha\not\sim\beta\) and \([\alpha]<_{1}[\beta]\), or \(\alpha\sim\beta\) and \(\alpha<_{[\alpha]}\beta\). (ii) Since the elements of \(A\) are pairwise inequivalent, we can think of \(A\) as a subset of \(Sen(L)/\sim\). Since \(R\) is already a partial ordering of \(A\), it suffices to pick (by the help of \(AC\)) the total ordering \(<_{1}\) of the preceding case so that \(R\subset<_{1}\). \(\dashv\) Both associativity and regularity are indispensable for a reasonable notion of truth \(\models_{X}\) that captures the behavior of \(|\). This is because on the one hand without associativity one would have to face unmanageable complexity caused by incomparable sentences of the form \(((\alpha|\beta)|\gamma)|\delta\), \((\alpha|\beta)|(\gamma|\delta)\), \(\alpha|((\beta|\gamma)|\delta)\), etc. On the other hand regularity entails logical closeness, without which one cannot establish even that the sentences, for example, \(\alpha|\beta\) and \(\alpha|\neg\neg\beta\) are essentially identical. 
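The constructions in Propositions 2.30 and 2.33 need the axiom of choice only because \(Sen(L)\) is infinite; over a finite stock of sentences they are entirely effective. The following minimal sketch (an illustration of ours, not part of the formal development; formulas are encoded as nested Python tuples and logical equivalence is decided by truth tables) builds a regular total ordering in the spirit of Proposition 2.33 for a two-atom language, ordering \(\sim\)-classes by their truth tables and breaking ties inside a class arbitrarily; the induced \(f=\min_{<}\) is then the associative regular choice function of Fact 2.32.

```python
from itertools import product

ATOMS = ("p", "q")            # a toy finite language with two atoms

def ev(phi, w):
    """Evaluate a formula, given as a nested tuple, under the assignment w."""
    op = phi[0]
    if op == "atom": return w[phi[1]]
    if op == "not":  return not ev(phi[1], w)
    if op == "and":  return ev(phi[1], w) and ev(phi[2], w)
    if op == "or":   return ev(phi[1], w) or ev(phi[2], w)
    raise ValueError(op)

def signature(alpha):
    """Full truth table of alpha; alpha ~ beta iff their signatures coincide."""
    return tuple(ev(alpha, dict(zip(ATOMS, vals)))
                 for vals in product([False, True], repeat=len(ATOMS)))

def key(alpha):
    """A regular total ordering (Proposition 2.33): ~-classes are ordered by
    their truth tables and sentences inside one class by an arbitrary
    tie-break, so inequivalent classes never interleave (Definition 2.31)."""
    return (signature(alpha), repr(alpha))

def f(alpha, beta):
    """The associative regular choice function f = min_< of Fact 2.32."""
    return min(alpha, beta, key=key)

p, q = ("atom", "p"), ("atom", "q")
a1, a2, b = ("and", p, q), ("and", q, p), ("not", p)
# Regularity in action: p&q ~ q&p, so f makes ~-equivalent choices from the
# pairs {p&q, not-p} and {q&p, not-p}, as Definition 2.26 requires.
assert signature(f(a1, b)) == signature(f(a2, b))
```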
Thus a natural class of choice functions to work with is \[Reg^{*}=Reg\cap Asso.\] We abbreviate the corresponding semantic notions \(\models_{Reg^{*}}\), \(\sim_{Reg^{*}}\), by \(\models_{Reg^{*}}\) and \(\sim_{Reg^{*}}\), respectively. Note that in view of regularity (and only in view of that) we can write, for example, \(\varphi|\top\) and \(\varphi|\bot\), where \(\top\) and \(\bot\) denote the classes of classical tautologies and contradictions, respectively. The next question is how the standard connectives act on \(|\) and vice-versa. Specifically we shall examine whether: (a) \(\neg\) can commute with \(|\), (b) \(\wedge\) and \(\vee\) can distribute over \(|\), (c) \(|\) can distribute over \(\wedge\) and \(\vee\). We shall see that all three questions are answered in the negative with respect to the truth relations \(\models_{Reg^{*}}\). Concerning the first question one can construct a choice function \(f\) such that for every truth assignment \(M\) and any sentences \(\varphi\), \(\psi\), \[\langle M,f\rangle\models_{s}\neg(\varphi|\psi)\leftrightarrow\neg\varphi| \neg\psi.\] For that it suffices to define \(f\) so that \(\overline{f}(\neg\varphi|\neg\psi)=\overline{f}(\neg(\varphi|\psi))\), or equivalently \[f(\neg\overline{f}(\varphi),\neg\overline{f}(\psi))=\neg f(\overline{f}( \varphi),\overline{f}(\psi)).\] This can be done by defining \(f(\alpha,\beta)\) by induction along an ordering of the pairs \(\langle r(\alpha),r(\beta)\rangle\), where \(r(\alpha)\) is the usual rank of \(\alpha\). Nevertheless, such an \(f\) supporting \(\neg(\varphi|\psi)\leftrightarrow\neg\varphi|\neg\psi\) would serve just as a counterexample or a curiosity, and could by no means characterize a natural class of functions. Specifically it is easily seen that such an \(f\) cannot be regular. **Fact 2.34**: _If \(f\) is regular then for every \(M\) there are \(\varphi\), \(\psi\) such that \(\langle M,f\rangle\not\models\neg(\varphi|\psi)\leftrightarrow\neg\varphi|\neg\psi\). Thus for regular \(f\), the scheme \(\neg(\varphi|\psi)\leftrightarrow\neg\varphi|\neg\psi\) is always false in \(\langle M,f\rangle\)._ _Proof._ Let \(f\) be regular. Then for every \(\alpha\), \(f(\neg\alpha,\neg\neg\alpha)\sim f(\neg\alpha,\alpha)=f(\alpha,\neg\alpha)\), therefore \(\neg f(\alpha,\neg\alpha)\leftrightarrow f(\neg\alpha,\neg\neg\alpha)\) is a contradiction. Hence for every \(M\), \(M\not\models\neg f(\alpha,\neg\alpha)\leftrightarrow f(\neg\alpha,\neg \neg\alpha)\), which means that \(M\not\models\overline{f}(\neg(\alpha|\neg\alpha)\leftrightarrow\neg\alpha| \neg\neg\alpha)\), or \(\langle M,f\rangle\not\models\neg(\alpha|\neg\alpha)\leftrightarrow\neg \alpha|\neg\neg\alpha\). \(\dashv\) Concerning question (b) above the answer is negative with respect to the semantics \(\models_{X}\) for any \(X\subseteq Reg^{*}\). Let us give some definitions with the purpose to prove later that they are void. **Definition 2.35**: Let \(<\) be a regular total ordering of \(Sen(L)\). \(<\) is said to be: (a) \(\wedge\)_-monotonic,_ if for all \(\alpha,\beta,\gamma\in Sen(L)\) such that \(\alpha\wedge\gamma\not\sim\beta\wedge\gamma\), \[\alpha<\beta\ \Leftrightarrow\alpha\wedge\gamma<\beta\wedge\gamma.\] (b) \(\vee\)_-monotonic,_ if for all \(\alpha,\beta,\gamma\in Sen(L)\) such that \(\alpha\vee\gamma\not\sim\beta\vee\gamma\), \[\alpha<\beta\ \Leftrightarrow\alpha\vee\gamma<\beta\vee\gamma.\] Accordingly, a choice function \(f\in Reg^{*}\) is said to be \(\wedge\)_-monotonic_ (resp. 
\(\vee\)_-monotonic_) if \(f=\min_{<}\) and \(<\) is \(\wedge\)-monotonic (resp. \(\vee\)-monotonic). **Lemma 2.36**: _(i) If \(<\) is \(\wedge\)-monotonic, then_ \[\min(\alpha\wedge\gamma,\beta\wedge\gamma)\sim\gamma\wedge\min(\alpha,\beta).\] _(ii) If \(<\) is \(\vee\)-monotonic, then_ \[\min(\alpha\vee\gamma,\beta\vee\gamma)\sim\gamma\vee\min(\alpha,\beta).\] _Proof._ (i) If \(\alpha\wedge\gamma\sim\beta\wedge\gamma\) then obviously \(\min(\alpha\wedge\gamma,\beta\wedge\gamma)\sim\gamma\wedge\min(\alpha,\beta)\). So assume \(\alpha\wedge\gamma\not\sim\beta\wedge\gamma\). Then also \(\alpha\not\sim\beta\). Without loss of generality suppose \(\alpha<\beta\), so \(\min(\alpha,\beta)=\alpha\). By \(\wedge\)-monotonicity, \(\alpha\wedge\gamma<\beta\wedge\gamma\), so \(\min(\alpha\wedge\gamma,\beta\wedge\gamma)=\alpha\wedge\gamma\sim\gamma\wedge\min(\alpha,\beta)\). (ii) Similar. \(\dashv\) It is easy to give syntactic characterizations of \(\wedge\)- and \(\vee\)-monotonicity. The proof of the following is left to the reader. **Lemma 2.37**: _Let \(f\in Reg^{*}\). Then:_ _(i) \(f\) is \(\wedge\)-monotonic if and only if for all \(M\) and all \(\varphi,\psi,\sigma\in Sen(L_{s})\):_ \[\langle M,f\rangle\models\varphi\wedge(\psi|\sigma)\leftrightarrow(\varphi\wedge\psi)|(\varphi\wedge\sigma).\] _(ii) \(f\) is \(\vee\)-monotonic if and only if for all \(M\) and all \(\varphi,\psi,\sigma\in Sen(L_{s})\):_ \[\langle M,f\rangle\models\varphi\vee(\psi|\sigma)\leftrightarrow(\varphi\vee\psi)|(\varphi\vee\sigma).\] It follows from the previous Lemma that \(\wedge\)- and \(\vee\)-monotonicity are exactly the conditions under which \(\wedge\) and \(\vee\), respectively, distribute over \(|\). However we can easily see by a counterexample that there are no \(\wedge\)-monotonic or \(\vee\)-monotonic regular functions (or orderings). **Proposition 2.38**: _There is no regular total ordering \(<\) of \(Sen(L)\) which is \(\wedge\)-monotonic or \(\vee\)-monotonic. Consequently there is no \(X\subseteq Reg^{*}\) such that the schemes \(\varphi\wedge(\psi|\sigma)\leftrightarrow(\varphi\wedge\psi)|(\varphi\wedge\sigma)\) and \(\varphi\vee(\psi|\sigma)\leftrightarrow(\varphi\vee\psi)|(\varphi\vee\sigma)\) are \(\models_{X}\)-tautologies._ _Proof._ Suppose there is a regular total ordering \(<\) of \(Sen(L)\) which is \(\wedge\)-monotonic. Let \(p\), \(q\), \(r\) be atomic sentences such that \(p<q<r\). Consider the formula \(\alpha=p\wedge r\wedge\neg q\). Then by \(\wedge\)-monotonicity \(p<q\) implies \(p\wedge r\wedge\neg q<q\wedge r\wedge\neg q\), or by regularity \(\alpha<\bot\). For the same reason \(q<r\) implies \(p\wedge q\wedge\neg q<p\wedge r\wedge\neg q\), or \(\bot<\alpha\), a contradiction. Working with \(\beta=p\vee r\vee\neg q\) we similarly show that there is no regular total ordering \(<\) which is \(\vee\)-monotonic. \(\dashv\) Having settled the question about the distributivity of \(\wedge\) and \(\vee\) over \(|\), we come to the converse question, whether \(|\) can distribute over \(\wedge\) and/or \(\vee\) for some class \(X\) of choice functions such that \(X\subseteq Reg^{*}\). The answer is "no" again with respect to \(Reg^{*}\). Namely: **Proposition 2.39**: _There is no regular total ordering \(<\) of \(Sen(L)\) such that if \(f=\min_{<}\), then for every \(M\), \(\langle M,f\rangle\) satisfies the scheme_ \[(*)\quad\varphi|(\psi\wedge\sigma)\leftrightarrow(\varphi|\psi)\wedge(\varphi|\sigma).\] _Consequently there is no \(X\subseteq Reg^{*}\) such that (*) is a \(\models_{X}\)-tautology.
Similarly for the dual scheme_ \[(**)\quad\varphi|(\psi\vee\sigma)\leftrightarrow(\varphi|\psi)\vee(\varphi|\sigma).\] _Proof._ Towards reaching a contradiction assume that there is a regular total ordering \(<\) of \(Sen(L)\) such that if \(f=\min_{<}\), then \((*)\) is true in all models \(\langle M,f\rangle\). Fix some atomic sentence \(p\) of \(L\). By regularity we have either \(p<\bot\) or \(\bot<p\). We examine below some consequences of each of these cases. (i) Let \(p<\bot\). Pick some \(q\neq p\) and an \(M\) such that \(M\models p\wedge q\). Then \[\langle M,f\rangle\models_{s}p|(q\wedge\neg q)\leftrightarrow(p|q)\wedge(p|\neg q),\] or \[\langle M,f\rangle\models_{s}p|\bot\leftrightarrow(p|q)\wedge(p|\neg q). \tag{9}\] Since \(p<\bot\) and \(M\models p\), the left-hand side of the equivalence in (9) is true in \(\langle M,f\rangle\). Thus so is the right-hand side of the equivalence. Since \(M\models p\wedge q\), the conjunct \(p|q\) is true, while the truth of the conjunct \(p|\neg q\) necessarily implies \(p<\neg q\), since \(M\models q\). Then pick \(N\) such that \(N\models p\wedge\neg q\). We have also \[\langle N,f\rangle\models_{s}p|\bot\leftrightarrow(p|q)\wedge(p|\neg q). \tag{10}\] Again the left-hand side of the equivalence in (10) is true in \(\langle N,f\rangle\). So the right-hand side is true too. Since \(N\models p\wedge\neg q\), the conjunct \(p|\neg q\) holds. In order for the conjunct \(p|q\) to hold too we must have \(p<q\), since \(N\models\neg q\). Summing up the above two facts we conclude that if the letters \(p\), \(q\) range over atomic sentences, then \[(\forall p\neq q)(p<\bot\ \Rightarrow p<q\ \&\ p<\neg q). \tag{11}\] (ii) Let now \(\bot<p\). Pick again some \(q\neq p\) and an \(M\) such that \(M\models p\wedge q\). Then (9) holds again, but now the left-hand side of the equivalence in (9) is false in \(\langle M,f\rangle\). Thus so is the right-hand side, which, since \(M\models p\), necessarily implies \(\neg q<p\). Then pick \(N\) such that \(N\models p\wedge\neg q\). (10) holds again with the left-hand side of the equivalence being false. The right-hand side is false too and this holds only if \(q<p\), since \(N\models\neg q\). Therefore from these two facts we conclude that \[(\forall p\neq q)(\bot<p\ \Rightarrow q<p\ \&\ \neg q<p). \tag{12}\] Now since there are at least three distinct atoms \(p,q,r\) and \(p,q,r,\bot\) are linearly ordered by \(<\), then at least two of them lie on the left of \(\bot\), or on the right of \(\bot\). That is, there are \(p,q\) such that \(p,q<\bot\) or \(\bot<p,q\). If \(p,q<\bot\), (11) implies that \(p<q\), \(p<\neg q\), \(q<p\) and \(q<\neg p\), a contradiction. If \(\bot<p,q\), then (12) implies that \(q<p\), \(\neg q<p\), \(p<q\) and \(\neg p<q\), a contradiction again. This completes the proof that (*) cannot be a \(\models_{X}\)-tautology for any \(X\subseteq Reg^{*}\). Concerning the scheme (**) we consider the instances \[p|(q\vee\neg q)\leftrightarrow(p|q)\vee(p|\neg q),\] i.e., \[p|\top\leftrightarrow(p|q)\vee(p|\neg q),\] for atomic sentences \(p,q\), and argue analogously as before, by examining the cases \(p<\top\) and \(\top<p\). \(\dashv\)

### \(\neg\)-decreasingness

There is still the question of how \(\neg\) behaves with respect to \(|\). As we saw in Fact 2.34, \(\neg\) cannot commute with \(|\) in models \(\langle M,f\rangle\) with regular \(f\).
Equivalently, if \(f\in Reg^{*}\) and \(f=\min_{<}\), \(\neg\) cannot be "increasing", that is, cannot satisfy \(\alpha<\beta\Leftrightarrow\neg\alpha<\neg\beta\), for all \(\alpha\), \(\beta\). However it can be "decreasing", and this turns out to be a useful property. **Definition 2.40**: \(<\) is said to be \(\neg\)_-decreasing,_ if for all \(\alpha,\beta\in Sen(L)\) such that \(\alpha\not\sim\beta\), \[\alpha<\beta\Leftrightarrow\neg\beta<\neg\alpha.\] Accordingly a choice function \(f\in Reg^{*}\) is said to be \(\neg\)_-decreasing_ if \(f=\min_{<}\) and \(<\) is \(\neg\)-decreasing. **Lemma 2.41**: \(<\) _is \(\neg\)-decreasing if and only if for all \(\alpha\not\sim\beta\),_ \[\neg\min(\neg\alpha,\neg\beta)\sim\max(\alpha,\beta).\] _Proof._ Let \(<\) be \(\neg\)-decreasing. Then for any \(\alpha\not\sim\beta\), \(\alpha<\beta\Leftrightarrow\neg\beta<\neg\alpha\), so, \(\min(\neg\alpha,\neg\beta)=\neg\max(\alpha,\beta)\), hence \(\neg\min(\neg\alpha,\neg\beta)\sim\max(\alpha,\beta)\). Conversely, suppose \(<\) is not \(\neg\)-decreasing. Then there are \(\alpha\), \(\beta\) such that \(\alpha\not\sim\beta\), \(\alpha<\beta\) and \(\neg\alpha<\neg\beta\). But then \(\neg\min(\neg\alpha,\neg\beta)=\neg\neg\alpha\not\sim\beta=\max(\alpha,\beta)\). \(\dashv\) We can also give a syntactic characterization of \(\neg\)-decreasingness. **Theorem 2.42**: _Let \(f\in Reg^{*}\). Then \(f\) is \(\neg\)-decreasing if and only if for every \(M\) and any \(\varphi\), \(\psi\),_ \[\langle M,f\rangle\models_{s}\varphi\wedge\neg\psi\rightarrow(\varphi|\psi \leftrightarrow\neg\varphi|\neg\psi).\] _Proof._ "\(\Rightarrow\)": Let \(f\) be \(\neg\)-decreasing and \(f=\min_{<}\). Let \(M\) and \(\varphi\), \(\psi\) such that \(\langle M,f\rangle\models_{s}\varphi\wedge\neg\psi\), that is, \(M\models\overline{f}(\varphi)\wedge\neg\overline{f}(\psi)\). It suffices to show that \(\langle M,f\rangle\models_{s}(\varphi|\psi\leftrightarrow\neg\varphi|\neg\psi)\), or equivalently, \[M\models f(\overline{f}(\varphi),\overline{f}(\psi))\leftrightarrow f(\neg \overline{f}(\varphi),\neg\overline{f}(\psi)).\] If we set \(\overline{f}(\varphi)=\alpha\) and \(\overline{f}(\psi)=\beta\), the above amount to assuming that \(M\models\alpha\wedge\neg\beta\) and concluding that \(M\models f(\alpha,\beta)\leftrightarrow f(\neg\alpha,\neg\beta)\), or \[M\models\min(\alpha,\beta)\leftrightarrow\min(\neg\alpha,\neg\beta).\] But since \(M\models\alpha\wedge\neg\beta\), \(M\models\min(\alpha,\beta)\) implies \(\min(\alpha,\beta)=\alpha\). Then, since \(<\) is \(\neg\)-decreasing, \(\min(\neg\alpha,\neg\beta)=\neg\beta\). Therefore \(M\models\min(\neg\alpha,\neg\beta)\). So \(M\models\min(\alpha,\beta)\rightarrow\min(\neg\alpha,\neg\beta)\). The converse is similar. "\(\Leftarrow\)": Let \(f\) be non-\(\neg\)-decreasing, with \(f=\min_{<}\), and let \(\alpha\not\sim\beta\) such that \(\alpha<\beta\) and \(\neg\alpha<\neg\beta\). Without loss of generality there is \(M\) such that \(M\models\neg\alpha\wedge\beta\). Then \(M\not\models\min(\alpha,\beta)\), thus \(\langle M,f\rangle\not\models_{s}\alpha|\beta\), while \(M\models\min(\neg\alpha,\neg\beta)\), or \(\langle M,f\rangle\models_{s}\neg\alpha|\neg\beta\). 
So \(\langle M,f\rangle\not\models(\alpha|\beta\leftrightarrow\neg\alpha|\neg\beta)\), and therefore \[\langle M,f\rangle\not\models\neg\alpha\wedge\beta\rightarrow(\alpha|\beta \leftrightarrow\neg\alpha|\neg\beta).\] Therefore \(\langle M,f\rangle\) does not satisfy the scheme \(\varphi\wedge\neg\psi\rightarrow(\varphi|\psi\leftrightarrow\neg\varphi| \neg\psi)\). \(\dashv\) Next let us make sure that \(\neg\)-decreasing total orderings exist. **Theorem 2.43**: _There exist regular \(\neg\)-decreasing total orderings of \(Sen(L)\), and hence regular \(\neg\)-decreasing choice functions for \(L\)._ _Proof._ There is a general method for constructing regular and \(\neg\)-decreasing total orderings of \(Sen(L)\) that makes use of the Axiom of Choice. This is the following. Let \(P=\{\{[\alpha],[\neg\alpha]\}:\alpha\in Sen(L)\}\). Pick by AC a choice function \(F\) for \(P\), and let \(A=\bigcup F\)"\(P\) and \(B=Sen(L)-A\). Both \(A\), \(B\) are \(\sim\)-saturated, that is, \(\alpha\in A\Rightarrow[\alpha]\subset A\), and similarly for \(B\). As in the proof of Proposition 2.33 pick a regular total ordering \(<_{1}\) of \(A\). By the definition of \(A\), \(B\), clearly for every \(\alpha\in Sen(L)\), \[\alpha\in A\Leftrightarrow\neg\alpha\in B,\] so \(<_{1}\) induces a regular total ordering \(<_{2}\) of \(B\) by setting \[\alpha<_{2}\beta\Leftrightarrow\neg\beta<_{1}\neg\alpha.\] Then define \(<\) of \(Sen(L)\) as follows: \(\alpha<\beta\) if and only if : \(\alpha\in A\) and \(\beta\in B\), or \(\alpha,\beta\in A\) and \(\alpha<_{1}\beta\), or \(\alpha,\beta\in B\) and \(\alpha<_{2}\beta\). It is easy to verify that \(<\) is a total regular and \(\neg\)-decreasing ordering of \(Sen(L)\). \(\dashv\) We can further show that every regular \(\neg\)-decreasing total ordering is constructed by the general method of Theorem 2.43. Let us give some definitions. A set \(X\subset Sen(L)\) is said to be _selective_ if of every pair of opposite sentences \(\{\alpha,\neg\alpha\}\)\(X\) contains exactly one. Recall also that \(X\) is \(\sim\)_-saturated_ if for every \(\alpha\), \(\alpha\in X\Rightarrow[\alpha]\subset X\). Note that familiar examples of selective and \(\sim\)-saturated sets are the consistent and complete sets \(\Sigma\subset Sen(L)\) (as well as their complements \(Sen(L)-\Sigma\)). However not every selective and \(\sim\)-saturated set is of this kind. For instance the sets \(A\), \(B\) in the proof of 2.43 are selective and \(\sim\)-saturated forming a partition of \(Sen(L)\). Moreover \(A\) is an initial segment and \(B\) is a final segment of \(\langle Sen(L),<\rangle\). We shall see that such a partition exists for every regular \(\neg\)-decreasing total ordering. **Proposition 2.44**: _Let \(<\) be a regular \(\neg\)-decreasing total ordering of \(Sen(L)\). Then \(Sen(L)\) splits into two \(\sim\)-saturated sets \(I\) and \(J\) which are selective, hence_ \[\alpha\in I\Leftrightarrow\neg\alpha\in J,\] _and \(I<J\), that is, \(I\) is an initial and \(J\) a final segment of \(<\)._ _Proof._ Let \(<\) be a regular and \(\neg\)-decreasing total ordering of \(Sen(L)\). Let us call an initial segment of \(<\)_weakly selective_ if from every pair \(\{\alpha,\neg\alpha\}\), \(I\) contains _at most one_ element. We first claim that there are weakly selective initial segments of \(<\). Observe that if for some \(\alpha\), \((\forall\beta)(\alpha<\beta\vee\alpha<\neg\beta)\) is true, then the initial segment \(\{\beta:\beta\leq\alpha\}\) is weakly selective. 
So if, towards a contradiction, we assume that no weakly selective initial segment exists, then \[\forall\alpha\exists\beta(\beta\leq\alpha\wedge\neg\beta\leq\alpha). \tag{13}\] Assume (13) holds and fix some \(\alpha\). Pick \(\beta\) such that \(\beta\leq\alpha\) and \(\neg\beta\leq\alpha\). By \(\neg\)-decreasingness, \(\neg\alpha\leq\neg\beta\). Therefore \(\neg\alpha<\alpha\). Now apply (13) to \(\neg\alpha\) to find \(\gamma\) such that \(\gamma\leq\neg\alpha\) and \(\neg\gamma\leq\neg\alpha\). By \(\neg\)-decreasingness, \(\neg\neg\alpha\leq\neg\gamma\). Thus \(\neg\neg\alpha\leq\neg\alpha\) and by regularity, \(\alpha\leq\neg\alpha\). But this contradicts \(\neg\alpha<\alpha\). So there exist weakly selective initial segments of \(<\). Taking the union of all such initial segments, we find a greatest weakly selective initial segment \(I\). It is easy to see that \(I\) is selective, i.e., from each pair \(\{\alpha,\neg\alpha\}\) it contains _exactly one_ element. Indeed, assume the contrary. Then there is \(\alpha\), such that either \(I<\alpha<\neg\alpha\), or \(I<\neg\alpha<\alpha\). Assume the first is the case, the other being similar. But then there is \(\beta\) such that \(I<\{\beta,\neg\beta\}<\alpha<\neg\alpha\), because otherwise the segment \(\{\gamma:\gamma\leq\alpha\}\) would be a weakly selective segment greater than \(I\), contrary to the maximality of \(I\). Now \(\{\beta,\neg\beta\}<\alpha<\neg\alpha\) implies that \(\beta<\alpha\) and \(\neg\beta<\neg\alpha\), which contradicts the \(\neg\)-decreasingness of \(<\). Further, let \(J=\{\neg\alpha:\alpha\in I\}\). Then \(J\) is a greatest selective final segment, \(I<J\) and \(I\cap J=\emptyset\). To show that \(I\) (and hence \(J\)) is \(\sim\)-saturated, let \(\alpha\in I\). Assume first that \(\alpha\) is not the greatest element of \(I\), so there is \(\beta\in I\) such that \(\alpha<\beta\). By regularity, \([\alpha]<\beta\). Hence \([\alpha]\subset I\). Next assume that \(\alpha\) is the greatest element of \(I\). Then necessarily \(\alpha\) is the greatest element of \([\alpha]\) too, otherwise \(I\cup[\alpha]\supsetneq I\) and \(I\cup[\alpha]\) is selective, contrary to the maximality of \(I\). Thus again \(I\) is \(\sim\)-saturated. It remains to show that \(I\cup J=Sen(L)\). Assume \(\alpha\notin I\). Since \(I\) is selective, \(\neg\alpha\in I\), therefore \(\neg\neg\alpha\in J\). Since \(J\) is \(\sim\)-saturated, \(\alpha\in J\). Thus \(I\cup J=Sen(L)\). \(\dashv\) So regular \(\neg\)-decreasing functions constitute a natural class of choice functions stronger than \(Reg^{*}\). Let \[Dec=\{f\in Reg^{*}:f\mbox{ is }\neg\mbox{-decreasing}\}.\] We abbreviate the corresponding semantic notions \(\models_{Dec}\) and \(\sim_{Dec}\), by \(\models_{Dec}\) and \(\sim_{Dec}\), respectively. ### The dual connective Every binary or unary logical operation when combined with negation produces a _dual_ one. The dual of \(|\) is \[\varphi\circ\psi:=\neg(\neg\varphi|\neg\psi)\] for all \(\varphi,\psi\in Sen(L_{s})\). A natural question is whether each of the operations \(|\) and \(\circ\) distributes over its dual with respect to a truth relation \(\models_{X}\), that is, whether there is a class of functions \(X\) such that \[\models_{X}\varphi\circ(\psi|\sigma)\leftrightarrow(\varphi\circ\psi)|(\varphi \circ\sigma) \tag{14}\] and \[\models_{X}\varphi|(\psi\circ\sigma)\leftrightarrow(\varphi|\psi)\circ(\varphi|\sigma) \tag{15}\] are \(X\)-tautologies. 
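It is worth recording, as an aside that follows at once from Lemma 2.41, what \(\circ\) looks like at the level of collapses: if \(f=\min_{<}\) with \(<\) regular and \(\neg\)-decreasing, then for classical \(\alpha\not\sim\beta\), \[\overline{f}(\alpha\circ\beta)=\neg f(\neg\alpha,\neg\beta)=\neg\min(\neg\alpha,\neg\beta)\sim\max(\alpha,\beta),\] so \(|\) collapses to the \(<\)-minimum and \(\circ\) to the \(<\)-maximum of the collapsed arguments. Read this way, (14) and (15) ask for the mutual distributivity of \(\min\) and \(\max\) over a total ordering, and this is exactly how Proposition 2.46 below settles the \(\neg\)-decreasing case.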
(14), (15) are dual and equivalent to each other, since taking the negations of both sides of (14) one obtains (15), and vice-versa. **Proposition 2.45**: _There exist \(\alpha\), \(\beta\), \(\gamma\), \(M\) and \(f\in Reg^{*}\) (\(f\) non-\(\neg\)-decreasing) such that_ \[\langle M,f\rangle\not\models_{s}\alpha\circ(\beta|\gamma)\leftrightarrow(\alpha\circ\beta)|(\alpha\circ\gamma).\] _Proof._ Pick \(\alpha\), \(\beta\), \(\gamma\) such that \(\alpha\not\models\gamma\), \(\gamma\not\sim\neg\alpha\), and \(M\) such that \(M\models\alpha\wedge\neg\gamma\). Then we can easily find a regular total ordering of \(Sen(L)\) such that \(\neg\alpha<\neg\beta\), \(\neg\gamma<\neg\alpha\), \(\gamma<\alpha\) and \(\beta<\gamma\). Let \(f=\min_{<}=\min\). By definition, \(\alpha\circ(\beta|\gamma)=\neg(\neg\alpha|\neg(\beta|\gamma))\), and \((\alpha\circ\beta)|(\alpha\circ\gamma)=\neg(\neg\alpha|\neg\beta)|\neg(\neg\alpha|\neg\gamma)\). Therefore \[\overline{f}(\alpha\circ(\beta|\gamma))=\overline{f}(\neg(\neg\alpha|\neg(\beta|\gamma)))=\neg\overline{f}(\neg\alpha|\neg(\beta|\gamma))=\neg f(\neg\alpha,\neg f(\beta,\gamma))=\] \[\neg\min(\neg\alpha,\neg\min(\beta,\gamma))=\neg\min(\neg\alpha,\neg\beta)=\neg\neg\alpha.\] On the other hand, \[\overline{f}((\alpha\circ\beta)|(\alpha\circ\gamma))=\overline{f}(\neg(\neg\alpha|\neg\beta)|\neg(\neg\alpha|\neg\gamma))=f(\neg f(\neg\alpha,\neg\beta),\neg f(\neg\alpha,\neg\gamma))=\] \[\min(\neg\min(\neg\alpha,\neg\beta),\neg\min(\neg\alpha,\neg\gamma))=\min(\neg\neg\alpha,\neg\neg\gamma)=\neg\neg\gamma,\] where the last equation is due to the fact that \(\min(\alpha,\gamma)=\gamma\) and \(<\) is regular. Thus \(M\models\overline{f}(\alpha\circ(\beta|\gamma))\) and \(M\not\models\overline{f}((\alpha\circ\beta)|(\alpha\circ\gamma))\). Therefore \(\langle M,f\rangle\models_{s}\alpha\circ(\beta|\gamma)\) and \(\langle M,f\rangle\not\models_{s}(\alpha\circ\beta)|(\alpha\circ\gamma)\). \(\dashv\) Note that in the preceding counterexample we have \(\gamma<\alpha\) and \(\neg\gamma<\neg\alpha\), so the ordering \(<\) is not \(\neg\)-decreasing. We see next that if \(f\) is \(\neg\)-decreasing, then in \(\langle M,f\rangle\) \(|\) and \(\circ\) do distribute over each other. **Proposition 2.46**: _If \(f\in Reg^{*}\) is \(\neg\)-decreasing, then for all \(M\), \(\varphi\), \(\psi\), \(\sigma\),_ \[\langle M,f\rangle\models_{s}\varphi\circ(\psi|\sigma)\leftrightarrow(\varphi\circ\psi)|(\varphi\circ\sigma),\] _and_ \[\langle M,f\rangle\models_{s}\varphi|(\psi\circ\sigma)\leftrightarrow(\varphi|\psi)\circ(\varphi|\sigma).\] _Proof._ The above equivalences are dual to each other, so it suffices to show the first of them. Specifically it suffices to prove that if \(f\in Reg^{*}\) and \(f\) is \(\neg\)-decreasing, then \[\overline{f}(\varphi\circ(\psi|\sigma))\sim\overline{f}((\varphi\circ\psi)|(\varphi\circ\sigma)).\] Fix such an \(f\) and let \(<\) be the regular, \(\neg\)-decreasing ordering such that \(f=\min_{<}=\min\). If we set \(\overline{f}(\varphi)=\alpha\), \(\overline{f}(\psi)=\beta\), \(\overline{f}(\sigma)=\gamma\), express \(\circ\) in terms of \(|\) and replace \(f\) with \(<\), the above equivalence is written: \[\neg\min(\neg\alpha,\neg\min(\beta,\gamma))\sim\min(\neg\min(\neg\alpha,\neg\beta),\neg\min(\neg\alpha,\neg\gamma)). \tag{16}\] If \(\alpha\sim\beta\sim\gamma\), obviously (16) is true. Assume \(\alpha\sim\beta\) and \(\alpha\not\sim\gamma\).
Then, by regularity, (16) becomes \[\neg\min(\neg\alpha,\neg\min(\alpha,\gamma))\sim\min(\alpha,\neg\min(\neg\alpha,\neg\gamma)).\] To verify it we consider the cases \(\alpha<\gamma\) and \(\gamma<\alpha\). E.g. let \(\alpha<\gamma\). By \(\neg\)-decreasingness, \(\neg\gamma<\neg\alpha\), so both sides of the above relation are \(\sim\) to \(\alpha\). Similarly if \(\gamma<\alpha\). The case \(\alpha\sim\gamma\) and \(\alpha\not\sim\beta\) is handled symmetrically, since (16) is symmetric in \(\beta\) and \(\gamma\). So it remains to prove (16) when \(\alpha\not\sim\beta\) and \(\alpha\not\sim\gamma\). Then, by Lemma 2.41, \(\neg\min(\neg\alpha,\neg\beta)\sim\max(\alpha,\beta)\), so (16) is written \[\max(\alpha,\min(\beta,\gamma))\sim\min(\max(\alpha,\beta),\max(\alpha,\gamma)). \tag{17}\] We don't know if there is some more elegant direct (that is, not-by-cases) proof of (17). So we verify it by cases. _Case 1._ Assume \(\alpha\leq\min(\beta,\gamma)\). Then \(\max(\alpha,\min(\beta,\gamma))=\min(\beta,\gamma)\). Besides \(\alpha\leq\min(\beta,\gamma)\) implies \(\max(\alpha,\beta)=\beta\) and \(\max(\alpha,\gamma)=\gamma\). Therefore both sides of (17) are \(\sim\) to \(\min(\beta,\gamma)\). _Case 2._ Assume \(\min(\beta,\gamma)<\alpha\). Then \(\max(\alpha,\min(\beta,\gamma))=\alpha\). To decide the right-hand side of (17), suppose \(\beta\leq\gamma\), so we have the following subcases. \((2a)\) \(\beta<\alpha\leq\gamma\): Then \(\max(\alpha,\beta)=\alpha\), \(\max(\alpha,\gamma)=\gamma\), therefore \(\min(\max(\alpha,\beta),\max(\alpha,\gamma))=\alpha\), thus (17) holds. \((2b)\) \(\beta\leq\gamma<\alpha\): Then \(\max(\alpha,\beta)=\max(\alpha,\gamma)=\alpha\). So \[\min(\max(\alpha,\beta),\max(\alpha,\gamma))=\alpha,\] thus (17) holds again. _Case 3._ Assume \(\min(\beta,\gamma)<\alpha\), so \(\max(\alpha,\min(\beta,\gamma))=\alpha\), but suppose now \(\gamma<\beta\). Then we have the subcases: \((3a)\) \(\gamma<\alpha\leq\beta\): Then \(\max(\alpha,\beta)=\beta\) and \(\max(\alpha,\gamma)=\alpha\). Thus \(\min(\max(\alpha,\beta),\max(\alpha,\gamma))=\alpha\), that is, (17) holds. \((3b)\) \(\gamma<\beta\leq\alpha\): Then \(\max(\alpha,\beta)=\alpha\) and \(\max(\alpha,\gamma)=\alpha\). So \[\min(\max(\alpha,\beta),\max(\alpha,\gamma))=\alpha.\] This completes the proof of the Proposition. \(\dashv\) **Corollary 2.47**: _The schemes_ \[\varphi\circ(\psi|\sigma)\leftrightarrow(\varphi\circ\psi)|(\varphi\circ\sigma) \tag{18}\] _(or its dual) and_ \[\varphi\wedge\neg\psi\rightarrow(\varphi|\psi\leftrightarrow\neg\varphi|\neg\psi) \tag{19}\] _are equivalent and each one of them is a syntactic characterization of the regular \(\neg\)-decreasing orderings (and the corresponding choice functions)._ _Proof._ The equivalence of (18) and (19), as _schemes_, follows from Propositions 2.45, 2.46, as well as from Theorem 2.42, by which (19) characterizes the regular \(\neg\)-decreasing orderings. \(\dashv\) Interchanging \(|\) and \(\circ\) inside a sentence gives rise to a duality of sentences of \(L_{s}\), that is, a mapping \(\varphi\mapsto\varphi^{d}\) defined inductively as follows: \(\varphi^{d}=\varphi\), for classical \(\varphi\), \((\varphi\wedge\psi)^{d}=\varphi^{d}\wedge\psi^{d}\), \((\neg\varphi)^{d}=\neg\varphi^{d}\), \((\varphi|\psi)^{d}=\varphi^{d}\circ\psi^{d}\). By the help of dual orderings \(<^{d}\) and dual choice functions \(f^{d}\), one can without much effort establish the following "Duality Theorem" which is the analogue of Boolean Duality: **Theorem 2.48**: _For every \(\varphi\in Fml(L_{s})\), \(\models_{Reg^{*}}\varphi\) if and only if \(\models_{Reg^{*}}\varphi^{d}\)._

## 3 Axiomatization. Soundness and completeness results
A _Propositional Superposition Logic_ (PLS for short) will consist as usual of two parts, a _syntactic_ one, i.e., a formal system \(K\), consisting of axiom-schemes and inference rules, and a _semantical_ one, consisting essentially of a set \(X\subseteq\mathcal{F}\) of choice functions over \({\it Sen}(L)\).11 Let us start with the latter. Footnote 11: More or less the same is true for every logical system, e.g. PL. Although we often identify PL with the set of its logical axioms and the inference rule of Modus Ponens, tacitly we think of it as a set of axiom-schemes \(\mathsf{Ax}(\mathrm{PL})\) and the inference rule \({\it MP}\) on the one hand, and the natural Boolean semantics on the other. Specifically \(\mathsf{Ax}(\mathrm{PL})\) will consist of the following schemes: (i) \(\alpha\rightarrow(\beta\rightarrow\alpha)\) (ii) \((\alpha\rightarrow(\beta\rightarrow\gamma))\rightarrow((\alpha\rightarrow\beta)\rightarrow(\alpha\rightarrow\gamma))\) (iii) \((\neg\alpha\rightarrow\neg\beta)\rightarrow((\neg\alpha\rightarrow\beta)\rightarrow\alpha)\). **Lemma 3.1**: _For every \(X\subseteq{\cal F}\), the set \(Taut(X)\) is decidable (i.e., computable)._ _Proof._ By the definition of \(\models_{s}\), \(\varphi\in Taut(X)\) if and only if \((\forall f\in X)(\overline{f}(\varphi)\in Taut)\) (where \(Taut\) is the set of tautologies of PL), i.e., \[\varphi\in Taut(X)\ \Leftrightarrow\{\overline{f}(\varphi):f\in X\}\subset Taut.\] Now given \(\varphi\) and \(f\), the collapse \(\overline{f}(\varphi)\) results from \(\varphi\) by inductively replacing each subformula \(\psi_{1}|\psi_{2}\) of \(\varphi\) with either \(\overline{f}(\psi_{1})\) or \(\overline{f}(\psi_{2})\). So clearly for every \(\varphi\), the set of all possible collapses \(\{\overline{f}(\varphi):f\in X\}\) is finite. Therefore, since \(Taut\) is decidable, it is decidable whether \(\{\overline{f}(\varphi):f\in X\}\subset Taut\). \(\dashv\) In particular we are interested in the sets \[Taut({\cal F})\subseteq Taut(Reg)\subseteq Taut(Reg^{*})\subseteq Taut(Dec), \tag{20}\] (as well as in \(Taut(Asso)\subseteq Taut(Reg^{*})\)) corresponding to the truth relations considered above. It follows from 3.1 that these sets are decidable. The question is whether each of these sets of tautologies is axiomatizable by a recursive set of axioms and inference rules. We shall see that the answer is yes. Let us come to the formal system \(K\). Every \(K\) consists of a set of axioms \({\sf Ax}(K)\) and a set of inference rules \({\sf IR}(K)\). Also the axioms and rules of \(K\) extend the axioms and rules of PL, i.e., \[{\sf Ax}(K)={\sf Ax}({\rm PL})+\{S_{i}:i\leq n\},\ {\rm and}\ MP\in{\sf IR}(K),\] where \(S_{i}\) will be some schemes considered below expressing basic properties of \(|\). Given \(X\subseteq{\cal F}\), in order to axiomatize \(Taut(X)\) by a formal system \(K\), clearly it is necessary for the axioms of \(K\) to be \(X\)-tautologies, i.e., \[{\sf Ax}(K)\subseteq Taut(X).\] For any such \(X\subseteq{\cal F}\) and \(K\), we have a _logic_ that extends PL, called _Propositional Superposition Logic w.r.t.
to \(X\) and \(K\),_ denoted \[{\rm PLS}(X,K).\] Given a formal system \(K\) as above and \(\Sigma\cup\{\varphi\}\subset Sen(L)\), a (Hilbert-style) \(K\)_-proof of \(\varphi\) from \(\Sigma\)_ is defined just as a proof in PL (mutatis mutandis), that is, as a sequence of sentences \(\sigma_{1},\ldots,\sigma_{n}\) such that \(\sigma_{n}=\varphi\) and each \(\sigma_{i}\) either belongs to \(\Sigma\) or belongs to \({\sf Ax}(K)\), or is derived from previous ones by the inference rules in \({\sf IR}(K)\). We denote by \[\Sigma\vdash_{K}\varphi\] the fact that there is a \(K\)-proof of \(\varphi\) from \(\Sigma\). Especially for classical sentences, i.e., \(\Sigma\cup\{\alpha\}\subseteq Sen(L)\), it is clear that \[\Sigma\vdash_{PL}\alpha\ \Leftrightarrow\ \Sigma\vdash_{K}\alpha,\] where \(\vdash_{PL}\) denotes provability in PL. \(\Sigma\) is said to be \(K\)_-consistent_, if \(\Sigma\not\vdash_{K}\bot\). Again for \(\Sigma\subset Sen(L)\), \[\Sigma\ \mbox{is}\ K\mbox{-consistent}\ \Leftrightarrow\ \Sigma\ \mbox{is consistent}.\] Recall that a formal system \(K\) (or its proof relation \(\vdash_{K}\)) satisfies the Deduction Theorem (DT) if for all \(\Sigma\), \(\varphi\) and \(\psi\), \[\Sigma\cup\{\varphi\}\vdash_{K}\psi\ \Rightarrow\ \Sigma\vdash_{K}\varphi \rightarrow\psi. \tag{21}\] It is well-known that if the only inference rule of \(K\) is _MP_ (and perhaps also the Generalization Rule), then DT holds for \(\vdash_{K}\). But in systems with additional inference rules DT often fails. Below we shall consider formal systems \(K\) augmented with an additional inference rule. So we shall need to examine the validity of DT later. **Definition 3.2**: A set \(\Sigma\subset Sen(L_{s})\) is said to be \(X\)_-satisfiable_ if for some truth assignment \(M\) for \(L\) and some \(f\in X\), \(\langle M,f\rangle\models_{s}\Sigma\). As is well-known the Soundness and Completeness Theorems of a logic have two distinct formulations, which are not always equivalent, depending on the semantics and the validity of Deduction Theorem. For the logic PLS\((X,K)\) these two forms, ST1 and ST2 for Soundness and CT1 and CT2 for Completeness, are the following: (ST1) \[\Sigma\vdash_{K}\varphi\ \Rightarrow\ \Sigma\models_{X}\varphi,\] (ST2) \[\Sigma\ \mbox{is}\ X\mbox{-satisfiable}\ \Rightarrow\ \Sigma\ \mbox{is}\ K\mbox{-consistent}\] (CT1) \[\Sigma\models_{X}\varphi\ \Rightarrow\ \Sigma\vdash_{K}\varphi,\] (CT2) \[\Sigma\ \mbox{is}\ K\mbox{-consistent}\ \Rightarrow\ \Sigma\ \mbox{is}\ X\mbox{-satisfiable}.\] Concerning the relationship between ST1 and ST2 and between CT1 and CT2 for PLS\((X,K)\) the following holds. **Fact 3.3**: _(i) For every \(X\)_ \[\Sigma\not\models_{X}\varphi\ \Rightarrow\Sigma\cup\{\neg\varphi\}\mbox{ is $X$-satisfiable.} \tag{22}\] _As a consequence, \((\mbox{ST1})\Leftrightarrow(\mbox{ST2})\) holds for every \(\mbox{PLS}(X,K)\)._ _(ii) \((\mbox{CT1})\Rightarrow(\mbox{CT2})\) holds for every \(\mbox{PLS}(X,K)\). If \(\vdash_{K}\) satisfies DT, then the converse holds too, i.e., \((\mbox{CT1})\Leftrightarrow(\mbox{CT2})\)._ _Proof._ (i) (22) follows immediately from the definition of \(\models_{X}\) and the fact that the truth is bivalent. Now \((\mbox{ST1})\Rightarrow(\mbox{ST2})\) is straightforward. For the converse assume ST2 and \(\Sigma\not\models_{X}\varphi\). By (22) \(\Sigma\cup\{\neg\varphi\}\) is \(X\)-satisfiable. By ST2, \(\Sigma\cup\{\neg\varphi\}\) is \(K\)-consistent, therefore \(\Sigma\not\vdash_{K}\varphi\). 
(ii) \((\mbox{CT1})\Rightarrow(\mbox{CT2})\) is also straightforward. For the converse assume CT2, DT and \(\Sigma\not\vdash_{K}\varphi\). It is well-known that by DT the latter is equivalent to the \(K\)-consistency of \(\Sigma\cup\{\neg\varphi\}\). By \(\mbox{CT2},\ \Sigma\cup\{\neg\varphi\}\) is \(X\)-satisfiable. Therefore \(\Sigma\not\models_{X}\varphi\). \(\dashv\) In view of Fact 3.3 (i) we do not need to distinguish any more between ST1 and ST2, and can refer simply to "sound" logics. However the distinction between CT1 and CT2 remains. This is also exemplified by considering the semantic analogue of DT. Given a class \(X\subseteq\mathcal{F}\), let us call the implication: \[\Sigma\cup\{\varphi\}\models_{X}\psi\ \Rightarrow\ \Sigma\models_{X}\varphi\to\psi \tag{23}\] _Semantic Deduction Theorem for \(X\)_ (or, briefly, SDT). Here is a relationship between DT and SDT via CT1. **Fact 3.4**: _For every \(X\subseteq\mathcal{F}\), SDT for \(X\) is true. This implies that if the logic \(\mbox{PLS}(X,K)\) is sound and satisfies CT1, then \(K\) satisfies DT._ _Proof._ That SDT holds for every \(X\subseteq\mathcal{F}\) is an easily verified consequence of the semantics \(\models_{X}\). Now assume that \(\mbox{PLS}(X,K)\) is sound (i.e., satisfies (equivalently) both ST1 and ST2), satisfies CT1, and \(\Sigma\cup\{\varphi\}\vdash_{K}\psi\). By ST1 it follows that \(\Sigma\cup\{\varphi\}\models_{X}\psi\). By SDT (23) we have \(\Sigma\models_{X}\varphi\to\psi\). Then CT1 implies \(\Sigma\vdash_{K}\varphi\to\psi\), as required. \(\dashv\) Next we give a list of specific axiom-schemes (referred also to simply as _axioms_) about \(|\), certain nested groups of which are going to axiomatize the truth relations \(\models_{\mathcal{F}}\), \(\models_{Reg}\), \(\models_{Reg^{*}}\) and \(\models_{Dec}\) considered in the previous sections. \((S_{1})\)\(\varphi\wedge\psi\rightarrow\varphi|\psi\) \((S_{2})\)\(\varphi|\psi\rightarrow\varphi\vee\psi\) \((S_{3})\)\(\varphi|\psi\rightarrow\psi|\varphi\) \((S_{4})\)\((\varphi|\psi)|\sigma\rightarrow\varphi|(\psi|\sigma)\) \((S_{5})\)\(\varphi\wedge\neg\psi\rightarrow(\varphi|\psi\leftrightarrow\neg\varphi|\neg\psi)\) We shall split the axiomatization of the four basic truth relations considered in the previous section in two parts. We shall consider first the basic truth relation \(\models_{\cal F}\) relying on the entire class of functions \({\cal F}\), and then we shall consider the rest stricter relations \(\models_{Reg}\), \(\models_{Reg^{*}}\) and \(\models_{Dec}\). The reason is that the relation \(\models_{\cal F}\) can be axiomatized by a formal system having \(MP\) as the only inference rule, while the rest systems require formal systems augmented with a second rule. The latter requirement makes these systems considerably more complicated. ### Axiomatizing the truth relation \(\models_{\cal F}\) In this section we deal with the relation \(\models_{\cal F}\) and show that it can be soundly and completely axiomatized by the first three axioms \(S_{1}\)-\(S_{3}\) cited above and Modus Ponens \((MP)\). We call this formal system \(K_{0}\). Namely \[{\sf A}{\sf x}(K_{0})={\sf A}{\sf x}({\rm PL})+\{S_{1},S_{2},S_{3}\}\mbox{ and }\ {\sf IR}(K_{0})=\{\mbox{\it MP}\}. \tag{24}\] Observe that \(S_{1}\) and \(S_{2}\), combined with the axioms of PL, prove (in \(K_{0}\)) \(\varphi|\varphi\leftrightarrow\varphi\). It is easy to see that the logic \({\rm PLS}({\cal F},K_{0})\) is sound. Namely we have the following more general fact. 
**Theorem 3.5**: _Let \(X\subseteq{\cal F}\). If \(K\) is a system such that \({\sf A}{\sf x}(K)\subset Taut(X)\) and \({\sf IR}(K)=\{\mbox{\it MP}\}\), then \({\rm PLS}(X,K)\) is sound._ _Proof._ Let \(X\), \(K\) be as stated and \(\Sigma\vdash_{K}\varphi\). Let \(\varphi_{1},\ldots,\varphi_{n}\), where \(\varphi_{n}=\varphi\), be a \(K\)-proof of \(\varphi\). As usual we show that \(\Sigma\models_{X}\varphi_{i}\), for every \(1\leq i\leq n\), by induction on \(i\). Given \(i\), suppose the claim holds for all \(j<i\), and let \(\langle M,f\rangle\models_{s}\Sigma\), for some assignment \(M\) and \(f\in X\). We show that \(\langle M,f\rangle\models_{s}\varphi_{i}\). If \(\varphi_{i}\in\Sigma\) this is obvious. If \(\varphi_{i}\in{\sf A}{\sf x}(K)\), then \(\langle M,f\rangle\models_{s}\varphi_{i}\), because by assumption \({\sf A}{\sf x}(K)\subset Taut(X)\) and \(f\in X\). Otherwise, since _MP_ is the only inference rule of \(K\), \(\varphi_{i}\) follows by _MP_ from sentences \(\varphi_{j}\), \(\varphi_{k}=(\varphi_{j}\rightarrow\varphi_{i})\), for some \(j,k<i\). By the induction assumption, \(\langle M,f\rangle\models_{s}\varphi_{j}\) and \(\langle M,f\rangle\models_{s}\varphi_{k}\). Therefore \(\langle M,f\rangle\models_{s}\varphi_{i}\). \(\dashv\) **Corollary 3.6**: _The logic \({\rm PLS}({\cal F},K_{0})\) is sound._ _Proof._ By Theorem 2.8 and Fact 2.5 (iv), \(S_{1}\), \(S_{2}\), \(S_{3}\) are schemes that hold in \(\langle M,f\rangle\) for all \(f\in{\cal F}\), therefore \({\sf Ax}(K_{0})\subset Taut({\cal F})\). So the claim follows from 3.5. \(\dashv\) **Completeness of \({\rm PLS}({\cal F},K_{0})\)** We come to the completeness of the logic \({\rm PLS}({\cal F},K_{0})\). As usual, a set \(\Sigma\subseteq Sen(L_{s})\) is said to be complete if for every \(\varphi\in Sen(L_{s})\), \(\varphi\in\Sigma\) or \(\neg\varphi\in\Sigma\). If \(\Sigma\) is \(K\)-consistent and complete, then for every \(\varphi\in Sen(L_{s})\), \(\varphi\in\Sigma\Leftrightarrow\neg\varphi\not\in\Sigma\). Moreover if \(\Sigma\vdash_{K}\varphi\), then \(\varphi\in\Sigma\). Before coming to the logics introduced in the previous subsection, we shall give a general satisfiability criterion. Fix a class \(X\subseteq{\cal F}\) of choice functions and a set of axioms \(K\subseteq Taut(X)\). Let \(\Sigma\) be a \(K\)-consistent and complete set of sentences of \(L_{s}\) and let \(\Sigma_{1}=\Sigma\cap Sen(L)\) be the subset of \(\Sigma\) that contains the classical sentences of \(\Sigma\). Then clearly \(\Sigma_{1}\) is a consistent and complete set of sentences of \(L\). By the Completeness Theorem of PL, there exists a truth assignment \(M\) for \(L\) such that, for every \(\alpha\in Sen(L)\) \[\alpha\in\Sigma_{1}\ \Leftrightarrow\ M\models\alpha. \tag{25}\] Given \(\Sigma\), \(\Sigma_{1}\), \(M\) satisfying (25), and a set \(X\subseteq{\cal F}\) of choice functions, the question is under what conditions \(M\) can be paired with a function \(f\in X\) such that \(\langle M,f\rangle\models_{s}\Sigma\). Below we give a simple characterization of this fact which is the key characterization of \(X\)-satisfiability. **Lemma 3.7**: _Let \(X\subseteq{\cal F}\) and \(K\subset Taut(X)\). Let also \(\Sigma\) be a \(K\)-consistent and complete set of sentences of \(L_{s}\) and let \(\Sigma_{1}=\Sigma\cap Sen(L)\) and \(M\) satisfy (25). Then for every \(f\in X\), \(\langle M,f\rangle\models\Sigma\) if and only if for every \(\varphi\in Sen(L_{s})\),_ \[\varphi\in\Sigma\ \Rightarrow\overline{f}(\varphi)\in\Sigma. 
\tag{26}\] _(Actually (26) is equivalent to_ \[\varphi\in\Sigma\ \Leftrightarrow\overline{f}(\varphi)\in\Sigma,\] _but the other direction follows from (26), the consistency and completeness of \(\Sigma\) and the fact that \(\overline{f}(\neg\varphi)=\neg\overline{f}(\varphi)\).)_ _Proof._ Pick an \(f\in X\) and suppose \(\langle M,f\rangle\models_{s}\Sigma\). Then by the completeness of \(\Sigma\) and the definition of \(\models_{s}\), for every \(\varphi\in Sen(L_{s})\), \[\varphi\in\Sigma\ \Leftrightarrow\ \langle M,f\rangle\models_{s}\varphi \Leftrightarrow M\models\overline{f}(\varphi).\] Now by (25), \(M\models\overline{f}(\varphi)\Rightarrow\overline{f}(\varphi)\in\Sigma_{1} \subset\Sigma\). Therefore \(\varphi\in\Sigma\ \Rightarrow\overline{f}(\varphi)\in\Sigma\). Thus (26) holds. Conversely, suppose (26) is true. To show that \(\langle M,f\rangle\models_{s}\Sigma\), pick some \(\varphi\in\Sigma\). By (26) \(\overline{f}(\varphi)\in\Sigma\). Then \(\overline{f}(\varphi)\in\Sigma_{1}\) since \(\overline{f}(\varphi)\) is classical, so by (25) \(M\models\overline{f}(\varphi)\). This means that \(\langle M,f\rangle\models_{s}\varphi\), as required. \(\dashv\) We come next to the completeness of PLS\(({\cal F},K_{0})\). The essential step of the proof is the following Lemma. **Lemma 3.8**: _Let \(\Sigma\) be a \(K_{0}\)-consistent and complete set of sentences of \(L_{s}\). Then \(\Sigma\) is \({\cal F}\)-satisfiable._ _Proof_. Let \(\Sigma\) be \(K_{0}\)-consistent and complete. Then for any \(\varphi,\psi\in Sen(L_{s})\), the possible subsets of \(\Sigma\) whose elements are \(\varphi|\psi\), \(\varphi\), \(\psi\) or their negations are the following: (a1) \(\{\varphi|\psi,\varphi,\psi\}\subset\Sigma\) (a2) \(\{\varphi|\psi,\varphi,\neg\psi\}\subset\Sigma\) (a3) \(\{\varphi|\psi,\neg\varphi,\psi\}\subset\Sigma\) (a4) \(\{\neg(\varphi|\psi),\neg\varphi,\neg\psi\}\subset\Sigma\) (a5) \(\{\neg(\varphi|\psi),\varphi,\neg\psi\}\subset\Sigma\) (a6) \(\{\neg(\varphi|\psi),\neg\varphi,\psi\}\subset\Sigma\) The remaining cases, (a7) \(\{\varphi|\psi,\neg\varphi,\neg\psi\}\subset\Sigma\) (a8) \(\{\neg(\varphi|\psi),\varphi,\psi\}\subset\Sigma\) are impossible because they contradict \(K_{0}\)-consistency and completeness of \(\Sigma\). Indeed, in case (a7) we have \(\neg\varphi\wedge\neg\psi\in\Sigma\). Also \(\varphi|\psi\in\Sigma\), so by \(S_{2}\) and completeness, \(\varphi\vee\psi\in\Sigma\), a contradiction. In case (a8) \(\varphi\wedge\psi\in\Sigma\). Also \(\neg(\varphi|\psi)\in\Sigma\), so by \(S_{1}\) and completeness \(\neg(\varphi\wedge\psi)\in\Sigma\), a contradiction. Given a pair \(\{\alpha,\beta\}\) we say that "\(\{\alpha,\beta\}\) satisfies (ai)" if for \(\varphi=\alpha\) and \(\psi=\beta\), the corresponding case (ai) above, for \(1\leq i\leq 6\), holds. We define a choice function \(g\) for \(L\) as follows: \[g(\alpha,\beta)=\left\{\begin{array}{ll}(i)\ \alpha,\mbox{ if }\{\alpha, \beta\}\mbox{ satisfies (a2) or (a6)}\\ (ii)\ \beta,\mbox{ if }\{\alpha,\beta\}\mbox{ satisfies (a3) or (a5)}\\ (iii)\mbox{ any of the }\alpha,\,\beta,\mbox{ if }\{\alpha,\beta\}\mbox{ satisfies (a1) or (a4)}.\end{array}\right. \tag{27}\] _Claim._\(\overline{g}\) satisfies the implication (26) of the previous Lemma. _Proof of the Claim._ We prove (26) by induction on the length of \(\varphi\). For \(\varphi=\alpha\in Sen(L)\), \(\overline{g}(\alpha)=\alpha\), so (26) holds trivially. 
Similarly the induction steps for \(\wedge\) and \(\neg\) follow immediately from the fact that \(\overline{g}\) commutes with these connectives and the completeness of \(\Sigma\). So the only nontrivial step of the induction is that for \(\varphi|\psi\). It suffices to assume \[\varphi\in\Sigma\ \Rightarrow\overline{g}(\varphi)\in\Sigma, \tag{28}\] \[\psi\in\Sigma\ \Rightarrow\overline{g}(\psi)\in\Sigma, \tag{29}\] and prove \[\varphi|\psi\in\Sigma\ \Rightarrow\overline{g}(\varphi|\psi)\in\Sigma. \tag{30}\] Assume \(\varphi|\psi\in\Sigma\). Then the only possible combinations of \(\varphi\), \(\psi\) and their negations that can belong to \(\Sigma\) are those of cases (a1), (a2) and (a3) above. To prove (30) it suffices to check that \(\overline{g}(\varphi|\psi)\in\Sigma\) in each of these cases. Note that \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))=g(\alpha,\beta)\), where \(\overline{g}(\varphi)=\alpha\) and \(\overline{g}(\psi)=\beta\) are sentences of \(L\), so (27) applies. Case (a1): Then \(\varphi\in\Sigma\) and \(\psi\in\Sigma\). By (28) and (29), \(\overline{g}(\varphi)\in\Sigma\) and \(\overline{g}(\psi)\in\Sigma\). By definition (27), \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))\) can be either \(\overline{g}(\varphi)\) or \(\overline{g}(\psi)\). So in either case \(\overline{g}(\varphi|\psi)\in\Sigma\). Case (a2): Then \(\varphi\in\Sigma\) and \(\neg\psi\in\Sigma\). By (28) and (29), \(\overline{g}(\varphi)\in\Sigma\), \(\overline{g}(\psi)\notin\Sigma\). Also by (27), \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))=\overline{g}(\varphi)\), thus \(\overline{g}(\varphi|\psi)\in\Sigma\). Case (a3): Then \(\neg\varphi\in\Sigma\), \(\psi\in\Sigma\). By (28) and (29), \(\overline{g}(\varphi)\notin\Sigma\), \(\overline{g}(\psi)\in\Sigma\). By (27), \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))=\overline{g}(\psi)\), thus \(\overline{g}(\varphi|\psi)\in\Sigma\). This completes the proof of the Claim. It follows that condition (26) is true, so by Lemma 3.7, if \(M\models\Sigma_{1}\) where \(\Sigma_{1}=\Sigma\cap Sen(L)\), then \(\langle M,g\rangle\models\Sigma\), therefore \(\Sigma\) is \(\mathcal{F}\)-satisfiable. \(\dashv\) Let us remark here that, since \(\vdash_{K_{0}}\) satisfies the Deduction Theorem, by Fact 3.3 the two forms of completeness theorem CT1 and CT2 are equivalent for \(\mathrm{PLS}(\mathcal{F},K_{0})\). So it is indifferent which one we are going to prove for the system \(\mathrm{PLS}(\mathcal{F},K_{0})\). **Theorem 3.9**: (Completeness of \(\mathrm{PLS}(\mathcal{F},K_{0})\)) _The logic \(\mathrm{PLS}(\mathcal{F},K_{0})\) is complete. That is, if \(\Sigma\) is \(K_{0}\)-consistent, then \(\Sigma\) is \(\mathcal{F}\)-satisfiable._ _Proof._ Let \(\Sigma\) be \(K_{0}\)-consistent. Extend \(\Sigma\) to a \(K_{0}\)-consistent and complete \(\Sigma^{*}\supseteq\Sigma\). By Lemma 3.8, \(\Sigma^{*}\) is \(\mathcal{F}\)-satisfiable. Therefore so is \(\Sigma\). \(\dashv\) **Corollary 3.10**: _The set \(\{\varphi:\vdash_{K_{0}}\varphi\}\) is decidable._ _Proof._ By the soundness and completeness of \(\mathrm{PLS}(\mathcal{F},K_{0})\), \(\{\varphi:\vdash_{K_{0}}\varphi\}=Taut(\mathcal{F})\). But \(Taut(\mathcal{F})\) is decidable by Lemma 3.1. \(\dashv\)
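The decision procedure behind Lemma 3.1 and Corollary 3.10 is easy to make concrete. The sketch below (again an illustration of ours, with formulas as nested tuples and \(X={\cal F}\)) enumerates the finitely many restrictions of a choice function that can matter for a given \(\varphi\), computes the corresponding collapses \(\overline{f}(\varphi)\) bottom-up, and checks each one by truth tables. Note that distinct occurrences of the same pair must be resolved by the same value of \(f\); this is why the enumeration ranges over functions on pairs rather than over independent resolutions of each occurrence of \(|\) (compare the third assertion below).

```python
from itertools import product

def ev(phi, w):
    """Evaluate a classical formula (nested tuples) under the assignment w."""
    op = phi[0]
    if op == "atom": return w[phi[1]]
    if op == "not":  return not ev(phi[1], w)
    if op == "and":  return ev(phi[1], w) and ev(phi[2], w)
    if op == "or":   return ev(phi[1], w) or ev(phi[2], w)
    if op == "->":   return (not ev(phi[1], w)) or ev(phi[2], w)
    raise ValueError(op)

def atoms(phi):
    return {phi[1]} if phi[0] == "atom" else set().union(*(atoms(s) for s in phi[1:]))

def tautology(alpha):
    names = sorted(atoms(alpha))
    return all(ev(alpha, dict(zip(names, v)))
               for v in product([False, True], repeat=len(names)))

def candidates(phi):
    """A finite set containing every sentence that a collapse of phi can be."""
    op = phi[0]
    if op == "atom": return {phi}
    if op == "|":    return candidates(phi[1]) | candidates(phi[2])
    return {(op,) + c for c in product(*(candidates(s) for s in phi[1:]))}

def pairs(phi):
    """The unordered pairs on which the value of a choice function can matter."""
    acc = set()
    if phi[0] == "atom":
        return acc
    for s in phi[1:]:
        acc |= pairs(s)
    if phi[0] == "|":
        acc |= {frozenset((a, b))
                for a in candidates(phi[1]) for b in candidates(phi[2]) if a != b}
    return acc

def collapse(phi, f):
    """f-bar(phi), where f maps a frozen pair to its chosen element."""
    op = phi[0]
    if op == "atom": return phi
    if op == "|":
        a, b = collapse(phi[1], f), collapse(phi[2], f)
        return a if a == b else f[frozenset((a, b))]
    return (op,) + tuple(collapse(s, f) for s in phi[1:])

def is_F_tautology(phi):
    """phi is in Taut(F) iff every collapse is a classical tautology (Lemma 3.1);
    by soundness and completeness this also decides whether phi is K_0-provable."""
    P = sorted(pairs(phi), key=repr)
    return all(tautology(collapse(phi, dict(zip(P, picks))))
               for picks in product(*[tuple(pair) for pair in P]))

p, q = ("atom", "p"), ("atom", "q")
assert is_F_tautology(("->", ("|", p, q), ("or", p, q)))   # an instance of S_2
assert not is_F_tautology(("->", ("|", p, q), p))          # fails when f(p, q) = q
assert is_F_tautology(("->", ("|", p, q), ("|", p, q)))    # same pair, same choice
```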
### Axiomatizing the truth relations for the classes \(Reg\), \(Reg^{\star}\) and \(Dec\)

The next systems, \(K_{1}\)-\(K_{3}\), are intended to capture in addition the semantic property of regularity considered in section 2.3. We need to define \(K_{1}\) so that if \(\vdash_{K_{1}}\varphi\) then \(\varphi\in Taut(Reg)\), and vice-versa (if possible). Specifically, if \(\alpha\sim\alpha^{\prime}\), we need \(K_{1}\) to prove, for every \(\beta\), \(\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta\), i.e., \[\vdash_{K_{1}}(\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta).\] This cannot be captured by an axiom-scheme, since no scheme can express the relation \(\sim\) of logical equivalence. It can be captured however by a new _inference rule_. Roughly we need a rule guaranteeing that if \(\varphi\), \(\psi\) are logically equivalent, then \(\varphi\) and \(\psi\) can be interchanged _salva veritate_ in expressions containing \(|\), that is, one entailing \(\varphi|\sigma\leftrightarrow\psi|\sigma\), for every \(\sigma\).12 This is the following rule denoted \(SV\) (for salva veritate): Footnote 12: Of course substitution of logically equivalent sentences salva veritate holds also in classical logic, that is, if \(\alpha\sim\alpha^{\prime}\) and \(\alpha\) is a subformula of \(\beta\), then \(\beta[\alpha]\sim\beta[\alpha^{\prime}]\), where \(\beta[\alpha^{\prime}]\) is the result of replacing \(\alpha\) with \(\alpha^{\prime}\) within \(\beta\). This however is a simple consequence of the compositional semantics of classical logic. In contrast the choice semantics of PLS is by no means compositional. \[(SV)\qquad\mbox{\it from }\ \varphi\leftrightarrow\psi\ \mbox{\it infer } \varphi|\sigma\leftrightarrow\psi|\sigma,\] \[\mbox{if }\varphi\leftrightarrow\psi\mbox{ is provable in }K_{0}.\] We see that \(SV\) is a "conditional rule", applied under constraints, much like the Generalization Rule of first-order logic (from \(\varphi(x)\) infer \(\forall x\varphi(x)\), if \(x\) is not free in the premises), and also the Necessitation Rule of modal logic (from \(\varphi\) infer \(\Box\varphi\), if \(\vdash\varphi\)). It follows that \(SV\) operates according to the following: **Fact 3.11**: _Let \(K\) be a formal system such that \(SV\in\mbox{\it IR}(K)\). If \(\vdash_{K_{0}}(\varphi\leftrightarrow\psi)\), then \(\vdash_{K}(\varphi|\sigma\leftrightarrow\psi|\sigma)\) for every \(\sigma\)._ Note that since, according to Corollary 3.10, it is decidable, given \(\varphi\), whether \(\vdash_{K_{0}}\varphi\), it is decidable, given a recursive set of sentences \(\Sigma\), whether a sequence of sentences \(\varphi_{1},\ldots,\varphi_{n}\) is a proof (in \(K\)) from \(\Sigma\) or not. **Theorem 3.12**: _Let \(X\subseteq Reg\). If \(K\) is a system such that \(\mbox{\sf Ax}(K)\subset Taut(X)\) and \(\mbox{\it IR}(K)=\{\mbox{\it MP},SV\}\), then \(\mbox{\rm PLS}(X,K)\) is sound._ _Proof._ Let \(X\subseteq Reg\), \({\sf Ax}(K)\subset Taut(X)\) and \({\sf IR}(K)=\{\it MP,SV\}\), and let \(\Sigma\vdash_{K}\varphi\). Let \(\varphi_{1},\ldots,\varphi_{n}\), where \(\varphi_{n}=\varphi\), be a \(K\)-proof of \(\varphi\). We show, by induction on \(i\), that for all \(i=1,\ldots,n\), \(\Sigma\models_{X}\varphi_{i}\). Let \(\langle M,f\rangle\models_{s}\Sigma\), with \(f\in X\). The proof that \(\langle M,f\rangle\models_{s}\varphi_{i}\) goes exactly as in the proof of Theorem 3.5, except for the case where \(\varphi_{i}\) follows from a sentence \(\varphi_{j}\), for \(j<i\), by \(SV\).
It means that \(\varphi_{i}=(\sigma|\tau\leftrightarrow\rho|\tau)\) while \(\varphi_{j}=(\sigma\leftrightarrow\rho)\), where \(\vdash_{K_{0}}(\sigma\leftrightarrow\rho)\). Now \(K_{0}\) is a system satisfying the conditions of 3.5 above for \(X={\cal F}\), so \(\models_{\cal F}(\sigma\leftrightarrow\rho)\). It means that for every assignment \(N\) and every \(g\in{\cal F}\), \(\langle N,g\rangle\models_{s}(\sigma\leftrightarrow\rho)\), i.e., \(N\models\overline{g}(\sigma)\leftrightarrow\overline{g}(\rho)\), that is, \(\overline{g}(\sigma)\leftrightarrow\overline{g}(\rho)\) is a classical tautology, or \(\overline{g}(\sigma)\sim\overline{g}(\rho)\), for every \(g\in{\cal F}\). In particular, \(\overline{f}(\sigma)\sim\overline{f}(\rho)\). Now since \(X\subseteq Reg\), \(f\in X\) implies \(f\) is regular. Therefore \(\overline{f}(\sigma)\sim\overline{f}(\rho)\) implies that \(f(\overline{f}(\sigma),\overline{f}(\tau))\sim f(\overline{f}(\rho),\overline {f}(\tau))\), or \(\overline{f}(\sigma|\tau)\sim\overline{f}(\rho|\tau)\), therefore \(M\models\overline{f}(\sigma|\tau)\leftrightarrow\overline{f}(\rho|\tau)\), or \(\langle M,f\rangle\models_{s}(\sigma|\tau\leftrightarrow\rho|\tau)\), i.e., \(\langle M,f\rangle\models_{s}\varphi_{i}\), as required. This completes the proof. \(\dashv\) We define next the systems \(K_{1}\)-\(K_{3}\) as follows: \[{\sf Ax}(K_{1})={\sf Ax}(K_{0})=\{S_{1},S_{2},S_{3}\},\qquad{\sf IR}(K_{1})=\{ \it MP,SV\}, \tag{31}\] \[{\sf Ax}(K_{2})={\sf Ax}(K_{1})+S_{4},\qquad\ \ {\sf IR}(K_{2})=\{\it MP,SV\}, \tag{32}\] \[{\sf Ax}(K_{3})={\sf Ax}(K_{2})+S_{5},\qquad\ \ {\sf IR}(K_{3})=\{\it MP,SV\}. \tag{33}\] **Theorem 3.13**: (Soundness) _The logics \({\rm PLS}(Reg,K_{1})\), \({\rm PLS}(Reg^{*},K_{2})\) and \({\rm PLS}(Dec,K_{3})\) are sound._ _Proof._ This follows essentially from the general soundness Theorem 3.12. We have \(Dec\subseteq Reg^{*}\subseteq Reg\), so all these classes of choice functions satisfy the condition \(X\subseteq Reg\) of 3.12. Also \({\sf IR}(K_{i})=\{\it MP,SV\}\), for \(i=1,2,3\). By Corollary 3.6, \({\sf Ax}(K_{0})\subset Taut({\cal F})\), and \(Taut({\cal F})\subseteq Taut(Reg)\subseteq Taut(Dec)\). Since \({\sf Ax}(K_{1})={\sf Ax}(K_{0})\), we have \({\sf Ax}(K_{1})\subseteq Taut(Reg)\), so it follows that \({\rm PLS}(Reg,K_{1})\) is sound. Next \({\sf Ax}(K_{2})={\sf Ax}(K_{0})+S_{4}\), so to see that \({\rm PLS}(Reg^{*},K_{2})\) is sound, it suffices to see that \(S_{4}\in Taut(Reg^{*})\). But by Theorem 2.19\(S_{4}\in Taut(Asso)\subset Taut(Reg^{*})\). Therefore \({\sf Ax}(K_{2})\subseteq Taut(Reg^{*})\), and we are done. Finally \({\sf Ax}(K_{3})={\sf Ax}(K_{2})+S_{5}\) and by Theorem 2.42, the scheme \(S_{5}\) characterizes \(\neg\)-decreasingness, thus \(S_{5}\in Taut(Dec)\). So \({\sf Ax}(K_{3})\subset Taut(Dec)\) and by 3.12 \({\rm PLS}(Dec,K_{3})\) is sound. \(\dashv\) The following Lemma will be essential for the completeness of the aforementioned logics, proved in the next section. **Lemma 3.14**: _If \(\Sigma\subset Sen(L_{s})\) is closed with respect to \(\vdash_{K_{i}}\), for some \(i=1,2,3\), and \(\alpha,\alpha^{\prime}\) are sentences of \(L\) such that \(\alpha\sim\alpha^{\prime}\), then for every \(\beta\), \((\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta)\in\Sigma\)._ _Proof._ Let \(\alpha\sim\alpha^{\prime}\). Then \(\vdash_{PL}\alpha\leftrightarrow\alpha^{\prime}\), hence also \(\vdash_{K_{0}}\alpha\leftrightarrow\alpha^{\prime}\). 
By \(SV\in{\sf IR}(K_{i})\) it follows that for every \(\beta\), \(\vdash_{K_{i}}\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta\). Therefore \((\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta)\in\Sigma\) since \(\Sigma\) is \(\vdash_{K_{i}}\)-closed. \(\dashv\) **Question 3.15**: _Do the formal systems \(K_{1}\)-\(K_{3}\) satisfy the Deduction Theorem (\(DT\))?_ We guess that the answer to this question is negative but we do not have a proof. The standard way to prove \(DT\) for \(\vdash_{K_{i}}\) is to assume \(\Sigma\cup\{\varphi\}\vdash_{K_{i}}\psi\), pick a proof \(\psi_{1},\ldots,\psi_{n}\) of \(\psi\), with \(\psi_{n}=\psi\), and show that \(\Sigma\vdash_{K_{i}}\varphi\rightarrow\psi_{i}\), for every \(i=1,\ldots,n\), by induction on \(i\). The only crucial step of the induction is the one concerning the rule \(SV\), i.e., to show that for any \(\sigma,\sigma^{\prime},\tau\), if \(\Sigma\vdash_{K_{i}}\varphi\rightarrow(\sigma\leftrightarrow\sigma^{\prime})\), and \(\vdash_{K_{0}}(\sigma\leftrightarrow\sigma^{\prime})\), then \(\Sigma\vdash_{K_{i}}\varphi\rightarrow(\sigma|\tau\leftrightarrow\sigma^{ \prime}|\tau)\). Now clearly \[\vdash_{PL}(\sigma\leftrightarrow\sigma^{\prime})\rightarrow(\varphi \rightarrow(\sigma\leftrightarrow\sigma^{\prime})),\] so also \[\vdash_{K_{0}}(\sigma\leftrightarrow\sigma^{\prime})\rightarrow(\varphi \rightarrow(\sigma\leftrightarrow\sigma^{\prime})).\] This combined with \(\vdash_{K_{0}}(\sigma\leftrightarrow\sigma^{\prime})\) and _MP_ gives \[\vdash_{K_{0}}\varphi\rightarrow(\sigma\leftrightarrow\sigma^{\prime})),\] and hence, by PL again, \[\vdash_{K_{0}}(\varphi\rightarrow\sigma)\leftrightarrow(\varphi \rightarrow\sigma^{\prime}).\] By \(SV\) it follows that \[\Sigma\vdash_{K_{i}}((\varphi\rightarrow\sigma)|\tau)\leftrightarrow(( \varphi\rightarrow\sigma^{\prime})|\tau).\] However it is not clear if and how one can get from the latter the required derivation \(\Sigma\vdash_{K_{i}}\varphi\rightarrow(\sigma|\tau\leftrightarrow\sigma^{ \prime}|\tau)\). It follows from the preceding discussion that DT is open for the formal systems \(K_{i}\), \(i=1,2,3\). Now by Fact 3.4, if DT fails for \(K_{i}\) then necessarily CT1 fails for the logics \({\rm PLS}(Reg,K_{1})\), \({\rm PLS}(Reg^{*},K_{2})\) and \({\rm PLS}(Dec,K_{3})\). This means that CT1 is also open for the preceding logics. (In connection with the status of DT note that, surprisingly enough, the question about the validity of this theorem remains essentially unsettled even for a logical theory as old as modal logic, see [5].) **Completeness** We come to the completeness of the aforementioned logics based on the systems \(K_{1}\)-\(K_{3}\). First in view of the open status of DT for the systems \(K_{1}\)-\(K_{3}\) and Fact 3.3 (ii), we cannot identify the two forms of completeness CT1 and CT2 for these systems. We only know that (CT1)\(\Rightarrow\)(CT2). So we can hope to prove CT2 for \(K_{1}\)-\(K_{3}\). There is however another serious side-effect of the lack of DT. This is that we don't know whether every consistent set of sentences can be extended to a consistent and _complete_ set. Clearly every consistent set \(\Sigma\) can be extended (e.g. by Zorn's Lemma) to a _maximal_ consistent set \(\Sigma^{\prime}\supseteq\Sigma\). But maximality of \(\Sigma^{\prime}\) cannot guarantee completeness without DT (while the converse is true). 
For, theoretically, \(\Sigma^{\prime}\) may be maximal consistent and yet there is a \(\varphi\) such that \(\varphi\notin\Sigma^{\prime}\) and \(\neg\varphi\notin\Sigma^{\prime}\), in which case \(\Sigma^{\prime}\cup\{\varphi\}\) and \(\Sigma^{\prime}\cup\{\neg\varphi\}\) are both inconsistent. That looks strange but we don't see how it could be proved false without DT. This property of extendibility of a consistent set to a consistent and complete one, for a formal system \(K\), plays a crucial role in the proof of completeness of \(K\) (with respect to a given semantics), so we isolate it as property of \(K\) denoted \(cext(K)\). Namely we set \[(cext(K)) \mbox{\em Every $K$-consistent set of sentences can be extended to}\] \[\mbox{\em a $K$-consistent and complete set}.\] In view of the unknown truth-value of \(cext(K_{i})\), for \(i=1,2,3\), we shall prove only _conditional_ versions of CT2-completeness for these systems. Actually it is shown that CT2-completeness is _equivalent_ to \(cext(K_{i})\). **Theorem 3.16**: (Conditional CT2-completeness for \(\mbox{PLS}(Reg,K_{1})\)) _The logic \(\mbox{PLS}(Reg,K_{1})\) is \(\mbox{CT2}\)-complete if and only if \(cext(K_{1})\) is true._ _Proof._ One direction is easy. Assume \(cext(K_{1})\) is false. Then there is a maximal \(K_{1}\)-consistent set of sentences \(\Sigma\) non-extendible to a \(K_{1}\)-consistent and complete one. It means that there is a sentence \(\varphi\) such that both \(\Sigma\cup\{\varphi\}\) and \(\Sigma\cup\{\neg\varphi\}\) are \(K_{1}\)-inconsistent. But then \(\Sigma\) is not \(Reg\)-satisfiable. For if there are \(M\) and \(f\in Reg\) such that \(\langle M,f\rangle\models_{s}\Sigma\), then \(\langle M,f\rangle\) satisfies also either \(\varphi\) or \(\neg\varphi\). Thus either \(\Sigma\cup\{\varphi\}\) or \(\Sigma\cup\{\neg\varphi\}\) is \(Reg\)-satisfiable. But this is a contradiction since both \(\Sigma\cup\{\varphi\}\) and \(\Sigma\cup\{\neg\varphi\}\) are inconsistent and by Theorem 3.5 PLS\((Reg,K_{1})\) is sound. Therefore \(\Sigma\) is consistent and not \(Reg\)-satisfiable, so PLS\((Reg,K_{1})\) is not CT2-complete. We come to the main direction of the equivalence assuming \(cext(K_{1})\) is true. Then given a \(K_{1}\)-consistent set \(\Sigma\), we may assume without loss of generality that it is also complete. We have to find \(M\) and \(g\in Reg\) such that \(\langle M,g\rangle\models_{s}\Sigma\). It turns out that the main argument of Lemma 3.8, concerning the definition of the choice function \(g\), works also, with the necessary adjustments, for the other logics defined in the previous section. Namely it suffices to find a choice function \(g\in Reg\) such that \(\langle M,g\rangle\models\Sigma\), where \(M\) is a model of \(\Sigma_{1}=\Sigma\cap Sen(L)\). The definition of \(g\) follows exactly the pattern of definition of \(g\) in the proof of Lemma 3.8, except that we need now to take care so that \(g\) be regular. Recall that \(g\) is regular if for all \(\alpha\), \(\alpha^{\prime}\), \(\beta\), \[\alpha^{\prime}\sim\alpha\ \Rightarrow\ g(\alpha^{\prime},\beta)\sim g(\alpha, \beta).\] In (27) \(g\) is defined by three clauses: (i) (a2) or (a6), (ii) (a3) or (a5), (iii) (a1) or (a4). _Claim._ The regularity constraint is satisfied whenever \(g\) is defined by clauses (i) and (ii) above. _Proof of Claim._ Pick \(\alpha\), \(\alpha^{\prime}\), \(\beta\) such that \(\alpha\sim\alpha^{\prime}\). We prove the Claim for the case that \(g(\alpha,\beta)\) is defined according to clause (i)-(a2). 
All other cases are verified similarly. That \(g(\alpha,\beta)\) is defined by case (i)-(a2) of (27) means that \(\alpha|\beta\in\Sigma\), \(\alpha\in\Sigma\), \(\neg\beta\in\Sigma\) and \(g(\alpha,\beta)=\alpha\). It suffices to see that necessarily \(g(\alpha^{\prime},\beta)=\alpha^{\prime}\sim g(\alpha,\beta)\). Since \(\Sigma\) is complete, it is closed with respect to \(\vdash_{K_{1}}\), so by Lemma 3.14, \(\alpha\sim\alpha^{\prime}\) implies that \((\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta)\in\Sigma\). Also by assumption, \(\alpha|\beta\in\Sigma\), hence \(\alpha^{\prime}|\beta\in\Sigma\). Moreover \(\alpha^{\prime}\in\Sigma\), since \(\alpha\in\Sigma\), and \(\neg\beta\in\Sigma\). Therefore case (i)-(a2) occurs too for \(\alpha^{\prime}|\beta\), \(\alpha^{\prime}\) and \(\beta\). So, by (27), \(g(\alpha^{\prime},\beta)=\alpha^{\prime}\), therefore \(g(\alpha^{\prime},\beta)\sim g(\alpha,\beta)\). This proves the Claim. It follows from the Claim that if we define \(g\) according to (27), regularity is guaranteed unless \(g(\alpha,\beta)\) is given by clause (iii), that is, unless (a1) or (a4) is the case. In such a case either both \(\alpha\), \(\beta\) belong to \(\Sigma\), or both \(\neg\alpha\), \(\neg\beta\) belong to \(\Sigma\), and (27) allows \(g(\alpha,\beta)\) to be _any_ of the elements \(\alpha\), \(\beta\). So at this point we must intervene by a new condition that will guarantee regularity. This is done as follows. Pick, as in the proof of Proposition 2.30, from each \(\sim\)-equivalence class \([\alpha]\), a representative \(\xi_{\alpha}\in[\alpha]\). Recall that, by completeness, the set \(\Sigma_{1}=\Sigma\cap Sen(L)\) as well as its complement \(\Sigma_{2}=Sen(L)-\Sigma_{1}\) are saturated with respect to \(\sim\), that is, for every \(\alpha\), either \([\alpha]\subset\Sigma_{1}\) or \([\alpha]\subset\Sigma_{2}\). Let \(D_{1}=\{\xi_{\alpha}:\alpha\in\Sigma_{1}\}\), \(D_{2}=\{\xi_{\alpha}:\alpha\in\Sigma_{2}\}\). Let \([D_{i}]^{2}\) be the set of pairs of elements of \(D_{i}\), for \(i=1,2\), and pick an arbitrary choice function \([D_{1}]^{2}\cup[D_{2}]^{2}\to D_{1}\cup D_{2}\). Then it suffices to define \(g\) by slightly revising definition (27) as follows: \[g(\alpha,\beta)=\left\{\begin{array}{ll}(i)\ \alpha,\ \mbox{if}\ \{\alpha,\beta\},\ \mbox{satisfies (a2) or (a6)}\\ (ii)\ \beta,\ \mbox{if}\ \{\alpha,\beta\}\ \mbox{satisfies (a3) or (a5)}\\ (iii)\ \sim g_{0}(\xi_{\alpha},\xi_{\beta}),\ \mbox{if}\ \{\alpha,\beta\} \ \mbox{satisfies (a1) or (a4)}.\end{array}\right. \tag{34}\] (The third clause is just a shorthand for: \(g(\alpha,\beta)=\alpha\) if \(g_{0}(\xi_{\alpha},\xi_{\beta})=\xi_{\alpha}\), and \(g(\alpha,\beta)=\beta\) if \(g_{0}(\xi_{\alpha},\xi_{\beta})=\xi_{\beta}\). See the similar formulation in the proof of 2.30.) In view of the Claim and the specific definition of \(g\) by (34), it follows immediately that if \(\alpha\sim\alpha^{\prime}\) then for every \(\beta\), \(g(\alpha,\beta)\sim g(\alpha^{\prime},\beta)\). So \(g\) is regular. Further, exactly as in Lemma 3.8 it follows that \(\langle M,g\rangle\models_{s}\Sigma\). This completes the proof. \(\dashv\) Next we come to the logic PLS\((Reg^{*},K_{2})\). The difference of \(K_{2}\)-consistency from \(K_{1}\)-consistency is that, as a result of axiom \(S_{4}\), if \(\Sigma\) is \(K_{2}\)-consistent and \(\varphi|(\psi|\sigma)\in\Sigma\), then \((\varphi|\psi)|\sigma\in\Sigma\), or more simply \(\varphi|\psi|\sigma\in\Sigma\). Let us outline this difference by an example. 
**Example 3.17**: _Let_ \[\Sigma=\{\alpha,\neg\beta,\neg\gamma,\alpha|\beta,\neg(\alpha|\gamma),\alpha |(\beta|\gamma)\},\] _where \(\alpha\), \(\beta\), \(\gamma\) are pairwise inequivalent and \(\alpha\wedge\neg\beta\wedge\neg\gamma\) is satisfiable. Then \(\Sigma\) is \(Reg\)-satisfiable, hence \(K_{1}\)-consistent, but is not \(K_{2}\)-consistent. In particular \(\Sigma\) is not Asso-satisfiable._ _Proof._ By hypothesis there is a truth assignment \(M\) such that \(M\models\alpha\wedge\neg\beta\wedge\neg\gamma\). Pick a (partial) choice function for \(L\) such that \(f(\alpha,\beta)=\alpha\), \(f(\alpha,\gamma)=\gamma\) and \(f(\beta,\gamma)=\beta\). Since \(\alpha\), \(\beta\), \(\gamma\) are pairwise inequivalent, it is easy to see that \(f\) extends to a regular choice function for the entire \(L\). Then \(\overline{f}(\alpha|\beta)=\alpha\), \(\overline{f}(\alpha|\gamma)=\gamma\) and \(\overline{f}(\beta|\gamma)=\beta\). So \(\overline{f}(\neg(\alpha|\gamma))=\neg\gamma\). It follows that \(\langle M,f\rangle\models_{s}\{\alpha|\beta,\neg(\alpha|\gamma)\}\). Moreover \(\overline{f}(\alpha|(\beta|\gamma))=f(\alpha,f(\beta,\gamma))=f(\alpha,\beta)=\alpha\), which means that \(\langle M,f\rangle\models_{s}\alpha|(\beta|\gamma)\) too. Thus \(\langle M,f\rangle\models_{s}\Sigma\), so \(\Sigma\) is \(Reg\)-satisfiable. Now in view of axiom \(S_{4}\) of \(K_{2}\), since \(\alpha|(\beta|\gamma)\in\Sigma\) it follows that \(\Sigma\vdash_{K_{2}}\alpha|\beta|\gamma\). By \(S_{2}\) and \(S_{3}\), the latter implies \(\Sigma\vdash_{K_{2}}(\alpha|\gamma)\vee\beta\). On the other hand \(\neg(\alpha|\gamma)\in\Sigma\) and \(\neg\beta\in\Sigma\), so \(\Sigma\vdash_{K_{2}}\neg(\alpha|\gamma)\wedge\neg\beta\), or \(\Sigma\vdash_{K_{2}}\neg((\alpha|\gamma)\vee\beta)\). Thus \(\Sigma\vdash_{K_{2}}\bot\), so it is \(K_{2}\)-inconsistent. Finally assume that \(\Sigma\) is satisfied in \(\langle N,f\rangle\), for some assignment \(N\) and some associative \(f\). Let \(f=\min_{<}=\min\) for some total ordering \(<\) of \(Sen(L)\). Now \(\langle N,f\rangle\models_{s}\{\alpha,\neg\beta,\alpha|\beta\}\) implies \(\min(\alpha,\beta)=\alpha\), i.e., \(\alpha<\beta\), while \(\langle N,f\rangle\models_{s}\{\alpha,\neg\gamma,\neg(\alpha|\gamma)\}\) implies \(\min(\alpha,\gamma)=\gamma\), so \(\gamma<\alpha\). Therefore \(\gamma<\alpha<\beta\). On the other hand, \(\langle N,f\rangle\models_{s}\alpha|(\beta|\gamma)\) implies \(N\models\min(\alpha,\min(\beta,\gamma))=\min(\alpha,\beta,\gamma)\), therefore \(\min(\alpha,\beta,\gamma)=\alpha\) since \(N\models\neg\beta\wedge\neg\gamma\). Thus \(\alpha<\gamma\), a contradiction. \(\dashv\) **Theorem 3.18** (Conditional CT2-completeness for PLS\((Reg^{*},K_{2})\)) _The logic \(\mbox{\rm PLS}(Reg^{*},K_{2})\) is_ CT2_-complete if and only if \(cext(K_{2})\) is true._ _Proof._ One direction of the equivalence is proved exactly as the corresponding direction of Theorem 3.16. So let us come to the other direction assuming \(cext(K_{2})\) is true. Let \(\Sigma\) be a \(K_{2}\)-consistent set, so we may assume again that \(\Sigma\) is also complete. We must construct a regular and associative choice function \(g\) such that \(\langle M,g\rangle\models\Sigma\), where \(M\models\Sigma_{1}\). As already remarked, \(\alpha|(\beta|\gamma)\in\Sigma\) implies \((\alpha|\beta)|\gamma\in\Sigma\). We shall define \(\overline{g}\) basically as in definition (27) of Lemma 3.8, except that now we want \(g\) to induce a regular total ordering of \(Sen(L)\). 
So let \(h\) be a partial choice function for \(L\) such that \[dom(h)=\{\{\alpha,\beta\}:\{\alpha,\beta\}\mbox{ satisfies some of the cases (a2), (a3), (a5) and }\] (a6) of Lemma 3.8\(\}\), and \[h(\alpha,\beta)=\left\{\begin{array}{ll}(i)\ \alpha,\mbox{ if }\{\alpha, \beta\}\mbox{ satisfies (a2) or (a6),}\\ (ii)\ \beta,\mbox{ if }\{\alpha,\beta\}\mbox{ satisfies (a3) or (a5).}\end{array}\right. \tag{35}\] _Claim 1._ For any \(\alpha\), \(\beta\), \(\gamma\), whenever two of the \(h(\alpha,h(\beta,\gamma))\), \(h(\beta,h(\alpha,\gamma))\), \(h(\gamma,h(\alpha,\beta))\) are defined, they are equal. _Proof of Claim 1._ Pick some \(\alpha\), \(\beta\), \(\gamma\). Then at least two of them belong either to \(\Sigma\) or to its complement. Without loss of generality assume that \(\alpha\in\Sigma\), \(\beta\notin\Sigma\), \(\gamma\notin\Sigma\). Then in view of \(K_{2}\)-consistency and completeness of \(\Sigma\), \[A=\{\alpha,\neg\beta,\neg\gamma,\neg(\beta|\gamma)\}\subset\Sigma.\] Also by \(K_{2}\)-consistency and completeness we can identify \(\alpha|(\beta|\gamma)\) and \((\alpha|\beta)|\gamma\), with respect to their containment to \(\Sigma\), and there are two options: either \(\alpha|\beta|\gamma\in\Sigma\) or \(\neg(\alpha|\beta|\gamma)\in\Sigma\). We consider now the combinations of the sentences \(\alpha|\beta|\gamma\), \(\alpha|\beta\), \(\alpha|\gamma\) and their negations that can belong to \(\Sigma\) together with the elements of \(A\). We write these combinations in the form of sets \(B_{i}\). It is easy to see that the only sets \(B_{i}\) of this kind such that \(A\cup B_{i}\subset\Sigma\), are the following: \(B_{1}=\{\alpha|\beta|\gamma,\alpha|\beta,\alpha|\gamma\}\) \(B_{2}=\{\neg(\alpha|\beta|\gamma),\alpha|\beta,\alpha|\gamma\}\) \(B_{3}=\{\neg(\alpha|\beta|\gamma),\alpha|\beta,\neg(\alpha|\gamma)\}\) \(B_{4}=\{\neg(\alpha|\beta|\gamma),\neg(\alpha|\beta),\alpha|\gamma\}\) \(B_{5}=\{\neg(\alpha|\beta|\gamma),\neg(\alpha|\beta),\neg(\alpha|\gamma)\}\) The remaining sets: \(B_{1}^{\prime}=\{\alpha|\beta|\gamma,\alpha|\beta,\neg(\alpha|\gamma)\}\) \(B_{2}^{\prime}=\{\alpha|\beta|\gamma,\neg(\alpha|\beta),\alpha|\gamma\}\) \(B_{3}^{\prime}=\{\alpha|\beta|\gamma,\neg(\alpha|\beta),\neg(\alpha|\gamma)\}\) cannot be included in \(\Sigma\) jointly with \(A\). [For instance assume \[A\cup\{\alpha|\beta|\gamma,\alpha|\beta,\neg(\alpha|\gamma)\}=\{\alpha,\neg \beta,\neg\gamma,\neg(\beta|\gamma),\alpha|\beta|\gamma,\alpha|\beta,\neg( \alpha|\gamma)\}\subset\Sigma.\] Then \(\alpha|\beta|\gamma\) is written \(\beta|\alpha|\gamma\) so by \(S_{2}\), it implies \(\beta\vee(\alpha|\gamma)\in\Sigma\). By completeness, either \(\beta\in\Sigma\) or \(\alpha|\gamma\in\Sigma\). But already \(\neg\beta\) and \(\neg(\alpha|\gamma)\) are in \(\Sigma\), a contradiction.] Since \(\beta,\gamma\notin\Sigma\), \(h(\beta,\gamma)\), and hence \(h(\alpha,h(\beta,\gamma))\) are not defined by (i) or (ii) of (27). So it suffices to verify that \(h(\beta,h(\alpha,\gamma))=h(\gamma,h(\alpha,\beta))\) in each of the cases \(A\cup B_{i}\subset\Sigma\), for \(1\leq i\leq 5\). 1) \(A\cup B_{1}\subset\Sigma\): We have \(h(\alpha,\beta)=\alpha\), \(h(\alpha,\gamma)=\alpha\). Then \[h(\beta,h(\alpha,\gamma))=h(\beta,\alpha)=\alpha=h(\alpha,\gamma)=h(\gamma,h( \alpha,\beta)),\] so the Claim holds. 2) \(A\cup B_{2}\subset\Sigma\): Same as before. 3) \(A\cup B_{3}\subset\Sigma\): We have \(h(\alpha,\beta)=\alpha\), \(h(\alpha,\gamma)=\gamma\). 
Thus \(h(\beta,h(\alpha,\gamma))=h(\beta,\gamma)\), so \(h(\beta,h(\alpha,\gamma))\) is also undefined. We see that only \(h(\gamma,h(\alpha,\beta))=\gamma\) is defined, so the Claim holds vacuously. 4) \(A\cup B_{4}\subset\Sigma\): We have \(h(\alpha,\beta)=\beta\), \(h(\alpha,\gamma)=\alpha\). Thus \(h(\gamma,h(\alpha,\beta))=h(\gamma,\beta)\) is undefined, and the Claim holds vacuously as before. 5) \(A\cup B_{5}\subset\Sigma\): We have \(h(\alpha,\beta)=\beta\), \(h(\alpha,\gamma)=\gamma\). Thus \(h(\beta,h(\alpha,\gamma))=h(\beta,\gamma)\) is undefined, and we are done again. This completes the proof of Claim 1. _Claim 2._ Let \[S=\{\langle\alpha,\beta\rangle:\{\alpha,\beta\}\in dom(h)\wedge h(\alpha,\beta)=\alpha\},\] and let \(<_{1}\) be the transitive closure of \(S\). Then \(<_{1}\) is a regular partial ordering on \(Sen(L)\). _Proof of Claim 2._ By Claim 1, \(h\) is an associative partial choice function, so as in the proof of Theorem 2.14 we can see that the transitive closure \(<_{1}\) of \(S\) is a partial ordering. That \(<_{1}\) is moreover a regular ordering follows from the Claim in the proof of Theorem 3.16. This completes the proof of Claim 2. Now clearly the partial ordering \(<_{1}\) of Claim 2 extends to a regular total ordering \(<\) of \(Sen(L)\). Then it suffices to define \(g\) by setting \(g=\min_{<}\). Since for every \(\alpha,\beta\in Sen(L)\), if the pair \(\langle\alpha,\beta\rangle\) satisfies some of the cases (a2), (a3), (a5), (a6), \(\alpha<\beta\) if and only if \(h(\alpha,\beta)=\alpha\), clearly \(g\) extends \(h\). Moreover \[g(\alpha,\beta)=\left\{\begin{array}{ll}(i)\ \alpha,\ \mbox{if}\ \langle\alpha,\beta\rangle\ \mbox{satisfies (a2) or (a6)},\\ (ii)\ \beta,\ \mbox{if}\ \langle\alpha,\beta\rangle\ \mbox{satisfies (a3) or (a5)},\\ (iii)\ \min_{<}(\alpha,\beta),\ \mbox{if}\ \langle\alpha,\beta\rangle\ \mbox{satisfies (a1) or (a4)}.\end{array}\right. \tag{36}\] Thus it follows as in Lemma 3.8 that \(\langle M,g\rangle\models_{s}\Sigma\), that is, \(\Sigma\) is \(Reg^{*}\)-satisfiable. This completes the proof of the theorem. \(\dashv\) Finally we come to the conditional completeness of \(\mbox{PLS}(Dec,K_{3})\). **Theorem 3.19**: (Conditional CT2-completeness for \(\mbox{PLS}(Dec,K_{3})\)) _The logic \(\mbox{PLS}(Dec,K_{3})\) is \(\mbox{CT2}\)-complete if and only if \(cext(K_{3})\) is true._ _Proof._ Again one direction of the equivalence is shown exactly as the corresponding direction of Theorem 3.16. We come to the other direction assuming \(cext(K_{3})\) is true. Fix a \(K_{3}\)-consistent set \(\Sigma\). By \(cext(K_{3})\) we may assume that \(\Sigma\) is also complete. Let \(M\models\Sigma_{1}\), where \(\Sigma_{1}=\Sigma\cap Sen(L)\). We show that there exists a choice function \(g\) such that \(g=\min_{<}\), where \(<\) is a \(\neg\)-decreasing regular total ordering of \(Sen(L)\), and \(\langle M,g\rangle\models\Sigma\). \(g\) is essentially defined as in the previous theorem plus an extra adjustment that guarantees \(\neg\)-decreasingness. Namely, let \(h\) be the function defined exactly as in the proof of Theorem 3.18. _Claim._ \(h\) is \(\neg\)-decreasing, i.e., whenever \(h(\alpha,\beta)\) and \(h(\neg\alpha,\neg\beta)\) are defined, then \[h(\alpha,\beta)=\alpha\ \Leftrightarrow\ h(\neg\alpha,\neg\beta)=\neg\beta. \tag{37}\] _Proof of Claim._ We must check that whenever \(\{\alpha,\beta\}\) and \(\{\neg\alpha,\neg\beta\}\) satisfy some of the cases (a2), (a3), (a5) and (a6), then (37) holds true.
Thus we must examine the combinations of \(\alpha\), \(\beta\), \(\alpha|\beta\), \(\neg\alpha|\neg\beta\) and their negations that can belong to \(\Sigma\). There is a total of 16 possible combinations of these sentences. Of them the combinations \[U_{1} =\{\alpha|\beta,\neg(\neg\alpha|\neg\beta),\alpha,\beta\}\] \[U_{2} =\{\neg(\alpha|\beta),\neg\alpha|\neg\beta,\neg\alpha,\neg\beta\}\] do not allow definition of \(h\) since in these cases either both \(\alpha\), \(\beta\) or both \(\neg\alpha\), \(\neg\beta\) belong to \(\Sigma\). Next we have 10 combinations that contradict \(K_{3}\)-consistency and completeness of \(\Sigma\). These are: \[F_{1} =\{\alpha|\beta,\neg\alpha|\neg\beta,\alpha,\beta\}\] \[F_{2} =\{\alpha|\beta,\neg\alpha|\neg\beta,\neg\alpha,\neg\beta\}\] \[F_{3} =\{\alpha|\beta,\neg(\neg\alpha|\neg\beta),\neg\alpha,\neg\beta\}\] \[F_{4} =\{\alpha|\beta,\neg(\neg\alpha|\neg\beta),\neg\alpha,\beta\}\] \[F_{5} =\{\alpha|\beta,\neg(\neg\alpha|\neg\beta),\alpha,\neg\beta\}\] \[F_{6} =\{\neg(\alpha|\beta),\neg\alpha|\neg\beta,\alpha,\beta\}\] \[F_{7} =\{\neg(\alpha|\beta),\neg\alpha|\neg\beta,\neg\alpha,\beta\}\] \[F_{8} =\{\neg(\alpha|\beta),\neg\alpha|\neg\beta,\alpha,\neg\beta\}\] \[F_{9} =\{\neg(\alpha|\beta),\neg(\neg\alpha|\neg\beta),\alpha,\beta\}\] \[F_{10} =\{\neg(\alpha|\beta),\neg(\neg\alpha|\neg\beta),\neg\alpha, \neg\beta\}.\] Notice that of the preceding sets, \(F_{4}\), \(F_{5}\), \(F_{7}\) and \(F_{8}\) yield a contradiction because of the axiom \(S_{5}\). For instance consider \(F_{4}=\{\alpha|\beta,\neg(\neg\alpha|\neg\beta),\neg\alpha,\beta\}\). It contains \(\neg\alpha,\beta\), thus it proves \(\neg\alpha\wedge\beta\). By \(S_{5}\), \(F_{4}\) proves \((\alpha|\beta\leftrightarrow\neg\alpha|\neg\beta)\). \(F_{4}\) also contains \(\alpha|\beta\), thus it proves \(\neg\alpha|\neg\beta\). But it besides contains \(\neg(\neg\alpha|\neg\beta)\), so \(F_{4}\vdash_{K_{3}}\bot\). Thus the only combinations that can be contained in \(\Sigma\) are the following: \[C_{1} =\{\alpha|\beta,\neg\alpha|\neg\beta,\alpha,\neg\beta\}\subset\Sigma\] \[C_{2} =\{\alpha|\beta,\neg\alpha|\neg\beta,\neg\alpha,\beta\}\subset\Sigma\] \[C_{3} =\{\neg(\alpha|\beta),\neg(\neg\alpha|\neg\beta),\alpha,\neg\beta\}\subset\Sigma\] \[C_{4} =\{\neg(\alpha|\beta),\neg(\neg\alpha|\neg\beta),\neg\alpha, \beta\}\subset\Sigma.\] It is easy to verify that in each of the cases \(C_{i}\subset\Sigma\), for \(1\leq i\leq 4\), (37) is true in view of the definition (35) of \(h\). For example in case \(C_{4}\subset\Sigma\), necessarily \(h(\alpha,\beta)=\alpha\), while \(h(\neg\alpha,\neg\beta)=\neg\beta\). This completes the proof of Claim 1. As in the proof of 3.18, let \[S=\{\langle\alpha,\beta\rangle:\{\alpha,\beta\}\in dom(h)\wedge h(\alpha,\beta )=\alpha\},\] and let \(<_{1}\) be the transitive closure of \(S\). As shown in 3.18, \(<_{1}\) is a regular partial ordering. Moreover here, in view of the Claim, \(<_{1}\) is \(\neg\)-decreasing. So by a standard application of Zorn's Lemma, \(<_{1}\) extends to a regular \(\neg\)-decreasing total ordering \(<\) of \({\it Sen}(L)\). If we set \(g=\min_{<}\), then \(g\) satisfies (36) of the previous theorem and thus \(\langle M,g\rangle\models_{s}\Sigma\). Therefore \(\Sigma\) is \(Dec\)-satisfiable. \(\dashv\) The following is open. 
**Question 3.20**: _If \({\it cext}(K_{i})\) are true for \(i=1,2,3\), do the logics \({\rm PLS}({\it Reg},K_{1})\), \({\rm PLS}({\it Reg}^{*},K_{2})\) and \({\rm PLS}({\it Dec},K_{3})\) satisfy the form \({\rm CT1}\) of the Completeness Theorem?_ ### Some closing remarks on axiomatization Before closing this section on axiomatization of superposition logics, let us notice that all axioms \(S_{1}\)-\(S_{5}\) introduced above are true also for the connectives \(\wedge\) and \(\vee\). That is, none of the \(S_{i}\) can be used to discriminate \(|\) from \(\wedge\) and \(\vee\). This looks somewhat strange, since we showed semantically that the converses of \(S_{1}\) and \(S_{2}\) are not tautologies. However this cannot be formulated in the straightforward way, namely as the _schemes_ \(\varphi|\psi\not\to\varphi\wedge\psi\) and \(\varphi\vee\psi\not\to\varphi|\psi\) (the latter are false, e.g. for \(\varphi=\psi\)). It means that the axiomatic systems \(K_{i}\), for \(i=0,1,2,3\), introduced above are _interpretable_ in standard propositional logic PL, through the obvious interpretations \(I_{\wedge}\) and \(I_{\vee}\) that interpret \(|\) as \(\wedge\) or \(\vee\), respectively. These are defined inductively in the obvious way for the standard connectives, while for \(|\) we have \((\varphi|\psi)^{I_{\wedge}}=\varphi\wedge\psi\) and \((\varphi|\psi)^{I_{\vee}}=\varphi\vee\psi\). Then clearly for any \(\varphi\) and for \(I\) being either of these interpretations, \[\vdash_{K_{i}}\varphi\ \Rightarrow\ \vdash\varphi^{I},\] that is, for every \(Dec\)-tautology \(\varphi\) (to consider the strongest system \(Dec\) of choice functions), \(\varphi^{I}\) is a classical tautology. However neither of the aforementioned interpretations is "faithful", which means that the converse of the above implication is not true. For example for classical sentences \(\alpha\), \(\beta\), \((\alpha|\beta)^{I_{\wedge}}=\alpha\wedge\beta\), hence \((\alpha|\beta\to\alpha)^{I_{\wedge}}=\alpha\wedge\beta\to\alpha\). Then \(\alpha\wedge\beta\to\alpha\) is a classical tautology while \(\alpha|\beta\to\alpha\) is not a \(K_{i}\)-tautology. The question is whether there exist any further axioms, appropriate for some finer class \(X\subset Dec\), which can distinguish \(|\) from \(\wedge\) and/or \(\vee\). The answer is yes. For example a further condition that can be imposed on \(\neg\)-decreasing orderings is one that concerns the position of the special classes \(\bot\) and \(\top\) in this ordering. For example we may require that our decreasing orderings \(<\) satisfy the condition \(\top<\bot\). Let \(Dec_{\top<\bot}\) denote the class of these total orderings of \(Sen(L)\). It is rather straightforward to write down the additional axiom \(S_{6}\) needed to characterize \(Dec_{\top<\bot}\): it postulates that the superposition of a contradiction with a tautology is true, since under an ordering with \(\top<\bot\) the choice falls on the tautology. Note that under \(I_{\wedge}\) such a sentence is interpreted by the conjunction of a contradiction with a tautology, which is not a classical tautology; so \(S_{6}\) already suffices to distinguish \(|\) from \(\wedge\) (though not from \(\vee\)).
2310.15119
Compressed Sensing of Generative Sparse-latent (GSL) Signals
We consider reconstruction of an ambient signal in a compressed sensing (CS) setup where the ambient signal has a neural network based generative model. The generative model has a sparse-latent input and we refer to the generated ambient signal as generative sparse-latent signal (GSL). The proposed sparsity inducing reconstruction algorithm is inherently non-convex, and we show that a gradient based search provides a good reconstruction performance. We evaluate our proposed algorithm using simulated data.
Antoine Honoré, Anubhab Ghosh, Saikat Chatterjee
2023-10-16T12:49:33Z
http://arxiv.org/abs/2310.15119v1
# Compressed Sensing of Generative Sparse-Latent (GSL) Signals ###### Abstract We consider reconstruction of an ambient signal in a compressed sensing (CS) setup where the ambient signal has a neural network based generative model. The generative model has a sparse-latent input and we refer to the generated ambient signal as generative sparse-latent signal (GSL). The proposed sparsity inducing reconstruction algorithm is inherently non-convex, and we show that a gradient based search provides a good reconstruction performance. We evaluate our proposed algorithm using simulated data. Antoine Honore, Anubhab Ghosh, Saikat Chatterjee Digital Futures and KTH Royal Institute of Technology, Stockholm, Sweden [email protected], [email protected], [email protected] Compressed sensing, generative models, inverse problems ## 1 Introduction **Background:** Reconstruction of signals in an under-determined linear measurement setup, where the signals have generative models that are linear with latent sparse signals, has been thoroughly investigated in the literature. This is primarily known as compressed sensing (CS) or compressed sampling [1, 2]. In this article we refer to the linear measurement setup along with the linear generative model as the traditional CS. Many practical algorithms exist for signal reconstruction using sparse-latent signals as a-priori knowledge. Algorithms are mainly divided into convex, greedy pursuit and Bayesian approaches, and their suitable combinations [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Convex algorithms, such as LASSO [13], use \(\ell_{1}\)-norm based penalties to promote sparsity. **Motivation:** In this article, we consider a CS setup where we have linear measurements of an ambient signal as in traditional CS, but do not have a linear generative model for the ambient signal. Instead, we assume non-linear, non-convex generative models where an ambient signal is the output of a neural network excited by a sparse-latent input signal. Signals which are generated in such a way are referred to as '_generative sparse-latent_' (GSL) signals. Our objective is to design reconstruction algorithms for GSL signals in the CS setup. In the CS setup measuring GSL, we have a non-linear mapping between the measurement and the sparse-latent signal. Due to the non-linear mapping, the reconstruction problem is inherently non-convex. We design a reconstruction algorithm that can use a-priori knowledge of sparsity. The reconstruction algorithm uses gradient search for sparse-latent estimation. Using simulations we show that it is possible to achieve a good quality signal reconstruction, although the mapping between the measurement vector and the sparse-latent signal is highly non-linear. Naturally a question arises: how to measure the degree of non-linearity? Such a measure could help us to perform simulation studies in a controlled manner. To the best of the authors' knowledge, no such measure exists. Therefore we provide a simple measure based on how far a non-linear mapping is from a linear mapping. **Relevant literature:** In a CS setup, there exist prior works where neural network based generative models are used. In [14], variational auto-encoders (VAEs) and generative adversarial networks (GANs) are used where the dimension of the latent signal is (much) lower than the dimension of the ambient signal.
In [15], the authors considered GAN-based generative models and consider the case where the dimension of the latent is small. Robustness of the CS for generative models was addressed in [16]. The work [17] considered normalizing flows (NFs) [18] that are invertible neural networks. These works address CS of signals with generative models - VAEs, GANs and NFs - excited with dense, Gaussian latent signals. **Our contributions:** Use of sparse-latent signals for generative models in CS setup is the major novelty of the article compared to previous works. The contributions are: (a) Proposing the construction of GSLs. (b) Defining a measure of non-linearity and using the measure to rank complex non-linear mapping functions to generate GSL signals. (c) Proposing sparsity inducing reconstruction algorithms for CS of GSL signals and demonstrating their performances. ## 2 Methods ### GSL signals and its CS setup A GSL signal \(\mathbf{x}\) has the following generative model for the ambient signal: \[\mathbf{x}\triangleq\mathbf{f_{\theta}}(\mathbf{B}\mathbf{z})\in\mathbb{R}^{ m}, \tag{1}\] where \(\mathbf{z}\in\mathbb{R}^{M}\) is sparse, \(\mathbf{B}\in\mathbb{R}^{m\times M}\) is a matrix, and \(\mathbf{f_{\theta}}(.):\mathbb{R}^{M}\rightarrow\mathbb{R}^{m}\) is a neural network. Neural networks have arbitrarily complex mapping functions and are differentiable. We assume that \(\|\mathbf{z}\|_{0}\triangleq K<m\leq M\), where \(\|.\|_{0}\) denotes \(\ell_{0}\)-norm. The GSL model (1) is a generalization over the traditional linear generative sparse representation model where \(\mathbf{f_{\theta}}(\cdot)\) is linear (or identity mapping). In this article, we investigate the reconstruction of GSL signal \(\mathbf{x}\) from linear CS measurements. We assume that the GSL signal model and its parameters \(\boldsymbol{\theta},\mathbf{B}\) are known. We do not consider the modeling issues and learning parameters of GSL signals. In a CS setup, using linear measurements, we have \(n\)-dimensional noisy measurements: \[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{n}=\mathbf{A}\mathbf{f_{\theta}}( \mathbf{B}\mathbf{z})+\mathbf{n}\in\mathbb{R}^{n},\ n<m, \tag{2}\] where \(\mathbf{x}\in\mathbb{R}^{m}\) is the signal to be reconstructed, \(\mathbf{n}\) is additive measurement noise, and \(n<m\). Using the a-priori information \(\mathbf{z}\) be sparse, we need to solve the under-determined setup. We first estimate \(\mathbf{z}\) as \(\hat{\mathbf{z}}\) and then reconstruct \(\mathbf{x}\) as \(\hat{\mathbf{x}}=\mathbf{f_{\theta}}(\mathbf{B}\hat{\mathbf{z}})\). Our investigation in this article is concentrated on designing practical algorithms for quality reconstructions. ### Optimization problems and reconstruction algorithm We first start with the GSL CS setup (2) where the measurement noise \(\mathbf{n}\) is absent (\(\mathbf{n}=\mathbf{0}\)). In that case, a relevant optimization problem is \[\hat{\mathbf{z}}=\arg\min_{\mathbf{z}}\|\mathbf{z}\|_{1}\ \mathrm{s.t.}\ \mathbf{y}= \mathbf{A}\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z}). \tag{3}\] The above problem has a resemblance with celebrated basis pursuit (BP) algorithm for the traditional CS. The major point is that BP is convex (a linear program) while the optimization problem in (3) is non-convex. BP provides exact reconstruction, i.e. \(\hat{\mathbf{z}}=\mathbf{z}\), under certain technical conditions. 
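A minimal simulation sketch of the GSL construction (1) and the measurement model (2) follows. The small two-layer `tanh` network standing in for \(\mathbf{f}_{\boldsymbol{\theta}}\), the dimensions and the noise level are placeholder assumptions for illustration only, not the models used later in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
m, M, n, K = 100, 100, 50, 10          # ambient dim, latent dim, measurements, sparsity

# Sparse latent z: K non-zero entries drawn from a standard Gaussian.
z = np.zeros(M)
support = rng.choice(M, size=K, replace=False)
z[support] = rng.standard_normal(K)

# Matrix B with l2-normalized columns, and a stand-in non-linear f_theta
# (a random two-layer tanh network, purely for illustration).
B = rng.standard_normal((m, M))
B /= np.linalg.norm(B, axis=0, keepdims=True)
W1 = rng.standard_normal((m, m)) / np.sqrt(m)
W2 = rng.standard_normal((m, m)) / np.sqrt(m)
f_theta = lambda u: W2 @ np.tanh(W1 @ u)

x = f_theta(B @ z)                      # GSL signal, eq. (1)

# Linear CS measurements with additive Gaussian noise at roughly 30 dB SNR, eq. (2).
A = rng.standard_normal((n, m)) / np.sqrt(n)
sigma = np.linalg.norm(A @ x) / np.sqrt(n) * 10 ** (-30 / 20)
y = A @ x + sigma * rng.standard_normal(n)
```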
In our understanding the optimization problem (3) is difficult to solve due to the (hard) equality constraint and the presence of the non-linear mapping of the optimization variable \(\mathbf{z}\). Whether the measurement noise is absent or present, we address CS of GSL using the following optimization problem \[\hat{\mathbf{z}}=\arg\min_{\mathbf{z}}\left\{\lambda_{1}\|\mathbf{y}-\mathbf{A}\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})\|_{2}^{2}+(1-\lambda_{1})\|\mathbf{z}\|_{1}\right\}, \tag{4}\] where \(\lambda_{1}\in[0,1]\) is a regularization parameter. The above optimization problem bears resemblance to the celebrated LASSO or basis pursuit denoising (BPDN) [19]. While LASSO / BPDN is convex, the optimization problem (4) is non-convex. To solve the problem we use gradient search. The gradient search is \[\hat{\mathbf{z}}_{k+1}=\hat{\mathbf{z}}_{k}-\eta\left.\frac{\partial\mathcal{L}}{\partial\mathbf{z}}\right|_{\mathbf{z}=\hat{\mathbf{z}}_{k}}, \tag{5}\] where \(\mathcal{L}(\mathbf{z})=\lambda_{1}\|\mathbf{y}-\mathbf{A}\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})\|_{2}^{2}+(1-\lambda_{1})\|\mathbf{z}\|_{1}\) and \(\eta\) is a tunable learning rate parameter. We use the ADAM optimizer [20] to realize the gradient search. It is expected that the success of the gradient search will be highly dependent on the degree of non-linearity in \(\mathbf{f}_{\boldsymbol{\theta}}(.)\), and on the choices of the hyper-parameters \(\lambda_{1}\) and \(\eta\). On the theoretical side, several questions arise. Example questions are: (a) How good is the gradient search for solving the optimization problem (4)? (b) Can we reach a globally optimal point? (c) If not, then how far is the local optimum from the global optimum? These questions are non-trivial given that the optimization problem is non-convex. We do not deliberate on such theoretical questions in this article. Instead we investigate whether we can achieve empirical success using simulations. ### Competing optimization problems Competing optimization problems may use \(\ell_{2}\)-norm based penalties on the ambient signal \(\mathbf{x}\) or the latent signal \(\mathbf{z}\). Without any signal structure imposition, a standard optimization problem is \[\min_{\mathbf{x}}\left\{\kappa\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}+(1-\kappa)\|\mathbf{x}\|_{2}^{2}\right\}, \tag{6}\] where \(\kappa\in[0,1]\) is a tunable parameter. Note that this optimization problem is convex (it is a regularized least-squares). To use signal structures, algorithms in [14, 17] use the following non-convex optimization problem: \[\min_{\mathbf{z}}\left\{\lambda_{2}\|\mathbf{y}-\mathbf{A}\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})\|_{2}^{2}+(1-\lambda_{2})\,\|\mathbf{z}\|_{2}^{2}\right\}, \tag{7}\] where \(\lambda_{2}\in[0,1]\) is a regularization parameter. In [14], the authors investigated the use of GANs and decoders of auto-encoders as the generative mapping function \(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})\), where \(\dim(\mathbf{z})\ll\dim(\mathbf{x})\) and it can happen that \(\dim(\mathbf{z})<\dim(\mathbf{y})\). Therefore the problem (7), while non-convex, may not be fully under-sampled. On the other hand, the work of [17] used normalizing flows (NFs) as the generative mapping function, where \(\dim(\mathbf{z})=\dim(\mathbf{x})\) due to the invertibility requirement. Therefore \(\dim(\mathbf{z})>\dim(\mathbf{y})\) and the problem (7) is under-sampled in [17].
The optimization problem (7) can be considered state-of-the-art in CS for generative models. ### A measure of non-linearity In this subsection, we deliberate on how to measure non-linearity. We conjecture that the optimization problem (4) becomes hard to solve if \(\mathbf{f}_{\boldsymbol{\theta}}\) becomes highly non-linear. This in turn raises the question: Does the optimization problem get harder if \(\mathbf{f}_{\boldsymbol{\theta}}\) gets more non-linear according to some metric of non-linearity? Natural queries then are: How to measure non-linearity? How to decide that a function is more non-linear than another function? Can we conduct a systematic study to show that solving the optimization problem (4) becomes harder with the increase in some measure of non-linearity? We are not aware of a universal measure of non-linearity. Therefore we define a measure from a first principle - how far a non-linear mapping is from an optimal linear mapping. The procedure is as follows. For a given \(\mathbf{f}_{\boldsymbol{\theta}}\), we first construct a dataset \(\{(\mathbf{x}_{j},\mathbf{z}_{j})\}_{j=1}^{J}\) comprised of \(J\) data samples, where \((\mathbf{x}_{j},\mathbf{z}_{j})\) is the \(j\)'th datum. We use the generative procedure \(\mathbf{x}=\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})\) where \(\mathbf{z}\) need not be sparse; \(\mathbf{z}\) can be dense. Using this dataset, we construct the linear minimum-mean-square-error (LMMSE) estimate as \[\hat{\mathbf{x}}_{L}=C_{\mathbf{x}\mathbf{z}}(C_{\mathbf{z}\mathbf{z}}+\lambda I)^{-1}(\mathbf{z}-\mu_{\mathbf{z}})+\mu_{\mathbf{x}}, \tag{8}\] where \(C_{\mathbf{x}\mathbf{z}}\) is the empirical cross-covariance between \(\mathbf{x}\) and \(\mathbf{z}\), \(C_{\mathbf{z}\mathbf{z}}\) is the empirical covariance of \(\mathbf{z}\), \(\mu_{\mathbf{z}}\) and \(\mu_{\mathbf{x}}\) are the empirical means of \(\mathbf{z}\) and \(\mathbf{x}\), respectively, and \(\lambda>0\) is a tunable regularization parameter. The strength of non-linearities is quantified with a metric referred to as the normalized-non-linearity-measure (NNLM), defined as follows: \[\mathrm{NNLM}(\mathbf{f}_{\boldsymbol{\theta}}(.))=\frac{\mathbb{E}\|\mathbf{x}-\hat{\mathbf{x}}_{L}\|_{2}^{2}}{\mathbb{E}\|\mathbf{x}\|_{2}^{2}}=\frac{\mathbb{E}\|\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})-\hat{\mathbf{x}}_{L}\|_{2}^{2}}{\mathbb{E}\|\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{B}\mathbf{z})\|_{2}^{2}}. \tag{9}\] A high value of NNLM translates to a high non-linearity, and this should be associated with larger reconstruction errors. We choose \(\lambda\) appropriately using a cross-validation approach. ## 3 Experiments In this section we perform simulations using synthetic data. This helps our investigation in a controlled manner. The simulations were carried out in Python1 using the PyTorch toolbox [21]. The experiments are carried out as follows. Footnote 1: The code is available online. ### First study - On non-linearity In this study we consider several non-linear functions. (a) One-layer feedforward neural networks with either sigmoid or exponential activations. (b) RealNVP NF [22] with various layers. We conjecture that as the number of layers increases the mapping function becomes more non-linear.
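As a minimal illustrative sketch (not part of the original implementation), the NNLM in (8)-(9) can be estimated from \(J\) samples by replacing the expectations with sample averages; `f_theta`, `B`, the sample size and the regularization `lam` are placeholders for the model under test.

```python
import numpy as np

def nnlm(f_theta, B, M, J=2000, lam=1e-3, seed=0):
    """Estimate NNLM of x = f_theta(B z) from J dense Gaussian latents, cf. (8)-(9)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((J, M))                  # dense latents (need not be sparse)
    X = np.stack([f_theta(B @ z) for z in Z])        # ambient samples, shape (J, m)

    mu_z, mu_x = Z.mean(axis=0), X.mean(axis=0)
    Zc, Xc = Z - mu_z, X - mu_x
    C_zz = Zc.T @ Zc / J                             # empirical covariance of z
    C_xz = Xc.T @ Zc / J                             # empirical cross-covariance of x and z

    # Regularized LMMSE predictor of x from z, eq. (8).
    W = C_xz @ np.linalg.inv(C_zz + lam * np.eye(M))
    X_lmmse = Zc @ W.T + mu_x

    # Normalized non-linearity measure, eq. (9).
    return np.sum((X - X_lmmse) ** 2) / np.sum(X ** 2)
```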
RealNVP (henceforth referred to as RNVP) is known to be a powerful generative model (NF), recently used in [17], and hence we choose it for our study. We use various configurations of RNVP. In one configuration, an RNVP with 16 coupling layers is pre-trained to map an isotropic Gaussian latent source to a uniform ambient distribution. The trained model is later used for our purpose where a GSL signal is generated by exciting the trained model using a sparse-latent signal. In other configurations we use untrained RNVPs with 4 and 8 coupling layers. In order to guarantee that the randomly initialized models are non-linear, the activation functions and the parameter priors for \(\mathbf{\theta}\) are chosen according to [18]. Finally, the components of \(\mathbf{B}\) are drawn from a normal distribution and the columns are \(\ell_{2}\)-normalized. In the experiments, we use a \(100\)-dimensional isotropic Gaussian \(\mathbf{z}\) to create the signal \(\mathbf{x}\), as mentioned in section 2.1. Linear estimators in high dimension are subject to the curse of dimensionality: with a limited number of samples, a linear transform can fit even non-linear data accurately on the training set. We thus compute the NNLM measure for a varying number of data points \(J\), both on the training set and on a held-out test set. The experimental results are shown in Fig. 1 for the training data (used to learn the LMMSE parameters) and the held-out test data versus the number of data points \(J\). In the figure, a higher NNLM value means a higher amount of non-linearity. From Fig. 1 we can observe that an increase in the number of coupling layers from 4 (denoted as RNVP, \(n_{c}=4\)) to 8 (denoted as RNVP, \(n_{c}=8\)) for RNVP with random parameters leads to an increase in NNLM, which indicates a higher amount of non-linearity. The trained RNVP with 16 layers shows a lower NNLM than the RNVPs with random parameters, since it effectively learns the mapping of the Gaussian multivariate cumulative distribution function (by the probability integral transform [23]). The trend of the curves is typical for training-testing scenarios. The ranking of the RNVP models according to an increasing trend of non-linearity is as follows: (i) RNVP \(\mathcal{U}(0,1)\) as the pretrained RNVP, (ii) RNVP with random parameters and 4 coupling layers, (iii) RNVP with random parameters and 8 coupling layers. ### Second study - Performance and comparison For GSL CS, we now study the performances of three RNVP models: RNVP \(\mathcal{U}(0,1)\), RNVP \(n_{c}=4\), and RNVP \(n_{c}=8\). The GSL signals are produced using sparse-latent \(\mathbf{z}\). We choose \(m=M=100\) and a sparsity level of \(10\%\), i.e. \(K=\|\mathbf{z}\|_{0}=0.1M=10\). The support-set of \(\mathbf{z}\) is chosen randomly and the non-zero components of \(\mathbf{z}\) are drawn from a standard Gaussian. We add white Gaussian noise \(\mathbf{n}\) at a signal-to-noise-ratio level of 30 dB. We introduce the sub-sampling ratio \(\alpha=\frac{n}{m}\in[0.1,0.9]\). We vary \(\alpha\) and study the reconstruction performance for GSL CS.
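A minimal PyTorch sketch of the reconstruction step used in this study, i.e., the gradient search (5) on the objective (4) realized with the ADAM optimizer, is given below. The learning rate, iteration count and the default \(\lambda_{1}\) are placeholder assumptions rather than the exact settings of the released implementation, and `f_theta` denotes the known generative network viewed as a differentiable function of its input.

```python
import torch

def reconstruct(y, A, f_theta, B, lam1=0.9, lr=1e-2, n_iter=2000):
    """Minimize lam1*||y - A f_theta(B z)||_2^2 + (1 - lam1)*||z||_1 over z, cf. (4)-(5)."""
    M = B.shape[1]
    z = torch.zeros(M, requires_grad=True)           # initial sparse-latent estimate
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        data_fit = torch.sum((y - A @ f_theta(B @ z)) ** 2)
        loss = lam1 * data_fit + (1.0 - lam1) * torch.sum(torch.abs(z))
        loss.backward()
        opt.step()
    z_hat = z.detach()
    x_hat = f_theta(B @ z_hat)                        # reconstructed ambient signal
    return x_hat, z_hat
```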
Following [8], we use two performance measures for the evaluation of the reconstruction: (1) signal to reconstruction noise ratio (SRNR) \(=10\log_{10}\frac{\mathbb{E}[\|\mathbf{x}\|_{2}^{2}]}{\mathbb{E}[\|\mathbf{x}-\hat{\mathbf{x}}\|_{2}^{2}]}\), and (2) average support cardinality error (ASCE) \(=1-\frac{1}{K}\mathbb{E}\{|\mathcal{S}_{\mathbf{z}}\cap\mathcal{S}_{\hat{\mathbf{z}}}|\}\), where \(\mathcal{S}_{\mathbf{z}}\) and \(\mathcal{S}_{\hat{\mathbf{z}}}\) denote the \(K\)-size supports of \(\mathbf{z}\) and \(\hat{\mathbf{z}}\), respectively. A \(K\)-size support set comprises the indices of the \(K\) components of largest amplitude of \(\mathbf{z}\). We first study the effect of tuning \(\lambda_{1}\) of (4). Fig. 2 shows the SRNR performance of RNVP, \(n_{c}=4\), against \(\alpha\) for different \(\lambda_{1}\). We note that increasing \(\lambda_{1}\) helps reconstruction, and there exists a suitable range for an appropriate choice of \(\lambda_{1}\). Note that \(\lambda_{1}=1\) leads to no sparsity promoting penalty term and the performance degrades badly. The experiment confirms the importance of the sparsity-promoting penalty \(\|\mathbf{z}\|_{1}\), and suggests that it is necessary to choose a suitable \(\lambda_{1}\) to achieve good results. Next we study the performances of the three RNVP models and compare them. We include the \(\ell_{2}\)-penalty based optimization problem (7) where \(\lambda_{2}\) is chosen appropriately. Fig. 3 shows the SRNR and ASCE performances of the methods. In the figure, \(\lambda_{1}\) and \(\lambda_{2}\) denote the use of the optimization problems (4) and (7), respectively. It is clear that the use of the sparsity promoting penalty \(\|\mathbf{z}\|_{1}\) helps to achieve better reconstruction performances than the \(\|\mathbf{z}\|_{2}^{2}\) penalty. Another interesting result is that the reconstruction algorithms show success according to the non-linearity level of \(\mathbf{f_{\theta}}\). Following our remarks at the end of section 3.1, we find that the performance curves in Fig. 3 are consistent with the non-linearity level at the lower range of \(\alpha\). The lower range of \(\alpha\) is of high importance in CS.
Figure 1: Analysis of non-linearity.
Figure 2: SRNR performance of RNVP (\(n_{c}=4\)) versus the sub-sampling ratio (\(\alpha\)) for different values of \(\lambda_{1}\).
We also investigated the regularized least-squares (6) and found that the performance is not promising. We skip those results in this article for brevity. Finally we show a GSL signal realization using RNVP, \(n_{c}=4\), and its reconstruction using the optimization algorithm (4). Fig. 4c shows a realization of the GSL signal \(\mathbf{x}\) and Fig. 4d shows its corresponding sparse-latent signal \(\mathbf{z}\). This visualization helps to show that it is possible to generate ambient signals using sparse-latents. We also show the reconstructed \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{z}}\). Fig. 4a demonstrates the gradient descent scheme in (5) and shows that the algorithm indeed converges to a local optimum. For the sake of completeness, we also show in Fig. 4b the observation signal \(\mathbf{y}\) and its reconstruction \(\hat{\mathbf{y}}\) using \(\hat{\mathbf{x}}\). ### Scope of improvements and challenges While we mainly concentrate on the optimization problem (4) and its solution using gradient descent in this article, we outline further scopes of improvement and the associated challenges.
Our first approach could be iteratively reweighted \(\ell_{1}\) minimization, where we solve \[\hat{\mathbf{z}}=\arg\min_{\mathbf{z}}\left\{\beta\|\mathbf{y}-\mathbf{A}\mathbf{f}_{\mathbf{\theta}}(\mathbf{B}\mathbf{z})\|_{2}^{2}+(1-\beta)\|\mathbf{W}\mathbf{z}\|_{1}\right\}, \tag{10}\] where \(\beta\) is a tunable parameter and \(\beta\in[0,1]\). The motivation for this improvement comes from the work [9] applied to the traditional CS. Here \(\mathbf{W}\) can be iteratively chosen to enhance sparsity in the same way as in [9]. In the current iteration, we use \(\mathbf{W}\) constructed from the solution of the previous iteration. The same gradient-descent based approach of solving (4) can be used here in each iteration. The next approach could follow the idea of the iterative-reweighted-least-squares (IRLS) algorithm that has been used for solving the traditional CS. In that case we write \(\|\mathbf{z}\|_{1}\) in terms of an \(\ell_{2}\)-norm based representation, as \(\mathbf{z}^{\top}[\mathrm{diag}(\mathbf{1}./|\mathbf{z}|)]\mathbf{z}\), where \(\mathbf{1}./|\mathbf{z}|=\left[\frac{1}{|z_{1}|}\;\frac{1}{|z_{2}|}\;\cdots\;\frac{1}{|z_{M}|}\right]\). In the current iteration we can use the solution of \(\mathbf{z}\) from the previous iteration to construct \(\mathbf{1}./|\mathbf{z}|\). In each iteration, we can use the same approach of solving (7). Note that we did not experiment with the two possible improvements discussed above. Finally, let us consider a Bayesian probabilistic approach. In that case, we wish to (1) find a posterior \(p(\mathbf{z}|\mathbf{y})\) using a sparse prior \(p(\mathbf{z})\), e.g. a Laplacian prior or a Gaussian-Gamma prior, and (2) find the posterior \(p(\mathbf{x}|\mathbf{y})\) (rather than a point estimate). While the relevance-vector-machine (RVM) and Bayesian CS with a Gaussian-Gamma prior are used for the traditional CS [10, 24], an extension of such works towards (2) perhaps comes with technical challenges. We believe the technical challenge will arise due to the complicated non-linear function \(\mathbf{f}_{\mathbf{\theta}}\). This will have repercussions for computing \(p(\mathbf{z}|\mathbf{y})\) and then translating the result to compute \(p(\mathbf{x}|\mathbf{y})\). ## 4 Conclusion In this study we introduced GSL signals and addressed their reconstruction in a CS setup. We proposed a measure of non-linearity for the generative models. We showed that our measure is relevant to determine the quality of reconstruction at a low number of measurements. The proposed sparsity inducing reconstruction algorithm is shown to outperform competing methods in terms of the chosen performance measures - SRNR and ASCE. Future work includes theoretical analysis and designing more efficient algorithms, as well as learning the parameters of GSL signal models for real signals. ## 5 Acknowledgement The authors thank Borja Rodriguez Galvez (KTH) for fruitful discussions regarding the design of the non-linearity metric.
Figure 4: Example of a successful reconstruction for one of the \(J\) data generated with an RNVP (\(n_{c}=4\)) model and using \(\ell_{1}\) regularization with \(\lambda_{1}=0.9\) and \(\alpha=0.5\).
Figure 3: Comparison of reconstruction methods in terms of SRNR and ASCE for \(3\) non-linearities.
2306.10787
Adaptive Ordered Information Extraction with Deep Reinforcement Learning
Information extraction (IE) has been studied extensively. The existing methods always follow a fixed extraction order for complex IE tasks with multiple elements to be extracted in one instance such as event extraction. However, we conduct experiments on several complex IE datasets and observe that different extraction orders can significantly affect the extraction results for a great portion of instances, and the ratio of sentences that are sensitive to extraction orders increases dramatically with the complexity of the IE task. Therefore, this paper proposes a novel adaptive ordered IE paradigm to find the optimal element extraction order for different instances, so as to achieve the best extraction results. We also propose a reinforcement learning (RL) based framework to generate the optimal extraction order for each instance dynamically. Additionally, we propose a co-training framework adapted to RL to mitigate the exposure bias during the extractor training phase. Extensive experiments conducted on several public datasets demonstrate that our proposed method can beat previous methods and effectively improve the performance of various IE tasks, especially for complex ones.
Wenhao Huang, Jiaqing Liang, Zhixu Li, Yanghua Xiao, Chuanjun Ji
2023-06-19T08:58:56Z
http://arxiv.org/abs/2306.10787v1
# Adaptive Ordered Information Extraction with Deep Reinforcement Learning ###### Abstract Information extraction (IE) has been studied extensively. The existing methods always follow a fixed extraction order for complex IE tasks with multiple elements to be extracted in one instance such as event extraction. However, we conduct experiments on several complex IE datasets and observe that different extraction orders can significantly affect the extraction results for a great portion of instances, and the ratio of sentences that are sensitive to extraction orders increases dramatically with the complexity of the IE task. Therefore, this paper proposes a novel adaptive ordered IE paradigm to find the optimal element extraction order for different instances, so as to achieve the best extraction results. We also propose an reinforcement learning (RL) based framework to generate optimal extraction order for each instance dynamically. Additionally, we propose a co-training framework adapted to RL to mitigate the exposure bias during the extractor training phase. Extensive experiments conducted on several public datasets demonstrate that our proposed method can beat previous methods and effectively improve the performance of various IE tasks, especially for complex ones. 1 Footnote 1: Resources of this paper can be found at [https://github.com/EZ-hwh/AutoExtraction](https://github.com/EZ-hwh/AutoExtraction) ## 1 Introduction Information Extraction (IE) has been studied extensively over the past few decades Grishman (2019). With the rapid development of pre-trained language models, simple IE tasks such as named entity recognition Nadeau and Sekine (2007); Li et al. (2020) have been well solved. However, complex IE tasks with multiple elements to be extracted such as relation extraction Pawara et al. (2017) and event extraction Hogenboom et al. (2011) still need further exploration. Traditional IE methods always follow a _static_ extraction manner, i.e. with a pre-defined fixed element order. For instance, in relation extraction task, Xie et al. (2021) recognizes the relation and then extract the subject and object independently. Luan et al. (2019) extracts all entity together and recognize the relation between them pair-wisely. Wang et al. (2020) formulates joint extraction as a token pair linking problem, which follows implicit predefined order. Lu et al. (2022) designs a unified structured extraction language to encode different IE structures, where the generation order defines the extraction order. These extraction-based IE methods assume that multiple elements of the same instance are independent of each other and thus will not affect each other's extraction results. However, we find that _different extraction orders highly affect the extraction results._ For example, in the A instance in Fig. 1, if we first extract the _Bassoon Concerto in B-flat major_, it will be difficult for succeeding element extraction because the \begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & **\#Ins.** & **\#Sens ins.** & **Ratio** \\ \hline SKE21Xie et al. 
(2021) & 1150 & 100 & 8.70\% \\ NYT(Riedel et al., 2010) & 5000 & 476 & 9.52\% \\ NYT10-HRL(Takanobu et al., 2019) & 4006 & 525 & 13.11\% \\ DuIE(Li et al., 2019) & 15000 & 3618 & 24.12\% \\ DPE(Li et al., 2020) & 1492 & 605 & 40.55\% \\ HacRED(Cheng et al., 2021) & 1500 & 971 & 64.73\% \\ \hline \hline \end{tabular} \end{table} Table 1: The statistic of sensitive instances on different datasets, where an instance is sensitive if different extraction orders produces different extraction results with the same model. Figure 1: An example of complicated information extraction with different extraction order. entity is long-tail. Instead, if we extract _Mozart_ first, it would be much easier to extract the concerto because _Mozart_ appears frequently in the corpus and he composed lots of concerto. According to our observation experiments conducted on several popular relation extraction and event extraction datasets as listed in Table 1, a significant proportion of sentences are sensitive to the extraction order, i.e., different extraction order produces different extraction results. The ratio of sensitive sentences increases dramatically with the complexity of the task on different datasets. What's worse, _the optimal extraction order may not be the same for different instances of the same relation or event type._ For example, for the _composer_ relation, we should extract the composer _Mozart_ first in A instance, but extract the song _Shape of my heart_ first in B instance in Fig 1 because of its frequent appearance in corpus. Based on the observations above, the _static_ extraction paradigm following a pre-defined extraction order is insufficient to achieve reliable extraction. This motivates us to propose a _dynamic_ extraction paradigm, which aims to assign an optimal element extraction order for each instance adaptively. It is nontrivial to dynamically find optimal extraction order for every instance in a dataset. On one hand, the optimal extraction order of an instance depends on not only its schema (relation or event type), but also the context of the sentence. On the other hand, multiple rounds of decisions are required to generate the optimal extraction order, where the decision of each step depends on not only the schema and sentence context, but also the extraction results of previous steps. In this paper, we propose to adopt value-based reinforcement learning Mnih et al. (2015) in determining the optimal extraction order for elements of an instance. Particularly, in deciding the next extraction element for an instance, every of its unextracted elements will be evaluated with a potential benefit score, which is calculated with a BERT-based model. Then, the one with the highest potential benefit score will be selected as the next extraction object. In addition, to mitigate the exposure bias that emerges during the RL-based extractor training phase, a co-training framework is adopted which can simulate the inference environment of the extraction order decision agent to help enhance its performance. It is worth mentioning that our method focuses on generating the optimal extraction order, which is model agnostic and can be applied to various extraction paradigms. Our main contributions are summarized below: * First, we propose the extraction order assignment problem in complicated IE task, which can effectively affect the extractor and the extraction result. 
* Second, we propose an RL-based framework that dynamically generates an optimal extraction order for each sentence, which is model agnostic and can effectively guide the model towards better extraction performance. * Third, we adopt a co-training framework for RL to alleviate the exposure bias from the extractor, which can simulate the inference environment of the extraction order decision agent to help enhance its performance. * Fourth, experiments conducted on several public datasets show that our method outperforms state-of-the-art extraction models. ## 2 Related Work Pipeline Information Extraction (IE) methods split the extraction process into several sub-tasks and optimize each of them. They rely on the task definition, so the framework varies across IE tasks. For the relation extraction task, Wei et al. (2020); Xie et al. (2021); Li et al. (2021) gradually extract the subject, relation and object from the sentence in different orders. For the event extraction task, Yang et al. (2018); Sheng et al. (2021); Yang et al. (2019) first recognize the event type and trigger and then extract the arguments with a sequence tagging model or a machine comprehension model. Joint IE methods combine two or more extraction processes into one stage. Graph-based methods are the mainstream joint IE framework. They recognize entities or text spans and build a graph with co-reference and relation edges Wadden et al. (2019); Luan et al. (2019), entity similarity Xu et al. (2021) or sentence co-occurrence Zhu et al. (2021). Through information propagation on the graph, they better encode the sentence and document and then decode the edges to build the final sub-graph. Generation-based IE methods are another paradigm for joint extraction: Cabot and Navigli (2021); Ye et al. (2021) for relation extraction, Zheng et al. (2019); Hsu et al. (2022); Du et al. (2021) for event extraction, and Lu et al. (2022) for unified information extraction all serialize the structured extraction results into a sentence or a pre-defined template in a fixed order. Apart from the works above, Wang et al. (2020); Shang et al. (2022) propose one-stage joint extraction frameworks that decode the subject and object simultaneously. Recently, reinforcement learning (RL) has been applied to IE tasks. Information extraction was augmented by using RL to acquire and incorporate external evidence in Narasimhan et al. (2016). Feng et al. (2018); Wang (2020) both train an RL agent for instance selection to denoise training data obtained via distant supervision for relation extraction. Takanobu et al. (2019) utilizes a hierarchical reinforcement learning framework to improve the connection between entity mentions and relation types. Zeng et al. (2019) first considers the extraction order of relational facts in a sentence and then trains a sequence-to-sequence model with RL. ## 3 Overview ### Task Definition Fig. 2 gives an example of a complicated information extraction process, which first recognizes the schema and then extracts the argument \(\mathtt{arg_{i}}\) for the corresponding role \(\mathtt{role_{i}}\). Generally speaking, the task can be split into two sub-tasks: relation (event) detection and entity (argument) extraction. We formulate the second task as a multi-argument extraction task. 
Given an instance \(s\), the relation/event type \(rel\) and the corresponding pre-defined schema \(<\mathtt{rel},\mathtt{role_{1}},\mathtt{role_{2}},...,\mathtt{role_{n}}>\), our goal is to find all the arguments in \(s\) and fill them into their corresponding roles in the schema. ### Solution Framework In this work, we model complicated information extraction as a multi-step argument extraction task. In this setting, only one role in the schema is extracted from the instance at a time. With the help of an extractor that can extract the arguments given the additional information and the role name, we extract all arguments and fill the roles step by step. Though there are various roles in a complicated IE task, the difficulties of extracting them are completely different. For example, the role _Reference_point_time_ in Fig. 2 indicates the time in the context and can be extracted without further information. Other roles like _Supplier_Consumer_, however, cannot be identified with a single role name, so they should be scheduled for extraction later. We hope that the extractor first extracts the simplest role, then the next simplest one, and so on. By incrementally adding the previously extracted information, the extractor can keep a good performance on extracting the difficult ones. To achieve this goal, we need to arrange a reasonable extraction order for the extractor. However, it is hard to specify the whole extraction order at once because it depends on not only the schema and context, but also the previously extracted arguments. So we regard the extraction order decision as a _Markov decision process_, where we can dynamically decide the extraction order over multiple rounds. Clearly, reinforcement learning is a natural way to handle this modeling. We adopt a double deep Q-network (DQN) with a prioritized replay buffer as the RL agent. ## 4 Framework ### Extractor To handle extraction tasks with different extraction orders, we have to use a powerful extractor. GlobalPointer, proposed by Su et al. (2022), is an ideal choice as it can identify both nested and non-nested entities, and even output the scores of the entities. We first construct the input sequence consisting of the extracted elements, the role name and the sentence. For an input sequence with \(N\) tokens, the BERT-based encoder outputs a vector sequence \([\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{N}]\). Following the computation of an attention matrix, we use one-head self-attention to compute the score matrix as the output of the decoder. More specifically, we first convert the vectors \(\mathbf{h}_{i}\) to vectors \(\mathbf{q}_{i}\) and \(\mathbf{k}_{i}\) with two linear layers. \[\begin{split}\mathbf{q}_{i}&=\mathbf{W}^{q}\mathbf{h}_{i}+\mathbf{b}^{q},\\ \mathbf{k}_{i}&=\mathbf{W}^{k}\mathbf{h}_{i}+\mathbf{b}^{k},\end{split} \tag{1}\] where \(\mathbf{W}\) and \(\mathbf{b}\) are parameters of the linear layers. Figure 2: An example of Complicated Information Extraction. Then we compute the scores of each span with the relative position embeddings (RoPE) proposed by Reformer Kitaev et al. (2020). The transformation matrix \(\mathbf{R}_{i}\), which is composed of sine and cosine functions, satisfies the property that \(\mathbf{R}_{i}^{\top}\mathbf{R}_{j}=\mathbf{R}_{j-i}\). By introducing relative position embeddings, GlobalPointer is more sensitive to the length and span of entities, so it can better distinguish real entities. 
\[\begin{split} s_{\alpha}(i,j)&=(\mathbf{R}_{i}\mathbf{q}_{i})^{\top}\left(\mathbf{R}_{j}\mathbf{k}_{j}\right)\\ &=\mathbf{q}_{i}^{\top}\mathbf{R}_{i}^{\top}\mathbf{R}_{j}\mathbf{k}_{j}\\ &=\mathbf{q}_{i}^{\top}\mathbf{R}_{j-i}\mathbf{k}_{j}\end{split} \tag{2}\] To better solve the label imbalance problem, we use the following loss to train our extraction model. \[\begin{split} L=&\log\left(1+\sum_{(i,j)\in P_{\alpha}}e^{-s_{\alpha}(i,j)}\right)+\\ &\log\left(1+\sum_{(i,j)\in Q_{\alpha}}e^{s_{\alpha}(i,j)}\right)\end{split} \tag{3}\] where \(P_{\alpha}\) is the head-tail set of all spans of query \(\alpha\), and \(Q_{\alpha}\) is the head-tail set of the remaining valid spans in the text. In the decoding phase, all spans \(t_{[i;j]}\) for which \(s_{\alpha}(i,j)>0\) are considered as entities that match the conditions. To better fit the setting of extraction tasks that extract entities conditioned on the schema and role name, we enumerate all the extraction orders and match the corresponding conditions with the extraction results to construct training instances. ### MDP for extraction order We regard the multi-role extraction order decision process as a _Markov decision process_ (MDP). Fig. 3 shows the whole extraction process of an instance. In each step, the agent takes the instance and the extracted arguments as input, and chooses a previously unselected role as the action. The environment takes the selected role and constructs the input sequence for the extractor. After collecting the extraction results to fill the role and the extraction scores to assign the reward, the environment transitions to the new state(s). After all roles have been selected over multiple rounds, we convert the whole extraction history into the structured output. **State** We use \(s_{t}\) to denote the state of sentence \(x\) at extraction time step \(t\). The state \(s_{t}\) consists of the extraction schema \(\mathcal{S}\), the already extracted arguments \(\hat{y}^{<t}\) and the sentence \(x\). \[s_{t}=(\mathcal{S},\hat{y}^{<t},x) \tag{4}\] The state describes the elements extracted in past steps. In each step, the environment takes the role selected by the agent and extracts the corresponding arguments in the sentence with the help of the extractor described in Section 4.1. Figure 3: Reinforcement learning framework of adaptive ordered decision for IE. **Action** The action of the RL model is the next role to extract in an instance. Unlike traditional RL environments, the action space in our model is continuously reduced at every time step. We restrict the initial action space \(\mathcal{A}_{0}\) to the set of roles in the schema \(\mathcal{S}\). After role \(a_{0}\) is selected at time step \(0\), the extractor in the environment extracts the argument and its confidence score from \(s\). The action \(a_{0}\) is then removed from \(\mathcal{A}_{0}\) to derive the next action space \(\mathcal{A}_{1}\). The derivation of the action space can be formalized as below. \[\mathcal{A}_{t}=\begin{cases}\mathcal{S},&t=0\\ \mathcal{A}_{t-1}-\{a_{t-1}\},&0<t<|\mathcal{S}|\end{cases} \tag{5}\] **Reward** The reward is a vitally important component of RL training; it is the incentive mechanism that encourages the agent to plan better over the episode. For our task definition, there is a simple reward assignment: we could extract all the arguments in the sentence and then assign a reward in the terminal state indicating whether the extracted tuple matches the gold label. 
However, there is an obvious issue: such a reward depends mainly on the extractor we use. If the extractor is too strong, the results extracted following any extraction order are correct. If the extractor is too weak, the results extracted following any extraction order are incorrect. In both cases, different extraction orders cannot affect the final reward. Therefore, to better distinguish the impact of different extraction orders on the extractor, we define the reward as the score the extractor assigns to the extraction results. Though the results recognized by the extractor in a single step may be the same, the score differs under different conditions, as described in Section 4.1. We regard the score as the difficulty of extracting the argument from the sentence: a high score indicates that the argument is easy to extract. The reward of our RL environment can be described as below. \[\mathcal{R}(s,a)=Extractor_{score}(a|s) \tag{6}\] where \(s\) stands for the state and \(a\) stands for the role chosen by the agent to be extracted next.
```
Input: D - empty replay buffer; θ - initial network parameters; θ⁻ - copy of θ
Input: N_b - training batch size; N⁻ - target network replacement frequency
for epoch = 1, ..., E do
    Sample an instance s with schema S from the dataset; N_step = number of roles in S
    for t = 1, ..., N_step do
        p ← Random(0, 1)
        if p < 1 - ε then a_t ← argmax_a Q(s_t, a; θ) else a_t ← Random-Sample(A_t)
        s_{t+1}, r_t ← Transition(s_t, a_t); store (s_t, a_t, r_t, s_{t+1}) in D
        Sample a random mini-batch of N_b transitions (s_t, a_t, r_t, s_{t+1}) from D
        if s_{t+1} = done then y_t = r_t else y_t = r_t + γ max_{a'} Q(s_{t+1}, a'; θ⁻)
        Update parameters θ on the loss L(θ) = (y_t - Q(s_t, a_t; θ))²
        Replace target parameters θ⁻ ← θ every N⁻ steps
    end for
end for
```
**Algorithm 1** The full details of our training phase for the Double DQN agent with \(\epsilon\)-greedy exploration ### Double DQN For traditional DQN (Mnih et al., 2013), the learning objective follows the Bellman equation below. \[Q(s,a)=\mathcal{R}(s,a)+\gamma\max_{a^{\prime}\in\mathcal{A}}Q(s^{\prime},a^{\prime}) \tag{7}\] In our task setting, the agent chooses an unextracted role and the environment returns the extracted argument with the corresponding score. Since the extractor extracts all the entities that meet the conditions at once, it is possible to obtain zero to multiple answers in one step, and each extracted result forms a separate state. Due to this splitting of the state, we need to make corresponding adjustments to the original learning objective. Inspired by (Tavakoli et al., 2018), we introduce a new learning objective adapted to the branching reinforcement learning environment by replacing the \(Q\) value of the next state with the average over the list of next states. \[Q(s,a)=\mathcal{R}(s,a)+\gamma\frac{1}{|S_{(s,a)}|}\sum_{s^{\prime}\in S_{(s,a)}}\max_{a^{\prime}\in\mathcal{A}}Q(s^{\prime},a^{\prime}) \tag{8}\] where \(S_{(s,a)}\) is the set of successor states derived from state \(s\) with action \(a\), and \(\gamma\) is the discount factor. 
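To make the branching target in Eq. (8) and the \(\epsilon\)-greedy selection of Algorithm 1 concrete, the following is a minimal Python sketch; it is not the authors' released code, and the helpers `q_value`, `q_target` and the fixed reward value are stand-ins for the BERT-based value networks and the extractor score of Eq. (6).

```python
# Minimal sketch (illustration only) of epsilon-greedy role selection and the
# branching TD target of Eq. (8); `q_value`/`q_target` stand in for the online
# and target Q-networks, and the reward stands in for the extractor score.
import random

GAMMA = 0.9      # discount factor
EPSILON = 0.1    # exploration rate


def select_role(state, remaining_roles, q_value):
    """Epsilon-greedy choice over the shrinking action space A_t."""
    if random.random() < 1.0 - EPSILON:
        return max(remaining_roles, key=lambda role: q_value(state, role))
    return random.choice(remaining_roles)


def branching_td_target(reward, next_states, remaining_roles, q_target):
    """Eq. (8): average the max target-Q over the successor states created
    when the extractor returns several arguments for a single role."""
    if not next_states or not remaining_roles:   # terminal: no bootstrap term
        return reward
    bootstrap = sum(
        max(q_target(s_next, role) for role in remaining_roles)
        for s_next in next_states
    ) / len(next_states)
    return reward + GAMMA * bootstrap


if __name__ == "__main__":
    schema = ["time", "publisher", "product"]
    state = {"sentence": "...", "filled": {}}

    def q_value(s, role):   # toy stand-in for Q(s, a; theta)
        return {"time": 0.9, "publisher": 0.5, "product": 0.2}[role]

    def q_target(s, role):  # toy stand-in for Q(s, a; theta^-)
        return q_value(s, role)

    role = select_role(state, schema, q_value)
    # Two candidate arguments from one extraction step -> two successor states.
    next_states = [dict(state, filled={role: "Oct 14"}),
                   dict(state, filled={role: "October"})]
    remaining = [r for r in schema if r != role]
    y = branching_td_target(0.8, next_states, remaining, q_target)
    print(role, round(y, 3))
```

Averaging over the successor states keeps the target well defined when a single extraction step yields several arguments, which is exactly the branching situation described above.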
To avoid suffering from the overestimation of the action values, we adopt the Double DQN (DDQN) algorithm (Van Hasselt et al., 2016) that uses the current Q-network to select the next greedy action, but evaluates it using the target. At the same time, to enable more efficient learning from the experience transitions, we adopt the prioritized experience replay buffer (Schaul et al., 2015) to replay important experience transitions. We define the loss to be the expected value of the mean squared TD error. \[\mathcal{L}=\mathbb{E}_{(s,a,r,s^{\prime})\sim\mathcal{D}}\left[y-Q(s,a) \right]^{2} \tag{9}\] where \(\mathcal{D}\) denotes the prioritized experience replay buffer and \(y\) denotes the estimated valued by the target network. To evaluate the value of (state, action) pair, we use Transformer based model as encoder that can take the state and action as input and encode the pair of every (state, action) pair. Specifically, we use BERT(Devlin et al., 2018) for English and RoBERTaLiu et al. (2019) for Chinese. Formally, for an input state \(\textbf{s}_{t}=[t_{1},t_{2},...,t_{N}]\) and action \(\textbf{a}_{t}=[a_{1},...,a_{M}]\), where the action is the candidate extraction role name, we form the sequence \(x=\left[\texttt{[CLS]},\textbf{a}_{t},\texttt{[SEP]},\textbf{s}_{t},\texttt{ [SEP]}\right]\). The BERT encoder converts these tokens into hidden vector \([\textbf{h}_{1},\textbf{h}_{2},...,\textbf{h}_{M+N}]\), where \(\textbf{h}_{i}\) is a \(d\)-dimension vector and \(d=768\) in the Transformer based structure. To evaluate the \(Q(s,a)\) value for the corresponding state and action, we take \(\textbf{h}_{0}\), which is the encoded vector of the first token [CLS] as the representation of the state-action pair. The final output of the value evaluation module \(\hat{\textbf{y}}\) is define in Eq. \[\hat{\textbf{y}}=\textbf{W}\textbf{h}_{0}+\textbf{b} \tag{10}\] where **W** and **b** are trainable model parameters, representing weights and bias of the linear transformation. ## 5 Experiments ### Datasets We evaluate our methods on several public and accessible complicated information extraction datasets, including NYT, NYT10-HRL, SKE21, HacRED, DuIE, DuEE, which are challenging for many novel extraction methods. We give brief introduction to these dataset in appendix. ### Comparing methods and Metrics We compare our methods with several models on the same dataset, including NovelTaggingZheng et al. (2017), CoTypeRen et al. (2017), HRLTakanobu et al. (2019), and recent PLM-based model CasRelWei et al. (2020), TPlinkerWang et al. (2020) and ReReXie et al. (2021). We choose the exact match that an extracted relational triple _(subject, relation, object)_ is regarded as correct only if the relation and the spans of both subject and object are correct. We report the standard micro precision (Prec.), recall (Reca.) and F1 scores for the relation extraction experiments. And for the event extraction task (only DuEE in our experiment), we report the word-level metrics, which considers the correctness of the arguments in word level. We give the detail of this metric in appendix. ### Main Results Because we only consider the extraction order assignment in every instance, we add a classification module to first recognize the relation in sentence, \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{NYT} & \multicolumn{3}{c}{NYT10-HRL} & \multicolumn{3}{c}{HacRED} & \multicolumn{3}{c}{DuIE} \\ \cline{2-13} & Prec. & Reca. & F1 & Prec. & Reca. & F1 & Prec. & Reca. 
& F1 \\ \hline Sequence order & **94.74** & 81.08 & 87.38 & **66.62** & 78.67 & 72.15 \\ Random order & 92.75 & 80.63 & 86.27 & 65.94 & 79.41 & 72.05 \\ Adaptive order & 93.37 & **82.43** & **87.56** & 66.31 & **80.56** & **72.74** \\ \hline \hline \end{tabular} \end{table} Table 4: Extraction Result on complicated extraction case with different extraction order decision. HacRED and SKE21 are both tested on at least 5 triples of the same relation. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{NYT} & \multicolumn{3}{c}{NYT10-HRL} & \multicolumn{3}{c}{HacRED} & \multicolumn{3}{c}{SUKE21} \\ \cline{2-13} & Prec. & Reca. & F1 & Prec. & Reca. & F1 & Prec. & Reca. & F1 & Prec. & Reca. & F1 \\ \hline NovelTagging \#Zheng et al. (2017) & 62.4 & 31.7 & 42.0 & 59.3 & 38.1 & 46.4 & 30.51 & 2.91 & 5.31 & - & - & - \\ CoType \#Ren et al. (2017) & 42.3 & 51.1 & 46.3 & 48.6 & 38.6 & 43.0 & - & - & - & - & - & - \\ CopyR \#Zeng et al. (2018) & 61.0 & 56.6 & 58.7 & 56.9 & 45.2 & 50.4 & 13.11 & 9.64 & 11.12 & - & - & - \\ HRL \#Takanobu et al. (2019) & - & - & - & 71.4 & 58.6 & 64.4 & - & - & - & - & - & - & - \\ CasRel \#Wei et al. (2020) & 89.7 & 89.5 & 89.6 & 77.7 & 68.8 & 73.0 & 55.24 & 43.78 & 48.85 & - & - & - & - \\ TPLinker \#Wang et al. (2020) & 91.3 & 92.5 & 91.2 & 81.19 & 65.41 & 72.45 & - & - & - & - & - & - & - \\ ReReXie et al. (2021) & - & - & - & - & 75.45 & 72.50 & 73.95 & - & - & - & - & - & - & - \\ \hline \hline CasRelWei et al. (2020) * & 87.77 & 90.79 & 89.25 & 76.59 & 68.90 & 72.54 & 57.19 & 44.99 & 50.36 & 87.21 & 75.23 & 80.78 \\ TPLinkerWang et al. (2020) * & 88.61 & 92.29 & 90.41 & **80.37** & 65.11 & 71.94 & **58.96** & 55.78 & 57.33 & 83.86 & 84.77 & 84.32 \\ ReReXie et al. (2021) * & 85.68 & 92.45 & 88.93 & 74.43 & 68.46 & 71.32 & 46.42 & 61.37 & 52.86 & 85.65 & 86.37 & 86.01 \\ Adaptive Order & **88.92** & **92.83** & **90.84** & 77.21 & **69.81** & **73.32** & 58.36 & **72.43** & **64.64** & **87.99** & **86.87** & **87.42** \\ \hline \hline \end{tabular} \end{table} Table 2: The main evaluation results of different models on NYT, NYT10-HRL, HacRED and SKE21. The results with only one decimal are quoted from Wei et al. (2020). The methods with * are based on our re-implementation. The methods with # denote that the metrics are partially matched. Best exact (partial) match F1 scores are marked **bold** (underlined). \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{NYT} & \multicolumn{3}{c}{NYT10-HRL} & \multicolumn{3}{c}{HacRED} & \multicolumn{3}{c}{DuIE} & \multicolumn{3}{c}{DuEE} \\ \cline{2-13} & Prec. & Reca. & F1 & Prec. & Reca. & F1 & Prec. & Reca. & F1 & Prec. & Reca. & F1 \\ \hline Sequence order & 92.83 & **96.19** & 94.48 & **84.66** & 85.42 & 85.04 & **58.18** & 79.76 & 67.28 & 52.14 & 62.07 & 56.68 & 72.70 & 70.47 & 71.57 \\ Random order & 93.05 & 95.97 & 94.49 & 84.42 & 85.08 & 84.75 & 57.82 & 80.02 & 67.13 & 52.53 & 62.11 & 56.92 & 72.86 & 71.10 & 71.97 \\ Adaptive order & **93.15** & 96.14 & **94.62** & 84.59 & **85.66** & **85.13** & 57.88 & **81.92** & **67.62** & **53.33** & **62.87** & **57.71** & **73.86** & **72.14** & **72.99** \\ \hline \hline \end{tabular} \end{table} Table 3: Extraction Result on different dataset with different extraction order decision. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{DuIE} & \multicolumn{3}{c}{DuEE} \\ \cline{2-13} & Prec. & Reca. & F1 & Prec. & Reca. 
& F1 \\ \hline Sequence order & 44.88 & 48.67 & 46.70 & 72.41 & 69.30 & 70.82 \\ Random order & 44.25 & 47.91 & 46.01 & 72.73 & 70.89 & 71.80 \\ Adaptive order & **46.60** & **50.76** & **48.59** & **73.88** & **71.69** & **72.76** \\ \hline \hline \end{tabular} \end{table} Table 5: Extraction Result on complicated extraction case with different extraction order decision. DuIE is restricted in at least 3 roles and DuEE is restricted in at least 5 roles. and then extract the subject and object with the extraction order RL agents assign. According to the result in Table 2, compared to the main stream relation extraction methods, ours method achieves improvement on both precision, recall and F1 score. ### Extraction order To demonstrate the effectiveness of our methods, we also conduct an experiment on different extraction order decision strategy on more challenging event argument extraction task. In this experiment, we mainly focus on the performance of extraction result in different extraction order, so ground-truth schema (relation) for every instance would be offered beforehand, we only test the correctness on the roles. We also provide the result of extracting arguments in a pre-defined order and random order, as baselines. Table 3 shows the result of different extraction order in different dataset, and our method achieve the best in every dataset. Compare to the standard relation extraction task, our method perform better on complex information extraction task (DuIE and DuEE). ### Complicated Extraction settings To further demonstrate the advantage of dynamically order extraction decision by RL, we conduct experiment with more complex extraction tasks, which contains more tuples or more arguments. For the former, we limit the minimum number of extraction tuples, and we limit the minimum number of extraction roles for the latter. Table 4 and 5 show that compared to fixed order extraction or random order extraction method, our framework has a more significant improvement over the original metrics. This is intuitive and reasonable, the extractor is more sensitive to the extraction order in more complex sentence. Besides, compare to the Table 4 and 5, we can find that our method improves the latter settings more significantly. It is because increasing the number of tuples does not increase the length of the extracted path, but only increase the difficulty of single-step extraction by the extractor. In contrast, the increase of the role number leads to an increase in the length of the extraction path, which makes the decision of extraction order more difficult. The results once again prove that the extraction order matters in complicated extraction. ### Case Study With taking RL agent into consideration, we can easily observe the extraction order in different instance. Table 6 show the instances that the extraction process in different sentence. Though two instances share the same event schema _Product release_, the RL agent assign different extraction order dynamically. The first sentence contains an obvious element of time, while the second does not, so our methods put the extraction order of time from the first to the last. The case strongly demonstrates the effectiveness of our method. 
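To illustrate how such instance-specific orders arise at inference time, here is a minimal sketch of the adaptive-order extraction loop; it is not the authors' implementation, and `q_value` and `extract` are hypothetical stand-ins for the trained value network and the GlobalPointer-based extractor.

```python
# Minimal sketch (illustration only) of greedy adaptive-order inference:
# at each step the unfilled role with the highest Q-value is chosen, the
# extractor fills it, and the result is added to the condition for the next step.

def adaptive_order_extract(sentence, schema_roles, q_value, extract):
    filled = {}                       # role -> extracted argument
    remaining = list(schema_roles)    # shrinking action space A_t
    order = []
    while remaining:
        state = (tuple(sorted(filled.items())), sentence)
        role = max(remaining, key=lambda r: q_value(state, r))
        argument, _score = extract(sentence, role, filled)
        filled[role] = argument
        remaining.remove(role)
        order.append(role)
    return order, filled


if __name__ == "__main__":
    # Toy stand-ins, loosely following the first instance of Table 6.
    def q_value(state, role):
        return {"Time": 0.9, "Publisher": 0.6, "Product": 0.3}[role]

    def extract(sentence, role, context):
        toy = {"Time": "October 14", "Publisher": "Redmi", "Product": "Redmi 8"}
        return toy[role], 0.9

    order, result = adaptive_order_extract(
        "On October 14, Redmi officially released Redmi 8.",
        ["Time", "Publisher", "Product"], q_value, extract)
    print(order)    # ['Time', 'Publisher', 'Product']
    print(result)
```

For the second instance of Table 6, where no time expression is present, a trained value network would rank _Time_ low and the same loop would push it to the end of the order.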
\begin{table} \begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline \hline **Instance \#1** & **Extraction Process** & **Extraction Result** \\ \hline On October 14, Redmi officially released two new entry-level smartphones, Redmi 8 and Redmi 8A. & _Time_: October, _Public_: Redmi, _Product_: Redmi 8A; _Time_: October, _Public_: Redmi, _Product_: Redmi 8A \\ \hline Huami Technology releases Midong Health Watch and AMAZFIT Smart Watch 2. & _Time_: [None], _Public_: Huami Technology, _Product_: Midong Health Watch; _Time_: [None], _Public_: Midong Health Watch, _Product_: AMAZFIT Smart Watch 2 \\ \hline \hline \end{tabular} \end{table} Table 6: Instances of extracting a complicated schema through dynamically assigned extraction orders. ## 6 Conclusion In this paper, we propose a novel adaptive ordered IE paradigm to find the optimal element extraction order for different instances. We propose an RL-based framework to generate the optimal extraction order for each instance dynamically, and a co-training framework adapted to RL to alleviate the exposure bias from the extractor. Extensive experiments show that our proposed method outperforms previous methods and effectively improves the performance of various IE tasks, especially complex ones. ## Acknowledgements This research is funded by National Key Research and Development Project (No. 2020AAA0109302), National Natural Science Foundation of China (No. 62102095, 62072323), Shanghai Science and Technology Innovation Action Plan (No. 22511104700, 22511105902), Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103), and Science and Technology Commission of Shanghai Municipality Grant (No. 22511105902). ## Limitations Despite the remarkable improvement on complicated information extraction, our method still has some limitations. First, due to the multi-round argument extraction modeling, we discard the parallelism in element extraction. Furthermore, the MDP process interacting with the DQN further increases the computational load of the extraction process. So, compared to other methods, our framework is relatively slow at the inference stage. Second, though our framework can easily be adapted to different extraction tasks with different schemas, we still need an extra module to identify the relations (event types) in the instance beforehand. Because of the different task definitions and modeling (an extraction task versus a classification task), and although recognizing them potentially involves order decision making as well, this is beyond the scope of this paper. ## Ethics statement We hereby declare that all authors of this article are aware of and adhere to the provided ACL Code of Ethics and honor the code of conduct. Use of Human Annotations: Human annotations are only utilized in the early stages of methodological research to assess the feasibility of the proposed solution. All annotators have provided consent for the use of their data for research purposes. We guarantee the security of all annotators throughout the annotation process, and they are justly remunerated according to local standards. Human annotations are not employed during the evaluation of our method. Risks: The datasets used in this paper have been obtained from public sources and anonymized to protect against any offensive information. Though we have taken measures to do so, we cannot guarantee that the datasets do not contain any socially harmful or toxic language.
2301.04755
Traveling wave enantio-selective electron paramagnetic resonance
We propose a novel method for enantio-selective electron paramagnetic resonance spectroscopy based on magneto-chiral anisotropy. We calculate the strength of this effect and propose a dedicated interferometer setup for its observation.
M. Donaire, N. Bruyant, G. L. J. A. Rikken
2023-01-11T23:31:13Z
http://arxiv.org/abs/2301.04755v1
# Traveling wave enantio-selective electron paramagnetic resonance ###### Abstract We propose a novel method for enantio-selective electron paramagnetic resonance spectroscopy based on magneto-chiral anisotropy. We calculate the strength of this effect and propose a dedicated interferometer setup for its observation. pacs: _Introduction_ Electron paramagnetic resonance (EPR) spectroscopy is a powerful technique to study the local environment and the dynamics of spin-carrying entities, like transition metal ion complexes and organic radicals [1]. Also, those systems that do not intrinsically carry a spin can still be studied by EPR through spin-labelling, i.e., by selectively adding-on a spin carrying probe [2]. Many of the systems studied by EPR are chiral, i.e., they exist in two non-superimposable forms ( enantiomers) that are each other's mirror image, particularly in biochemistry where enzymes, metalloprotein, membranes, etc., are chiral subjects of intense EPR activity [3]. However, EPR is universally believed to be blind to chirality. Here we present the paradigm shift that EPR in the proper configuration is intrinsically sensitive to chirality because of magneto-chiral anisotropy (MChA). MChA corresponds to an entire class of effects in chiral media under an external magnetic field, which show an enantio-selective difference in the propagation of any unpolarized flux that propagates parallel or anti-parallel to the magnetic field. This difference has its origin in the simultaneous breaking of parity and time-reversal symmetries as a result of the chirality of the media and the magnetization induced by the external magnetic field, respectively. Generally, such a difference manifests itself in the velocity or the attenuation of the flux. MChA has been predicted since 1962 in the optical properties of chiral systems in magnetic fields [4; 5; 6; 7; 8], and was finally observed in the 1990's [9; 10; 11]. Nowadays it is observed across the entire electromagnetic spectrum, from microwaves [12] to X-rays [13]. The existence of MChA was further generalized to electrical transport [14] (in carbon nano tubes [15], organic conductors [16], metals [17; 18; 19] and semiconductors [20]), to sound propagation [21] and to dielectric properties [22]. EPR is basically a strongly resonant form of magnetic circular dichroism and magnetic circular birefringence [23], effects well known in the optical wavelength range, where they however only represent small perturbations of the optical properties of the medium. By analogy, one should expect that MChA can manifest itself also in EPR of chiral media. This expectation can be formalized by the observation that the EPR transition probability \(P\) induced by a propagating electromagnetic field between the spin levels of a chiral medium in a magnetic field, is allowed by parity and time-reversal symmetry to have the form \[P^{D/L}(\omega,\hat{\mathbf{k}},\mathbf{B}_{0})=P_{0}(\omega,B_{0})[1+\gamma^{ D/L}(\omega)\hat{\mathbf{k}}\cdot\mathbf{B}_{0}]. \tag{1}\] In this equation, \(\mathbf{B}_{0}\) is an external and constant magnetic field, \(P_{0}\) is the leading order transition probability between the Zeeman levels, common to both enantiomers, the handedness of the medium is represented by \(D-\) right and \(L-\) left, with \(\gamma^{D}=-\gamma^{L}\), and \(\hat{\mathbf{k}}\) is a unitary vector in the direction of the wave vector of the electromagnetic field driving the transition whose frequency \(\omega\) is of the order of \(\mu_{B}B_{0}/\hbar\). 
This shows that the EPR transition probability is enantioselectively modified when probed by an electromagnetic wave travelling parallel or anti-parallel to to the magnetic field, an effect that we shall call traveling wave enantioselective EPR (TWEEPR). TWEEPR is quantified by the anisotropy factor \(g_{T}^{D/L}\), which represents the relative difference between the transition probabilities of both enantiomers, \[g_{T}^{D/L}\equiv\frac{[P^{D/L}(\omega,\widehat{\mathbf{k}},\mathbf{B}_{0})-P^ {D/L}(\omega,\widehat{\mathbf{k}},-\mathbf{B}_{0})]}{[P^{D/L}(\omega,\widehat {\mathbf{k}},\mathbf{B}_{0})+P^{D/L}(\omega,\widehat{\mathbf{k}},-\mathbf{B}_ {0})]}=\gamma^{D/L}\hat{\mathbf{k}}\cdot\mathbf{B}_{0}. \tag{2}\] As spin is related to the absence of time-reversal symmetry, and chirality is related to the absence of parity symmetry, one might expect that the two are decoupled and that \(g_{T}^{D/L}\) is vanishingly small, thereby reducing TWEEPR to an academic curiosity. However, below we will show through a model calculation that, because of the ubiquitous spin-orbit coupling, TWEEPR represents a significant and measurable fraction of the EPR transition probability for realistic chiral systems and that its anisotropy factor is not much smaller than that of optical MChA. Lastly, we will describe a dedicated TWEEPR setup. _The model_ As for the spin system of our model calculation of TWEEPR, without loss of generality, we have chosen a crystalline quasi-octahedral Cu(II) chiral complex because this ion is one of the most extensively studied systems by EPR, it has the largest spin-orbit coupling among the first row transition metals, and it has the simplest energy diagram. Its electromagnetic response is attributed to a single unpaired electron that, in the \(3d^{9}\) configuration of the Cu(II) complex, behaves as a hole of positive charge \(+e\). We model the binding potential of the hole by that of an isotropic harmonic oscillator that represents the rest of the ion, and is perturbed by the chiral potential \(V_{C}^{D/L}\) that results from its interaction with the chiral environment of the crystal lattice, and by the spin-orbit coupling. In turn, as we will show, this model allows us to find analytic expressions for both the optical and the EPR magnetochiral anisotropy parameters, \(g_{O}^{D/L}\) and \(g_{T}^{D/L}\), respectively, in terms of the parameters of the model, both being proportional to the chiral coupling. Our model can thus relate \(g_{T}^{D/L}\) to its optical analogue \(g_{O}^{D/L}\). The latter is experimentally determined for several systems. In particular, for CsCuCl\({}_{3}\) both MChD [24] and EPR [25] have been reported. This approach thereby results in a generic analytical expression for \(g_{T}^{D/L}\) in terms of the parameters of our model, and in a semi-empirical and quantitative prediction for \(g_{T}^{D/L}\) for this particular material in terms of its experimental optical MChD. The latter can be extended to any material for which optical MChD has been determined. Below we detail our model, which is a variant of Condon's model for optical activity [26; 27], and its extension to optical magnetochiral birefringence [28]. 
The Hamiltonian describing the system is given by \(H=H_{0}+V_{C}^{D/L}+V_{SO}\), with \[H_{0}=\frac{p^{2}}{2m_{e}}+\frac{m_{e}\omega_{0}^{2}r^{2}}{2}-\mu_{B}({\bf L}+ g{\bf S})\cdot{\bf B}_{0}, \tag{3}\] \[V_{C}^{D/L}=C^{D/L}xyz,\quad V_{SO}=\lambda{\bf L}\cdot{\bf S}, \tag{4}\] where \({\bf r}=(x,y,z)\) and \({\bf p}\) are the position and kinetic momentum vectors of the harmonic oscillator, \(\omega_{0}\) is its natural frequency, \({\bf L}\) and \({\bf S}\) are their orbital and spin angular momentum operators, respectively, \(C^{D}=-C^{L}\) is the right/left-handed chiral coupling, \(g\simeq 2\) is the Lande factor, \(\lambda\simeq-0.1\) eV is the spin-orbit (SO) coupling parameter, and \({\bf B}_{0}\equiv B_{0}\hat{\bf z}\) is the external magnetic field. The interaction with an electromagnetic plane-wave of frequency \(\omega\), propagating along \({\bf B}_{0}\), is given in a multipole expansion by \[W=-e{\bf r}\cdot{\bf E}_{\omega}(t)/2-\mu_{B}({\bf L}+g{\bf S})\cdot{\bf B}_{ \omega}(t)/2+{\rm h.c.}, \tag{5}\] where \({\bf E}_{\omega}(t)=i\omega{\bf A}_{\omega}e^{-i\omega t}\) and \({\bf B}_{\omega}(t)=i\bar{n}{\bf k}\wedge{\bf A}_{\omega}e^{-i\omega t}\) are the complex-valued electric and magnetic fields in terms of the electromagnetic vector potential, \({\bf A}_{\omega}\), evaluated at the center of mass of the ion. Note that the field incident on a molecule of the complex is the effective field which propagates throughout the medium with an effective index of refraction \(\bar{n}\). Hence it is the effective wavevector \(\bar{n}{\bf k}\) that appears. In our model, the \(3d\) orbitals are represented by linear combinations of the \(n=2\), \(l=2\) states of the isotropic harmonic oscillator -see Appendix A. Essential to the original Condon model was the anisotropy of the harmonic oscillator, which removes all axis and planes of symmetry. In our model, such an anisotropy is provided by the interaction of the ion with the surrounding ligands of the complex, which in the case of CsCuCl\({}_{3}\) form an quasi-octahedral structure. In the first place, that interaction causes the elongation of the \(3d\) orbitals which lie along the \(z\)-axis, opening an optical gap \(\Delta_{0}\). Also, in conjunction with the Jahn-Teller distortion and the helical configuration of the Cu(II) ions, it removes the degeneracy between the orbitals lying on the \(xy\) plane and generates a small energy gap \(\delta\) between the states \(d_{zx}\) and \(d_{yz}\), with \(\lambda\gg\delta\). The ground state of the Cu(II) ion in the octahedral configuration \(\Psi\) is, at finite temperature and subject to a magnetic field, a linear combination of the doublet \(d_{x^{2}-y^{2}}\otimes\{\uparrow,\downarrow\}\), \[|\Psi\rangle=|d_{x^{2}-y^{2}}\rangle\otimes(\cos\theta/2\uparrow+\sin\theta/2 \downarrow), \tag{6}\] where \(\theta\), being a function of \(B_{0}\) and the temperature, is the angle between the magnetization of the sample and \({\bf B}_{0}\). For EPR, spin-flip takes place at a resonance frequency \(\Omega=g\mu_{B}B_{0}/\hbar\) when the up \(\uparrow\) component of \(\Psi\) turns into \(|\Phi\rangle=|d_{x^{2}-y^{2}}\rangle\otimes\downarrow\), with probability proportional to \(\cos^{2}\theta/2\), and the down \(\downarrow\) component turns into \(|\Phi^{\prime}\rangle=|d_{x^{2}-y^{2}}\rangle\otimes\uparrow\) with probability proportional to \(\sin^{2}\theta/2\). 
The net absorption probability is thus proportional to \(\cos^{2}\theta/2-\sin^{2}\theta/2=\cos\theta\) and hence to the degree of magnetization along \({\bf B}_{0}\). At \(B_{0}=1\)T, \(\Omega\) corresponds to an energy 150 \(\mu\)eV. In contrast, optical absorption happens at an energy \(\Delta_{0}\simeq 1.5\) eV towards the quadruplet \(\{d_{xx},d_{yz}\}\otimes\{\uparrow,\downarrow\}\). Applying standard perturbation theory with the spin-orbit and the Zeeman potentials upon this quasidegenerate quadruplet, we end up with the four states \(\phi_{i}\), \(i=1,..,4\), as appear in the energy diagram represented in Fig.1 -a brief description can be found in the Appendix A. It is of note that these states play a crucial role in the E1M1 transitions of both EPR and its optical analogue. _Results_ Using up to fourth order time-dependent perturbation theory on \(V_{SO}\), \(V_{C}\) and \(W\), in the adiabatic regime, our model allows us to calculate the standard EPR and optical transition probabilities, as well as the MChA corrections to both of them, with the latter two being both proportional to \(C^{D/L}\). As for \(g_{T}^{D/L}\), the probability difference in the denominator of Eq.(2) is an enantioselective E1M1 transition, whereas the denominator equals in good approximation the leading order M1M1 transition, \(g_{T}^{D/L}=P_{E1M1}^{D/L}/P_{M1M1}|_{\omega\approx\Omega}\), with \[P_{M1M1}|_{\omega\approx\Omega} =\hbar^{-2}\Big{|}\!\int_{0}^{\mathcal{T}}\mathrm{d}te^{-i( \mathcal{T}-t)(\Omega/2-i\Gamma/2)}e^{-it(\omega-\Omega/2)}\langle\Phi|-g\mu_ {B}\mathbf{S}\cdot\mathbf{B}_{\omega}|\Psi\rangle\Big{|}^{2}-\hbar^{-2}\Big{|} \!\int_{0}^{\mathcal{T}}\mathrm{d}te^{-i(\mathcal{T}-t)(2\omega-\Omega/2-i \Gamma/2)}\] \[\times e^{-it(\omega+\Omega/2)}\langle\Phi^{\prime}|-g\mu_{B} \mathbf{S}\cdot\mathbf{B}_{\omega}|\Psi\rangle\Big{|}^{2},\] \[P_{E1M1}^{D/L}|_{\omega\approx\Omega} =-2\hbar^{-2}\mathrm{Re}\int_{0}^{\mathcal{T}}\mathrm{d}te^{-i( \mathcal{T}-t)(\Omega/2-i\Gamma/2)}\langle\tilde{\Phi}|-e\mathbf{r}\cdot( \bar{n}^{2}+2)\mathbf{E}_{\omega}/3|\tilde{\Psi}\rangle e^{-it(\omega-\Omega/2 )}\int_{0}^{\mathcal{T}}\mathrm{d}\tau\;e^{i(\mathcal{T}-\tau)(\Omega/2+i \Gamma/2)}\] \[\times\langle\Psi|-g\mu_{B}\mathbf{S}\cdot\mathbf{B}_{\omega}^{*} |\Phi\rangle e^{i\tau(\omega-\Omega/2)}+2\hbar^{-2}\mathrm{Re}\int_{0}^{ \mathcal{T}}\mathrm{d}t\,e^{-i(\mathcal{T}-t)(2\omega-\Omega/2)}\langle\tilde {\Phi}^{\prime}|-e\mathbf{r}\cdot(\bar{n}^{2}+2)\mathbf{E}_{\omega}/3|\tilde{ \Psi}\rangle\] \[\times e^{-it(\omega+\Omega/2-i\Gamma/2)}\int_{0}^{\mathcal{T}} \mathrm{d}\tau\;e^{i(\mathcal{T}-\tau)(2\omega-\Omega/2)}\langle\Psi|-g\mu_{ B}\mathbf{S}\cdot\mathbf{B}_{\omega}^{*}|\Phi^{\prime}\rangle e^{i\tau( \omega+\Omega/2+i\Gamma/2)},\quad\Gamma\mathcal{T}\gg 1, \tag{7}\] where \(\Gamma\) is the linewidth of EPR absorption, \(\Gamma\mathcal{T}\gg 1\) implies the adiabatic approximation, and the states \(\tilde{\Psi}\), \(\tilde{\Phi}\), and \(\tilde{\Phi}^{\prime}\) are dressed with the states \(\phi_{i}\), \(i=1,..,4\), on account of the spin-orbit and chiral interactions. Using a linearly polarized microwave probe field in Eq.(7), the resultant expression for the TWEEPR anisotropy factor reads \[g_{T}^{D/L}\simeq\frac{c\,C^{D/L}\hbar\,\Omega}{m_{e}\omega_{0}^{3}\Delta_{0} ^{2}}\frac{\bar{n}^{2}+2}{3\bar{n}}, \tag{8}\] where the second factor on the right hand side describes the effect of the refractive index on the local electric field and the wavevector. 
It is worth noting that the aforementioned dependence on magnetization, \(\sim\cos\theta\), cancels out in the ratio between probabilities. For further details, see Appendix B. The values for the unknown parameters in Eq.(8) can be deduced comparing the predictions of the model with the experimental results for optical MChD [24] and EPR [25] in CsCuCl\({}_{3}\). In particular, we can estimate \(g_{T}^{D/L}\) from the data on the non-reciprocal absorption coefficient in optical MChD, \(\alpha_{A}=\alpha(\mathbf{B}_{0}\mid\uparrow\mathbf{k})-\alpha(\mathbf{B}_{0} \mid\uparrow\mathbf{k})\). The calculation goes as follows. In terms of the E1M1 absorption probability at resonance, \(\omega=\Delta_{0}/\hbar\), \(\alpha_{A}\) reads \[\alpha_{A}=\frac{4c\mu_{0}\rho\Delta_{0}\Gamma^{\prime}}{|E_{\omega}|^{2}}P_{ E1M1}^{D/L}|_{\omega=\Delta_{0}/\hbar}, \tag{9}\] where \(\Gamma^{\prime}\) is the linewidth of optical absorption, and \(\rho\) is the molecular number density of the complex. Using our model, a calculation analogous to that for \(P_{E1M1}^{D/L,EPR}\) but for its optical counterpart, \(P_{E1M1}^{D/L,O}\) - Appendices B, C and D-, allows as to express \(g_{T}^{D/L}\) in Eq.(8) in terms of \(\alpha_{A}\), \[g_{T}^{D/L}=\frac{c\,\hbar^{3}\Gamma^{\prime}\Omega\tilde{\Delta}\alpha_{A}}{2 \Delta_{0}^{3}\mu_{0}\mu_{B}^{2}\rho\cos\theta}, \tag{10}\] where \(\tilde{\Delta}^{-1}=\Delta_{0}^{-1}+\Delta_{2}^{-1}-3\Delta_{1}^{-1}\) is the inverse of an effective energy interval which takes account of the optical transitions to intermediate states -see Fig.1. It is of note that, whereas the magnetic transition is driven in EPR by the spin operator [Eq.(7)], it is driven by the orbital angular momentum in the optical case. In turn, this causes MChD to be stronger in the optical case and proportional to the degree of magnetization \(\cos\theta\), which can be approximated by \(\cos\theta\approx\mu_{0}B_{0}/k_{B}T\)[31]. The optical MChA parameter, \(g_{0}^{D/L}\), has an analogous expression to that in Eq.(2) with \(\hbar\omega\approx\Delta_{0}\), being proportional to \(\alpha_{A}\). Hence, our model allows us to estimate its upper bound, \(g_{0}^{D/L}\leq(cC^{D/L}\delta\cos\theta)/(m_{e}\omega_{0}^{3}\tilde{\Delta})\) - see Appendices C and D, from which \(g_{T}^{D/L}/g_{0}^{D/L}\gtrsim(\hbar\Omega\tilde{\Delta})/(\Delta_{0}^{2}\cos\theta)\). Note that, since both \(\Omega\) and \(\cos\theta\) are proportional to \(B_{0}\), the ratio between EPR and optical MChA factors is independent of the field strength. Finally, substituting the experimental values for CsCuCl\({}_{3}\) of all the variables in Eq.(10), for \(B_{0}=14\) T at a temperature of 4.2 K, we obtain \(g_{T}^{D/L}\approx 1.5\cdot 10^{-2}\) which is small but not beyond the resolution of high field EPR spectrometers. For an X band EPR spectrometer (\(B=0,35\) T), this means \(g_{T}^{D/L}\approx 3\cdot 10^{-4}\) which will require a different approach, as we discuss below. _Implementation_ In commercial EPR spectrometers, resonant standing wave cavities are used to enhance sensitivity. Such a cavity can be regarded as containing equal amounts of traveling waves with \(\mathbf{k}\) and \(-\mathbf{k}\). The MChA \(\gamma^{D/L}\) term in Eq.(1) can therefore not give a net contribution to the resonance in such a configuration. For this term to be observed, a traveling wave configuration should be used. Such configurations are not unknown in EPR; several reported home-built EPR spectrometers have used one-pass transmission configurations [32][33]. 
Sensitivity for such a travelling wave configuration can be enhanced by means of a Mach-Zehnder interferometer [34] or a unidirectional ring resonator [35]. In such a configuration, MChA can be obtained as the difference between the microwave transmissions for the two opposing magnetic field directions, similar to what was realized in the optical case [11]. As the EPR lines can be quite narrow, the two oppositely oriented magnetic fields should have the same magnitude with high precision, which requires a tight control of this field, possibly with another EPR or NMR feedback circuit. Stabilizing a field this way can be quite time-consuming, and TWEEPR being a small difference on the already small EPR absorption, the extensive signal-averaging through field alternations that would be required to obtain a good signal-to-noise-ratio, makes such an approach impractical. We therefore propose another approach in the form of an X band microwave interferometer that removes the normal EPR contribution from the output signal, through destructive interference between counter-propagating waves through the sample at a fixed magnetic field, as illustrated in Figure 2. This leaves ideally only the TWEEPR contribution. By applying an additional small modulation field and using phase sensitive detection (PSD) sufficient sensitivity is obtained to resolve this small contribution. When tuned to total destructive interference at zero field, the interferometer output as given by the PSD is proportional to the TWEEPR response \(d[T(\mathbf{B}_{0}\mid\uparrow\mathbf{k})-T(\mathbf{B}_{0}\mid\uparrow \mathbf{k})]/dB_{0}=\gamma^{D/L}(\omega)\). The sensitivity of the interferometer can be further improved by inserting the sample in a unidirectional resonant ring resonator. Q factors above \(10^{3}\) have been reported for such configurations [36] and would bring a corresponding increase in sensitivity. It seems therefore quite feasible that TWEEPR can evolve into a standard characterization technique in the form of standalone dedicated TWEEPR spectrometers. An alternative to this configuration could be the microwave equivalent of the first observation of optical MChA in luminescence [9], using pulsed EPR echo techniques [1] with a similar interferometer setup. _Discussion_ In general, the non-local response of a chiral system of size \(a\) to an electromagnetic wave with wave vector \(k\) is of the order \(ka\), so one could have expected \(g_{T}^{D/L}/g_{O}^{D/L}\) to be of the order \(\hbar\Omega/\Delta_{0}\), the relevant spatial length scale for both TWEEPR and optical MChD being the orbital size. This ratio is of the order of \(10^{-4}\), which would have put TWEEPR beyond experimental reach. However, in contrast to the optical absorption, which to zeroth order is independent of the magnetic field, the normal EPR absorption scales with the magnetization of the spin system. Since the MChA corrections are proportional to the magnetization in both EPR and the optical case, the cancellation of the factor \(\cos\theta\ll 1\) applies to \(g_{T}^{D/L}\) only, and it appears thereby in the denominator of \(g_{T}^{D/L}/g_{O}^{D/L}\), resulting in Eq.(10). For room temperature X-band EPR of Cu(II), this results in \(g_{T}^{D/L}/g_{O}^{D/L}\) of the order of \(10^{-1}\), which makes TWEEPR experimentally feasible under those conditions. As a consequence, and in contrast to many other magnetic resonance techniques, going to low temperatures is not necessarily favorable for TWEEPR. 
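As a rough numerical cross-check of the estimates quoted above, the short script below rescales the 14 T value of \(g_{T}^{D/L}\) linearly in \(B_{0}\), the field dependence implied by Eq. (8), and confirms that \(B_{0}=0.35\) T corresponds to the X band; this is an illustrative sketch using only numbers from the text, not part of the original analysis.

```python
# Illustrative sanity check (not from the paper): Eq. (8) gives g_T proportional
# to Omega = g*mu_B*B0/hbar, i.e. linear in B0, so the 14 T estimate for CsCuCl3
# can be rescaled to the X-band field, and 0.35 T should give a ~10 GHz resonance.
import math

MU_B = 9.274e-24     # Bohr magneton, J/T
HBAR = 1.0546e-34    # reduced Planck constant, J*s
G_FACTOR = 2.0       # free-electron value; Cu(II) g-factors are somewhat larger


def epr_frequency_ghz(b0_tesla):
    """Resonance frequency nu = g*mu_B*B0/h in GHz."""
    omega = G_FACTOR * MU_B * b0_tesla / HBAR   # angular frequency, rad/s
    return omega / (2.0 * math.pi) / 1e9


g_t_14T = 1.5e-2     # estimate quoted in the text for B0 = 14 T, 4.2 K
b_x_band = 0.35      # X-band resonance field in tesla
g_t_x_band = g_t_14T * b_x_band / 14.0

print(f"X-band frequency at 0.35 T: {epr_frequency_ghz(b_x_band):.1f} GHz")  # ~9.8 GHz
print(f"Rescaled g_T at 0.35 T: {g_t_x_band:.1e}")  # ~3.8e-4, i.e. of order 3e-4
```

Both numbers are consistent with the values quoted in the text.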
Going to higher magnetic field does not affect \(g_{T}^{D/L}/g_{O}^{D/L}\), the increase in \(\Omega\) being compensated by the concomitant increase of \(\cos\theta\) because of the higher resonance field. The main results of our model are an analytic expression for the TWEEPR anisotropy factor [Eq.(8)] and an expression for its relationship with the optical anisotropy absorption coefficient [Eq.(10)]. The expression in Eq.(8) shows that \(g_{T}^{D/L}\) has a linear dependence on the magnetic field strength (through \(\Omega\)) and on the chirality (through \(C^{D/L}\)), as predicted by symmetry arguments. The dependence on the spin-orbit coupling does not appear explicitly, because we have considered the case for Cu(II), where the level splitting \(\delta\) is much smaller than the SO coupling \(\lambda\). In the inverse case, \(g_{T}^{D/L}\) would be proportional to \(\lambda\) instead. Adapting the calculation to other chi Figure 2: Schematic setup of the TWEEPR interferometer. The waves counterpropagating through the sample S are depicted in red and blue. ral transition metal complexes is conceptually straightforward and should result in an expression similar to Eq.(8), apart from numerical factors of order unity. A rather different case is represented by chiral organic radicals, where the unpaired electron is delocalized on one or more interatomic bonds and a different microscopic model should be used for the calculation of \(g_{T}^{D/L}\). One might however expect that such differences apply also to the calculation of \(g_{O}^{D/L}\) for such radicals, preserving a relationship similar to that in Eq.(10). _Acknowledgements_ This work was supported by the Agence Nationale de la Recherche (SECRETS, (ANR PRC 20-CE06-0023-01) and the Laboratory of Excellence NanoX (ANR-17-EURE-0009)). We gratefully acknowledge helpful discussions with Anne-Laure Barra. ## Appendix A Fundamentals of the model As outlined in the article, in order to estimate the MChA factors of a chiral Cu(II) complex, we consider a variant of the one-electron model proposed by Condon for the study of natural optical activity in chiral compounds [26; 27]. The total Hamiltonian of our model is \(H=H_{0}+V_{C}^{D/L}+V_{SO}\), where \(H_{0}=\frac{p^{2}}{2m_{e}}+\frac{m_{e}\omega_{0}^{2}r^{2}}{2}+V_{Z}\) is the unperturbed Hamiltonian, with \(V_{Z}=-\mu_{B}(\mathbf{L}+g\mathbf{S})\cdot\mathbf{B}_{0}\) being the Zeeman potential; and \(V_{C}^{D/L}=C^{D/L}xyz\), \(V_{SO}=\lambda\mathbf{L}\cdot\mathbf{S}\) being the chiral potential and the spin-orbit coupling, respectively. We stick to the nomenclature used in the article. The chiral Hamiltonian, \(V_{C}^{D/L}\), results from the electrostatic interaction of the ion with the chiral configuration of the ligands in the complex, and produces the necessary parity asymmetry which is at the origin of natural optical activity. The orbital contribution of the Zeeman potential was added in Ref.[28] to the original Condon's model to estimate the magneto-chiral birefringence of diamagnetic chiral compounds. In order to account for magnetochiral dichroism (MChD) in a paramagnetic complex, we introduce here the spin contribution to the Zeeman potential as well as the spin-orbit coupling. In contrast to the approach in Ref.[28] and for simplicity, we consider an isotropic harmonic oscillator, whereas the anisotropy caused by the crystal field is introduced in an effective manner through the energy intervals between the \(3d\) orbitals, as depicted in Fig.1 in the article. 
The eigenstates of \(H_{0}\) are labeled with the eigenvalues of the orbital angular momentum and spin operators, \(\{|n_{L},n_{R},n_{z}\rangle\}\otimes\{\uparrow,\downarrow\}\)[29], upon which \(V_{C}^{D/L}\) and \(V_{SO}\) act perturbatively. In a Cu(II) complex, the chromophoric charge is the unpaired electron of the \(3d^{9}\) electronic configuration which behaves as a hole of positive charge. In the absence of ligands, the \(3d\) orbitals of the ion can be represented approximately by the \(n=2\), \(l=2\) states of the harmonic oscillator of our model. However, the ligands' fields affect the electronic configuration of the ion, removing the degeneracy of the \(d\)-states. In particular, for octahedral coordination geometries around the ion, the set of \(d\)-orbitals splits into doubly degenerate \(\mathrm{e}_{g}\) orbitals, \(d_{x^{2}-y^{2}}\) and \(d_{z^{2}}\), and triply degenerate \(\mathrm{t}_{2g}\) orbitals, \(d_{xy}\), \(d_{yz}\) and \(d_{zx}\). The energy interval between \(\mathrm{e}_{g}\) and \(\mathrm{t}_{2g}\) states, \(\Delta_{0}\), lies in the visible region of the spectrum, \(\Delta_{0}\simeq 1.5\) eV. As a result, the \(\mathrm{e}_{g}\) orbitals become the ground states, and can be approximated by linear combinations of \(l=2\), \(m_{l}=0,\pm 2\) eigenstates of the harmonic oscillator. The fact that the chromophoric charge in the \(\mathrm{e}_{g}\) states cannot rotate into any other orbital leads to an effective quenching of the orbital angular momentum of the ground state. Below a certain temperature, an additional Jahn-Teller (JT) distortion takes place when the ligands along one of the axes, say the \(z\)-axis, move away from the ion in order to minimize the electronic repulsion, giving rise to the complete removal of the degeneracy in the \(\mathrm{e}_{g}\) level, and to a partial lifting of the degeneracy in the \(\mathrm{t}_{2g}\) orbitals. The isotropy of the system is thus broken and the ground state becomes unique, up to spin degeneracy. For the particular case of the CsCuCl\({}_{3}\) crystal, the bonds along the \(z\)-axis get elongated and the ground state is the \(d_{x^{2}-y^{2}}\) orbital. Fig.1 in the article depicts the energy splitting of the distorted \(d\)-orbitals, including the approximate values of the energy intervals. Lastly, the JT distortion in conjuntion with the helical deformation of the crystal along the \(c\)-axis, of coordinates [1,1,1] in the local axis basis, removes the degeneracy between the orbitals lying on the \(xy\) plane in a small amount \(\delta\). Below, we write the approximate expression of the \(3d\) orbitals in terms of the harmonic oscillator eigenstates, \(\{|n_{L},n_{R},n_{z}\rangle\}\) together with their corresponding energies, \[|d_{zx}\rangle=(|0,1,1\rangle-|1,0,1\rangle)/\sqrt{2},\quad{\cal E}= \Delta_{0},\] \[|d_{yz}\rangle=i(|0,1,1\rangle+|1,0,1\rangle)/\sqrt{2},\quad{\cal E }=\Delta_{0}-\delta,\] \[|d_{xy}\rangle=i(|0,2,0\rangle-|2,0,0\rangle)/\sqrt{2},\quad{\cal E }=\Delta_{0}-\Delta_{2},\] \[|d_{z^{2}}\rangle=(|1,1,0\rangle-\sqrt{2}|0,0,2\rangle)/\sqrt{3},\quad{\cal E}=\Delta_{0}-\Delta_{1},\] \[|d_{x^{2}-y^{2}}\rangle=(|0,2,0\rangle+|2,0,0\rangle)/\sqrt{2},\quad{\cal E}=0. \tag{10}\] Altogether, the crystal field combined with the JT distortion and the helical deformation turns the crystalline structure into a chiral one. 
In accord with Condon's model, the potential \(V_{C}^{D/L}\) reproduces the electrostatic interaction of the chromophoric charge with the surrounding chiral structure, removing all axes and planes of symmetry from the system. It is through the chiral potential that E1 transitions between the \(3d\) orbitals take place in our model. In addition to the above interactions, MChD in EPR requires necessarily the coupling between the spin and the orbital angular momentum of the unpaired electron hole through the potential \(V_{SO}\), where the coupling constant is \(\lambda\approx-0.1\) eV. In particular, the SO interaction together with the Zeeman potential break the quasi-degeneracy between the four states \(\{|d_{zx}\rangle,|d_{yz}\rangle\}\otimes\{\uparrow,\downarrow\}\), providing the following eigenstates for \(\lambda\gg\delta\), \[|\Phi_{1}\rangle \approx |1,0,1\rangle\otimes\downarrow+\frac{\delta}{2\lambda}|0,1,1 \rangle\otimes\downarrow,\] \[{\cal E} \simeq \Delta_{0}-\lambda/2+\hbar\Omega,\] \[|\Phi_{2}\rangle \approx |0,1,1\rangle\otimes\uparrow+\frac{\delta}{2\lambda}|1,0,1 \rangle\otimes\uparrow,\] \[{\cal E} \simeq \Delta_{0}-\lambda/2-\hbar\Omega,\] \[|\Phi_{3}\rangle \approx |0,1,1\rangle\otimes\downarrow-\frac{\delta}{2\lambda}|1,0,1 \rangle\otimes\downarrow,\] \[{\cal E} \simeq \Delta_{0}+\lambda/2+\hbar\Omega+\frac{\delta^{2}}{4\lambda^{2}}( \lambda+\hbar\Omega),\] \[|\Phi_{4}\rangle \approx |1,0,1\rangle\otimes\uparrow-\frac{\delta}{2\lambda}|0,1,1 \rangle\otimes\uparrow,\] \[{\cal E} \simeq \Delta_{0}+\lambda/2-\hbar\Omega+\frac{\delta^{2}}{4\lambda^{2}}( \lambda-\hbar\Omega). \tag{11}\] \(\{\Phi_{1},\Phi_{2},\Phi_{3},\Phi_{4}\}\) are indeed the eigenstates of the Hamiltonian \(V_{Z}+V_{SO}\) restricted to the subspace \(\{|d_{zx}\rangle,|d_{yz}\rangle\}\otimes\{\uparrow,\downarrow\}\). They constitute the intermediate states of the transition processes in EPR mediated by the interaction of the spin with the chiral structure of the surrounding charges. In the following, we apply to our system time-dependent quantum perturbation techniques to compute first the MChA factor in EPR, \(g_{T}^{D/L}\). Next, in order to estimate the value of the unknowns of our model, we compute the anisotropy factor in optical MChD for the same system. Finally, making use of the experimental values available for CsCuCl\({}_{3}\) in the literature [24; 25], we estimate the strength of TWEEPR. ## Appendix B MChD in EPR Let us consider a CsCuCl\({}_{3}\) complex, initially prepared in its ground state, and partially polarized along a uniform magnetic field \({\bf B}=B_{0}\hat{\bf z}\) directed along the \(z\)-axis, \[|\Psi\rangle=|d_{x^{2}-y^{2}}\rangle\otimes(\cos\theta/2\uparrow+\sin\theta/ 2\downarrow)\approx\frac{1}{\sqrt{2}}(|0,2,0\rangle+|2,0,0\rangle)\otimes( \cos\theta/2\uparrow+\sin\theta/2\downarrow), \tag{12}\] where we have approximated the actual ground state with the corresponding state of our harmonic oscillator model in the basis \(\{|n_{L},n_{R},n_{z}\rangle\}\otimes\{\uparrow,\downarrow\}\), and \(\theta\) is the angle between the magnetic moment of the complex and the \(z\)-axis, \(\cos\theta=\hbar^{-1}\langle\Psi|2{\bf S}|\Psi\rangle\cdot\hat{\bf z}\). At temperature \(T\), \(\cos\theta\approx\mu_{0}B_{0}/k_{B}T\)[31]. 
Under the action of an incident electromagnetic field of frequency \(\omega\) close to the transition frequency, \(\Omega=g\mu_{B}B_{0}/\hbar\), and wave vector \({\bf k}\) parallel to \({\bf B}_{0}\), the complex gets partially excited towards the state \[|\Phi\rangle=|d_{x^{2}-y^{2}}\rangle\otimes\downarrow\approx\frac{1}{\sqrt{2} }(|0,2,0\rangle+|2,0,0\rangle)\otimes\downarrow, \tag{13}\] with probability proportional to \(\cos^{2}\theta/2\); and partially de-excited (through stimulated emission) towards the state \[|\Phi^{\prime}\rangle=|d_{x^{2}-y^{2}}\rangle\otimes\uparrow\approx \frac{1}{\sqrt{2}}(|0,2,0\rangle+|2,0,0\rangle)\otimes\uparrow, \tag{10}\] with probability proportional to \(\sin^{2}\theta/2\). Since the rest of probability factors are equivalent, the net absorption probability in EPR is proportional to \(\cos^{2}\theta/2-\sin^{2}\theta/2=\cos\theta\), and thus proportional to the magnetization of the complex. As mentioned in the article, from symmetry considerations and in leading order, the numerator and the denominator in the ratio \(g_{T}^{D/L}=[P^{D/L}(\omega,\hat{\mathbf{k}},\mathbf{B}_{0})-P^{D/L}(\omega, \hat{\mathbf{k}},-\mathbf{B}_{0})]/[P^{D/L}(\omega,\hat{\mathbf{k}},\mathbf{B }_{0})+P^{D/L}(\omega,\hat{\mathbf{k}},-\mathbf{B}_{0})]\) for \(\omega\approx\Omega\) are dominated, respectively, by the electric-magnetic dipole (E1M1) and the magnetic-magnetic dipole (M1M1) transition probabilities, the magnetic transition being driven by the spin operator only. That leads to the approximate expression, \[g_{T}^{D/L}\simeq\frac{P_{E1M1}^{D/L}(\omega,\hat{\mathbf{k}}, \mathbf{B}_{0})}{P_{M1M1}(\omega,\hat{\mathbf{k}},\mathbf{B}_{0})}\Big{|}_{ \omega\approx\Omega}. \tag{11}\] In what follows, we compute the transition probabilities \(P_{M1M1}\) and \(P_{E1M1}^{D/L}\) for \(\omega\approx\Omega\) using time-dependent perturbation theory in the adiabatic regime. This regime is the suitable one for a probe field whose duration is much longer than the typical lifetime for excitation or de-excitation. As in the article, the Hamiltonian of the interaction of our system with the microwave probe field reads, in the electric and magnetic dipole approximation, \(W=-e\mathbf{r}\cdot\mathbf{E}_{\omega}(t)/2-\mu_{B}(\mathbf{L}+2\mathbf{S}) \cdot\mathbf{B}_{\omega}(t)/2+\)h.c.. In this equation, \(\mathbf{E}_{\omega}(t)=\mathbf{E}_{\omega}e^{-i\omega t}=i\omega\mathbf{A}_{ \omega}e^{-i\omega t}\), \(\mathbf{B}_{\omega}(t)=\mathbf{B}_{\omega}e^{-i\omega t}=i\bar{n}\mathbf{k} \wedge\mathbf{A}_{\omega}e^{-i\omega t}\), are the complex-valued electric and magnetic fields, respectively, with \(\mathbf{A}_{\omega}\) being the complex-valued amplitude of the plane-wave electromagnetic vector potential of frequency \(\omega\approx\Omega\), evaluated at the center of mass of the Cu(II) ion, and \(\bar{n}\) being the effective refractive index of the sample. The local depolarization changes the local electric field incident on each Cu(II) ion to \(\mathbf{E}_{\omega}(\bar{n}^{2}+2)/3\). 
Under the action of \(W\), with \(\mathbf{k}\) along \(\mathbf{B}_{0}\), the expressions for \(P_{M1M1}\) and \(P_{E1M1}^{D/L}\) read, respectively, at leading order in the coupling constants of the interaction potentials, \[P_{M1M1}|_{\omega\approx\Omega} = \hbar^{-2}\left|\int_{0}^{\mathcal{T}}\mathrm{d}te^{-i(\mathcal{T }-t)(\Omega/2-i\Gamma/2)}e^{-it(\omega-\Omega/2)}\langle\Phi|-g\mu_{B} \mathbf{S}\cdot\mathbf{B}_{\omega}|\Psi\rangle\right|^{2} \tag{12}\] \[- \hbar^{-2}\left|\int_{0}^{\mathcal{T}}\mathrm{d}te^{-i(\mathcal{T }-t)(2\omega-\Omega/2-i\Gamma/2)}e^{-it(\omega+\Omega/2)}\langle\Phi^{\prime} |-g\mu_{B}\mathbf{S}\cdot\mathbf{B}_{\omega}|\Psi\rangle\right|^{2},\] \[P_{E1M1}^{D/L}|_{\omega\approx\Omega} = 2{\rm Re}(-i)^{3}\hbar^{-4}\sum_{p,q\neq\Psi}\int_{0}^{\mathcal{T}} {\rm d}te^{-i(\mathcal{T}-t)(\Omega/2-i\Gamma/2)}\langle\Phi|-e{\bf r}\cdot( \bar{n}^{2}+2){\bf E}_{\omega}/3|p\rangle\int_{-\infty}^{t}{\rm d}t^{\prime}e^{ \eta t^{\prime}}e^{-i(t-t^{\prime})({\cal E}_{p}+\omega)} \tag{10}\] \[\times \langle p|V_{C}^{D/L}|q\rangle\int_{-\infty}^{t^{\prime}}{\rm d}t ^{\prime\prime}e^{\eta t^{\prime\prime}}e^{-i(t^{\prime}-t^{\prime\prime})({ \cal E}_{q}+\omega)}\langle q|V_{SO}|\Psi\rangle e^{-it^{\prime\prime}(\omega- \Omega/2)}i\int_{0}^{\mathcal{T}}{\rm d}\tau\,e^{i(\mathcal{T}-\tau)(\Omega/2+ i\Gamma/2)}\] \[\times \langle\Psi|-g\mu_{B}{\bf S}\cdot{\bf B}_{\omega}^{*}|\Phi \rangle e^{i\tau(\omega-\Omega/2)}\,+2{\rm Re}(-i)^{3}\hbar^{-4}\sum_{p,q \neq\Phi}\int_{-\infty}^{\mathcal{T}}{\rm d}t\,e^{\eta t}e^{-i(\mathcal{T}-t )(\Omega/2-i\Gamma/2)}\langle\Phi|V_{SO}|p\rangle\] \[\times \int_{-\infty}^{t}{\rm d}t^{\prime}e^{\eta t^{\prime}}e^{-i(t-t^ {\prime}){\cal E}_{p}}\langle p|V_{C}^{D/L}|q\rangle\int_{0}^{t^{\prime}}{\rm d }t^{\prime\prime}e^{-i(t^{\prime}-t^{\prime\prime}){\cal E}_{q}}(q|-e{\bf r} \cdot(\bar{n}^{2}+2){\bf E}_{\omega}/3|\Psi\rangle e^{-it^{\prime\prime}( \omega-\Omega/2)}\] \[\times i\int_{0}^{\mathcal{T}}{\rm d}\tau\,e^{i(\mathcal{T}-\tau )(\Omega/2+i\Gamma/2)}\langle\Psi|-g\mu_{B}{\bf S}\cdot{\bf B}_{\omega}^{*}| \Phi\rangle e^{i\tau(\omega-\Omega/2)}\] \[- 2{\rm Re}(-i)^{3}\hbar^{-4}\sum_{p,q\neq\Psi}\int_{0}^{\mathcal{T} }{\rm d}t\,e^{-i(\mathcal{T}-t)(2\omega-\Omega/2)}\langle\Phi^{\prime}|-e{\bf r }\cdot(\bar{n}^{2}+2){\bf E}_{\omega}/3|p\rangle\int_{-\infty}^{t}{\rm d}t^{ \prime}e^{-i(t-t^{\prime})({\cal E}_{p}+\omega)}\] \[\times \langle p|V_{C}^{D/L}|q\rangle\int_{-\infty}^{t^{\prime}}{\rm d}t ^{\prime\prime}e^{\eta t^{\prime\prime}}e^{-i(t^{\prime}-t^{\prime\prime})({ \cal E}_{q}+\omega)}\langle q|V_{SO}|\Psi\rangle e^{-it^{\prime\prime}(\omega+ \Omega/2-i\Gamma/2)}i\int_{0}^{\mathcal{T}}{\rm d}\tau\,e^{i(\mathcal{T}-\tau )(2\omega-\Omega/2)}\] \[\times \langle\Psi|-g\mu_{B}{\bf S}\cdot{\bf B}_{\omega}^{*}|\Psi^{ \prime}\rangle e^{i\tau(\omega+\Omega/2+i\Gamma/2)}\,-2{\rm Re}(-i)^{3}\hbar^{ -4}\sum_{p,q\neq\Phi^{\prime}}\int_{-\infty}^{\mathcal{T}}{\rm d}t\,e^{\eta t }e^{-i(\mathcal{T}-t)(2\omega-\Omega/2)}\] \[\times \langle\Phi^{\prime}|V_{SO}|p\rangle\int_{-\infty}^{t}{\rm d}t^{ \prime}e^{\eta t^{\prime}}e^{-i(t-t^{\prime})(2\omega+{\cal E}_{p})}\langle p| V_{C}^{D/L}|q\rangle\int_{0}^{t^{\prime}}{\rm d}t^{\prime\prime}e^{-i(t^{\prime}-t^{ \prime\prime})(2\omega+{\cal E}_{q})}\] \[\times \langle q|-e{\bf r}\cdot(\bar{n}^{2}+2){\bf E}_{\omega}/3|\Psi \rangle e^{-it^{\prime\prime}(\omega+\Omega/2-i\Gamma/2)}i\int_{0}^{\mathcal{ T}}{\rm d}\tau\,e^{i(\mathcal{T}-\tau)(2\omega-\Omega/2)}\langle\Psi|-g\mu_{B}{\bf S} \cdot{\bf 
B}_{\omega}^{*}|\Phi^{\prime}\rangle\] \[\times e^{i\tau(\omega+\Omega/2+i\Gamma/2)},\quad\eta\to 0^{+},\, \,\,\mathcal{T}\mathcal{T}\gg 1.\] In these equations the states \(p\) and \(q\) stand for the excited states of the \(3d^{9}\) configuration together with other eigenstates of \(H_{0}\) with \(n\neq 2\). The quasi-stationary condition \(\eta\to 0^{+}\) accounts for the stationarity of the chiral and the spin-orbit interactions; whereas the adiabatic limit \(\Gamma\mathcal{T}\gg 1\) takes into account the long duration of the probe field with respect to the lifetime \(\Gamma^{-1}\), with \(\Gamma\) being the linewidth of absorption and \(\mathcal{T}\) the observation time. The diagrammatical representation of the processes involved in the above equation is given in Fig.3. In the article, the contributions of the quasi-stationary processes were incorporated into the dressed states \(\tilde{\Psi}\), \(\tilde{\Phi}\), \(\tilde{\Phi}^{\prime}\). More specifically, the bare states are dressed with the quadruplet \(\{\Phi_{1},..,\Phi_{4}\}\) through \(V_{SO}\), and with harmonic states with \(n\neq 2\) by \(V_{C}\). In terms of the eigenstates of the harmonic oscillator, they read \[|\tilde{\Phi}\rangle= \Big{[}(|020\rangle+|200\rangle)/\sqrt{2}+\frac{\lambda}{\sqrt{ 2}\Delta_{0}}(1+\Delta_{2}/\Delta_{0})(|020\rangle-|200\rangle)+\frac{i \lambda C^{D/L}K^{3/2}}{2\hbar\omega_{0}\Delta_{0}}(1+\Delta_{2}/\Delta_{0})\] \[\times (|001\rangle-2|111\rangle)\Big{]}\downarrow\,+\Big{[}\frac{ \lambda}{\sqrt{2}\Delta_{0}}(1+3\hbar\Omega/\Delta_{0})|011\rangle+\frac{ \delta}{\sqrt{2}\Delta_{0}^{2}}(\hbar\Omega+\lambda/2)|101\rangle\] \[+\frac{-iC^{D/L}K^{3/2}\lambda}{2\hbar\omega_{0}\Delta_{0}}(1+3 \hbar\Omega/\Delta_{0})(|210\rangle-\sqrt{3}|030\rangle-\sqrt{2}|100\rangle)+ \frac{iC^{D/L}K^{3/2}\delta}{2\hbar\omega_{0}\Delta_{0}^{2}}\] \[\times (\hbar\Omega+\lambda/2)(|120\rangle-\sqrt{3}|300\rangle-\sqrt{2}| 010\rangle)\Big{]}\uparrow\] \[|\tilde{\Phi}^{\prime}\rangle= \Big{[}(|020\rangle+|200\rangle)/\sqrt{2}+\frac{-\lambda}{\sqrt{2} \Delta_{0}}(1+\Delta_{2}/\Delta_{0})(|020\rangle-|200\rangle)+\frac{-i \lambda C^{D/L}K^{3/2}}{2\hbar\omega_{0}\Delta_{0}}(1+\Delta_{2}/\Delta_{0})\] \[\times (|001\rangle-2|111\rangle)\Big{]}\uparrow\,+\Big{[}\frac{- \lambda}{\sqrt{2}\Delta_{0}}(1-3\hbar\Omega/\Delta_{0})|101\rangle+\frac{ \delta}{\sqrt{2}\Delta_{0}^{2}}(\hbar\Omega-\lambda/2)|011\rangle\] \[+\frac{-iC^{D/L}K^{3/2}\lambda}{2\hbar\omega_{0}\Delta_{0}}(1-3 \hbar\Omega/\Delta_{0})(|120\rangle-\sqrt{3}|300\rangle-\sqrt{2}|010\rangle)+ \frac{-iC^{D/L}K^{3/2}\delta}{2\hbar\omega_{0}\Delta_{0}^{2}}\] \[\times (\hbar\Omega-\lambda/2)(|210\rangle-\sqrt{3}|030\rangle-\sqrt{2}| 100\rangle)\Big{]}\downarrow\] \[|\tilde{\Psi}\rangle= \cos\theta/2|\tilde{\Phi}^{\prime}\rangle+\sin\theta/2|\tilde{\Phi} \rangle,\qquad K=\hbar/(2m_{e}\omega_{0}). 
\tag{11}\] Using a linearly polarized incident field and averaging in orientations around the \(\hat{\bf z}\)-axis, we obtain, for \(\lambda\gg\delta\), \[P_{M1M1}|_{\omega\approx\Omega} \simeq\frac{\hbar^{-2}\mu_{B}^{2}|B_{\omega}|^{2}}{4[(\omega-\Omega )^{2}+\Gamma^{2}/4]}\cos\theta, \tag{10}\] \[P_{E1M1}^{D/L}|_{\omega\approx\Omega} \simeq\frac{(\bar{n}^{2}+2)}{3}\frac{C^{D/L}\Omega\delta}{m_{e} \omega_{0}^{3}\Delta_{0}^{3}}\frac{\hbar^{-1}\mu_{B}^{2}|B_{\omega}||E_{ \omega}|}{4[(\omega-\Omega)^{2}+\Gamma^{2}/4]}\cos\theta,\] (11) \[g_{T}^{D/L} \simeq\frac{(\bar{n}^{2}+2)}{3\bar{n}}\frac{c\,C^{D/L}\hbar\Omega \delta}{m_{e}\omega_{0}^{3}\Delta_{0}^{2}}+{\cal O}(\delta/\lambda,\lambda/ \Delta_{0}). \tag{12}\] Lastly, it is worth mentioning that for the case \(\delta>\lambda\), i.e., when anisotropy dominates over the spin-orbit coupling, \(g_{T}^{D/L}\) scales as \((c\hbar C^{D/L}\Omega\delta\lambda)/(m_{e}\omega_{0}^{3}\Delta_{0}^{3})\) instead. This scenario will be addressed in a separate publication [30]. ## Appendix C Optical MChD Optical MChD involves transitions of frequency \(\Delta_{0}\) from the ground state \(|\Psi\rangle\) to the quasi-degenerate quadruplet \(\{|d_{zz}\rangle,|d_{yz}\rangle\}\otimes\{\uparrow,\downarrow\}\) which, in account of the Zeeman and spin-orbit interactions, for \(\delta\ll\lambda\), corresponds to the set of states \(\{\Phi_{1},...,\Phi_{4}\}\) of Eq.(10). In contrast to EPR, the absorption probability in the denominator of the ratio \(g_{O}^{D/L}=[P^{D/L}(\omega,\hat{\bf k},{\bf B}_{0})-P^{D/L}(\omega,\hat{\bf k },-{\bf B}_{0})]/[P^{D/L}(\omega,\hat{\bf k},{\bf B}_{0})+P^{D/L}(\omega,\hat{ \bf k},-{\bf B}_{0})]\) for \(\omega\approx\Delta_{0}/\hbar\) may not be dominated by the magnetic-magnetic dipole absorption probability. This might be so because the \(d\)-orbitals of the Cu(II) ion Figure 3: Diagrammatic representation of the processes which contribute to \(P_{M1M1}\) and \(P_{E1M1}^{D/L}\) for \(\omega\approx\Omega\) at leading order in the perturbative interactions, i.e., at second order and fourth order, respectively. Time runs along the vertical direction from \(0\) to the observation time \({\cal T}\), where the probability is computed. Intermediate atomic states are labeled as \(p\) and \(q\). Diagrams with two-photon states account for stimulated emission. hybridize generally with the \(\sigma\) and \(\pi\) orbitals of the ligands, allowing for additional electric-electric dipole (E1E1) transitions. For the sake of simplicity, we will neglect the latter in our calculations, which implies that our preliminar estimate for \(g_{O}^{D/L}\) must be intended as an approximate upper bound. As for the case of EPR, the numerator of the ratio in \(g_{O}^{D/L}\) is again dominated by the electric-magnetic dipole absorption probability, and the non-vanishing terms come from magnetic transitions driven by the spin angular momentum -Eq.(C2) below. However, in contrast to EPR, the magnetic transitions in the denominator are mainly driven by the orbital angular momentum operator -see Eq.(C1) below. In turn, this causes the E1M1 transition probability to depend on the spin polarization of the complex, whereas neither the M1M1 nor the E1E1 probabilities do. Note also that stimulated emission from the state \(|\Psi\rangle\) is absent in optical MChD. 
All in all, this implies that \(g_{O}^{D/L}\) is proportional to the magnetization of the sample, which is itself proportional to the degree of spin-polarization along \({\bf B}_{0}\), \(\cos\theta\), in agreement with experiments. In Fig.4 we depict some of the diagrams which contribute to \(P_{M1M1}\) and \(P_{E1M1}^{D/L}\) in optical MChD. Following a perturbative approach analogous to that in EPR, for an incident electromagnetic plane wave with \({\bf k}\parallel{\bf B}_{0}\) and assuming \(\delta\ll\lambda\), one arrives at \[P_{M1M1}|_{\omega\approx\Delta_{0}/\hbar} \simeq \frac{\hbar^{-2}\mu_{B}^{2}|B_{\omega}|^{2}}{4[(\omega-\Delta_{0} /\hbar)^{2}+\Gamma^{2}/4]},\] (C1) \[P_{E1M1}^{D/L}|_{\omega\approx\Delta_{0}/\hbar} \simeq \frac{(\bar{n}^{2}+2)}{3}\frac{C^{D/L}\delta}{2m_{e}\omega_{3}^{2} \bar{\Delta}}\frac{\hbar^{-2}\mu_{B}^{2}|B_{\omega}||E_{\omega}|}{4[(\omega- \Delta_{0}/\hbar)^{2}+\Gamma^{2}/4]}\cos\theta,\] (C2) \[g_{O}^{D/L} \lesssim \frac{P_{E1M1}^{D/L}}{P_{M1M1}^{D}}\Big{|}_{\omega\approx\Delta_ {0}/\hbar}\simeq\frac{(\bar{n}^{2}+2)}{3\bar{n}}\,\frac{c\,C^{D/L}\delta\cos \theta}{2\,m_{e}\omega_{3}^{2}\bar{\Delta}},\] (C3) Figure 4: Diagrammatic representation of \(P_{M1M1}\) and \(P_{E1M1}^{D/L}\) for \(\omega\approx\Delta_{0}/\hbar\) at leading order in the perturbative interactions, i.e., at second and up to fifth order, respectively. Intermediate atomic states are labeled as \(p,q,r,s\). where \(\tilde{\Delta}^{-1}=\Delta_{0}^{-1}+\Delta_{2}^{-1}-3\Delta_{1}^{-1}\), and \(\Gamma^{\prime}\) is the linewidth of optical absorption. As anticipated, the fact that the magnetic dipole transition in \(P_{E1M1}^{D/L}\) is dominated by the orbital angular momentum operator causes its leading order term to depend on the magnetization \(\sim\cos\theta\). Hence, time-reversal invariance happens to be broken by the spin-polarization of the complex. ## Appendix D Estimate of \(g_{T}^{D/L}\) In the first place, we work out the relationship between \(g_{T}^{D/L}\) and \(g_{O}^{D/L}\). Comparing Eq.(10) with Eq.(11) at resonance, and taking into account Eqs.(12) and (12), we arrive at the following relationships, \[\frac{P_{E1M1}^{D/L}|_{\omega=\Omega}}{P_{E1M1}^{D/L}|_{\omega=\Delta_{0}/ \hbar}}\simeq\frac{2\hbar\Omega\tilde{\Delta}\Gamma^{{}^{\prime}2}}{\Delta_{0} ^{2}\Gamma^{2}},\qquad\frac{g_{T}^{D/L}}{g_{O}^{D/L}}\gtrsim\frac{2\hbar \Omega\tilde{\Delta}}{\Delta_{0}^{2}\cos\theta}. \tag{13}\] Next, considering the experimental data obtained in Ref.[24] for \(g_{O}^{D/L}\) and applying the relationship in Eq.(13), we can estimate a lower bound for \(g_{T}^{D/L}\). That is, substituting into Eq.(13) the experimental values \(g_{O}^{D/L}\approx 0.025\), \(\cos\theta\approx 0.4\), for \(B_{0}=14\)T at a temperature of 4.2 K, we obtain \(g_{T}^{D/L}\gtrsim 10^{-4}\). Alternatively, we can estimate \(g_{T}^{D/L}\) using the experimental data of Ref.[24] for the non-reciprocal absorption coefficient of optical MChD, \(\alpha_{A}=\alpha(\mathbf{B}_{0}\mid\uparrow\mathbf{k})-\alpha(\mathbf{B}_{0 }\mid\uparrow\mathbf{k})\). In order to do so, we first write down \(\alpha_{A}\) as a function of \(P_{E1M1}^{D/L,O}\) at resonance, \[\alpha_{A}=\frac{4c\mu_{0}\rho\Gamma^{\prime}\Delta_{0}}{|E_{\omega}|^{2}}P_{ E1M1}^{D/L}|_{\omega=\Delta_{0}/\hbar}, \tag{14}\] where \(\rho\) is the molecular density of the CsCuCl\({}_{3}\) complex (mass density 3.5g/cm\({}^{3}\)). 
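Before turning to the estimate based on \(\alpha_{A}\), the lower bound obtained from Eq.(13) can be reproduced with a few lines of arithmetic. In the sketch below, \(g_{O}^{D/L}\approx 0.025\), \(\cos\theta\approx 0.4\), \(B_{0}=14\) T and \(\Delta_{0}\simeq 1.5\) eV are the values quoted in the text, while \(g\approx 2\) and \(\tilde{\Delta}\sim\Delta_{0}\) are rough assumptions made here only to fix the order of magnitude (the intervals \(\Delta_{1}\), \(\Delta_{2}\) entering \(\tilde{\Delta}\) are given in Fig.1 of the article, not in this appendix).

```python
# Order-of-magnitude check of the lower bound
#   g_T >~ [2*hbar*Omega*Dtilde / (Delta_0^2 * cos_theta)] * g_O   from Eq.(13).
# g ~ 2 and Dtilde ~ Delta_0 are assumptions used only to set the scale.
mu_B = 5.788e-5                  # Bohr magneton, eV/T
g = 2.0                          # assumed effective g-factor
B0 = 14.0                        # magnetic field, T
hbar_Omega = g * mu_B * B0       # EPR quantum, eV (~1.6e-3 eV)

Delta_0 = 1.5                    # eV
Dtilde = Delta_0                 # assumption: tilde(Delta) of order Delta_0
cos_theta = 0.4                  # measured spin polarization at 14 T, 4.2 K
g_O = 0.025                      # optical MChD anisotropy factor from Ref.[24]

g_T_lower = 2 * hbar_Omega * Dtilde / (Delta_0**2 * cos_theta) * g_O
print(f"g_T lower bound ~ {g_T_lower:.1e}")    # ~1e-4, consistent with the estimate above
```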
Substituting the expression for \(P_{E1M1}^{D/L,O}(\omega=\Delta_{0}/\hbar)\) in the above equation and using Eq.(12) we arrive at the equalities, \[C^{D/L}\delta\ =\frac{3\hbar^{2}m_{e}\omega_{0}^{3}\tilde{\Delta}\Gamma^{ \prime}\alpha_{A}}{2(\bar{n}^{2}+2)\rho\mu_{0}\mu_{B}^{2}\Delta_{0}\cos\theta},\quad g_{T}^{D/L}\ =\frac{c\,\hbar^{3}\Gamma^{\prime}\Omega\tilde{\Delta} \alpha_{A}}{2\Delta_{0}^{3}\mu_{0}\mu_{B}^{2}\rho\cos\theta}. \tag{15}\] Substituting the experimental values for all the variables in Eq.(15), for \(B_{0}=14\) T at a temperature of 4.2 K, with \(\Gamma^{\prime}\approx 0.1\)eV and \(\bar{n}\approx 1.5\), we obtain \(g_{T}^{D/L}\approx 1.5\cdot 10^{-2}\), in agreement with our previous lower bound estimate. ## Appendix E Further comments on the Hamiltonian model Despite the success of our model to derive analytical estimates for the MChA factors, there is still room for improvement. In the first place, concerning the chiral Hamiltonian \(V_{C}\), it was written in terms of the local axis of the octahedral structure, \(x\), \(y\), \(z\), while it should be adapted to the crystal axis to account for the helical distribution of the active ions along the \(c\)-axis. In fact, the experimental data on \(\alpha_{A}\) taken from the literature to estimate \(g_{T}^{D/L}\) consider \(\mathbf{B}_{0}\) along the \(c\)-axis. Also, the harmonic oscillator model, which is considered only distorted in the \(n=2,l=2\) level, may not be accurate enough to account for the intermediate transitions induced by the chiral potential to levels with \(n\neq 2\). Hence, a more accurate confining potential model, though less generic, can be obtained using a more detailed formulation of the crystal field and the JT distortion for the particular case of CsCuCl\({}_{3}\)-see, eg., Ref.[37]. Finally, our estimate of the unknown combination \(C^{D/L}\delta\) in terms of \(\alpha_{A}\) [Eq.(15)], involves \(\bar{n}\)-dependent factors [Eq.(12)], which account for effective incident fields, as well as \(\rho\)-dependent factors. For high densities and \(\bar{n}\approx 1.5\) those factors are likely to depend on near field terms and spatial correlations when evaluated at the absorption frequency [38].
2310.12676
Sideband Injection Locking in Microresonator Frequency Combs
Frequency combs from continuous-wave-driven Kerr-nonlinear microresonators have evolved into a key photonic technology with applications from optical communication to precision spectroscopy. Essential to many of these applications is the control of the comb's defining parameters, i.e., carrier-envelope offset frequency and repetition rate. An elegant and all-optical approach to controlling both degrees of freedom is the suitable injection of a secondary continuous-wave laser into the resonator onto which one of the comb lines locks. Here, we study experimentally such sideband injection locking in microresonator soliton combs across a wide optical bandwidth and derive analytic scaling laws for the locking range and repetition rate control. As an application example, we demonstrate optical frequency division and repetition rate phase-noise reduction to three orders of magnitude below the noise of a free-running system. The presented results can guide the design of sideband injection-locked, parametrically generated frequency combs with opportunities for low-noise microwave generation, compact optical clocks with simplified locking schemes and more generally, all-optically stabilized frequency combs from Kerr-nonlinear resonators.
Thibault Wildi, Alexander Ulanov, Nicolas Englebert, Thibault Voumard, Tobias Herr
2023-10-19T12:11:03Z
http://arxiv.org/abs/2310.12676v1
# Sideband Injection Locking in Microresonator Frequency Combs ###### Abstract Frequency combs from continuous-wave-driven Kerr-nonlinear microresonators have evolved into a key photonic technology with applications from optical communication to precision spectroscopy. Essential to many of these applications is the control of the comb's defining parameters, i.e., carrier-envelope offset frequency and repetition rate. An elegant and all-optical approach to controlling both degrees of freedom is the suitable injection of a secondary continuous-wave laser into the resonator onto which one of the comb lines locks. Here, we study experimentally such sideband injection locking in microresonator soliton combs across a wide optical bandwidth and derive analytic scaling laws for the locking range and repetition rate control. As an application example, we demonstrate optical frequency division and repetition rate phase-noise reduction to three orders of magnitude below the noise of a free-running system. The presented results can guide the design of sideband injection-locked, parametrically generated frequency combs with opportunities for low-noise microwave generation, compact optical clocks with simplified locking schemes and more generally, all-optically stabilized frequency combs from Kerr-nonlinear resonators. ## I Introduction Continuous-wave (CW) coherently-driven Kerr-nonlinear resonators can create temporally structured waveforms that circulate stably without changing their temporal or spectral intensity profile. The out-coupled optical signal is periodic with the resonator roundtrip time \(T_{\text{rep}}\) and corresponds to an optical frequency comb [1; 2; 3; 4; 5], i.e. a large set of laser frequencies spaced by the repetition rate \(f_{\text{rep}}=T_{\text{rep}}^{-1}\). One important class of such stable waveforms are CW-driven dissipative Kerr-solitons (DKSs), which have been observed in fiber-loops [6], traveling- and standing-wave microresonators [7; 8] and free-space cavities [9]. In microresonators these soliton microcombs [10] provide access to low-noise frequency combs with ultra-high repetition rates up to THz frequencies, enabling novel applications in diverse fields including optical communication [11; 12], ranging [13; 14; 15], astronomy [16; 17], spectroscopy [18], microwave photonics [19; 20], and all-optical convolutional neural networks [21]. In a CW-driven microresonator, the comb's frequency components are defined by \(f_{\mu}=f_{\text{p}}+\mu f_{\text{rep}}\), where \(f_{\text{p}}\) denotes the frequency of the central comb line and \(\mu\) is the index of the comb line with respect to the central line (\(\mu\) is also used to index the resonances supporting the respective comb lines). For many applications [4; 5], it is essential to control both degrees of freedom in the generated frequency comb spectra, i.e. the repetition rate \(f_{\text{rep}}\) and the central frequency \(f_{\text{p}}\) (which together define the comb's carrier-envelope offset frequency). Consequently, for Kerr-resonator based combs, \(f_{\text{p}}\) is defined by the pump laser frequency \(f_{\text{p}}=\omega_{\text{p}}/(2\pi)\). However, the repetition rate \(f_{\text{rep}}\) depends on the resonator and is subject to fundamental quantum mechanical as well as environmental fluctuations. 
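The lever-arm role of the line index is worth making explicit before introducing the secondary laser. The sketch below uses the pump frequency and line spacing of the device described in the Results section; the 1 kHz change in \(f_{\text{rep}}\) is an arbitrary illustration, not a measured value.

```python
# Each comb line sits at f_mu = f_p + mu*f_rep, so a change delta_f_rep of the
# repetition rate moves line mu by mu*delta_f_rep (and, conversely, pinning a line
# far from the pump constrains f_rep through the same lever arm).
f_p = 192.5e12        # pump / central line frequency, Hz (device value quoted below)
f_rep = 300e9         # repetition rate, Hz (~the 300 GHz FSR of the resonator)
delta_f_rep = 1e3     # assumed repetition-rate change, Hz

for mu in (1, 13, 42):
    f_mu = f_p + mu * f_rep
    print(f"mu = {mu:3d}: f_mu = {f_mu / 1e12:7.3f} THz, "
          f"moves by {mu * delta_f_rep / 1e3:4.0f} kHz per 1 kHz change in f_rep")
```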
A particularly attractive and all-optical approach to controlling \(f_{\text{rep}}\) is the injection of a secondary CW laser of frequency \(\omega^{\prime}\) into the resonator, demonstrated numerically [22] and experimentally [23]. If \(\omega^{\prime}\) is sufficiently close to one of the free-running comb lines (sidebands) \(f_{\mu}\approx\omega^{\prime}/(2\pi)\), i.e., within _locking range_, the comb will lock onto the secondary laser, so that \(f_{\mu}\rightarrow\omega^{\prime}/(2\pi)\). The repetition rate is then \(f_{\text{rep}}=(\omega_{\text{p}}-\omega^{\prime})/(2\pi\mu^{\prime})\), with \(\mu^{\prime}\) denoting the index of the closest resonance to which the secondary laser couples, cf. Fig. 1a. This frequency division [24] of the frequency interval defined by the two CW lasers (as well as their relative frequency noise) by the integer \(\mu^{\prime}\) can give rise to a low-noise repetition rate \(f_{\text{rep}}\). In previous work, sideband injection locking has been leveraged across a large range of photonic systems, including for parametric seeding [25; 26], dichromatic pumping [27], optical trapping [22; 28; 29], synchronization of solitonic and non-solitonic combs [30; 31], soliton crystals [23], soliton time crystals [32], multi-color solitons [33] and optical clockworks by injection into a DKS dispersive wave [34]. Related dynamics also govern the self-synchronization of comb states [35; 36], the binding between solitons [37], modified soliton dynamics in the presence of Raman-effect [38] and avoided mode-crossings [39], as well as the respective interplay between co- [40] and counter-propagating solitons [41; 42; 43] and multi-soliton state-switching [44]. Moreover, sideband injection locking is related to modulated and pulsed driving for broadband stabilized combs [45; 17; 46], as well as spectral purification and non-linear filtering of microwave signals [47; 48] via DKS. Despite the significance of sideband injection locking, a broadband characterization and quantitative understanding of its dependence on the injecting laser are lacking, making the design and implementation of such systems challenging. In this work, we study the dynamics of sideband injection locking with DKS combs. Our approach leverages high-resolution coherent spectroscopy of the microresonator under DKS operation, enabling precise mapping of locking dynamics across a large set of comb modes, including both the central region and wing of the comb. We derive the sideband injection locking range's dependence on experimentally accessible parameters and find excellent agreement with the experimental observation and with numeric simulation. Specifically, this includes the square dependence on the mode number, the square-root dependence on injection laser and DKS spectral power, as well as, the associated spectral shifts. In addition, we demonstrate experimentally optical frequency division and repetition rate phase-noise reduction in a DKS state to three orders of magnitude below the noise of a free-running system. ## II Results To first explore the sideband injection locking dynamics experimentally, we generate a single DKS state in a silicon nitride ring-microresonator. 
In the fundamental TE modes, the resonator is characterized by a quality factor of \(Q\approx 2\) million (linewidth \(\kappa/(2\pi)\approx 100\) MHz), a free-spectral range (FSR) of \(D_{1}/(2\pi)=300\) GHz and exhibits anomalous group velocity dispersion \(D_{2}/(2\pi)=9.7\) MHz so that the resonance frequencies are well-described by \(\omega_{\mu}=\omega_{0}+\mu D_{1}+\mu^{2}\frac{D_{2}}{2}\) (\(1.6\times 0.8\) um\({}^{2}\) cross-section, \(76\) um radius). To achieve deterministic single soliton initiation, the microresonator's inner perimeter is weakly corrugated [49, 50]. The resonator is critically coupled and driven by a CW pump laser (\(\sim\)300 kHz linewidth) with on-chip power of 200 mW at 1557 nm (pump frequency \(\omega_{\mathrm{p}}/(2\pi)=192.5\) THz) [7]. The generated DKS has a 3 dB bandwidth of approximately 5.2 THz (cf. Fig. 2a) corresponding to a bandwidth limited pulsed of \(\sim\)60 fs duration. The soliton spectrum closely follows a _sech\({}^{2}\)_ envelope and is free of dispersive waves or avoided mode crossings. The spectral center of the soliton does not coincide with the pump laser but is slightly shifted towards longer wavelengths due to the Raman self-frequency shift [51, 52]. A secondary CW laser (\(\sim\)300 kHz linewidth), tunable both in power and frequency (and not phase-locked in any way to the first CW laser), is then combined with the pump laser upstream of the microresonator and scanned across the \(\mu^{\prime}\)th sideband of the soliton microcomb, as illustrated in Fig. 2a. The spectrogram of the repetition rate signal recorded during this process is shown in Fig. 2b, for \(\mu^{\prime}=-13\), and exhibits the canonical signature of locking oscillators [53] (cf. Supplementary Information (SI), Section 2 for details on the measurement of \(f_{\mathrm{rep}}\)). Specifically, the soliton repetition rate \(f_{\mathrm{rep}}\) is observed to depend linearly on the auxiliary laser frequency \(\omega^{\prime}\) over a locking range \(\delta_{\mathrm{lock}}\) following \(f_{\mathrm{rep}}=\frac{1}{2\pi}\frac{\omega_{\mathrm{p}}-\omega^{\prime}}{ \mu^{\prime}}\). Within \(\delta_{\mathrm{lock}}\), the soliton comb latches onto the auxiliary laser, such that the frequency of the comb line with index \(\mu^{\prime}\) is equal to the secondary laser frequency. The locking behavior is found to be symmetric with respect to the scanning direction, and Figure 2: **Soliton sideband injection locking.****a**, Single DKS comb spectrum, following a _sech\({}^{2}\)_ envelope, with a full-width-at-half-maximum (FWHM) of 5.2 THz, corresponding to a \(\sim\)60 fs pulse. The secondary laser is introduced in the spectral wing of the soliton and scanned across the \(\mu^{\prime}\)th sideband. **b**, Repetition rate beat observed while the secondary laser is scanned across the \(\mu^{\prime}\)th sideband. The locking bandwidth corresponds to the region of linear evolution of the repetition rate beat beatone. **c**, Spectra of two sideband injection-locked DKS states from either end of the locking range, exhibiting a differential spectral shift of 860 GHz. Note that a filter blocks the central pump component \(\omega_{\mathrm{p}}\). Figure 1: **Principles of sideband injection locking.****a**, In a free-running comb, the central comb line is defined by the pump laser around which equidistant comb lines, spaced by the free-running repetition rate \(f_{\mathrm{rep}}^{\mathrm{q}}\), are formed. 
If a secondary injection laser of frequency \(\omega^{\prime}\) is brought close to one of the comb lines (within injection locking range), then the comb locks to the injecting laser, modifying the repetition rate as indicated. **b** Outside the locking range, \(f_{\mathrm{rep}}=f_{\mathrm{rep}}^{\mathrm{0}}\) is unaffected by the secondary laser. Inside the locking range, it follows a characteristic tuning behavior with a linear dependence on the injecting laser frequency \(\omega^{\prime}\). no hysteresis is observed. Figure 2c shows the optical spectra of two sideband injection-locked DKS states, with the secondary laser positioned close to the respective boundaries of the locking range. A marked shift of the spectrum of 860 GHz is visible when going from one state to the other. As we will discuss below and in the SI, Section 3.2, the spectral shift in the presence of non-zero group velocity dispersion modifies the soliton's group velocity and provides a mechanism for the DKS to adapt to the repetition rate imposed by the driving lasers. Having identified characteristic features of sideband injection locking in our system, we systematically study the injection locking range and its dependence on the mode number \(\mu^{\prime}\) to which the secondary laser is coupled. To this end, a frequency comb calibrated scan [54] of the secondary laser's frequency \(\omega^{\prime}\) across many DKS lines is performed. The power transmitted through the resonator coupling waveguide is simultaneously recorded. It contains the \(\omega^{\prime}\)-dependent transmission of the secondary laser as well as the laser's heterodyne mixing signal with the DKS comb, which permits retrieving the locking range \(\delta_{\text{lock}}\). Figure 3a shows an example of the recorded transmission signal where the scanning laser's frequency \(\omega^{\prime}\) is in the vicinity of the comb line with index \(\mu^{\prime}=-3\). When the laser frequency \(\omega^{\prime}\) is sufficiently close to the DKS comb line, the heterodyne oscillations (blue trace) can be sampled; when \(\omega^{\prime}\) is within the locking range \(\delta_{\text{lock}}\), the heterodyne oscillations vanish, and a linear slope is visible, indicating the changing Figure 3: **DKS sideband injection locking dynamics.****a**, Transmission obtained when the secondary laser frequency \(\omega^{\prime}\) is scanned in the vicinity of comb line \(\mu^{\prime}=-3\). The trace contains features indicating the position of the microresonator resonance frequency \(\omega_{-3}/(2\pi)\) and of the soliton comb line frequency \(f_{-3}\) as well as the sideband injection locking range (see main text for details). **b**, Similar to **a** but for all \(\mu^{\prime}\) that can be reached by the scanning laser frequency \(\omega^{\prime}\). In this representation, the resonance frequencies form a quadratic integrated dispersion profile (due to anomalous dispersion) while the equidistant soliton microcomb lines (highlighted in gray and expanded in panel **b**) form a straight line, enabling retrieval of pump laser detuning and microcomb repetition rate (see main text for details). **c**, Zoom into **b**, focusing on the vicinity of the comb lines. The spectral dependence of the locking range can be observed (cf. panel **a** and see main text for details). **d**, Locking range as a function of the relative mode number \(\mu^{\prime}\). The measured data closely follows the predicted scaling (cf. main text). 
The grey area indicates the uncertainty we expect from 10% detuning fluctuations during the recording procedure. **e**, Locking range in terms of the repetition rate \(f_{rep}\) for \(\mu^{\prime}=-13\) as a function of secondary pump power (estimated on-chip power). Analogous to **d**, the uncertainty is approx. \(\pm 4\%\). phase between the comb line and the secondary laser across the injection locking range. In addition to the heterodyne signal between the comb line and laser, a characteristic resonance feature, the so-called \(C\)-resonance [55; 56], representing (approximately) the resonance frequency \(\omega_{\mu}\) is observed. The set of equivalent traces for all comb lines \(\mu^{\prime}\) in the range of the secondary (scanning) laser is presented in Fig. 3b as a horizontal stack. For plotting these segments on a joint vertical axis, \(\omega_{\rm p}+\mu^{\prime}D_{1}\) has been subtracted from \(\omega^{\prime}\). In this representation, the parabolic curve (blue line in Fig. 3b) connecting the \(C\)-resonances signifies the anomalous dispersion of the resonator modes \(\omega_{\mu}\). In contrast, the equidistant comb lines form a straight feature (grey highlight), of which a magnified view is presented in Fig. 3c. Due to the Raman self-frequency shift, the free-running repetition rate of the DKS comb \(f_{\rm rep}^{0}\) is smaller than the cavity's FSR \(D_{1}/(2\pi)\), resulting in the negative tilt of the line. Here, to obtain a horizontal arrangement of the features, \(\omega_{\rm p}+\mu^{\prime}2\pi f_{\rm rep}^{0}\) has been subtracted from \(\omega^{\prime}\). The locking range \(\delta_{\rm lock}\) corresponds to the vertical extent of the characteristic locking feature in Fig. 3c. Its value is plotted as a function of the mode number in Figure 3d, revealing a strong mode number dependence of the locking range with local maxima (almost) symmetrically on either side of the central mode. The asymmetry in the locking range with respect to \(\mu^{\prime}=0\) (with a larger locking range observed for negative values of \(\mu^{\prime}\)) coincides with the Raman self-frequency shift of the soliton spectrum (higher spectral intensity for negative \(\mu\)). Next, we keep \(\mu^{\prime}\) fixed and measure the dependence of \(\delta_{\rm lock}\) on the power of the injecting laser \(P^{\prime}\). As shown in Fig. 3e, we observe an almost perfect square-root scaling \(\delta_{\rm lock}\propto\sqrt{P^{\prime}}\), revealing the proportionality of the locking range to the strength of the injected field. The observed scaling of the locking range may be understood in both the time and frequency domain. In the time domain, the beating between the two driving lasers creates a modulated background field inside the resonator, forming an optical lattice trap for DKS pulses [22; 28]. Here, to derive the injection locking range \(\delta_{\rm lock}\), we extend the approach proposed by Taheri et al. [26], which is based on the momentum \(p=\sum_{\mu}\mu|a_{\mu}|^{2}=\mu\sum_{\mu}|a_{\mu}|^{2}\) of the waveform (in a co-moving frame), where \(a_{\mu}\) is the complex field amplitude in the mode with index \(\mu\), normalized such that \(|a_{\mu}|^{2}\) scales with the photon number and \(\bar{\mu}\) the _photonic center of mass_ in mode number/photon momentum space. As we show in the SI, Section 3.3, the secondary driving laser modifies the waveform's momentum, thereby its propagation speed and repetition rate. 
For the locking range of the secondary laser, we find \[\delta_{\rm lock}=\frac{2}{\pi}\mu^{\prime 2}\eta D_{2}\,\frac{\sqrt{P^{\prime}P_{ \mu^{\prime}}}}{\sum_{\mu}P_{\mu}}\frac{\omega_{\rm p}}{\omega_{\mu^{\prime}}}, \tag{1}\] and for the repetition rate tuning range \[\delta f_{\rm rep}=\delta_{\rm lock}/|\mu^{\prime}|, \tag{2}\] where \(\eta\) is the coupling ratio, and the \(P_{\mu}\) refer to the spectral power levels of the comb lines with index \(\mu\) measured outside the resonator. The spectral shift of the spectrum in units of mode number \(\mu\) is \(2\pi\delta f_{\rm rep}/D_{2}\). In the SI, Section 1, we recast Eq. 2 in terms of the injection ratio \({\rm IR}=P^{\prime}/P_{\mu^{\prime}}\) to enable comparison with CW laser injection locking [57]. The results in Eqs. 2 and 1 may also be obtained in a frequency domain picture (see SI, Section 3.4), realizing that the waveform's momentum is invariant under Kerr-nonlinear interaction (neglecting the Raman effect) and hence entirely defined by the driving lasers and the rate with which they inject photons of specific momentum into the cavity (balancing the cavity loss). If only the main pump laser is present, then \(\bar{\mu}=0\). However, in an injection-locked state, depending on phase, the secondary pump laser can coherently inject (extract) photons from the resonator, shifting \(\bar{\mu}\) towards (away from) \(\mu^{\prime}\). This is equivalent to a spectral translation of the intracavity field, consistent with the experimental evidence in Fig. 2c. To verify the validity of Eq. 1 and 2, we perform numeric simulation (SI, Section 4) based on the Lugiato-Lefever Equation (LLE) (see SI, Section 3.3). We find excellent agreement between the analytic model and the simulated locking range. We note that Eq. 1 and 2 are derived in the limit of low injection power, which we assume is the most relevant case. For large injection power, the spectrum may shift substantially and consequently affect the values of \(P_{\mu}\). Interestingly, while this effect leads to an asymmetric locking range, the extent of the locking range is only weakly affected as long as the spectrum can locally be approximated by a linear function across a spectral width comparable to the shift. Injection into a sharp spectral feature (dispersive wave) is studied by Moille et al. [34] The values of \(P_{\mu}\) do not generally follow a simple analytic expression and can be influenced by the Raman effect and higher-order dispersion. While our derivation accounts for the values of \(P_{\mu}\) (e.g., for the Raman effect \(a_{\mu}\) and \(P_{\mu}\) are increased (reduced) for \(\mu\) below (above) \(\mu=0\)), it does not include a physical description for Raman- or higher-order dispersion effects; these effects may further modify the locking range. Taking into account the spectral envelope of the DKS pulse as well as the power of the injecting laser (which is not perfectly constant over its scan bandwidth), we fit the scaling \(\delta_{\rm lock}\propto\mu^{\prime 2}\sqrt{P^{\prime}P_{\mu^{\prime}}}\) to the measured locking range in Fig. 3d, where we assume \(P_{\mu^{\prime}}\) to follow an offset (Raman-shifted) \({\rm sech}^{2}\)-function. The fit and the measured data are in excellent agreement, supporting our analysis and suggesting that the Raman shift does not significantly change the scaling behavior. Note that the effect of the last factor in Eq. 1 is marginal, and the asymmetry in the locking range is due to the impact of the Raman effect on \(P_{\mu}\). 
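To make the scaling of Eq. (1) concrete, the sketch below evaluates it for a sech\({}^{2}\)-shaped comb. The dispersion \(D_{2}/(2\pi)=9.7\) MHz and critical coupling (taken here as \(\eta=1/2\)) correspond to the device described above; the per-line comb powers, the injected power and the convention of quoting \(\delta_{\text{lock}}\) in ordinary (rather than angular) frequency are assumptions made for illustration only.

```python
import numpy as np

# Illustrative evaluation of Eq. (1):
#   delta_lock = (2/pi) * mu'^2 * eta * D2 * sqrt(P' * P_mu') / sum(P_mu) * omega_p/omega_mu'
# and Eq. (2): delta_f_rep = delta_lock / |mu'|.
D2 = 2 * np.pi * 9.7e6        # anomalous dispersion, rad/s
eta = 0.5                     # coupling ratio (critical coupling assumed)
omega_ratio = 1.0             # omega_p / omega_mu' ~ 1 for lines near the pump

mu = np.arange(-60, 61)
P_mu = 1e-6 / np.cosh(mu / 20.0)**2    # assumed sech^2 comb envelope, W per line
P_total = P_mu.sum()

def locking_range_hz(mu_prime, P_inj):
    """Eq. (1), quoted here in Hz by dividing the angular result by 2*pi (a convention choice)."""
    P_line = P_mu[mu == mu_prime][0]
    d_lock = (2 / np.pi) * mu_prime**2 * eta * D2 * np.sqrt(P_inj * P_line) / P_total * omega_ratio
    return d_lock / (2 * np.pi)

for mu_p in (-3, -13, -42):
    dl = locking_range_hz(mu_p, P_inj=10e-6)       # assumed 10 uW injected
    print(f"mu' = {mu_p:4d}: delta_lock ~ {dl/1e6:7.2f} MHz, "
          f"delta_f_rep ~ {dl/abs(mu_p)/1e6:5.2f} MHz")
```

With these made-up power levels the quadratic growth with \(\mu^{\prime}\) and the square-root dependence on the injected power are already apparent; only the absolute numbers depend on the assumed envelope.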
It is worth emphasizing that our analysis did not assume the intracavity waveform to be a DKS state and we expect that the analytic approach can in principle also be applied to other stable waveforms, including those in normal dispersion combs [31; 58]. Indeed, as we show numerically in the SI, Section 4, sideband-injection locking is also possible for normal dispersion combs. Here, in contrast to a DKS, sideband laser injection is found to have a strong impact on the spectral shape (not only spectral shift). Therefore, although the underlying mechanism is the same as in DKS combs, Eq. 1 and Eq. 2 do not generally apply (in the derivation, it is assumed that the spectrum does not change substantially). Finally, as an example application of sideband injection locking, we demonstrate optical frequency division, similar to previous work [34], and measure the noise reduction in \(f_{\text{rep}}\) (Fig. 4a). With a growing separation between the two driving lasers (increasing \(\mu^{\prime}\)), the phase noise is lowered by a factor of \(\mu^{\prime 2}\), resulting in a phase noise reduction of more than 3 orders of magnitude (with respect to the free-running case) when injecting the secondary laser into the mode with index \(\mu^{\prime}=42\) (limited by the tuning range of the secondary laser), and this without any form of stabilization of either the pump or secondary laser. Fig. 4b and c compare the repetition rate beatnote of the free-running and injection-locked cases. ## III Conclusion In conclusion, we have presented an experimental and analytic study of sideband injection locking in DKS microcombs. The presented results reveal the dependence of the locking range on the intracavity spectrum and on the injecting secondary laser, with an excellent agreement between experiment and theory. While our experiments focus on the important class of DKS states, we emphasize that the theoretical framework from which we derive the presented scaling laws is not restricted to DKSs and may potentially be transferred to other stable waveforms. Our results provide a solid basis for the design of sideband injection-locked, parametrically generated Kerr-frequency combs and may, in the future, enable new approaches to low-noise microwave generation, compact optical clocks with simplified locking schemes, and more generally, stabilized low-noise frequency comb sources from Kerr-nonlinear resonators. ## Data Availability Statement The data supporting this study's findings are available from the corresponding author upon reasonable request. ## Funding This project has received funding from the European Research Council (ERC) under the EU's Horizon 2020 research and innovation program (grant agreement No 853564) and through the Helmholtz Young Investigators Group VH-NG-1404; the work was supported through the Maxwell computational resources operated at DESY.
2308.07757
A Scalable Formal Verification Methodology for Data-Oblivious Hardware
The importance of preventing microarchitectural timing side channels in security-critical applications has surged in recent years. Constant-time programming has emerged as a best-practice technique for preventing the leakage of secret information through timing. It is based on the assumption that the timing of certain basic machine instructions is independent of their respective input data. However, whether or not an instruction satisfies this data-independent timing criterion varies between individual processor microarchitectures. In this paper, we propose a novel methodology to formally verify data-oblivious behavior in hardware using standard property checking techniques. The proposed methodology is based on an inductive property that enables scalability even to complex out-of-order cores. We show that proving this inductive property is sufficient to exhaustively verify data-obliviousness at the microarchitectural level. In addition, the paper discusses several techniques that can be used to make the verification process easier and faster. We demonstrate the feasibility of the proposed methodology through case studies on several open-source designs. One case study uncovered a data-dependent timing violation in the extensively verified and highly secure IBEX RISC-V core. In addition to several hardware accelerators and in-order processors, our experiments also include RISC-V BOOM, a complex out-of-order processor, highlighting the scalability of the approach.
Lucas Deutschmann, Johannes Mueller, Mohammad Rahmani Fadiheh, Dominik Stoffel, Wolfgang Kunz
2023-08-15T13:19:17Z
http://arxiv.org/abs/2308.07757v2
# A Scalable Formal Verification Methodology ###### Abstract The importance of preventing microarchitectural timing side channels in security-critical applications has surged in recent years. Constant-time programming has emerged as a best-practice technique for preventing the leakage of secret information through timing. It is based on the assumption that the timing of certain basic machine instructions is independent of their respective input data. However, whether or not an instruction satisfies this data-independent timing criterion varies between individual processor microarchitectures. In this paper, we propose a novel methodology to formally verify data-oblivious behavior in hardware using standard property checking techniques. The proposed methodology is based on an inductive property that enables scalability even to complex out-of-order cores. We show that proving this inductive property is sufficient to exhaustively verify data-obliviousness at the microarchitectural level. In addition, the paper discusses several techniques that can be used to make the verification process easier and faster. We demonstrate the feasibility of the proposed methodology through case studies on several open-source designs. One case study uncovered a data-dependent timing violation in the extensively verified and highly secure IBEX RISC-V core. In addition to several hardware accelerators and in-order processors, our experiments also include RISC-V BOOM, a complex out-of-order processor, highlighting the scalability of the approach. Data-Oblivious Computing, Formal Verification, Hardware Security, Constant-Time Programming ## I Introduction In recent years, the view on hardware (HW) as a root-of-trust has been severely damaged. A flood of new security vulnerabilities renewed the focus on microarchitectural side channels. Both software (SW) and HW communities have proposed numerous countermeasures to these new security gaps. However, to fully meet the stringent demands of security-critical applications, a holistic combination of multiple countermeasures is needed that takes consideration of the entire system stack. The most prominent SW paradigm that tries to mitigate microarchitectural timing side channels is known as _data-oblivious_, or _constant-time_, _programming_[1, 2, 3, 4]. It works by the assumption that the timing and the resource usage of certain operations inside a processor are independent of their respective input data. Constant-time programming is an actively studied discipline with many important contributions from the SW community, including open-source libraries [5, 6, 3], domain-specific languages [7], verification tools [8, 9, 10, 2, 11] and dedicated compilers [12, 13]. The term "constant-time" can, however, be misleading, as there is no need for execution times to be _constant_. A variable operation timing is acceptable, as long as it depends only on public information. For example, in constant-time programming, the program itself is considered public information. With this assumption in mind, consider a Read-After-Write (RAW) hazard in a pipelined processor causing a stall. The resulting change in instruction timing is legal because it is based on the _public_ sequence of instructions, not on its (potentially confidential) operands. The data-oblivious subset of a processor's instruction set is often referred to as the set of _oblivious HW primitives_. 
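The notion of an oblivious HW primitive can be made concrete with a toy timing model; the multi-cycle multiplication discussed in the next paragraph is sketched below behaviorally. The latencies and the zero-operand shortcut are assumptions chosen for illustration and do not describe any particular core.

```python
# Toy behavioral sketch (not any real design): cycle count of a 32-cycle shift-add
# multiplier, with and without a zero-operand early-exit optimization.
def mul_cycles(a, b, early_exit):
    if early_exit and (a == 0 or b == 0):
        return 1                      # data-dependent shortcut: finish after one cycle
    return 32                         # fixed latency: one iteration per operand bit

for secret in (0, 1, 0xDEADBEEF):
    base = mul_cycles(secret, 12345, early_exit=False)
    opt = mul_cycles(secret, 12345, early_exit=True)
    print(f"secret operand {secret:#12x}: baseline {base} cycles, optimized {opt} cycles")
```

The baseline variant always takes 32 cycles and would qualify as an oblivious primitive; the optimized variant reveals through its completion time whether the secret operand is zero, even though the executed instruction sequence is identical.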
More complex instructions, which could potentially be insecure, are decomposed into these primitives to ensure a data-oblivious behavior. As a simple example, consider a processor which implements a multi-cycle multiplication. A common optimization in a HW multiplier is to check whether one of the operands is zero and, if so, produce the result after a single cycle. This creates a data-dependent timing and a possible side channel if the operand contains confidential information. Instead of issuing a multiplication, constant-time programming would therefore resort to replacing the multiply instruction by a sequence of primitive instructions like _add_ and _shift_. However, there is no guarantee that even these simple instructions are actually _oblivious_ HW primitives. Wether or not an instruction fulfills the criterion of data-independent timing can vary between the implemented microarchitectures. Yet, there is only little research on how to verify these assumptions at the microarchitectural level [14, 15, 16, 17]. To make things worse, a recent survey [18] highlighted seven classes of microarchitectural optimizations that all undermine the constant-time paradigm. While exploiting some of these optimizations in commodity processors may seem unrealistic, an attack called Augury [19] demonstrated that this threat is not only a theoretical one. The work shows the security implication of one of these optimizations, namely data memory-dependent prefetchers, as they are present in modern Apple processors. In light of more and more such advanced optimizations being implemented, the question arises as to how we can restore the trust in HW. To this end, we propose a novel methodology for proving data-oblivious execution of HW operations using standard formal property checking techniques. For processors, the approach certifies a set of trusted HW primitives for data-oblivious programming. Our results show that even simple instructions like a logical shift might suffer from an unexpected timing variability. We also found a potential, and preventable, timing vulnerability in the extensively verified Ibex RISC-V core [20]. Furthermore, we extend the approach to out-of-order cores, demonstrating its feasibility and scalability with an experiment on the Berkeley Out-of-Order Machine (BOOM) [21]. In summary, this paper makes the following contributions: * We provide a comprehensive overview of related approaches that aim to ensure data-obliviousness building upon an analysis at the HW level. * We introduce a definition for data-oblivious execution at the microarchitectural level and formalize it using the notion of a 2-safety property over infinite traces. Since most HW designs are not built for data-obliviousness, we also introduce a weaker notion of _input-constrained_ data-obliviousness for general designs. In order to make these properties verifiable in practice, we present how the definitions over infinite traces can be transformed to inductive properties that span over only a single clock cycle. * We propose a formal verification methodology, called _Unique Program Execution Checking for Data-Independent Timing (UPEC-DIT)_[17], that can exhaustively detect any violation of data-obliviousness at the HW level. The methodology is based on standard formal property languages, and can therefore be easily integrated into existing formal verification flows. 
When applied to processor implementations, UPEC-DIT can be used to qualify the instructions of a microarchitectural ISA implementation regarding their data-obliviousness. These data-oblivious instructions constitute HW primitives for countermeasures such as constant-time programming and can therefore serve as a HW root-of-trust for the higher levels of the system stack. * We present several optimization techniques, such as black-boxing, proofs over an unrolled model, and cone-of-influence reduction, which can further increase the scalability and usability of the proposed methodology. * We demonstrate the feasibility of our approach through case studies on multiple open-source Register-Transfer Level (RTL) designs. Besides several HW accelerators and in-order processors, our experiments also cover the BOOM [21], a superscalar RISC-V processor with floating-point (FP) support, a deep 10-stage pipeline and out-of-order execution. ## II Related Work One of the first works that aims to formally verify data-obliviousness on the microarchitectural level is Clepsydra [14]. In their approach, the authors instrument the HW with Information Flow Tracking (IFT) [22] in such a way that it tracks not only explicit but also implicit timing flows. The verification engineer is then able to perform simulation, emulation or formal verification to verify timing flow properties on the instrumented design. We believe that the ability to utilize a simulation-based approach can be particularly useful when dealing with very large designs. The experiments conducted using formal verification in Clepsydra are, however, restricted to individual functional units, e.g., cryptographic cores. The additional logic added to monitor timing flows introduces a complexity overhead within the design that can be prohibitively large when trying to formally verify data-independent timing in commercial-sized processors. Other tools that formally verify data-obliviousness at the RTL are XENON [16] and its predecessor IODINE [15]. These tools build their own tool chain and annotation system, utilizing open-source libraries and special-purpose solvers. While this approach is an important contribution to restoring trust in HW, its dependency on special-purpose solvers may be an obstacle to adoption in industry. In contrast, in our proposed methodology, we use standard SystemVerilog Assertions (SVA) combined with state-of-the-art SAT-based formal property checking tools. Our goal is to complement existing formal workflows, creating a synergy between functional and security verification. Furthermore, although XENON makes significant performance improvements over its predecessor, it may face complexity hurdles when dealing with commercial microarchitectures. With the work proposed in this paper, we aim to significantly improve the scalability of formal security verification and, for the first time, present a method that is applicable to large processors featuring out-of-order execution. A related line of research aims to establish a formally defined relationship between HW and SW by formulating and verifying so-called HW/SW contracts [23]. Similar to [17], they can be used to prove data-independent timing on an instruction-level granularity. However, current experiments only cover in-order processor designs up to a pipeline depth of three stages. A promising and ongoing related work named _TransmitSynth_[24, 25] maps such a contract to a verifiable SVA property in order to automatically detect data-dependent side effects. 
_TransmitSynth_ can enumerate microarchitectural execution paths for a given instruction under verification. This allows for a fine-grained categorization of leakage scenarios. However, the workflow includes a manual annotation of so-called _Performing Locations_ (PLs), which are identifiers that mark an instruction execution path. Correctly marking these PLs requires some knowledge of the underlying design and could be increasingly difficult with more complex systems, especially for deep out-of-order processors like BOOM. Other work pursues augmenting the Instruction Set Architecture (ISA) with information about the data-obliviousness of instructions. In fact, both Intel [26] and ARM [27] have recently added support for instructions with data-independent timing. With the same goal in mind, RISC-V [28] has just ratified the _Cryptography Extension for Scalar & Entropy Source Instructions_[29]. A subset of this extension, denoted _Zkt_, requires a processor to implement certain instructions of standard extensions with a data-independent execution latency. This _ISA contract_ provides the programmer with a safe subset of instructions that can be used for constant-time programming. Another work on a _Data-Oblivious ISA Extension (OISA)_[30, 31] proposes to refine the ISA with information about the data-obliviousness of each instruction. The authors then develop hardware support for this to track whether confidential information can reach unsafe operands. The method proposed here is complementary to the research efforts on such architectures as it provides a tool to verify their security. ## III Theoretical Foundation In this section, we introduce a formal notation that we will use throughout this paper (Sec. III-A). We formally define data-obliviousness at the microarchitectural level (Sec. III-B) and then develop a weaker definition that is suitable for general circuits not specifically designed for data-obliviousness (Sec. III-C). In order to ensure scalability for more complex designs, we translate these definitions, which are formulated over infinite traces, into 1-cycle inductive properties (Sec. III-D). We prove that these inductive properties are equivalent to the corresponding definitions over infinite traces. To conclude this section, we address some interesting special cases (Sec. III-E). ### _Definitions_ We first introduce some general notations to reason about data-obliviousness as a HW property. We model a digital HW design as a standard finite state machine (FSM) of Mealy type, \(M=(I,S,S_{0},O,\delta,\lambda)\), with finite sets of input symbols \(I\), output symbols \(O\), states \(S\), initial states \(S_{0}\subseteq S\), transition function \(\delta:S\times I\mapsto S\) and output function \(\lambda:S\times I\mapsto O\). The interface sets \(I\), \(O\) and the state set \(S\) are encoded in (binary-valued) input variables \(X\), output variables \(Y\) and state variables \(Z\), respectively. A key observation is that, in a HW design, the timing of a module is dictated by its control behavior. Accordingly, we partition each interface set into two disjoint subsets in order to separate _control_ (\(C\)) from _data_ (\(D\)). We denote these sets as \(X_{C}\), \(X_{D}\), \(Y_{C}\) and \(Y_{D}\), with \[X_{C}\cup X_{D}=X;\quad X_{C}\cap X_{D}=\emptyset\] \[Y_{C}\cup Y_{D}=Y;\quad Y_{C}\cap Y_{D}=\emptyset\] In practice, this partitioning of the interface is straightforward and is usually done manually based on the specification of the design. 
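As a simple illustration of such a specification-driven partitioning, the port list of a hypothetical accelerator could be annotated as follows; the module and all signal names are purely illustrative placeholders and are not taken from any of the designs studied later.

```
// Hypothetical accelerator interface with a manual control/data partitioning.
// All names are illustrative; the comments record the sets X_C, X_D, Y_C, Y_D.
module acc_top (
  input  logic        clk_i,        // control (X_C)
  input  logic        rst_i,        // control (X_C)
  input  logic        req_valid_i,  // control (X_C): starts a new computation
  input  logic [63:0] operand_i,    // data    (X_D): potentially confidential
  output logic        rsp_valid_o,  // control (Y_C): observable handshake/timing
  output logic [63:0] result_o      // data    (Y_D)
);
  // ... design under verification ...
endmodule
```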
For example, the operands and the result of a functional unit are considered _data_, whereas any handshaking signals that trigger the start of a new computation or indicate that a provided result is valid belong to _control_. We further define a _trace_\(\tau=\{e_{0},e_{1},...\}\) to be a sequence of events, with an event \(e_{t}\) being a tuple \((i_{t},s_{t},o_{t})\), where \(i_{t}\) is the valuation of our design's input variables \(X\) at time point (clock cycle) \(t\), \(s_{t}\) is its state, as represented by the value of \(Z\) at \(t\) and \(o_{t}\) is the valuation of its output variables \(Y\) at \(t\). Let \(T\) be the set of all infinite traces of the design where \(s_{0}\in S_{0}\). We introduce the following definitions: * \(s(\tau)=\{s_{0},s_{1},...\}\) is the sequence of valuations to the design's state variables \(Z\) in the trace \(\tau\in T\). * \(i(\tau)=\{i_{0},i_{1},...\}\) is the sequence of valuations to the design's input variables \(X\) in the trace \(\tau\in T\). Likewise, \(i_{C}(\tau)\) is the sequence of valuations to \(X_{C}\) and \(i_{D}(\tau)\) is the sequence of valuations to \(X_{D}\). * \(o(\tau)=\{o_{0},o_{1},...\}\) is the sequence of valuations to the design's output variables \(Y\) in the trace \(\tau\in T\). Likewise, \(o_{C}(\tau)\) is the sequence of valuations to \(Y_{C}\) and \(o_{D}(\tau)\) is the sequence of valuations to \(Y_{D}\). * With a slight abuse of notation, we also allow the above functions to take single events as arguments, e.g., \(s(e_{t})=s_{t}\). * For any \(t\in\mathbb{N}_{0}\), the notation \(\tau[t]\) represents the event \(e_{t}\) at time point \(t\). For example, \(s(\tau[t])=s_{t}\) represents the valuation to the state variables in \(Z\) at time point \(t\). * Similarly, we define the notion [\(t..\)] as an infinite time interval beginning with and including \(t\). Accordingly, \(\tau[t..]\) represents an infinite subsequence of \(\tau\) that starts at and includes the time point \(t\). ### _Data-Obliviousness in HW_ Having established a basic notation, we proceed to defining data-oblivious behavior at the microarchitectural level. In our threat model, we assume that an attacker cannot access confidential information directly. The attacker is, however, able to observe the control signals of the design under attack, e.g., by monitoring bus transactions. This means that, for a HW module to be data-oblivious, the data used in the computations must not affect the operating time or cause other microarchitectural side effects. Based on this intuition we formulate a definition of the security feature under verification: **Definition 1** (_Data-Obliviousness_).: A HW module is called _Data-oblivious (DO)_ if the sequence of values of its control outputs \(Y_{C}\) is uniquely determined by the sequence of values at the control inputs \(X_{C}\). We can express Def. 1 formally as a 2-safety property over infinite traces. We formalize it as follows: For any two infinite traces running on the design whose starting states at time \(t\) are identical and which receive the same control input sequences from time point \(t\) on, data-obliviousness guarantees that the control outputs after \(t\) are also identical: \[\begin{split}\textbf{DO}\coloneqq&\ \forall\tau_{1},\tau_{2}\in T,\ \forall t\in\mathbb{N}_{0}:\\ & s(\tau_{1}[t])=s(\tau_{2}[t])\wedge i_{C}(\tau_{1}[t..])=i_{C}( \tau_{2}[t..])\\ &\ \Rightarrow o_{C}(\tau_{1}[t..])=o_{C}(\tau_{2}[t..])\end{split} \tag{1}\] ### _Input-Constrained Data-Obliviousness_ Def. 
1 of data-obliviousness is fairly straightforward. Put simply, it ensures that the control behavior of a HW design is independent of the data it processes. Unfortunately, this strict definition works only for HW that is carefully designed for data-obliviousness. In general, however, designs are not data-oblivious. A processor must be able to make decisions based on the data it is processing, for example, when it executes conditional _branch_ instructions. Constant-time programming tries to prevent data-dependent timing by excluding such instructions from the security-critical parts of the program. Data-obliviousness is achieved by restricting the program to only use a data-oblivious subset of the ISA. Consequently, in order to qualify a microarchitectural implementation for data-obliviousness, we require a separation between data-oblivious and non-data-oblivious operations at the HW level. This means, we must systematically identify and formally verify the _control input configurations_ under which the design operates data-independently. In practice, this requires constraining the possible input values to a legal subset that ensures data-obliviousness. **Definition 2** (Input-constrained Execution).: An _input constraint_ is a non-empty subset \(\phi\subseteq I\) of the possible inputs to the design. An _input-constrained trace_\(\tau_{\phi}\) is an infinite trace in which the inputs to the design are constrained by \(\phi\), i.e., \(i(\tau_{\phi}[t])\in\phi\) for every \(t\in\mathbb{N}\). The subset \(T_{\phi}\subseteq T\) of all traces constrained by \(\phi\) is called _input-constrained execution_ of the design. We can now modify Def. 1 to introduce a weaker notion of data-obliviousness. We call a HW design that provides a data-oblivious subset of its functionality _input-constrained data-oblivious_. **Definition 3** (Input-constrained Data-Obliviousness).: A HW design is called _input-constrained data-oblivious_ (\(DO_{\phi}\)) if, for a given input constraint \(\phi\subseteq I\), the values of its control outputs \(Y_{C}\) are uniquely determined by the sequence of control inputs \(X_{C}\). In essence, Def. 3 partitions the design behavior into data-oblivious and non-oblivious HW operations. The HW runs without any observable side effect on the architectural level, as long as only inputs within the constraint \(\phi\) are given. For the special case of \(\phi=I\), Def. 3 is equivalent to the original Def. 1. As an example, assume a processor executing a sequence of instructions. If each instruction is a data-oblivious HW primitive, e.g., an addition or a logic operation whose execution time is data-independent (as in most microarchitectures), the sequence of instructions as a whole is also data-oblivious. However, if an instruction has a data-dependent behavior, e.g., a variable-time division, the entire sequence of instructions becomes non-oblivious. We use a constraint \(\phi\) to exclude such an instruction from consideration. The goal of the proposed methodology is to systematically find all scenarios that cause data-dependent side effects. 
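In practice, such an input constraint \(\phi\) is typically expressed as an assumption on the design's inputs. The sketch below shows one possible formulation for a RISC-V-like core that excludes the variable-time divide and remainder instructions mentioned above; the interface signals and their names are hypothetical and would have to be mapped to the concrete design under verification.

```
// Sketch of an input constraint (phi): no DIV/REM instruction may enter the
// pipeline. Signal names are hypothetical; the decoding follows the RV32M
// encoding (OP opcode, funct7 = 0000001, funct3[2] = 1 for DIV/DIVU/REM/REMU).
module phi_no_div (
  input logic        clk_i,
  input logic        instr_valid_i,
  input logic [31:0] instr_i
);
  wire is_divrem = (instr_i[6:0]   == 7'b0110011) &&
                   (instr_i[31:25] == 7'b0000001) &&
                   instr_i[14];

  no_div_m: assume property (@(posedge clk_i) instr_valid_i |-> !is_divrem);
endmodule
```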
The following trace property formalizes the requirement of input-constrained data-obliviousness: \[\textbf{{DO}}_{\phi} \coloneqq\exists\phi,\ \forall\tau_{1},\tau_{2}\in T_{\phi},\ \forall t\in\mathbb{N}_{0}: \tag{2}\] \[s(\tau_{1}[t])=s(\tau_{2}[t])\wedge i_{C}(\tau_{1}[t..])=i_{C}( \tau_{2}[t..])\] \[\Rightarrow o_{C}(\tau_{1}[t..])=o_{C}(\tau_{2}[t..])\] ### _Formally Proving Data-Obliviousness in Practice_ In the previous subsections, we defined data-obliviousness as a property over infinite traces. Most commercial model checking tools, however, reason about sequential circuits by unrolling the combinatorial part of the design over a finite number of clock cycles. Therefore, our definitions of data-obliviousness formalized over an _infinite_ number of clock cycles are not yet suitable for being checked on practical tools. For exhaustive coverage of every possible design behavior, the circuit must be unrolled to its _sequential depth_. The sequential depth of a circuit is the minimum number of clock cycles needed to reach all possible states, typically starting from the reset state of the design. For most practical designs, the sequential depth can easily reach thousands of clock cycles. To make things worse, the sheer size of an industrial design may make it impossible for the property checker to unroll for more than a few clock cycles, even for highly optimized commercial tools. Therefore, it is usually not possible to verify such a design _exhaustively_ with bounded model checking from the reset state. Interval Property Checking (IPC) [32, 33] provides unbounded proofs and can scale to large designs by starting from a symbolic initial state. Instead of traversing a large number of states from reset, IPC starts from an arbitrary "any" state. However, there are two challenges associated with this approach. The first challenge stems from the nature of the symbolic initial state. Since the proof starts from _any_ possible state, it also includes states that are unreachable from reset. This can lead to false counterexamples, since the property in question may hold for the set of reachable states, but fail for certain unreachable states. This challenge arises not only in security verification, but in all formal verification approaches that use a symbolic initial state. A standard approach to address this problem is to add _invariants_ to the proof: **Definition 4** (_Invariant_).: For a given HW design, we call a subset of states \(B\subseteq S\) an _invariant_ if for every state \(s\in B\) all its successor states \(s^{\prime}\) are also in \(B\). This means that, \[\forall s\in S,\forall i\in I:s\in B\Rightarrow\delta(s,i)\in B\] In other words, an invariant is a set of states that is closed under reachability. To simplify the notation, we implicitly assume that \(S_{0}\subseteq B\) in the rest of the paper, i.e., we only consider invariants that include the reset states. Even when using a symbolic initial state that "fast-forwards" the system to an arbitrary execution state, the data must still propagate from the input through the system before it can affect a control output. Therefore, the second challenge is to handle the structural depth, i.e., the length of the propagation path from \(X_{D}\) to \(Y_{C}\), of the design. For large and complex designs, unrolling the circuit is costly and quickly reaches the capacity of formal tools. 
In such cases, it is not possible to make exhaustive statements about a design's data-obliviousness based on its I/O behavior alone and we need to look into the internal state of the system. For this problem, we propose an _inductive property_ for data-obliviousness that spans a single clock cycle and that also considers internal state signals. Just like for the input/output signals, we partition the set of state variables \(Z\) into control variables \(Z_{C}\) and data variables \(Z_{D}\), where \(Z_{C}\cup Z_{D}=Z\) and \(Z_{C}\cap Z_{D}=\emptyset\). Accordingly, \(s_{C}(\tau)\) is the sequence of valuations to \(Z_{C}\) and \(s_{D}(\tau)\) is the sequence of valuations to \(Z_{D}\). For the sake of a simplified notation, we also let \(i_{C}\), \(o_{C}\) and \(s_{C}\) take an input symbol, output symbol or state of the Mealy machine and return the valuation of the corresponding subset of control signals \(X_{C}\), \(Y_{C}\) and \(Z_{C}\), respectively. We present how to systematically partition \(Z\) into \(Z_{C}\) and \(Z_{D}\) later in Sec. IV such that this process is always conservative in terms of security. The data-obliviousness property that we use as an element of our inductive reasoning is shown in Eq. 3. \[\begin{split}\textbf{DO}^{\prime}\coloneqq&\ \exists B\subseteq S,\ \forall s_{1},s_{2}\in B,\ \forall i_{1},i_{2}\in I:\\ & s_{C}(s_{1})=s_{C}(s_{2})\wedge i_{C}(i_{1})=i_{C}(i_{2})\\ &\Rightarrow s_{C}(\delta(s_{1},i_{1}))=s_{C}(\delta(s_{2},i_{2}))\\ &\wedge o_{C}(\lambda(s_{1},i_{1}))=o_{C}(\lambda(s_{2},i_{2}))\end{split} \tag{3}\] This property expresses that if we have two instances of the system for which the control inputs and states have equal values, then the control state variables will also be equal in the next states of the two instances. This weakens the initial assumption that a discrepancy between the two instances can originate only from the data inputs. By allowing internal data (state) signals to take arbitrary values, we implicitly model any propagation of data through the system by the symbolic initial state. It is important to remember that the invariant \(B\) is a superset of the reachable states because we require it to include the initial states, \(S_{0}\subseteq B\). In practice, the security property of Eq. 3 holds not only in the reachable state set but, often, also in many unreachable states. Therefore, finding a suitable invariant for the given property is usually less of a problem than may be generally expected. In our proposed methodology (Sec. IV), we systematically create the necessary invariant in an iterative procedure and prove it on the fly along with the property for data-obliviousness. In the same way as for Eq. 3, we can derive an inductive property for data-obliviousness when the set of allowed inputs to the design is restricted by a constraint \(\phi\) (Def. 3). This causes a restriction also in the set of reachable states, which must be expressed by an invariant. To this end, we extend Def. 4 and denote \(B_{\phi}\) as a set of states for which \(\forall s\in S,\forall i\in\phi:s\in B_{\phi}\Rightarrow\delta(s,i)\in B_{\phi}\) under a given constraint \(\phi\). As an example, assume that the input constraint \(\phi\) excludes branch instructions from entering the pipeline of our processor. A corresponding invariant \(B_{\phi}\) excludes all states that involve processing of such branch instructions. We elaborate on how to systematically derive such an invariant in Sec. IV. Eq. 4 shows the inductive property for input-constrained data-obliviousness. 
\[\begin{split}\textbf{DO}^{\prime}_{\phi}\coloneqq&\ \exists\phi,\ \exists B_{\phi}\subseteq S,\ \forall s_{1},s_{2}\in B_{\phi},\ \forall i_{1},i_{2}\in\phi:\\ & s_{C}(s_{1})=s_{C}(s_{2})\wedge i_{C}(i_{1})=i_{C}(i_{2})\\ &\Rightarrow s_{C}(\delta(s_{1},i_{1}))=s_{C}(\delta(s_{2},i_{2}))\\ &\quad\wedge o_{C}(\lambda(s_{1},i_{1}))=o_{C}(\lambda(s_{2},i_{2}))\end{split} \tag{4}\] We now show that, for any HW design, our inductive properties cover their corresponding definitions over infinite traces. **Theorem 1** ((3) \(\Rightarrow\) (1) and (4) \(\Rightarrow\) (2)).: _If a HW design operates (constrained) data-obliviously according to the inductive property Eq. 3 (Eq. 4), it also operates (constrained) data-obliviously according to the property over infinite traces Eq. 1 (Eq. 2)._ Proof.: We show that (4) \(\Rightarrow\) (2). This covers (3) \(\Rightarrow\) (1) for the special case that \(\phi=I\). We prove this implication by contradiction. Assume a HW design fulfills property (4) for a given \(\phi\) but violates property (2), i.e., there exist two traces \(\tau_{1},\tau_{2}\in T_{\phi}\) and a time point \(t\in\mathbb{N}_{0}\) with \(s(\tau_{1}[t])=s(\tau_{2}[t])\wedge i_{C}(\tau_{1}[t..])=i_{C}(\tau_{2}[t..])\) such that \(\exists t_{k}\in\mathbb{N}_{0}:o_{C}(\tau_{1}[t_{k}])\neq o_{C}(\tau_{2}[t_{k}])\) where \(t_{k}>t\). (The case \(t_{k}=t\) can be excluded since property (4) holds on the design and \(Z_{C}\subseteq Z\).) \(o_{C}(\tau_{1}[t_{k}])\neq o_{C}(\tau_{2}[t_{k}])\) requires that the antecedent of (4) is violated at time point \(t_{k}-1\). Since our initial assumption requires \(i_{C}(\tau_{1}[t_{k}-1])=i_{C}(\tau_{2}[t_{k}-1])\), it follows that \(s_{C}(\tau_{1}[t_{k}-1])\neq s_{C}(\tau_{2}[t_{k}-1])\). However, this is a contradiction to \[\begin{split} s(\tau_{1}[t])=s(\tau_{2}[t])&\stackrel{Z_{C}\subseteq Z}{\Rightarrow}s_{C}(\tau_{1}[t])=s_{C}(\tau_{2}[t])\\ &\stackrel{(4)}{\Rightarrow}s_{C}(\tau_{1}[t_{k}-1])=s_{C}(\tau_{2}[t_{k}-1]),\end{split}\] where the second implication follows by applying the inductive property (4) repeatedly from time point \(t\) up to \(t_{k}-1\), using \(i_{C}(\tau_{1}[t..])=i_{C}(\tau_{2}[t..])\). This completes the proof.

### _Special Cases_

A special case arises for designs whose interface does not expose explicit control outputs. If, for example, a handshaking signal indicates that an operation has finished, we can prove data-independence for these signals instead. In any case, if there is no handshaking control implemented whatsoever, the specification must give detailed information about the timing behavior of the system. Then, timing is an essential part of the circuit's functional correctness and should be covered by conventional verification methods.

## IV Methodology

In this section, we present _Unique Program Execution Checking for Data-Independent Timing (UPEC-DIT)_ building upon earlier work in [17]. UPEC-DIT is a formal methodology to systematically and exhaustively detect data-dependent behavior at the microarchitectural level. In particular, we show how UPEC-DIT is used to verify data-obliviousness by proving the properties introduced in the previous section (Eq. 3 and Eq. 4). In the following, for reasons of simplicity, when the distinction between the case of \(\phi=I\) and the case of \(\phi\subset I\) is irrelevant, we omit the term "input-constrained" and simply speak of "data-obliviousness". ### _UPEC-DIT Overview_ Fig. 1 shows an overview of the different steps of the methodology. We describe each step in more detail in the following subsections. We base our work on a methodology called _Unique Program Execution Checking (UPEC)_[34, 35]. 
UPEC utilizes IPC [32, 33] and a 2-safety model to systematically and exhaustively trace the propagation of confidential information through the system. Originally, UPEC was devised to detect transient execution attacks in processors. In that scenario, the secret is stored at a protected location (_data-at-rest_) from which it must never leak into the architecturally visible state. This paper extends previous UPEC approaches in that it targets a threat model for _data-in-transit_, i.e., confidential information is processed legally, but must not cause any unwanted side effects. Our starting point is the RTL description of the system. Based on the specification, we partition the I/O signals of the design into control and data. With this information, we create the computational model and initialize the main inductive property for UPEC-DIT, which is then submitted to the main algorithm that implements the induction step in our global reasoning. During the execution of the algorithm, the property is iteratively refined with respect to the internal state signals until it either holds or a counterexample is returned, describing a data-dependent behavior. This refinement procedure is conservative in the sense that a wrong decision may lead to a false counterexample, but never to a false security proof. If the property holds, the final step is to ensure that our assumptions for the proof are valid by performing an induction base proof. Once both the induction step property and the induction base property have been successfully verified, we have obtained a formal guarantee that the design under verification operates in a data-oblivious manner. ### _Computational Model_ Fig. 2 shows the abstract computational model for the proposed methodology. Like previous UPEC approaches, UPEC-DIT is based on a 2-safety computational model. In our model, inputs and outputs of the design are partitioned into control and data signals. This is a manual step that, in most cases, is straightforward and can be done by consulting the design specification. Generally speaking, any confidential information passing through the system must be marked as data. After the partitioning, the generation of the 2-safety model can be fully automated. The control inputs \(X_{C}\) take arbitrary but equal values, whereas the data inputs \(X_{D}\) remain unconstrained. According to Def. 1, our goal is to prove that for any sequence of inputs, the control outputs \(Y_{C}\) never diverge from their respective counterparts in the other instance. ### _The UPEC-DIT Property_ Fig. 3 shows our IPC property template to formally verify the data-obliviousness of the design. It expresses our abstract definitions introduced in Sec. III by standard property languages such as SVA. We iteratively refine this property with respect to \(Z_{C}\), \(\phi\) and \(B_{\phi}\) during the execution of the UPEC-DIT algorithm.

Fig. 1: UPEC-DIT Flow

Fig. 2: UPEC-DIT Computational Model

Fig. 3: Interval property template for UPEC-DIT

We now describe the individual components of the property, expressed as _macros_ or _functions_, in more detail: * _Control_State_Equivalence()_ constrains the state-holding signals related to control (\(Z_{C}\)) to be equal in both instances of the computational model. At the start of the algorithm, we set \(Z_{C}=Z\), before iteratively refining this partitioning. We elaborate on how this is done in Sec. IV-D. * _Input_Constraints()_ exclude unwanted behavior to achieve input-constrained data-obliviousness (Def. 3). 
For systems specifically designed to be data-oblivious, such constraints might not be required. An example for this macro would be "no new (data-dependent) division issued to the processor pipeline". * _Invariants()_ are used to constrain the state space to exclude unreachable scenarios. These invariants are iteratively deduced during the main algorithm (Sec. IV-D). All invariants used to strengthen our properties are proven inductively _on the fly_, along with data-obliviousness, which is why the macro _Invariants()_ is included also in the property commitment. * _Control_Output_Equivalence()_ is our main proof target and expresses that the outputs marked as control must never diverge. It is important to note that the control inputs are already constrained in the computational model itself (cf. Fig. 2), and, therefore, are not specified in the property. ### _The UPEC-DIT Algorithm_

```
 1: procedure UPEC-DIT
 2:   \(Z_{C}\gets Z\)
 3:   \(\phi,B_{\phi}\leftarrow\emptyset\)
 4:   \(\textit{CEX}\leftarrow\) IPC(UPEC-DIT-Step(\(Z_{C},Y_{C},\phi,B_{\phi}\)))
 5:   while \(\textit{CEX}\neq\{\}\) do
 6:     if \(Y_{C1}\neq Y_{C2}\in\textit{CEX}\) then
 7:       return \(\textit{CEX}\)
 8:     for each \(z\in Z_{C}:z_{1}\neq z_{2}\in\textit{CEX}\) do
 9:       if \(z\) is \(Data\) then
10:         \(Z_{C}=Z_{C}\backslash\{z\}\)
11:       else if Invalid \(\textit{CEX}\) then
12:         Update(\(\phi,B_{\phi}\))
13:         \(Z_{C}\gets Z\)
14:         break
15:       else
16:         return \(\textit{CEX}\)
17:     \(\textit{CEX}\leftarrow\) IPC(UPEC-DIT-Step(\(Z_{C},Y_{C},\phi,B_{\phi}\)))
18:   return hold
```

**Algorithm 1** The UPEC-DIT Algorithm

The basic idea of the UPEC-DIT algorithm is to use the counterexamples of the formal tool to iteratively refine the set of all state variables \(Z\) into control and data subsets \(Z_{C}\) and \(Z_{D}\), respectively. Leveraging this partitioning of internal signals results in an inductive proof over only a single clock cycle, which scales even for very large designs. Alg. 1 shows the algorithm in pseudocode. We begin by initializing the set of control states \(Z_{C}\) with the set of all state signals \(Z\) in Line 2. In the beginning, the set of input constraints \(\phi\) and invariants \(B_{\phi}\) is empty. We then call the formal property checker in Line 4 using the property in Fig. 3. In almost all cases, this will produce a counterexample _CEX_ which shows a first propagation of data. We then continuously investigate the returned counterexamples to decide on how to proceed: If a discrepancy has traveled to a control output (Line 6), we have detected a data-dependent timing and return the respective counterexample. If a propagation to one or multiple internal signals has occurred (Line 8), the verification engineer has to decide if this information flow was valid or not. Whenever a propagation to a data signal, e.g., a pipeline buffer, is detected, we generalize the proof by removing this variable from the set of control signals (Line 10). In the case that the propagation hits a signal considered control, e.g., pipeline stalls or valid signals, the algorithm concludes and returns the counterexample (Line 16). In some cases, the counterexamples may show behavior that is either unreachable or invalid in the given application scenario. An example of an invalid behavior could be the execution of certain instruction types for which data-obliviousness is not required, e.g., branches. To handle these cases, invariants or input constraints must be added to restrict the set of considered states (Line 12). Afterwards, we also reset the set of control signals \(Z_{C}\). 
This is required for correctness, as an added input constraint might also make some previously considered propagation impossible. In our experiments, however, resetting \(Z_{C}\) did not occur often and thus did not cause significant overhead. After every discrepancy in the counterexample has been investigated, we rerun the proof with the new assumptions (Line 17). This is continued until the proof holds and no new counterexample can be found by the formal property checker. In this case, the algorithm terminates. We continue by verifying the induction base, i.e., we check, as described in the following subsection, that our initial assumptions hold after a system reset. It is important to note that the proposed methodology is conservative in the sense that if a control-related signal from \(Z\) is mistakenly declared as data (Line 9), it will not result in a false security proof. In this case, the algorithm will continue until the propagation eventually reaches a control output \(Y_{C}\) and returns the corresponding counterexample. The main purpose of the individual partitioning of the state signals is to detect and stop such propagation as early as possible. ### _Induction Base_ By successfully proving the inductive property (Fig. 3), we show that our 2-safety model never diverges in its control behavior during operation. To complete our proof of data-obliviousness, we also have to verify that the assumptions and invariants made in this inductive proof are correct. Therefore, the last step of the methodology is to prove the induction base, i.e., that the system starts in a data-oblivious state. Fig. 4 shows our IPC property template for the induction base. We want to show that the reachability assertions we introduced during the iterative algorithm include the reset state of the system, which means that they are indeed invariants according to Def. 4. Furthermore, we want to prove that the system is properly initialized regarding its control behavior. In essence, _Control_State_Equivalence()_ ensures that all control state signals \(Z_{C}\) are initialized. If this commitment fails, then the system could behave differently after a reset, which could hint at a functional bug. Lastly, by assuming _Control_Output_Equivalence()_, we verify that there is no combinatorial path from a data input \(X_{D}\) to a control output \(Y_{C}\). When both the base and step property hold, we have exhaustively verified that our design operates data-obliviously, either in the strong sense of Def. 1, or for a subset of its total behavior in the weakened sense of Def. 3. This represents an _unbounded_ formal proof that, due to its inductive nature, can scale to very large designs. ## V Optimizations The methodology described in Sec. IV verifies data-oblivious behavior at the microarchitectural level exhaustively. In practice, it may not always be necessary to be exhaustive and an efficient bug-hunting may suffice for the intended application of some designs. In addition, the low complexity of certain designs allows for exhaustive verification of data-obliviousness at the I/O interface level, without the need for invariants and constraints on the internal behavior of the design. To this end, we now discuss some enhancements and trade-off techniques that may be useful for applying UPEC-DIT in practice. ### _Unrolled Proofs_ The methodology presented in Sec. IV uses an inductive proof with a single-cycle property to avoid complexity issues. 
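As a concrete point of reference, the following self-contained toy example sketches what such a single-cycle 2-safety step proof can look like when written in plain SVA. The design, all signal names, and the chosen control state set are purely illustrative and are not taken from any of the designs evaluated later; in the real methodology, the equality constraints correspond to the iteratively refined macros of Fig. 3.

```
// Toy design: a 2-cycle unit with one control register (busy_q, in Z_C) and
// one data register (acc_q, in Z_D). Latency is data-independent by design.
module toy_dut (
  input  logic        clk_i, rst_i,
  input  logic        start_i,   // control input  (X_C)
  input  logic [31:0] data_i,    // data input     (X_D)
  output logic        done_o,    // control output (Y_C)
  output logic [31:0] data_o     // data output    (Y_D)
);
  logic        busy_q;           // control state  (Z_C)
  logic [31:0] acc_q;            // data state     (Z_D)
  always_ff @(posedge clk_i) begin
    if (rst_i)                   begin busy_q <= 1'b0; acc_q <= '0; end
    else if (start_i && !busy_q) begin busy_q <= 1'b1; acc_q <= data_i + 32'd1; end
    else if (busy_q)             busy_q <= 1'b0;
  end
  assign done_o = busy_q;
  assign data_o = acc_q;
endmodule

// 2-safety miter: control inputs are shared, data inputs are left free,
// and the data register acc_q is deliberately NOT assumed equal.
module upec_dit_step_miter (
  input logic        clk_i, rst_i, start_i,
  input logic [31:0] data_a_i, data_b_i
);
  logic done_a, done_b;
  logic [31:0] out_a, out_b;
  toy_dut u_a (.clk_i, .rst_i, .start_i, .data_i(data_a_i), .done_o(done_a), .data_o(out_a));
  toy_dut u_b (.clk_i, .rst_i, .start_i, .data_i(data_b_i), .done_o(done_b), .data_o(out_b));

  // Induction step (cf. Eq. 3): equal control state now implies equal control
  // state and equal control outputs one clock cycle later.
  dit_step_a: assert property (@(posedge clk_i) disable iff (rst_i)
      (u_a.busy_q == u_b.busy_q)
      |=> (u_a.busy_q == u_b.busy_q) && (done_a == done_b));
endmodule
```

On this toy design the step property holds; for real designs, the refinement loop of Alg. 1 determines which state bits may be dropped from the equality constraint.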
In this approach, the set of all possible data propagation paths is over-approximated in the symbolic initial state by leaving the values of internal data signals unconstrained. While this over-approximation leads to a very low proof complexity, it implies the need to deal with the possibility of spurious counterexamples. Writing invariants can overcome this problem, but in some cases it is affordable to simply increase the computational effort to avoid them. If the complexity of the system allows for a sufficient number of unrollings in our computational model, considering the full propagation path starting from any data input to any control output can significantly reduce the number of false counterexamples and thus the effort of writing invariants. This unrolled approach represents the original UPEC-DIT methodology, as described in [17]. The idea of unrolled proofs is shown in Fig. 5 and is a straightforward implementation of Def. 1. In this property, _all_ state signals \(Z\) are initialized to equal but arbitrary values between the two instances. This is denoted by the _State_Equivalence()_ macro. We then prove that for a maximum latency \(k\), the two instances maintain equal control outputs \(Y_{C}\). We choose \(k\) to be greater than or equal to the length of the longest HW operation in the design. If this property fails, it means that the difference in \(Y_{C}\) must originate from the data inputs \(X_{D}\), since this is the only source of discrepancy between the two instances. In this case, the property checker returns a counterexample which guides the verification to the root cause by highlighting the deviating values. The great advantage of this variant of UPEC-DIT is that it does not require an iterative partitioning of internal state signals \(Z\). Therefore, the only manual steps are partitioning the system interface and choosing a maximum latency \(k\). Everything else can be automated. For many low-complexity designs, such as functional units or accelerators, this approach can provide exhaustive proofs. It can also serve as a quick initial test for larger systems, as most timing channels become visible after only a few cycles. Unfortunately, this approach can run into scalability problems because the full propagation paths from input to output can be too long in more complex systems such as processor cores. A trade-off between computational complexity and a decreased number of false counterexamples is presented in Fig. 6. This variant of UPEC-DIT also starts by initializing _all_ state signals \(Z\) to equal but arbitrary values, and thus considers propagation paths starting from the data inputs. In this case, however, we perform a partitioning of \(Z\) into \(Z_{C}\) and \(Z_{D}\) based on Alg. 1. Having an internal representation of the control-flow allows for a much earlier detection of data-dependent side-effects. Furthermore, isolating the source of the discrepancy to the input makes the returned counterexamples more intuitive and less likely to be spurious.

Fig. 4: Interval property template for UPEC-DIT Induction Base

Fig. 5: Unrolled UPEC-DIT property only considering I/O-behavior

Fig. 6: Unrolled UPEC-DIT property with internal state signals

Unfortunately, this approach does not scale well for complex systems beyond a few clock cycles. Nonetheless, this unrolled method can serve as a basis for the inductive proof, since the deduced partitioning of \(Z\) is the same for both variants. 
Therefore, it often makes sense to start with the unrolled approach and set \(k=1\). The verification engineer can then iteratively increase \(k\) until no new counterexamples appear or the computational cost becomes prohibitive. Then the transition to the inductive method is made by initializing \(Z_{C}\) in Line 2 of Alg. 1 with the remaining control state signals. In our experience, starting with an unrolled model makes the counterexamples more intuitive because it shows longer propagation paths starting from the inputs. This can be especially helpful early in the methodology, when the verification engineer has little to no knowledge or intuition about the design under verification. We have omitted the invariants in Fig. 5 and Fig. 6 because they are usually very simple when all state signals \(Z\) are initialized to equal values. Finally, we would like to point out that the unrolled property (Fig. 6) can also be used in an effective _bug hunting_ approach that trades formal/exhaustive coverage for efficiency. Instead of executing the full iterative algorithm that refines the set of state signals systematically (required for formal exhaustiveness), we can specify _Control_State_Equivalence()_ using a set of state signals identified as control manually, and run the proof. The verification engineer determines the control variables based on knowledge and experience. Obvious examples of control signals are stall variables in a processor pipeline. Empirical evidence from our experiments shows that almost all timing vulnerabilities manifest themselves in only a handful of control signals. While this short-cut over the formal algorithm bears a certain risk of missing corner-case behavior, it avoids the potentially laborious iterative procedure and produces high-quality results fast. It may be a viable option in certain practical scenarios. ### _Black-Boxing_ Black-boxing can significantly increase the scalability of the formal proof. Essentially, black-boxing excludes the functionality of certain components from the system, reducing the complexity of the computational model. When a module is black-boxed, its inputs become outputs of the system under verification. Likewise, any output of the module connected to the rest of the system is now considered as an open input, since the functionality of that component is no longer considered. In particular, for state-heavy submodules, such as caches, their black-boxing can greatly simplify the overall state space for the formal proof. Many commercial formal tools provide black-boxing as a fully automated, built-in feature. While black-boxing significantly reduces the overall complexity of the formal proof, it can lead to false counterexamples or even verification gaps in standard functional verification. Fortunately, the 2-safety computational model of UPEC-DIT allows for a _sound_ black-boxing to ensure that no security violations are missed. We accomplish this by monitoring the interface of the black-boxed component, as shown in Fig. 7. By checking that the data traveling to the black-box is equal in both instances, UPEC-DIT detects any propagation involving a submodule that could result in a future data-dependent timing behavior. False counterexamples are prevented in our approach by constraining all the outputs of the black-boxed component to be equal between the two instances. 
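A minimal sketch of how this interface monitoring might be expressed is shown below; the checker's ports stand for the request and response signals of the black-boxed submodule in both instances of the computational model, and all names are illustrative placeholders.

```
// Sketch of sound black-boxing in the 2-safety model (cf. Fig. 7).
// The black-boxed submodule's interface signals of both instances are brought
// out to this checker; all signal names are purely illustrative.
module bb_soundness_checker (
  input logic        clk_i, rst_i,
  // black-box request side (inputs of the black-box), instance A and B
  input logic        req_valid_a_i, req_valid_b_i,
  input logic [31:0] req_addr_a_i,  req_addr_b_i,
  // black-box response side (outputs of the black-box), instance A and B
  input logic [31:0] resp_data_a_i, resp_data_b_i
);
  // Constraint: both instances observe identical black-box responses.
  bb_out_eq_m: assume property (@(posedge clk_i) disable iff (rst_i)
      resp_data_a_i == resp_data_b_i);

  // Proof obligation: nothing differing ever reaches the black-box, otherwise
  // a data-dependent side effect hidden inside the submodule could be missed.
  bb_in_eq_a: assert property (@(posedge clk_i) disable iff (rst_i)
      (req_valid_a_i == req_valid_b_i) && (req_addr_a_i == req_addr_b_i));
endmodule
```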
If a counterexample is produced that shows a difference at the black-box inputs, the verification engineer can decide to either undo the black-boxing or examine the module individually. The first option requires less effort but results in higher computational complexity, while the second option requires more manual effort but can lead to a better understanding of the system and simpler counterexamples. We will explore the second option further in the following subsection. ### _Modularization_ Setting up formal proofs from scratch for a very large system can be a difficult task. Therefore, it is often advisable to decompose the problem and to first look at individual components that are "suspicious" or critical to security. Examples would be a cryptographic accelerator or, in the case of a processor, the various functional units of the pipeline. Investigating an individual component results in a less complex computational model, simpler counterexamples, and helps to establish a better understanding of the system. We can utilize the same inductive approach as elaborated in Sec. IV or, if computational complexity permits, the unrolled approach described in Sec. V-A. If a counterexample is found for a single module, it is very likely that it also indicates a security threat to the entire system. If a component turns out to be data-oblivious, we can use this information to simplify our computational model of the system using black-boxing (Sec. V-B). For this purpose, we consider the control (data) inputs of the black-box as control (data) outputs of the system and the control (data) outputs of the black-box as control (data) inputs of the system. Therefore, we can skip the data propagation through the black box and use its data output as a new source of discrepancy. This allows us to systematically partition the formal proof in a divide-and-conquer fashion, making it scalable even for very large systems.

Fig. 7: Computational Model before and after sound black-boxing

### _Proof Parallelization_ Splitting and parallelizing the proof is another optimization of the algorithm that can be fully automated. The fundamental idea is that each propagation candidate can be checked independently. For this purpose, we modify our property to take a parameter \(z\in Z\), as shown in Fig. 8. The _Equivalence(z)_ macro in the prove part of our property checks that \(z\) maintains equality between the two instances of the computational model. Instead of proving all possible propagation paths at once in Alg. 1, we now generate and prove an individual property for each \(z\in Z_{C}\) (Line 4 and Line 17). Each individual property can have a significantly reduced runtime compared to the global property, especially when considering an unrolled proof (Sec. V-A). If sufficient computing resources are available to run multiple properties in parallel, this step can reduce the overall runtime of the methodology. The same idea of splitting the proof can also be applied to the control outputs. However, since their number is usually much smaller than the number of state signals, this does not significantly improve the overall runtime in most cases.

Fig. 8: Parameterized UPEC-DIT-Step Property

### _Cone-of-Influence Reduction_ The UPEC-DIT property in Fig. 3 checks all internal control signals \(Z_{C}\) for a possible data propagation. However, data can only propagate into state signals that are in the sequential fan-out of already deviating signals. 
So instead of checking for _Control_State_Equivalence()_, we can check for the equivalence of all state signals that are in the fan-out of \(I\cup Z\backslash Z_{C}\). Since this step is based solely on structural information, it can be fully automated with the help of dedicated tools. This cone-of-influence reduction can also be used to further improve proof parallelization (Sec. V-D). By reducing the number of signals considered, the total number of parallelized properties is also reduced. ## VI Experiments ### _Example: SHA512 Core_ We begin our practical evaluation with an example demonstrating the proposed methodology in detail, namely an open-source implementation of a cryptographic accelerator implementing the SHA512 algorithm [36]. A timing channel in such a core can create severe security flaws, as it can significantly reduce the strength of the underlying encryption. Therefore, we want to exhaustively verify that no data-dependent timing behavior exists in this accelerator. We begin by looking at the interface of the module along with its specification. The core implements 5 inputs (_clk_i_, _rst_i_, _text_i_, _cmd_i_ and _cmd_w_i_) and 2 outputs (_text_o_ and _cmd_o_). After referring to the documentation, we mark the clock, reset and handshaking signals (_cmd_) as _control_, while the plain and cipher text ports are marked as _data_. Our goal is to prove that the accelerator is data-oblivious according to Def. 1, i.e., the data input _text_i_ has no influence on the control output _cmd_o_. The next step is to generate our 2-safety computational model (Fig. 2), in which the control inputs are constrained to be equal, while the data input remains unconstrained. We also generate a macro for _State_Equivalence()_ that constrains each of the 37 state-holding signals (2162 flip-flops) to equal values between the two instances at the start of our proof. Given the I/O partitioning, both steps can be fully automated with dedicated tool support. We can now choose to formulate either a single-cycle or an unrolled proof. For an exhaustive proof over multiple clock cycles, we have to unroll the model to its sequential depth, as elaborated in Sec. V-A. According to the specification, one encryption operation has a latency of 97 clock cycles. Since the complexity of this module is rather low, it is still possible to unroll the formal proof for several hundred cycles. For more complex accelerators, this is usually not feasible. Therefore, to show the effectiveness of the UPEC-DIT methodology, we decide to iteratively create an inductive single-cycle proof using Alg. 1. We begin by setting up the UPEC-DIT-Step property (Fig. 3 in Sec. IV-C) in such a way that _Control_State_Equivalence()_ considers all 37 state-holding signals \(Z\) of the design and _Control_Output_Equivalence()_ ensures the equality of _cmd_o_. After running the verification tool, we receive a first counterexample showing a discrepancy propagation to two internal pipeline buffers (_W3_ and _Wt_). Since these are used to store intermediate results of the encryption, we classify them as _data_, exclude them from the macro and rerun the proof. After several iterations of checking the UPEC-DIT-Step property, a fixed point is reached with only 5 out of 37 signals left in \(Z_{C}\): _busy_, _cmd_, _read_counter_, _round_ and _Kt_. All other signals only store intermediate results and were therefore marked as data and removed from the proof. 
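For illustration, the refined _Control_State_Equivalence()_ macro at this fixed point could look roughly as follows; the instance prefixes _sha_a_/_sha_b_ are our own placeholders for the two core instances of the 2-safety model, while the register names are the ones listed above.

```
// Refined Control_State_Equivalence() after the iterations above: only the
// five remaining control registers are constrained to be pairwise equal.
// The hierarchical prefixes sha_a / sha_b are placeholders for the two
// instances of the SHA core in the 2-safety model.
`define CONTROL_STATE_EQUIVALENCE ( \
    (sha_a.busy         == sha_b.busy)         && \
    (sha_a.cmd          == sha_b.cmd)          && \
    (sha_a.read_counter == sha_b.read_counter) && \
    (sha_a.round        == sha_b.round)        && \
    (sha_a.Kt           == sha_b.Kt) )

// The macro is used in both the assumption and the commitment of the
// single-cycle step property, e.g.:
// assert property (@(posedge clk_i)
//     `CONTROL_STATE_EQUIVALENCE |=> `CONTROL_STATE_EQUIVALENCE);
```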
This means that even though a vast majority of the design can be in an arbitrary state, the timing behavior only depends on certain control registers inside the design. The last step is to verify the induction base with the UPEC-DIT-Base property (Fig. 4 in Sec. IV-E), which also holds. Hence, we successfully showed that the SHA core operates timing-independently w.r.t. its input data. With some experience, the entire proof procedure can be completed in less than an hour. ### _Functional Units and Accelerators_ Tab. I shows the results for several functional units (FUs) and accelerators. All of these experiments were conducted using the unrolled approach as elaborated in Sec. V-A. The first design, _BasicRSA_, was taken from _OpenCores_[37] and implements an RSA encryption. Using the UPEC-DIT methodology, we detected a timing side channel in the design. The FU computes the modular exponentiation needed for the encryption algorithm in a _square-and-multiply_ fashion. While this approach is very efficient, it makes the encryption time directly dependent on the size of the secret key. We also took a separate look at the submodule that is responsible for performing the modular multiplication. This FU also turned out not to be data-oblivious, since its timing depends on the size of the multiplier. _SHA1_, _SHA256_ and _SHA512_[36] implement different variants of the _Secure Hash Algorithm_. _AES1_ and _AES2_ are different implementations of the _Advanced Encryption Standard_. All of these accelerators were proven by UPEC-DIT to have data-independent timing. The _Multiplication-Division-Shifting-Unit_ is part of the _Featherweight RISC-V_[38] project. Its goal is to build a resource-efficient implementation for FPGAs. All of its operations take multiple clock cycles, shifting only one bit in each cycle. In this FU, multiplication and division were proven to execute data-independently, requiring 33 cycles to complete. However, UPEC-DIT produced a counterexample for shift operations, as the timing is directly dependent on the shift amount. To perform a shift by \(N\) bits, the module sets its internal counter value to \(N\) which causes the operation to take _N+1_ cycles in total. This can be dangerous because shift operations are considered data-independent in almost all applications. The last two designs are _serial division units_ taken from the _ZipCPU_[39] and _CVA6_[40] open-source projects. Both FUs showed strong dependencies of their timing w.r.t. their operands. _ZipCPU_ implements an early termination when dividing by zero. Also, if the signs of the operands are different, the division takes an additional cycle. The _CVA6_ FU implements certain optimizations for HW division which entail that the division time is determined by the difference in leading zeros of the operands. This creates a data dependency. In all case studies, UPEC-DIT was applied without _a priori_ knowledge of the designs. It systematically guided the user to the points of interest. Even though the designs have operations taking up to 192 cycles, proof time and memory requirements remained insignificant. Furthermore, operations like multiplication, which are usually a bottleneck for formal tools, did not cause any complexity issues. This is due to the proposed 2-instance computational model which abstracts from functional signal valuations and only considers the difference between the two instances. 
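To make the shape of such timing counterexamples tangible, the toy module below mimics the bit-serial shift behavior discussed above: the shift amount is loaded into a counter, so a shift by _N_ bits takes _N+1_ cycles. It is a minimal reconstruction for illustration only and is not code taken from any of the cited designs.

```
// Toy bit-serial shifter with a data-dependent latency of shamt+1 cycles.
// All names are illustrative; the structure mirrors the behavior described
// in the text for the shift unit above.
module serial_shifter (
  input  logic        clk_i, rst_i,
  input  logic        start_i,   // control input
  input  logic [31:0] value_i,   // data input
  input  logic [4:0]  shamt_i,   // data input (shift amount)
  output logic        done_o,    // control output
  output logic [31:0] result_o   // data output
);
  logic [31:0] value_q;
  logic [4:0]  cnt_q;
  logic        busy_q;

  always_ff @(posedge clk_i) begin
    if (rst_i) begin
      busy_q <= 1'b0; cnt_q <= '0; value_q <= '0;
    end else if (start_i && !busy_q) begin
      busy_q <= 1'b1; cnt_q <= shamt_i; value_q <= value_i;
    end else if (busy_q) begin
      if (cnt_q == '0) busy_q <= 1'b0;                       // done after shamt+1 cycles
      else begin value_q <= {value_q[30:0], 1'b0}; cnt_q <= cnt_q - 5'd1; end
    end
  end

  assign done_o   = busy_q && (cnt_q == '0);                 // completion time reveals shamt_i
  assign result_o = value_q;
endmodule
```

Here the timing of the control output _done_o_ depends on the data input _shamt_i_; in the 2-safety model, two instances fed with different shift amounts diverge in _done_o_, which is precisely the kind of counterexample UPEC-DIT reports.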
### _In-Order Processor Cores_ We investigated four different open-source in-order RISC-V processors from low to medium complexity, as shown in Tab. II. The _Ibex_ processor is listed twice, as it comes with a _data-independent timing (DIT)_ security feature, which we examined separately. All of these experiments were conducted using the unrolled approach as elaborated in Sec. V-A. The results for this approach show that time and memory requirements are still moderate, even in the case of a medium-sized processor. The first design we investigated is the sequential _Featherweight RISC-V_[38] processor which aims at balancing performance with FPGA resource utilization. As our results show, most instructions execute independently of their input data. However, there was one big exception, namely, R-Type shift instructions. For area efficiency, the implementation shares a single shifting unit for multiplication, division and shifting. The shifting unit can only shift one bit in each cycle (cf. VI-B), which results in data-dependent timing depending on the shift amount (rs2). We singled out the shift instructions in a separate proof and showed that other R-Type instructions like addition do, in fact, preserve data-independent timing. Note that I-Type shift instructions also execute with dependence on the shift amount. The shift amount, however, is specified in the (public) immediate field of the instruction. Consequently, since the program itself is viewed as _public_, I-Type shifting executes data-obliviously. Load, Store and Jump (JALR) can cause an exception in case of a misaligned address, while Branches incur a penalty if a branch is not taken. The _Ibex RISC-V Core_[20] is an extensively verified, production-quality open-source 32-bit RISC-V CPU. It is maintained by the _lowRISC_ not-for-profit company and deployed in the OpenTitan platform. It is highly configurable and comes with a variety of security features, including a _data-independent timing (DIT)_ mode. When activating this mode during runtime, execution times of all instructions are supposed to be independent of input data. In our experiments, we apply UPEC-DIT for both inactive and active DIT mode and use the default _"small"_ configuration, with the _"slow"_ option for multiplication. When the DIT mode is turned off, we found three cases of data-dependent execution time: * Division and (slow) multiplication implement fast paths for certain scenarios. * Taken branches cause a timing penalty, as the prefetch buffer has to be flushed. * Misaligned loads and stores are split into two aligned memory accesses. The first two issues are solved when DIT mode is active, as seen in Tab. II. All fast paths are deactivated and non-taken branches now introduce a delay to equal the timing of taken branches. However, the timing violation for misaligned memory accesses is not addressed. When running Ibex in DIT mode, data-oblivious memory accesses require special measures, such as the integration of the core with a data-oblivious memory sub-system. For example, an oblivious RAM controller [41] makes any memory access pattern computationally indistinguishable from any other access pattern of the same length. However, our experiments with UPEC-DIT reveal that even with such strong countermeasures in place, Ibex still suffers from a side channel in the case of memory accesses that are misaligned. This is because the core creates a different number of memory requests for aligned and misaligned accesses. 
We reported this issue to the lowRISC team and suggested to disable the misaligned access feature for DIT mode. With this fix, the HW would remain secure even in case that a faulty/malicious SW introduces a misaligned access. The lowRISC team refined the documentation and will consider the proposed fix for future updates of the core. The _SCARV_[42] is a 5-stage single-issue in-order CPU core, implementing the RISC-V RV32IMC instruction sets. We prove that most instructions do not leak information through timing by running UPEC-DIT on the core. Taken branches, however, use additional cycles due to a pipeline flush. Memory accesses can cause exceptions if they are misaligned. Furthermore, any loads and stores to a specific address region are interpreted as memory-mapped I/O accesses which do not issue a memory request. This can be used to set a SW interrupt timer depending on the store value, thus creating data-dependent timing. _CVA6_[40], also known as _Ariane_, is a 6-stage, single-issue 64-bit CPU. Having almost 700 k state bits, the design itself already pushes the limits of formal property checkers. To make things worse, a straightforward 2-safety circuit model would have twice as many state bits. Fortunately, UPEC-DIT allows for _sound_ black-boxing to cope with complexity issues (cf. Sec. V-B). Black-boxing reduces the computational model to 24 k state bits. For load and store instructions, a couple of exception scenarios showed up: Access faults (PMP), misaligned addresses or page faults. Besides a data-dependent division and timing variations by mispredicted branches UPEC-DIT also found an interesting case in which a load can be delayed. In order to prevent RAW hazards, whenever a load is issued, the store buffer is checked for any outstanding stores to the same offset. If any exist, the load is, conservatively, stalled until the stores have been committed. However, this can cause a timing delay in case of a matching offset, even if both memory accesses go to different addresses. ### _UPEC-DIT with Inductive Proofs_ In this subsection, we extend our experiments to the BOOM [21], a superscalar RISC-V processor with FP support, a deep 10-stage pipeline and out-of-order execution. In a first attempt, the same unrolled approach (cf. Sec. V-A), as above for in-order processors, was applied to BOOM, and UPEC-DIT was able to prove the data-obliviousness of basic arithmetic instructions and multiplication. However, the high design complexity caused by the deep pipeline pushed the formal tools to their limits, with some proof times exceeding 20 hours. Fortunately, with a few design-specific adjustments, it was still possible to formally verify the absence of any data-dependent side-effects. These optimizations included splitting the verification into individual proofs for the integer and FP pipelines, since these are implemented as separate modules in BOOM. To do this, we constrained the outputs of the FP register file to be equal for the integer pipeline verification and vice versa. After unrolling for 7 clock cycles, no new propagation was detected in either pipeline. Furthermore, a sound black-boxing (cf. Sec. V-B) of complex components, such as the re-order buffer (ROB) and caches, was performed, significantly reducing the complexity of the computational model: While a single instance of BOOM has more than 500k state bits, our final 2-safety model contained only about 47k state bits. 
Black-boxing the ROB not only reduces the size of the computational model, but also drastically reduces the complexity of spurious counterexamples caused by the symbolic initial state. During normal operation of an out-of-order processor, the ROB reflects the control state of the entire system. Assuming an arbitrary initial state can therefore lead to many false counterexamples where the ROB in no way reflects the state of the computational pipeline. Fortunately however, the actual state of the ROB is of no concern to UPEC-DIT, as long as instructions commit to it equally between the two instances. We can therefore create a black-box and consider its inputs in our proof statement. Nevertheless, the long proof times caused by the high complexity made us rethink the approach. Thus, we extended the methodology and developed the inductive proof, as presented in Sec. IV, which spans over only a single clock cycle. Eliminating the need to unroll the circuit drastically reduced the complexity of the computational model, diminishing individual proof times from over 20 hours to just a few seconds. In this transition, we also found that in most cases, data can only affect the control flow through a few specific channels. This insight made it easier to find meaningful invariants and constraints that restrict the over-approximated state space in order to avoid false counterexamples. Tab. III shows the results in terms of computational complexity for the inductive proofs. We also re-run our experiment on CVA6 [40] to illustrate the improvement in scalability compared to an unrolled proof. As shown in the table, proof time and memory usage are reduced significantly by eliminating the need to unroll the model. In CVA6, data propagates through 22 internal signals (\(Z_{D}\)), which can take arbitrary values in the final proof. By assuming the equality of two internal signals, we excluded division and control and status register (CSR) operations from consideration. Branches and Load/Store instructions were excluded by black-boxing the branch unit and the load-store unit (LSU). The experiments in BOOM were split for the integer and FP pipelines. Within the integer pipeline, UPEC-DIT detected data propagation into 31 internal signals (\(Z_{D}\)). We excluded branches, division, and misaligned memory accesses after receiving counterexamples for each of these cases. Inside the FP pipeline, data is propagated into 143 internal data signals (\(Z_{D}\)). However, only counterexamples caused by FP division and square root had to be excluded from the proof with constraints. For BOOM, our results show that branch, integer division, FP division, and square-root operations have data-dependent timing. We have not examined memory access instructions, as these are usually given special treatment in constant-time programming. The remaining integer arithmetic instructions (including multiplication) were formally proven to execute data-obliviously. An interesting case are FP arithmetic instructions: Although their timing is independent of their operands, they are not data-oblivious. UPEC-DIT revealed that these instructions can still leave a side effect in the form of an exception flag in the FP CSR named _fcsr_. This is in compliance with the ISA specification of RISC-V. However, in order to create a timing side channel, a victim program would have to explicitly load these registers. Therefore, these instructions could, in fact, be used securely for constant-time programming, if a suitable SW restriction was introduced. 
## VII Conclusion In this paper, we proposed UPEC-DIT, a novel methodology for formally verifying data-independent execution in RTL designs. Our approach is based on an inductive property over a single clock cycle, which facilitates a verification methodology scalable even to complex out-of-order processors. We presented and discussed several techniques that can help the verification engineer simplify and accelerate the verification process. UPEC-DIT was evaluated on several open-source designs, ranging from small functional units to a complex out-of-order processor. While many of the implemented instructions execute as expected, UPEC-DIT uncovered some unexpected timing violations. Our future work will address the design of security-conscious hardware. We envision that UPEC-DIT, if integrated into standard design flows, can make significant contributions to restoring trust in hardware for confidential computing.
2310.06603
V2X-AHD:Vehicle-to-Everything Cooperation Perception via Asymmetric Heterogenous Distillation Network
Object detection is the central issue of intelligent traffic systems, and recent advancements in single-vehicle lidar-based 3D detection indicate that it can provide accurate position information for intelligent agents to make decisions and plan. Compared with single-vehicle perception, multi-view vehicle-road cooperation perception has fundamental advantages, such as the elimination of blind spots and a broader range of perception, and has become a research hotspot. However, current cooperative perception research focuses on increasing the complexity of fusion while ignoring the fundamental problems caused by the absence of single-view outlines. We propose a multi-view vehicle-road cooperation perception system, vehicle-to-everything cooperative perception (V2X-AHD), in order to enhance the identification capability, particularly for predicting the vehicle's shape. First, we propose an asymmetric heterogeneous distillation network fed with different training data to improve the accuracy of contour recognition, with multi-view teacher features transferred to single-view student features. Since point cloud data are sparse, we propose Spare Pillar, a sparse-convolution-based plug-in feature extraction backbone, to reduce the number of parameters and enhance feature extraction capabilities. Moreover, we leverage multi-head self-attention (MSA) to fuse the single-view features, and the lightweight design allows the fused features to be expressed smoothly. The results of applying our algorithm to the large-scale open dataset V2XSet demonstrate that our method achieves state-of-the-art results. According to this study, V2X-AHD can effectively improve the accuracy of 3D object detection and reduce the number of network parameters, serving as a benchmark for cooperative perception. The code for this article is available at https://github.com/feeling0414-lab/V2X-AHD.
Caizhen He, Hai Wang, Long Chen, Tong Luo, Yingfeng Cai
2023-10-10T13:12:03Z
http://arxiv.org/abs/2310.06603v1
V2X-AHD:Vehicle-to-Everything Cooperation Perception via Asymmetric Heterogenous Distillation Network ###### Abstract Object detection is the central issue of intelligent traffic systems, and recent advancements in single-vehicle lidar-based 3D detection indicate that it can provide accurate position information for intelligent agents to make decisions and plan. Compared with single-vehicle perception, multi-view vehicle-road cooperation perception has fundamental advantages, such as the elimination of blind spots and a broader range of perception, and has become a research hotspot. However, the current perception of cooperation focuses on improving the complexity of fusion while ignoring the fundamental problems caused by the absence of single-view outlines. We propose a multi-view vehicle-road cooperation perception system, vehicle-to-everything cooperative perception (V2X-AHD), in order to enhance the identification capability, particularly for predicting the vehicle's shape. At first, we propose an asymmetric heterogeneous distillation network fed with different training data to improve the accuracy of contour recognition, with multi-view teacher features transferring to single-view student features. While the point cloud data are sparse, we propose Spara Pillar, a spare convolutional-based plug-in feature extraction backbone, to reduce the number of parameters and improve and enhance feature extraction capabilities. Moreover, we leverage the multi-head self-attention (MSA) to fuse the single-view feature, and the lightweight design makes the fusion feature a smooth expression. The results of applying our algorithm to the massive open dataset V2Xset demonstrate that our method achieves the state-of-the-art result. The V2X-AHD can effectively improve the accuracy of 3D object detection and reduce the number of network parameters, according to this study, which serves as a benchmark for cooperative perception. The code for this article is available at [https://github.com/feeling414-lab/V2X-AHD](https://github.com/feeling414-lab/V2X-AHD). Knowledge Distillation, V2X perception, Heterogeneous data. ## I Introduction Environmental perception is a fundamental component of autonomous driving [1][2], as it provides information for path planning and motion control [3][4][5]. As a critical task of environment perception, object detection [6][7] has always been a research hotspot in autonomous driving. Light changes, as well as rainy and foggy weather, have a significant impact on vision-based object detection methods, resulting in low-security redundancy. In recent years, with the enhancement of information processing capabilities and the decrease in sensor prices, lidar [8][9][10][11][12] has achieved leapfrog development with its superior space construction and anti-weather interference capabilities. Academic and industrial circles are largely in agreement that lidar plays a role in advanced intelligent driving technologies. Despite this, the current application of lidar is limited to the single-vehicle dimension, and single-view perception has many inherent flaws. For instance, in crowded road conditions, it is easy to develop blind spots due to occlusion, whereas in long-distance open road conditions, it will result in sparse perception. More importantly, if a pedestrian suddenly emerges from the blind spot, the self-driving vehicle will have very little time to react, which will be a difficult problem for human drivers as well. 
To address the aforementioned issues with single-view perception, a collaborative perception system incorporating multi-view data has emerged. In congested road conditions, the collaborative sensing system has a global vision. Cross-viewing compensates for the obstruction caused by a single perspective. In contrast, when sensing long-distance road conditions, the Fig. 1: Advantages of cooperation perception. The purpose of our model is that a single view of an object has the characteristics of a fusion view. (a) and (b) show the single-view object detection results, respectively. (c) shows the fusion view detection results. (d) and (e) show the single-view vehicle’s point cloud. (f) shows the fusion view vehicle’s point cloud. collaborative sensing system increases the sensing distance through information relays and solves the issue of sparse perception. It is believed that cooperative sensing can achieve superior perception results compared to single-view vehicles. According to the different transmission content, the existing cooperative sensing strategies can be categorized into three groups based on the transmission content: early fusion [13] of the original point cloud data transmission, intermediate fusion [14][15][16][17] of the transmitted point cloud feature data, and late fusion [18][19][20] of the transmission perception results. In the early fusion, the original point cloud data are used for splicing. The perception problem of a single vehicle can be fundamentally resolved by filling the point cloud gap caused by occlusion and expanding the point cloud range. However, due to the enormous amount of data in the original point cloud, the transmission bandwidth limits information transmission, so early fusion cannot be realized. In contrast, the post-fusion strategy transmits each perspective's perception results directly, and the transmission bandwidth requirement is small. However, due to differences in the perception results of the same object from different perspectives, the fusion results are susceptible to the negative results of a single view, resulting in missed and false detections. After the information exchange, the intermediate fusion strategy fuses the extracted intermediate features. It is a compromise strategy that balances perception accuracy and transmission bandwidth, and it has become the predominant research direction of collaborative sensing. Currently, the intermediate fusion strategy improves perception accuracy by increasing the module's complexity. However, it only improves the fitting capability brought on by the increase in parameters. The characteristics and essence of the fusion perspective have not yet been explored by the aforementioned methods, and increasing the number of model parameters will decrease real-time performance. The accuracy of collaborative perception is dependent on two factors: the extraction of single-view scene features and the fusion of multi-view scene features. Existing intermediate strategy fusion methods emphasize the latter, often ignoring the fundamental role of the former. In our primary research on object detection, we identified two features: First, as shown in Figures 1(a) and (b), the current single-vehicle object detection is more accurate for vehicle position detection but less accurate for vehicle shape detection. Second, as shown in Figure 1(c), the perceptual accuracy of early fusion strategies is typically greater than that of intermediate fusion strategies. 
The causes of the aforementioned characteristics are also evident. The point cloud of a single vehicle is sparse and incomplete and is, at the same time, limited by the viewing angle. As shown in Figures 1(d) and (e), although the algorithm can determine the precise position of the vehicle, it is difficult to determine the car's outline. In Figure 1(f), the vehicle's outline is complete and distinct in the fused viewpoint cloud. The early fusion strategy of directly matching the original point clouds is the most effective for performance improvement and has interpretability. As shown in Figure 1(c), the roadside radar has more wiring harnesses and denser point clouds, and the collaborative perception of vehicle-road coordination is typically more robust. This paper proposes an algorithm for vehicle-to-everything cooperative perception, V2X-AHD, based on an asymmetric heterogeneous distillation network. Here, asymmetry refers to the difference between the parameters of the teacher network and the student network, while heterogeneity refers to the heterogeneity of the point cloud data produced by different sensor models and different detection perspectives on the vehicle side and the roadside. The main contributions of this paper are as follows: 1. Propose an asymmetric structure for knowledge distillation [21][22][23], integrating the strategic benefits of early and intermediate fusion. During training, the teacher and student networks are separately trained using fused point cloud and single-view data.
When using single-view data to extract features, the student network has the same fusion point cloud features as the teacher network and can automatically complete the incomplete vehicle outline features. The asymmetric structure is adopted for the difference of cross-view point cloud data of vehicles and roads, which improves the accuracy of point cloud feature extraction with different structures. 2. Feature extraction module Spare Pillar. Due to the sparse nature of point clouds, we convert them to 2D pseudo-images and design a Spare Pillar feature extraction module based on sparse convolution [25]. Compared with the conventional dense convolution method, it has superior feature encoding capabilities while significantly reducing computation time. The step-by-step descending structure of the bottleneck module facilitates multi-scale feature fusion and helps decouple point cloud features of different sizes. 3. Multi-head self-attention (MSA) feature fusion module. Existing methods hinder the practical expression of single-vehicle features due to the use of complex fusion modules; therefore, we employ a lightweight MSA mechanism to reduce computation while realizing the practical expression of single-vehicle features. The remainder of the paper is organized as follows. The literature related to this work is briefly introduced in related work. The Methodology section describes the V2X-AHD algorithm structure proposed in this paper. The Experiments section demonstrates the results of the experiments of the proposed algorithm on the V2XSet [14] dataset, while the Conclusion section provides a summary. ## II related works ### _Knowledge Distillation_ Knowledge distillation is also known as the teacher-student model [21][22][23]. Geoffrey Hinton [22] noted that the smooth distribution result predicted by the teacher network makes it easier for students to learn features than the Dirac result. The knowledge distillation network trains the teacher and student networks with different parameters. The network of teachers with a large parameter scale can be compressed into a smaller network of students. At the same time, the teacher network and the student network can input different data for training. Currently, this method is widely employed in the fields of semantic segmentation [26], point cloud target detection [27], and object re-identification [28]. As shown in Figures 2(a) and 2(b), the distillation network can be divided into Feature Imitation and Prediction Mimicking, depending on the transfer feature content. DiscoNet [29] utilizes a symmetric distillation network and a predictive simulation structure to transform multi-view point cloud features into a single-view result. However, DiscoNet's Prediction Mimicking has the problem of hindering the detection of performance diversity. In contrast, the purpose of feature simulation is to improve the consistency of the teacher-student model in terms of latent features. As shown in Figure 2(c), we employ an innovative asymmetric knowledge distillation structure to train the point cloud feature extraction. Compared with the student network, the teacher network has a more significant number of parameters and uses multi-view fusion point clouds as data input. The parameters of the student network are small, and single-view point cloud data are used to extract the network's features. During the stage of training, the fused-view features of the teacher network are transferred to the single-view features of the student network. 
During the test phase, even without the guidance of the teacher network, the student network retains the feature expression of the fusion point cloud beneath the single-view data. ### _3D point cloud object detection_ Currently, the 3D object detection algorithms based on point cloud follow the "encoding-decoding-detection head" detection architecture and adopt the grid-based form. According to the type of feature extraction convolutional network, it can be divided into two categories: the first category, based on the 3D voxel grid method [8][9][10][31], uses a 3D convolutional network to extract features; the second category, based on 2D Pillar grids, is the method [32][11][33][34], uses 2D convolutional networks to extract features. VoxelNet[18], the pioneering work of the voxel grid method, divides the point cloud space into 3D voxels and encodes point cloud features using 3D convolution. The Second [9] method investigates the problem of empty voxels in the VoxelNet method's long-distance area and employs a sparse 3D convolution method to improve the detection accuracy. The above algorithms that use 3D convolutional networks to extract features have high accuracy. However, due to the high computational complexity of the 3D convolution method, even the enhanced sparse method still has a substantial computational overhead. To address the aforementioned issues, a class of techniques employing 2D convolutions has gradually emerged, converting 3D point clouds into 2D pseudo-image features and then employing 2D convolutions with reduced operational complexity. One representative algorithm is the PointPillar [32] pillar feature extractor based on PointNet [12]. HVNet [33] combines columnar features of varying scales to Fig. 2: Comparison of distillation modes. The yellow and green lines represent the teacher and student network data flow, respectively. (a) and (b) illustrate two kinds of distillation modes, prediction mimicking, and Feature Imitation. (c) is our new mode of asymmetric knowledge distillation, the raw point cloud is divided into a fusion view and a single view. enhance operational precision and inference speed. Currently, algorithms based on 2D pseudo-graph features prioritize the projection of complex pillar features and multi-scale aggregation. However, compared to images, the information density of point cloud grids is significantly smaller than that of image grids, and the features are sparse. Using dense convolution methods for point cloud features wastes computing resources. We use sparse 2D convolution as the core and propose the Spare Pillar Plug and Play feature extraction network. Sparse convolution can reduce computation time significantly. At the same time, the detection accuracy improves due to the more concentrated feature extraction. ### _Vehicle-to-Everything Cooperation Perception_ Vehicle-to-Everything collaborative perception is the process of obtaining fusion perception results by receiving the perception data of neighboring agents. Due to its finer point cloud data and more stable own position coordinates, roadside lidar is more conducive to enhancing detection accuracy than onboard lidar. However, because access to roadside equipment will result in differences in point cloud data perspectives, a heterogeneous fusion module is required to fuse them. 
Currently, the mainstream algorithm of collaborative sensing can be divided into three categories based on the various transmission data types: the early fusion of the original point cloud data [13], the intermediate fusion of the transmission point cloud features [14][15][16][17], and the transmission perception. As a result of late fusion [18][19][20], the required bandwidth for data transmission decreases for each of the three methods. Cooper[25] proposed the early fusion method based on the original point cloud data level. He obtained the perception result of the complete perspective by aggregating the surrounding point cloud data. This method can fundamentally solve the problem of a single perspective, but transmitting raw point cloud data requires a substantial amount of communication bandwidth and high latency. In the late fusion algorithm, Andreas [18] proposed a high-dimensional sensor fusion structure that uses temporal and spatial sequences as input data to predict the position, action, and angle of surrounding agents in three dimensions. Rawashdeh [19] utilized machine learning techniques to transmit three-dimensional information, including the position and size of the tracking object's center point, in order to accurately predict surrounding objects. However, in late fusion, although the required data transmission bandwidth is small, a significant amount of effective scene information will be lost, resulting in excessive dependence on the results of a single agent, and an effective error correction mechanism cannot be created when individual sensors make errors. The intermediate fusion strategy can reduce the transmission bandwidth requirements to reduce the loss of scene information, which has attracted a large number of researchers to conduct research in this area. F-cooper [15] employs voxel characteristics for information transfer and the Maxout [35] method for voxel fusion at the junction. The OPV2V [17] algorithm leverages the self-attention mechanism [36] to learn the interaction between compressed features at the same spatial position. The V2Vnet [16] algorithm fuses passed compressed features using a graph Fig. 3: Pipeline of our proposed V2X-AHD. It consists of two networks: a teacher network and a student network. The single arrow solid line and dotted line represent the forward propagation and backward propagation process, respectively. The details of each individual component are illustrated in Sec.3. convolutional neural network GNN [37] network. V2X-ViT [14] considers the issue of space-time misalignment caused by real-world communication delays and employs self-attention mechanisms and multi-scale windows to fuse features from various perspectives. The MSA module minimizes the parameters' size while maintaining the perception's accuracy. The following sections will focus on the structural details of each part of the model V2X-AHD proposed in this paper. ## III Methodology In this paper, we propose a Vehicle-to-Everything cooperation perception system V2X-AHD based on 3D point cloud data to perceive the surrounding environment of a vehicle. The primary sources of design inspiration are the high precision of the early fusion strategy and the benefits of the knowledge distillation network for joint training of different input stream data. The advantage of the knowledge distillation system is its ability to compress the knowledge gained from the large model into the small model by means of "knowledge distillation." 
At the same time, the distillation model can be used for zero-shot or few-shot learning, allowing the teacher and student networks to input distinct training data. The point cloud of Vehicle-to-Everything cooperation perception is unique, and it is simple to obtain multi-view point cloud features for training offline. In contrast, in the online test scenario, the agent can only accept a single-perspective point cloud to extract and transmit features, which is perfectly compatible with the distillation network. During the training stage, the teacher model uses multi-view fusion data, while the student model uses single-view data. Both of them use the Spare Pillar to extract the point cloud features. Through the distillation method of feature imitation, the teacher model transfers the multi-view feature extraction paradigm to the single-view student model. During the test stage, although the student model is trained with single-view data, the detection performance can approach that of the teacher model using multi-view data, thereby fundamentally solving the problem of single-view data feature extraction. ### _Student network_ Consider each agent \(i\in\{1,\ldots,N\}\) within the communication distance, with agent category \(c_{i}\in\{V,I\}\), where \(V\) and \(I\) represent vehicles and roadside equipment, respectively. We assume that the data transmission is synchronized, which means that each agent \(i\) has position \(P_{i}\) and lidar data \(L_{i}\). Agent _ego_ is selected as the central agent and receives the positions of the surrounding agents. The central agent _ego_ can receive the original point cloud or features from the surrounding agents through a coordinate system transformation. #### Iii-A1 Feature extraction The point cloud feature extraction algorithm uses the 2D columnar grid feature extraction algorithm Spare Pillar, as shown in Figure 4. Using the "Encoder-Neck" architecture under the Bird's Eye View (BEV) while retaining the traditional 2D columnar grid algorithm, it employs a sparse convolution method better suited for point cloud features to further reduce inference delay and improve feature extraction capabilities. The overall structure consists of three components: a 2D pseudo-image generation module, an encoder, and a bottleneck block (Neck). The Point Feature Net converts the point cloud into a stacked pillar tensor, which is then projected into a 2D pseudo-image with a scale of \(H\times W\times C\), where \(H\) and \(W\) represent the height and width of the pseudo-image canvas, respectively, and \(C\) indicates the number of channels of the pseudo-image. The 2D pseudo-image is then fed into the encoder. The Encoder adopts the VGGNet [38] architecture. The objective is to extract sparse columnar features of varying depths from the projected sparse 2D pillars and to feed all sparse pillar features of varying scales into the bottleneck block. Since feature compression has been performed, sparse pillar features can be fused using standard dense 2D convolutions. The specific procedure is illustrated by formula (1): \[F_{i}^{s}=\phi_{s}(L_{i}),F_{i}^{s}\in\mathbb{R}^{\bar{H}\times\bar{W}\times \bar{C}} \tag{1}\] where \(\phi_{s}(\bullet)\) represents the student feature extraction network, and \(\mathbb{R}^{\bar{H}\times\bar{W}\times\bar{C}}\) indicates the scale of the feature space after convolution. SparePillar employs sparse convolution to reduce computational complexity during the stage of feature extraction.
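As a concrete reference for the pseudo-image generation step described above, the following minimal sketch (numpy only; the per-pillar features are reduced to a simple point count and mean height rather than the learned Point Feature Net encoding, and the grid ranges follow the experimental settings reported later) scatters a point cloud into a sparse \(H\times W\times C\) canvas:

```python
# Minimal sketch of pillar-style projection to a 2D pseudo-image (assumptions:
# numpy only, toy per-pillar features = point count and mean height).
import numpy as np

def pillarize(points, x_range=(-140.8, 140.8), y_range=(-38.4, 38.4), voxel=0.4):
    W = int((x_range[1] - x_range[0]) / voxel)
    H = int((y_range[1] - y_range[0]) / voxel)
    pseudo_image = np.zeros((2, H, W), dtype=np.float32)     # C = 2 toy channels
    ix = ((points[:, 0] - x_range[0]) / voxel).astype(int)
    iy = ((points[:, 1] - y_range[0]) / voxel).astype(int)
    keep = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
    ix, iy, z = ix[keep], iy[keep], points[keep, 2]
    for j, i, h in zip(iy, ix, z):        # channel 0: point count, channel 1: sum of heights
        pseudo_image[0, j, i] += 1.0
        pseudo_image[1, j, i] += h
    occupied = pseudo_image[0] > 0
    pseudo_image[1, occupied] /= pseudo_image[0, occupied]   # mean height per pillar
    return pseudo_image                    # sparse H x W grid, ready for 2D (sparse) convs

pts = np.random.randn(1000, 3) * [30.0, 10.0, 1.0]           # toy point cloud
print(pillarize(pts).shape)
```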
In the feature upsampling procedure, dense convolution is used to integrate high-level abstract semantics and low-fine-grained spatial features to improve the accuracy of large objects. We use different feature extraction structures for the teacher and student models. The experimental section will demonstrate distinct structural differences. #### Iii-A2 Compression and decompression To further reduce the required bandwidth for data transmission, each agent needs to perform compression before data transmission. We use 1\(\times\)1 convolutional layer to compress the features in the channel direction, as shown in formula (2): \[F_{i}^{s^{\prime}}=Enc_{com}(F_{i}^{s}),F_{i}^{s^{\prime}}\in\mathbb{R}^{\bar{ H}\times\bar{W}\times\bar{C}},\bar{\bar{C}}\ll\bar{C} \tag{2}\] \[Data_{i}\leftarrow(F_{i}^{s^{\prime}},P_{i}) \tag{3}\] where \(Enc_{Com}(\bullet)\) represents the compression function, formula (3) represents the information transmission data packet, \(Data_{i}\) transmits the compressed feature \(F_{i}^{s^{\prime}}\) and its position \(P_{i}\), and decompresses after other agents receive the compressed feature \(F_{i}^{s^{\prime}}\); the specific process is shown in formula (4): \[F_{i}^{s^{\prime\prime}}=Dec_{com}(F_{i}^{s^{\prime}}),F_{i}^{s^{\prime\prime }}\in\mathbb{R}^{\bar{H}\times\bar{W}\times\bar{C}} \tag{4}\] where \(Dec_{Com}(\bullet)\) represents the decompression function corresponding to the compression process, and the decoded feature space becomes \(\mathbb{R}^{\bar{H}\times\bar{W}\times\bar{C}}\). The decompressed feature \(F_{i}^{s^{\prime}}\) will be transmitted to the feature fusion part. #### Iii-A3 Feature fusion The MSA mechanism model [36] fuses the decompressed features. The specific procedure is shown in Figure 5. The feature vectors at the same position on the feature map correspond to particular points in the original point cloud data. Spatial correlation is destroyed by tiling the feature map and calculating the weighted sum of each feature. However, although the complex fusion structure objectively increases the network's depth and thus improves its ability to fit, this operation will hinder the accurate expression of individual features. We use an MSA mechanism with a few parameters to more accurately capture the features it represents and generate a fusion feature map. The specific procedure is shown in formulas (5) and (6): \[m_{i}=M(P_{ego},P_{i}) \tag{5}\] \[\begin{split} M_{ego}=MSA(F_{ego}^{s},(F_{1}^{s^{\prime\prime}},m_ {1}),\\ (F_{2}^{s^{\prime\prime}},m_{2}),\ldots,(F_{n}^{s^{\prime\prime}},m_ {n}))\end{split} \tag{6}\] The formula (5) indicates that after receiving the data \(Data_{i}\), \(M(\bullet)\) means to use its position information \(P_{ego}\) and \(P_{i}\) to calculate the position transfer matrix \(m_{i}\). After that, we send matrix \(m_{i}\) and the decompressed feature \(F_{1}^{s^{\prime\prime}}\) to the fusion network \(MSA(\bullet)\), as shown in the formula (6). \(MSA(F_{ego}^{s}\) is the feature of its point cloud, and \(M_{ego}\) represents the fusion feature. Finally, the fusion features are sent to the detection head. The fusion network structure is shown in Figure 5. After the features have been connected, the MSA mechanism is used to extract features, where A represents the number of multi-head attention heads. 
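Before spelling out the attention formulas below, the per-location fusion of (5) and (6) can be sketched as follows (a PyTorch sketch; the channel and head counts are illustrative assumptions, and the spatial warping by the transfer matrices \(m_{i}\) is omitted for brevity):

```python
# Compact sketch of the per-location MSA fusion in (5)-(6): feature maps from
# the ego vehicle and its neighbours are stacked along a "token" axis and fused
# with multi-head self-attention, keeping the ego slot as the fused output.
import torch
import torch.nn as nn

class MSAFusion(nn.Module):
    def __init__(self, channels=64, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          batch_first=True)

    def forward(self, agent_feats):                 # (N_agents, C, H, W), ego first
        n, c, h, w = agent_feats.shape
        tokens = agent_feats.permute(2, 3, 0, 1).reshape(h * w, n, c)   # per-pixel tokens
        fused, _ = self.attn(tokens, tokens, tokens)                    # (H*W, N, C)
        ego = fused[:, 0, :].reshape(h, w, c).permute(2, 0, 1)          # take ego slot
        return ego                                                       # (C, H, W)

feats = torch.randn(3, 64, 48, 176)                 # ego + 2 neighbours, toy sizes
print(MSAFusion()(feats).shape)                     # torch.Size([64, 48, 176])
```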
Formulas (7), (8), and (9) give the specific steps of the multi-head attention mechanism: formula (7) represents the concatenation of the multi-head attention results; formula (8) represents the generation of a single attention head, where \(W_{i}^{Q}\), \(W_{i}^{K}\), and \(W_{i}^{V}\) are the projection matrices relating the \(Q\), \(K\), and \(V\) values of the multi-head attention \(Multihead(Q,K,V)\) to those of a single attention head \(Attention(Q^{\prime},K^{\prime},V^{\prime})\); and formula (9) represents the operation of single-head self-attention, where \(dK^{\prime}\) denotes the feature dimension. \[Multihead(Q,K,V)=Linear(Concat(head_{1},head_{2},\ldots,head_{h})) \tag{7}\] \[head_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{8}\] \[Attention(Q^{\prime},K^{\prime},V^{\prime})=softmax(\frac{Q^{\prime}K^{\prime T}}{\sqrt{dK^{\prime}}})V^{\prime} \tag{9}\] #### Iii-B4 Detection Head After obtaining the final fusion feature \(M_{ego}\), we use two 1\(\times\)1 convolutional layers to generate classification and regression prediction results, respectively, and form a prediction box, as shown in formulas (10) and (11): \[Y_{class}=\xi_{class}(M_{ego}) \tag{10}\] \[Y_{reg}=\xi_{reg}(M_{ego}) \tag{11}\] where \(\xi_{class}(\bullet)\) represents the classification layer, and \(Y_{class}\) outputs a score, which is used to indicate whether the preselected box is an object or background. \(\xi_{reg}(\bullet)\) represents the regression layer, and the output \(Y_{reg}\) has seven dimensions \((x,y,z,w,l,h,\theta)\), where \(x\), \(y\), and \(z\) represent the position of the prediction box, \(w\), \(l\), and \(h\) represent its size, and \(\theta\) represents its heading angle. Fig. 4: Feature extraction network Spare Pillar structure. The structure includes three parts: a point feature net, an encoder, and a neck. Orange and blue modules represent sparse convolution and dense convolution, respectively. ### _Teacher network_ #### Iii-B1 Multi-view data fusion The input to the teacher network is fused multi-view data, which must be aggregated before being fed into the model. The fused point cloud is obtained as follows: \[L_{mix}=A((L_{ego},P_{ego}),(L_{1},P_{1}),(L_{2},P_{2}),\ldots,(L_{N},P_{N})) \tag{12}\] where \(L_{mix}\) represents the fused point cloud, \(N\) indicates the number of agents within the transmission range, and \(A(\bullet)\) represents the aggregation process of the surrounding point cloud data.
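A minimal numpy sketch of the aggregation \(A(\bullet)\) in (12) is given below; the poses \(P_{i}\) are assumed to be 4\(\times\)4 homogeneous lidar-to-world matrices, an illustrative convention rather than the dataset's exact format:

```python
# Sketch of the aggregation A(.) in (12): map every agent's points into the
# ego frame and concatenate them into the fused cloud L_mix.
import numpy as np

def to_ego_frame(points, pose_agent, pose_ego):
    T = np.linalg.inv(pose_ego) @ pose_agent                  # ego <- agent
    homog = np.hstack([points[:, :3], np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

def aggregate(ego_points, ego_pose, others):
    clouds = [ego_points[:, :3]]
    for pts, pose in others:                                   # (L_i, P_i) pairs
        clouds.append(to_ego_frame(pts, pose, ego_pose))
    return np.vstack(clouds)                                   # fused cloud L_mix

ego = np.random.rand(100, 3)
other = (np.random.rand(80, 3), np.eye(4))
print(aggregate(ego, np.eye(4), [other]).shape)                # (180, 3)
```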
To ensure that the input data of the teacher network of the fusion view and the student network of a single view are aligned, the coordinates are transformed into the coordinates centered on agent _ego_ after the fusion point cloud is cropped. #### Iii-B2 Feature extraction After collecting the multi-view data, we feed it into the network for feature extraction. The process of feature extraction for the teacher network is similar to that for the student network, but the input data differ, as shown in formula (13): \[F_{i}^{t}=\phi_{t}(L_{mix}),F_{i}^{t}\in\mathbb{R}^{\hat{H}\times\hat{W}\times \hat{C}} \tag{13}\] \(\phi_{t}(\bullet)\) represents the feature extraction network of the teacher's point cloud, \(F_{ego}^{t}\) indicates the feature extracted from the fusion point cloud centered on agent \(ego\), and \(R^{\hat{H}\times\hat{W}\times\hat{C}}\) represents the same feature space as the student network. #### Iii-B3 Knowledge distillation loss function During the training process, the teacher network is more straightforward to converge than the student network training; therefore, the teacher and student networks can be jointly trained with random initialization parameters. In this paper, we jointly train the model using object detection loss and distillation loss. The loss \(\mathcal{L}_{total}\) that needs to be minimized is shown in formula (14): \[\mathcal{L}_{total}=\lambda_{det}\mathcal{L}_{det}+\lambda_{KD}\mathcal{L}_{KD} \tag{14}\] where hyperparameters \(\lambda_{det}\) and \(\lambda_{KD}\) control the weights of object detection loss \(\mathcal{L}_{det}\) and knowledge distillation loss \(\mathcal{L}_{KD}\), respectively. The target detection loss is shown in formula (15), including classification loss \(\mathcal{L}_{class}(\bullet)\) and regression loss \(\mathcal{L}_{reg}(\bullet)\). Classification loss \(\mathcal{L}_{class}(\bullet)\) use focal loss[38] to calculate classification \(Y_{class}(\bullet)\) and label classification value \(\hat{Y}_{class}\), which is used to judge whether the object in the detection frame is background or target. Regression loss \(\mathcal{L}_{reg}(\bullet)\) uses \(\ell_{1}\) smooth loss to calculate regression value \(Y_{reg}\) and label regression value \(\hat{Y}_{reg}\), which are used to judge the detection frame's position, size, and heading angle. \[\mathcal{L}_{det}=\mathcal{L}_{reg}(Y_{reg},\hat{Y}_{reg})+\mathcal{L}_{class} (Y_{class},\hat{Y}_{class}) \tag{15}\] The distillation loss \(\mathcal{L}_{KD}(F_{ego}^{t},F_{ego}^{s})\) is shown in formula (16), where \(KL((p(x)||(q(x)))\) represents the Kullback-Leibler(KL) divergence, which is used to describe the difference between the distributions \(p(x)\) and \(q(x)\). \(V(\bullet)\) indicates retaining the feature channel \(\hat{c}\) and generating a one-dimensional feature (vector) from a feature of \(\hat{H}\times\hat{W}\) size. The \(\tau(\bullet)\) function represents the softmax operation process with the distillation temperature \(T\) according to the number of channels \(\hat{c}\), as shown in formula (17). \[\mathcal{L}_{KD}(F_{ego}^{t},F_{ego}^{s})=KL(\tau(V(F_{ego}^{t}))||\tau(V(F_{ ego}^{s}))) \tag{16}\] \[\tau(\bullet)\leftarrow\frac{exp(z_{i}/T)}{\sum_{j}exp(z_{j}/T)} \tag{17}\] where \(z_{i}\) represents the \(i\)-th channel's characteristics, and the hyperparameter \(T\) represents the distillation temperature. This section describes the network structure and direction of data flow for each network component. 
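As a compact reference for the objective in (14)-(17), the following PyTorch sketch implements one plausible reading of \(V(\bullet)\) (per-channel spatial pooling) and \(\tau(\bullet)\) (channel-wise softmax with temperature \(T\)); it is an illustrative sketch rather than the authors' exact implementation, with the default temperature set to the value selected in the experiments:

```python
# Sketch of the distillation objective (14)-(17).  V(.) is read here as a
# per-channel spatial pooling and tau(.) as a channel-wise softmax with
# temperature T; both are stated assumptions, not the authors' exact code.
import torch
import torch.nn.functional as F

def kd_loss(f_teacher, f_student, T=10.0):
    # f_*: (C, H, W) ego feature maps from the teacher and student networks
    z_t = f_teacher.flatten(1).mean(dim=1)             # V(.): one value per channel
    z_s = f_student.flatten(1).mean(dim=1)
    p_t = F.softmax(z_t / T, dim=0)                     # tau(.), eq. (17)
    log_p_s = F.log_softmax(z_s / T, dim=0)
    return F.kl_div(log_p_s, p_t, reduction='sum')      # KL(teacher || student), eq. (16)

def total_loss(det_loss, f_teacher, f_student, lam_det=1.0, lam_kd=1.0, T=10.0):
    # lam_det and lam_kd are the weighting hyperparameters of eq. (14)
    return lam_det * det_loss + lam_kd * kd_loss(f_teacher, f_student, T)

ft, fs = torch.randn(256, 48, 176), torch.randn(256, 48, 176)
print(total_loss(torch.tensor(0.5), ft, fs).item())
```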
Experiments will be conducted in the next section to confirm the effectiveness of the structure described above. ## IV Experiments ### _Dataset_ To evaluate the performance of the proposed V2X-AHD algorithm, all experiments use the open large-scale vehicle-road collaboration dataset V2XSet [14], which is generated with CARLA [39] and OpenCDA [40] to simulate natural scenes. Fig. 5: Feature fuse module multi-head attention (MSA) mechanism. The data set consists of 11447 frames, and since each frame contains multiple agents, the total number of samples is 33081. The training, validation, and test sets contain 6694/1820/2833 frames, respectively. ### _Settings_ As the evaluation metric, we employ the standard 3D object detection measure: average precision (AP) at Intersection-over-Union (IoU) thresholds of 0.5 and 0.7. In the experiments, the communication distance between any two agents is 70 m, agents beyond this distance are ignored, and the compression rate of transmitted data is uniformly set to 32. The x- and y-axis detection ranges are [-140.8, 140.8] and [-38.4, 38.4], respectively, while the z-axis range is [-1, 3]. A single voxel's length, width, and height are set to 0.4, 0.4, and 4, respectively. During training, all comparison methods employ the Adam [41] optimizer for 60 epochs; the initial learning rate is set to 0.001 and is decayed by a factor of 0.1 every 20 epochs. All models are trained on a device equipped with an NVIDIA GeForce RTX 3090 graphics card and an AMD 5900x CPU. Table I displays the network and parameter information for the distillation network. ### _Comparison methods_ We compare against Early Fusion, which aggregates the original point clouds of surrounding agents, and Late Fusion, which collects the perception results of all surrounding agents and merges them with non-maximum suppression. For the intermediate fusion strategy, we compare the five most recent algorithms: F-cooper, V2VNet, DiscoNet, OPV2V, and V2X-ViT. For a fair comparison, the same feature extraction method is used during evaluation: each method is tested with both the PointPillar [32] backbone and the Spare Pillar backbone proposed in this paper. ### _Quantitative analysis_ In this section, we present the results of our V2X-AHD method and compare it to the literature. #### Iv-D1 Main methods results The experimental results of existing 3D object detection algorithms are compared in Table II. The Spare Pillar module proposed in this paper is a plug-and-play module, and results are reported with both the Point Pillar and the Spare Pillar backbones. The experimental results demonstrate that the intermediate fusion strategies achieve better results than the no-fusion and late-fusion algorithms. The early fusion strategy can receive the complete original point cloud data, and its test results are better than those of the intermediate fusion strategies. Under IoU=0.7, the proposed V2X-AHD algorithm achieves the best performance. Compared with the strongest baseline, V2X-ViT, the accuracy is increased by 1.6% and 2.2% when using the Point Pillar and Spare Pillar backbone networks, respectively. The perception accuracy of the Spare Pillar module is superior to that of the Point Pillar module: at IoU=0.7, the results of the F-cooper, DiscoNet, OPV2V, and V2X-ViT methods increase by 8.4%, 4.5%, 13.1%, and 4.5%, respectively, when switching to Spare Pillar.
Performance is determined jointly by the number of model parameters and the prediction accuracy. Figure 6 illustrates the relationship between the parameter counts of the various algorithms and their precision. Because this paper adopts a knowledge distillation architecture, separate parameter counts are reported for the training and testing stages. The figure shows that our method achieves the best result with the fewest test-time parameters. Fig. 6: Model parameter quantity and accuracy relationship graph. Combined with the quantitative analysis, we find that F-Cooper and OPV2V, which have few parameters and simple fusion strategies, gain substantially in detection performance once their backbone is replaced with the Spare Pillar module. However, the V2X-ViT algorithm, with its many parameters and more complex fusion strategy, shows only a slight improvement, and V2VNet, with the largest number of parameters, even shows a slight drop. These results indicate that a high-complexity fusion model hinders the expression of features. The lightweight MSA feature fusion module presented in this paper is better suited for fusing single-view vehicle data features. #### Iv-E2 Distillation temperature comparison The knowledge distillation structure employs soft training targets. Compared with manually labeled hard target labels, soft target labels contain more entropy, thereby increasing the difference between various features and allowing the student network to obtain more valuable information from the teacher network. The distillation temperature is a vital hyperparameter of knowledge distillation, and Table III shows the AP results of the model at different distillation temperatures. When the distillation temperature is low, detection precision increases as the temperature rises. The detection accuracy reaches its highest value when the distillation temperature is ten; as the temperature increases further, the detection accuracy shows a downward trend. According to our analysis, the inter-class differences of the soft targets become smaller as the distillation temperature increases, so the distinguishing characteristics can no longer be learned. We therefore selected 10 as the optimal temperature. ### _Qualitative analysis_ #### Iv-E1 Object detection result Figure 7 depicts the detection results. We select three representative scenarios and compare against the F-cooper, OPV2V, and V2X-ViT algorithms. Fig. 7: Qualitative comparison in different conditions. Green and red bounding boxes represent ground truth and prediction, respectively. In addition, we use circles to illustrate the key of the figure.
The green and red bounding boxes represent the ground truth and the predicted results, respectively. Even when compared to the SOTA algorithm V2X-ViT, our algorithm has a higher degree of matching. The scene in Fig. 7(a) shows the false detection problem. All the algorithms recognize the vehicles in the scene. However, except for our algorithm, the other algorithms all identify some trees as vehicles, as shown in the red circle. It can be seen that the plug-in SparePillar has a more robust feature extraction performance than PointPillar. Figure 7(b) depicts the occlusion detection results. The F-cooper and OPV2V algorithms show a significant deviation of the detection bounding box angle, or even occlusion-related missed detections. As shown by the red circle, the V2X-ViT algorithm fails to detect the ego vehicle's own point cloud, even though the fused point cloud at this location has the shape of the vehicle. According to our analysis, the intermediate fusion algorithm cannot accurately reconcile the contradictory results; this also demonstrates that a complex fusion strategy impedes the correct expression of the extracted features. Figure 7(c) shows a scene with multiple intersections and agents. At the edge positions, such as the left and bottom sides of the scene, F-cooper, OPV2V, and V2X-ViT miss detections, as shown by the red circle marks. In areas where the point cloud is sparse, as indicated by the green circle, detection is also missed by all algorithms; although there are ground-truth bounding boxes there, only a few points are present, no effective vehicle features can be formed, and the failure to detect is a reasonable phenomenon. #### V-A2 Multi-view feature fusion Figure 8 shows the aggregation process between fused view data and visualizes the attention weights. Infra represents the roadside infrastructure, while SDV1 and SDV2 represent self-driving vehicles. With SDV1 functioning as the data receiver, Infra and SDV2 transmit data to SDV1. The point cloud map and the attention map correspond; the brighter the color, the more attention the area has received. In Figure 8(a), because SDV2 blocks Car1, SDV1 pays little attention to the position of Car1, shown as a darker color, whereas the corresponding position in the SDV2 attention map is brighter. Car2 is located at an intersection, the detection distance to SDV1 is relatively large, and there is occlusion; SDV1 therefore pays little attention to it, whereas the roadside equipment at the corresponding location shows high attention. In Figure 8(b), SDV1 is blocked by the intersection, resulting in a restricted field of view. The highlighted area of the SDV1 attention map is limited to part of the intersection, and areas 1 and 2 are not attended to due to occlusion, whereas Infra and SDV2 show higher attention at the area1 and area2 positions. The attention-based fusion strategy MSA proposed in this paper can effectively synthesize the extracted point cloud features, ultimately extending the detection range and handling occlusion. Fig. 8: Attention weight transfer, Scene 1. Brighter parts of the image indicate higher attention to that area. Each scene includes three agents: SDV1, SDV2, and Infra; SDV1 is the feature receiver. ### _Ablation experiments_ When IoU=0.7, the single modules SparePillar and AHD improve AP by 13.5% and 10.8%, respectively.
The results indicate that SparePillar and the asymmetric distillation architecture can significantly improve perceptual accuracy. The MSA module increases accuracy by a further 1.2%, indicating that, at an already high perceptual accuracy, the fusion module can enhance feature expression more than the plain self-attention mechanism. Under IoU=0.5 and IoU=0.7, the method presented in this paper improves on the baseline by 8.7% and 16.0%, respectively. Since the single view has acquired multi-view features, the improvement is larger under the stricter IoU=0.7 condition, which requires greater precision. The ablation results above demonstrate the effectiveness of each component. ## V Conclusion To solve the problem of low vehicle recognition accuracy caused by the unclear vehicle outline in conventional single-vehicle point cloud object detection, this paper proposes a multi-view vehicle-road collaborative perception framework, V2X-AHD, based on a distillation network. Compared with conventional algorithms, the asymmetric heterogeneous distillation network AHD proposed in this paper can transfer the complete multi-view point cloud features to a single view, improving the accuracy of object contour perception. The extractor Spare Pillar, designed around sparse convolution, improves the ability to extract features from point cloud data while simultaneously reducing the number of parameters. The feature fuser MSA performs feature-level fusion of the multi-view feature data. We verify the proposed method on the publicly available dataset V2XSet, where it achieves the best results, and demonstrate the effectiveness of V2X-AHD through quantitative and qualitative analysis. The proposed method has so far been validated only in a simulated environment, in which the difference between vehicle-side and roadside data is negligible. In future work, we will collect real-scene point cloud data, construct our own real-scene dataset, and examine the differences between vehicle and roadside point cloud data.
2301.10720
Topological black holes in higher derivative gravity
We study static black holes in quadratic gravity with planar and hyperbolic symmetry and non-extremal horizons. We obtain a solution in terms of an infinite power-series expansion around the horizon, which is characterized by two independent integration constants -- the black hole radius and the strength of the Bach tensor at the horizon. While in Einstein's gravity, such black holes require a negative cosmological constant $\Lambda$, in quadratic gravity they can exist for any sign of $\Lambda$ and also for $\Lambda=0$. Different branches of Schwarzschild-Bach-(A)dS or purely Bachian black holes are identified which admit distinct Einstein limits. Depending on the curvature of the transverse space and the value of $\Lambda$, these Einstein limits result in (A)dS-Schwarzschild spacetimes with a transverse space of arbitrary curvature (such as black holes and naked singularities) or in Kundt metrics of the (anti-)Nariai type (i.e., dS$_2\times$S$^2$, AdS$_2\times$H$^2$, and flat spacetime). In the special case of toroidal black holes with $\Lambda=0$, we also discuss how the Bach parameter needs to be fine-tuned to ensure that the metric does not blow up near infinity and instead matches asymptotically a Ricci-flat solution.
Alena Pravdova, Vojtech Pravda, Marcello Ortaggio
2023-01-25T17:29:47Z
http://arxiv.org/abs/2301.10720v1
# Topological black holes in higher derivative gravity ###### Abstract We study static black holes in quadratic gravity with planar and hyperbolic symmetry and non-extremal horizons. We obtain a solution in terms of an infinite power-series expansion around the horizon, which is characterized by two independent integration constants - the black hole radius and the strength of the Bach tensor at the horizon. While in Einstein's gravity, such black holes require a negative cosmological constant \(\Lambda\), in quadratic gravity they can exist for any sign of \(\Lambda\) and also for \(\Lambda=0\). Different branches of Schwarzschild-Bach-(A)dS or purely Bachian black holes are identified which admit distinct Einstein limits. Depending on the curvature of the transverse space and the value of \(\Lambda\), these Einstein limits result in (A)dS-Schwarzschild spacetimes with a transverse space of arbitrary curvature (such as black holes and naked singularities) or in Kundt metrics of the (anti-)Nariai type (i.e., dS\({}_{2}\times\)S\({}^{2}\), AdS\({}_{2}\times\)H\({}^{2}\), and flat spacetime). In the special case of toroidal black holes with \(\Lambda=0\), we also discuss how the Bach parameter needs to be fine-tuned to ensure that the metric does not blow up near infinity and instead matches asymptotically a Ricci-flat solution. ## 1 Introduction Black holes can be regarded as the most fundamental objects in gravity, serving as theoretical laboratories to study various aspects of gravitational theories. In general relativity, Hawking's theorem states that spacelike cross sections of an event horizon in a stationary asymptotically flat spacetime are topologically 2-spheres, assuming also that the dominant energy condition holds [1, 2]. By relaxing some of the assumptions in Hawking's theorem, one can obtain more general horizon geometries. For instance, in a locally asymptotically anti-de Sitter (AdS) spacetime (where the asymptotic flatness and dominant energy condition are both violated), it is possible to construct topological black holes for which the spacelike cross section of the event horizon can be a compact Riemann surface of any genus \(g\)[3, 4, 5, 6, 7, 8]. Stemming from the early results of [9, 10], recently there has been a great interest in the study of static, spherically symmetric black holes in quadratic gravity [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], where corrections quadratic in the curvature are added to the Einstein-Hilbert action \[S{=}\int\mathrm{d}^{4}x\,\sqrt{-g}\mathcal{L}=\int\mathrm{d}^{4}x\,\sqrt{-g} \left(\gamma\left(R-2\Lambda\right)+\beta\,R^{2}-\alpha\,C_{abcd}\,C^{abcd} \right), \tag{1}\] where \(\gamma=1/(16\pi G)\), \(G\) is the Newtonian constant (we will use units such that \(G=1=c\)), \(\Lambda\) is the cosmological constant, and \(\alpha\), \(\beta\) are coupling constants of the theory. It is well known [22, 23] that all Einstein spaces \(R_{ab}=\Lambda g_{ab}\) automatically solve the vacuum field equations of quadratic gravity. Vacuum black holes appearing in Einstein's gravity are thus in a sense trivial solutions to quadratic gravity. However, recently it has been shown [11, 12] that, besides the standard Schwarzschild black hole, quadratic gravity also admits another static, spherically symmetric black hole solution over and above Schwarzschild. Extensions of such black holes with a non-vanishing cosmological constant \(\Lambda\) have been studied in [17, 20]. 
Although these non-Schwarzschild (or "Schwarzschild-Bach") black holes of [11, 12] and [17, 20] have nontrivial Ricci tensor, the Ricci scalar \(R\) is vanishing or constant, respectively. In fact, the Ricci scalar is constrained by the trace no-hair theorem of [12, 24] which states that for static, spherically symmetric black holes in quadratic gravity, the Ricci scalar is either zero (in the asymptotically flat case) or constant (assuming that \(R\) is sufficiently quickly approaching a constant at infinity) throughout the spacetime. Furthermore, for \(R=\)const, the field equations of quadratic gravity considerably simplify to (assuming in (1) \(\gamma\neq 0\)) \[R_{ab}-\Lambda\,g_{ab}=4k\,B_{ab}\,, \tag{2}\] where \(B_{ab}\) is the Bach tensor (5) and the constant \(k\) is defined by (8). The Schwarzschild-Bach black holes are thus clearly distinguished by a non-vanishing Bach tensor. In fact, it has been shown that for these black holes, the vanishing of the Bach tensor on the horizon guarantees the vanishing of the Bach tensor throughout the spacetime [16, 17, 19, 20]. These black holes are therefore characterized by two parameters, namely, the radius of the black hole \(\bar{r}_{0}\) and a Bach parameter \(b\) (denoted as \(\delta\) in some of the literature) measuring a deviation from the Schwarzschild solution and related to the value of the Bach invariant \(B_{ab}B^{ab}\) on the horizon. An exact solution in a closed form describing this black hole is unknown - in the \(\Lambda=0\) case, evidence for the existence of this black hole has been provided in [11, 12] by taking the first few terms in the near-horizon expansion and numerically integrating the solution out from the horizon to some point outside the horizon, before the numerical solution diverges (the fourth order equations of motion are numerically unstable). As it turns out, to ensure the asymptotic flatness of the spacetime (such as to kill the growing Yukawa modes), one needs to fine-tune the parameters \(\bar{r}_{0}\) and \(b\) using numerical methods, thus effectively ending up with a one-parameter family of solutions [11, 12, 13, 15, 18, 21]. Very recently, it has been shown that for a given \(\bar{r}_{0}\) there exist at least two values of \(b\) giving an asymptotically flat black hole [21]. However, introducing a non-zero cosmological constant or non-spherical horizon topologies into the picture may have significant consequences on the physics of these black holes. Various results for the case \(\Lambda\neq 0\) have been obtained in [17, 20, 25, 26].1 Footnote 1: More results, including several exact solutions, are available in special quadratic theories such as conformal or pure \(R^{2}\) gravity for spherical [27, 28, 29, 30] and topological [31, 32, 33, 34, 25] black holes. These special theories will not be considered in the present paper. In this work, we will broaden the search for black holes in quadratic gravity. In addition to static black holes with spherical symmetry, we will also include hyperbolic and planar symmetry. The corresponding metric ansatz thus reads \[\mathrm{d}s^{2}=-h(\bar{r})\,\mathrm{d}t^{2}+\frac{\mathrm{d}\bar{r}^{2}}{f( \bar{r})}+\bar{r}^{2}\mathrm{d}\omega^{2},\ \ \ \ \mathrm{d}\omega^{2}=\frac{\mathrm{d}x^{2}+\mathrm{d}y^{2}}{\left(1+\frac{ \epsilon}{4}\big{(}x^{2}+y^{2}\big{)}\right)^{2}}, \tag{3}\] where the transverse geometry is \(S^{2}\), \(E^{2}\), and \(H^{2}\) for \(\epsilon=+1\), \(0\), and \(-1\), respectively. 
As in Einstein's gravity, one can use the metric (3) with \(\epsilon=0\) or \(-1\) to construct topological black holes, for which the horizon is a flat torus (\(g=1\)) or a Riemann surface of genus \(g>1\), respectively (such compactifications are discussed in [3, 4, 5, 6, 7, 8]). It is well known that in Einstein's gravity, black holes with \(\epsilon\leq 0\) require \(\Lambda<0\). Here we will show that these constraints do not apply to quadratic gravity and that such black holes can exist for any sign of \(\Lambda\) as well as for \(\Lambda=0\). We will take the conformal-to-Kundt approach, recently employed in [16, 19] and [17, 20], to study static, spherically symmetric black holes in quadratic gravity with vanishing and nonvanishing \(\Lambda\), respectively. Accordingly, the standard metric (3) is rewritten in a form conformal to the Kundt metric. This greatly simplifies the field equations of quadratic gravity, at the price of working in somewhat physically less transparent coordinates. Together with the assumption \(R=\)const, motivated by the trace no-hair theorem, the resulting simplification of the field equations will enable us to obtain recurrent formulas for series coefficients of the metric functions in power-series expansions, and thus to include also analytical results in the study of these black holes. In section 2, we present necessary background material, such as the field equations of quadratic gravity following from the action (1), and the conformal-to-Kundt approach to simplify them. In section 3, the field equations are derived for the ansatz (3) reexpressed in the Kundt coordinates. We then use a Frobenius-like approach to solve the equations in the vicinity of a generic hypersurface of a constant radius. We use infinite power-series expansions and the indicial equations to determine the possible leading powers of solutions. In particular, some of these solutions admit extremal or non-extremal horizons. Section 4 then focuses on the study of solutions with non-extremal horizons, which is the main focus of the present paper. The recurrent formulas for the series coefficients are determined. Interestingly, depending on \(\Lambda\) and \(\epsilon\), the Einstein limit of the black hole solutions constructed here contains not only (A)dS-Schwarzschild black holes with a transverse space of arbitrary curvature, but also naked singularities, Kundt metrics of the (anti-)Nariai type - dS\({}_{2}\times\)S\({}^{2}\), AdS\({}_{2}\times\)H\({}^{2}\), and flat spacetime. As in the spherical case [11, 12, 13, 15, 16, 17, 18, 19, 20, 21], one might expect that, by fine-tuning the Bach parameter, a quadratic-gravity black hole with \(\Lambda=0\) will asymptote an (appropriate) Ricci-flat metric near spatial infinity. We will give evidence to support this expectation by fine-tuning a planar black-hole solution (\(\epsilon=0\)) with vanishing \(\Lambda\), using the polynomial expansion.2 In contrast, fine-tuning is not necessary to obtain an asymptotically Einstein spacetime in the case of nonvanishing \(\Lambda\) within a certain continuous range of parameters of the solution (see also sec. 5). Footnote 2: However, more solid evidence would be to match the expansion in the vicinity of the horizon with the asymptotic expansion in the form of logarithmic-exponential transseries (cf. [15]). Concluding comments are given in section 5, also on some related results obtained for the \(\Lambda<0\) case in [25, 26]. 
## 2 Background ### Quadratic gravity The field equations following from action (1) read \[\gamma\left(R_{ab}-\tfrac{1}{2}R\,g_{ab}+\Lambda\,g_{ab}\right)-4\alpha\,B_ {ab}+2\beta\left(R_{ab}-\tfrac{1}{4}R\,g_{ab}+g_{ab}\,\square-\nabla_{b} \nabla_{a}\right)R=0\,, \tag{4}\] where \(B_{ab}\) is the Bach tensor \[B_{ab}\equiv\big{(}\nabla^{c}\nabla^{d}+\tfrac{1}{2}R^{cd}\big{)}C_{acbd}\;, \tag{5}\] which is traceless, symmetric, conserved, and well-behaved under a conformal transformation \(g_{ab}=\Omega^{2}\tilde{g}_{ab}\): \[g^{ab}B_{ab}=0\,,\qquad B_{ab}=B_{ba}\,,\qquad\nabla^{b}B_{ab}=0\,,\qquad B_{ ab}=\Omega^{-2}\tilde{B}_{ab}\,. \tag{6}\] For four-dimensional (conformally) Einstein spacetimes, the Bach tensor vanishes identically [27]. As discussed above, in this work we will restrict ourselves to solutions with \(R=\)const.3 Then the trace of (4) reduces to \(\gamma(R-4\Lambda)=0\). Assuming \(\gamma\neq 0\) (to keep the Einstein-Hilbert term in the action (1)) we thus have Footnote 3: This is not a restriction for Einstein-Weyl gravity, i.e., when \(\beta=0\), since it then clearly follows from (4). \[R=4\Lambda, \tag{7}\] and the field equations (4) simplify to (2), where we have defined \[k\equiv\frac{\alpha}{\gamma+8\beta\Lambda}\qquad(\gamma+8\beta\Lambda\neq 0). \tag{8}\] For later purposes, let us note that solutions with vanishing Bach tensor reduce to Einstein spacetimes (cf. (2)). The latter are of constant curvature if, in addition, \(\mathcal{H}^{\prime\prime}+2\epsilon=0\) (cf. (25)). We excluded the specially fine-tuned case \(\gamma+8\beta\Lambda=0\), for which the field equations (with (7)) reduce to \(\alpha B_{ab}=0\) (see, e.g., [35]), as in conformal gravity - all static (spherical, hyperbolic, or planar) black holes are already known in this case [27, 28, 29, 31]. If, in addition, also \(\alpha=0\), one has a special Einstein-\(R^{2}\) gravity for which any metric with \(R=4\Lambda\) is a solution (see, e.g., [30, 32, 36, 37] for some examples). ### Conformal-to-Kundt ansatz We are interested in static black-hole solutions with spherical, hyperbolic, or planar symmetry. Instead of the standard Schwarzschild coordinates, throughout the paper, we will mostly employ the conformal-to-Kundt form of the metric introduced in [16, 19, 35]. This enables one to describe such spacetimes in the form \[\mathrm{d}s^{2}\equiv\Omega^{2}(r)\,\mathrm{d}s_{\mathrm{K}}^{2}=\Omega^{2}(r) \Big{[}\,\mathrm{d}\omega^{2}-2\,\mathrm{d}u\,\mathrm{d}r+\mathcal{H}(r)\, \mathrm{d}u^{2}\,\Big{]}\,, \tag{9}\] for which the resulting field equations are considerably simpler. The metric (9) admits a gauge freedom \[r\to\lambda\,r+\upsilon\,,\qquad u\to\lambda^{-1}\,u\,, \tag{10}\] where \(\lambda\,,\upsilon\) are constants, i.e., it is invariant up to rescaling \(\mathcal{H}\to\lambda^{2}\mathcal{H}\). When \(\Omega^{\prime}\neq 0\neq\mathcal{H}\) one can also define the standard Schwarzschild coordinates by [35] \[\bar{r}=\Omega(r)\,,\qquad t=u-\int\!\frac{\mathrm{d}r}{\mathcal{H}(r)}\,, \tag{11}\] giving rise to the line-element (3) with \[h=-\Omega^{2}\,\mathcal{H},\quad f=-\left(\frac{\Omega^{\prime}}{\Omega} \right)^{2}\mathcal{H}, \tag{12}\] where a prime denotes differentiation with respect to \(r\). Field equations and classes of power-series solutions ### Ricci, Weyl, and Bach tensors for the Kundt seed metric The nontrivial Ricci tensor components and the Ricci scalar of the Kundt background metric \(\mathrm{d}s^{2}_{\mathrm{K}}\) of (9) read (cf. 
[19] for \(\epsilon=1\) case) \[R^{\mathrm{K}}_{ru} = -\tfrac{1}{2}\,\mathcal{H}^{\prime\prime}\,,\qquad\qquad R^{ \mathrm{K}}_{uu}=-\mathcal{H}\,R^{\mathrm{K}}_{ru}\,, \tag{13}\] \[R^{\mathrm{K}}_{xx} = R^{\mathrm{K}}_{yy}=\epsilon g^{\mathrm{K}}_{xx}\,,\qquad R^{ \mathrm{K}}=\mathcal{H}^{\prime\prime}+2\epsilon\,. \tag{14}\] The non-vanishing components of the Weyl and Bach tensors are, respectively, \[C^{\mathrm{K}}_{ruru} = -\tfrac{1}{6}R^{\mathrm{K}}\,,\qquad\qquad\qquad C^{\mathrm{K}}_{ riuj}=\tfrac{1}{12}R^{\mathrm{K}}\,g^{\mathrm{K}}_{ij}\,, \tag{15}\] \[C^{\mathrm{K}}_{kilj} = \tfrac{1}{6}R^{\mathrm{K}}\left(g^{\mathrm{K}}_{kl}g^{\mathrm{K} }_{ij}-g^{\mathrm{K}}_{kj}g^{\mathrm{K}}_{ii}\right),\quad C^{\mathrm{K}}_{ uiuj}=-\mathcal{H}\,C^{\mathrm{K}}_{riuj} \tag{16}\] and \[B^{\mathrm{K}}_{rr} = -\tfrac{1}{6}\,\mathcal{H}^{\prime\prime\prime\prime}\,,\qquad B ^{\mathrm{K}}_{ru}=\tfrac{1}{12}\left(2\,\mathcal{H}\mathcal{H}^{\prime\prime \prime\prime}+\mathcal{H}^{\prime}\mathcal{H}^{\prime\prime\prime}-\tfrac{1}{ 2}\mathcal{H}^{\prime\prime 2}+2\epsilon^{2}\right), \tag{17}\] \[B^{\mathrm{K}}_{uu} = -\mathcal{H}\,B^{\mathrm{K}}_{ru}\,,\qquad B^{\mathrm{K}}_{xx}=B^{ \mathrm{K}}_{yy}=\tfrac{1}{12}\,g^{\mathrm{K}}_{xx}\left(\mathcal{H}\mathcal{H }^{\prime\prime\prime\prime}+\mathcal{H}^{\prime}\mathcal{H}^{\prime\prime \prime}-\tfrac{1}{2}\mathcal{H}^{\prime\prime 2}+2\epsilon^{2}\right). \tag{18}\] ### Ricci and Bach tensors for the full metric The nontrivial Ricci tensor components and the Ricci scalar for the full metric (9) are \[R_{rr} = -2\Omega^{-2}\big{(}\Omega\Omega^{\prime\prime}-2\Omega^{\prime }{}^{2}\big{)}\,,\qquad R_{ru}=-\tfrac{1}{2}\Omega^{-2}\big{(}\Omega^{2} \mathcal{H}\big{)}^{\prime\prime}\,, \tag{19}\] \[R_{uu} = -\mathcal{H}\,R_{ru}\,,\qquad\qquad\qquad\qquad R_{xx}=R_{yy}= \Omega^{-2}g^{\mathrm{K}}_{xx}\left[\big{(}\mathcal{H}\Omega\Omega^{\prime} \big{)}^{\prime}+\epsilon\Omega^{2}\right],\] (20) \[R = 6\Omega^{-3}\big{[}(\mathcal{H}\Omega^{\prime})^{\prime}+\tfrac {1}{6}(\mathcal{H}^{\prime\prime}+2\epsilon)\Omega\big{]}\,. \tag{21}\] The Bach tensor of the full metric can be obtained by a rescaling (6) \[B_{ab}=\Omega^{-2}B^{\mathrm{K}}_{ab}\,, \tag{22}\] while \(C^{a}_{\phantom{a}bcd}=C^{\mathrm{K}\alpha}_{\phantom{k}bcd}\), as well known. The Ricci squared, Bach, and Weyl invariants read4 Footnote 4: Note that the field equations (2) have been used to obtain (23). \[R_{ab}\,R^{ab} =4\Lambda^{2}+16k^{2}\,B_{ab}B^{ab}\,, \tag{23}\] \[B_{ab}\,B^{ab} =\tfrac{1}{72}\,\Omega^{-8}\left[(\mathcal{B}_{1})^{2}+2( \mathcal{B}_{1}+\mathcal{B}_{2})^{2}\right],\] (24) \[C_{abcd}\,C^{abcd} =\tfrac{1}{3}\,\Omega^{-4}\left(\mathcal{H}^{\prime\prime}+2 \epsilon\right)^{2}, \tag{25}\] where the two independent components of the Bach tensor, \(\mathcal{B}_{1}(r)\) and \(\mathcal{B}_{2}(r)\), are \[\mathcal{B}_{1}\equiv\mathcal{H}\mathcal{H}^{\prime\prime\prime \prime}\,,\qquad\mathcal{B}_{2}\equiv\mathcal{H}^{\prime}\mathcal{H}^{\prime \prime\prime}-\tfrac{1}{2}\mathcal{H}^{\prime\prime}{}^{2}+2\epsilon^{2}\,. \tag{26}\] It is useful to note that \(B_{ab}=0\Leftrightarrow\mathcal{B}_{1}=0=\mathcal{B}_{2}\). It can be also verified easily that \(C_{abcd}=0\Leftrightarrow\mathcal{H}^{\prime\prime}+2\epsilon=0\). ### Derivation and simplification of the field equations Following [19, 20], in this section, we show that the field equations (2) for the full metric (9) reduce to two coupled autonomous nonlinear differential equations. 
The nontrivial components \(rr\), \(ru\), and \(xx\) of the field equations (2) read \[\Omega\Omega^{\prime\prime}-2\Omega^{\prime}{}^{2} =\tfrac{1}{3}k\,\mathcal{H}^{\prime\prime\prime\prime}\,, \tag{27}\] \[\big{(}\Omega^{2}\mathcal{H}\big{)}^{\prime\prime}-2\Lambda\Omega ^{4} =-\tfrac{2}{3}k\big{(}2\,\mathcal{H}\mathcal{H}^{\prime\prime \prime\prime}+\mathcal{H}^{\prime}\mathcal{H}^{\prime\prime\prime}-\tfrac{1}{2} \mathcal{H}^{\prime\prime}{}^{2}+2\epsilon^{2}\big{)}\,,\] (28) \[\big{(}\mathcal{H}\Omega\Omega^{\prime}\big{)}^{\prime}+\epsilon \Omega^{2}-\Lambda\Omega^{4} =\tfrac{1}{3}k\,\big{(}\mathcal{H}\mathcal{H}^{\prime\prime \prime\prime}+\mathcal{H}^{\prime}\mathcal{H}^{\prime\prime\prime}-\tfrac{1}{ 2}\mathcal{H}^{\prime\prime}{}^{2}+2\epsilon^{2}\big{)}\,. \tag{29}\] The \(yy\) component of (2) is identical to the \(xx\) component and the \(uu\) component is a multiple of the \(ru\) component. The trace (7) of the field equations takes the form \[\mathcal{T}\equiv\mathcal{H}\Omega^{\prime\prime}+\mathcal{H}^{\prime}\Omega ^{\prime}+\tfrac{1}{6}(\mathcal{H}^{\prime\prime}+2\epsilon)\Omega=\tfrac{2}{3 }\Lambda\,\Omega^{3}\,. \tag{30}\] As in [19, 20], let us introduce a conserved \((\nabla^{b}J_{ab}\equiv 0)\) symmetric tensor \(J_{ab}\) \[J_{ab}\equiv R_{ab}-\tfrac{1}{2}R\,g_{ab}+\Lambda\,g_{ab}-4k\,B_{ab}\,. \tag{31}\] The non-trivial components are \(J_{rr}\), \(J_{uu}=-\mathcal{H}\,J_{ru}\), and \(J_{xx}=J_{yy}\). The vacuum field equations (4), assuming \(R=\)const, then take the form \(J_{ab}=0\). When \(\Omega^{\prime}\neq 0\), one can show that once the field equations \(J_{rr}=0\) and \(J_{ru}=0\) hold, then also \(J_{xx}\) vanishes (see Appendix C in [20] for a more detailed discussion in the \(\epsilon=1\) case).5 Footnote 5: In this paper, we will not study non-Einstein spacetimes with \(\Omega=\)const, which correspond to Kundt metrics. The first field equation \(J_{rr}=0\) reduces to (27). Substituting for \(\mathcal{H}^{\prime\prime\prime\prime}\) from (27), the equation \(J_{ur}=0\) can be simplified and we arrive at the following final form of the field equations \[\Omega\Omega^{\prime\prime}-2\Omega^{\prime}{}^{2} =\tfrac{1}{3}k\,\mathcal{H}^{\prime\prime\prime\prime}\,, \tag{32}\] \[\Omega\Omega^{\prime}\mathcal{H}^{\prime}+3\Omega^{\prime 2} \mathcal{H}+\epsilon\Omega^{2}-\Lambda\Omega^{4} =\tfrac{1}{3}k\big{(}\mathcal{H}^{\prime}\mathcal{H}^{\prime\prime \prime}-\tfrac{1}{2}\mathcal{H}^{\prime\prime}{}^{2}+2\epsilon^{2}\big{)}\,. \tag{33}\] ### Classes of power-series solutions Let us assume that the metric functions \(\Omega(r)\) and \(\mathcal{H}(r)\) in (9) can be expanded as infinite power series in \(\Delta\equiv r-r_{0}\) around a hypersurface \(r=r_{0}\), i.e., \[\Omega(r)=\Delta^{n}\sum_{i=0}^{\infty}a_{i}\,\Delta^{i}\,,\qquad\mathcal{H}( r)=\Delta^{p}\,\sum_{i=0}^{\infty}c_{i}\,\Delta^{i}. \tag{34}\] Substituting these expansions into the field equations (32) and (33) and comparing the leading terms leads to constraints on possible values of \(n\) and \(p\). In Table 1, we summarize the classes allowing for a vanishing \(\Lambda\) or an arbitrary \(\Lambda\). Note that there exist further classes allowing only for certain discrete nonzero values of \(\Lambda\) - these are not included in the table and will be studied elsewhere. The case \(\epsilon=+1\) has been already analyzed in [19, 20] (including the discrete values of \(\Lambda\)). 
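The reduced system is also convenient for direct numerical integration. The sketch below is purely illustrative (the first-order reduction, the parameter values, and the initial data are choices made here, not taken from the literature): it evolves \(\Omega''\) through the trace equation (30) and \(\mathcal{H}''''\) through (32), while (33) is used to fix the initial value of \(\mathcal{H}'''\) and is then monitored as a constraint; on solutions of (30) and (32) the \(r\)-derivative of the residual of (33) vanishes, so it suffices to impose (33) on the initial data.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, Lam, k = 0.0, 0.2, 0.5          # illustrative theory constants

def rhs(r, Y):
    """First-order form: Omega'' from the trace eq. (30), H'''' from eq. (32)."""
    O, Op, H, Hp, Hpp, Hppp = Y
    Opp = ((2.0 / 3.0) * Lam * O**3 - Hp * Op - (Hpp + 2.0 * eps) * O / 6.0) / H
    Hpppp = (3.0 / k) * (O * Opp - 2.0 * Op**2)
    return [Op, Opp, Hp, Hpp, Hppp, Hpppp]

def constraint(Y):
    """Residual of eq. (33); it should remain ~0 along the integration."""
    O, Op, H, Hp, Hpp, Hppp = Y
    return (O * Op * Hp + 3.0 * Op**2 * H + eps * O**2 - Lam * O**4
            - (k / 3.0) * (Hp * Hppp - 0.5 * Hpp**2 + 2.0 * eps**2))

# Illustrative initial data away from any horizon (H != 0); H''' is fixed so that (33) holds.
r0 = -0.9
O0, Op0, H0, Hp0, Hpp0 = 1.0, 1.0, 0.1, -1.0, 0.5
Hppp0 = ((3.0 / k) * (O0 * Op0 * Hp0 + 3.0 * Op0**2 * H0 + eps * O0**2 - Lam * O0**4)
         + 0.5 * Hpp0**2 - 2.0 * eps**2) / Hp0

sol = solve_ivp(rhs, (r0, r0 - 0.5), [O0, Op0, H0, Hp0, Hpp0, Hppp0],
                rtol=1e-10, atol=1e-12, dense_output=True)
drift = max(abs(constraint(sol.y[:, i])) for i in range(sol.y.shape[1]))
print("constraint drift along the solution:", drift)
```

As noted in section 1, the fourth-order system is numerically unstable, so integrations of this kind are reliable only over a limited range of \(r\), which is one motivation for the series expansions constructed in section 4.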
In the rest of the paper, we will study the \([0,1]\) case, which corresponds to spacetimes admitting a non-extremal Killing horizon. For certain ranges of parameters, this can be interpreted as a black hole horizon. The remaining cases (as well as additional classes obtained using asymptotic expansions in negative powers of \(r\)) will be studied elsewhere. ## 4 Case \([0,1]\): black holes with a non-extremal horizon ### Preliminaries From now on, we focus on solutions for which \(n=0\) and \(p=1\) in (34). This means that we are expanding the metric near a non-extremal Killing horizon, located at \(r=r_{0}\). Let us thus relabel \[r_{h}\equiv r_{0}. \tag{35}\] Because of the freedom (10), the particular value \(r_{0}\) has no physical meaning. However, in the physical coordinates (11), (3), the horizon radius is given by \[\bar{r}_{h}\equiv\Omega(r_{h})=a_{0}>0, \tag{36}\] which is a dimensionful scale set by \(a_{0}\) (which is effectively an integration constant, see the following for more comments). Without loss of generality, we have fixed the sign of \(a_{0}\) using the invariance of (9) under \(\Omega\to-\Omega\). Before discussing the metric on-shell, let us note that, when \(a_{1}\neq 0\), the leading order behaviour of the metric functions \(h\) and \(f\) in the Schwarzschild coordinates (3) is given by \[h=-\frac{c_{0}a_{0}^{2}}{a_{1}}(\bar{r}-\bar{r}_{h})+\mathcal{O}\left((\bar{r }-\bar{r}_{h})^{2}\right),\qquad f=-\frac{c_{0}a_{1}}{a_{0}^{2}}(\bar{r}-\bar{ r}_{h})+\mathcal{O}\left((\bar{r}-\bar{r}_{h})^{2}\right). \tag{37}\] In order to have an outer black hole horizon, we thus need to take \[c_{0}a_{1}<0\,, \tag{38}\] which ensures that both \(h\) and \(f\) are positive in the exterior region \(\bar{r}>\bar{r}_{h}\) in the vicinity of \(\bar{r}=\bar{r}_{h}\) (negative \(h\) and \(f\) would correspond, e.g., to an inner or a cosmological horizon). Note that, when \begin{table} \begin{tabular}{|c||c|c||c||} \hline Case & Class \([n,p]\) & \(\Lambda\) & \(\epsilon\) \\ \hline I & \([-1,2]\) & \(0\) & \(\neq 0\) \\ & \([-1,3]\) & \(0\) & \(0\) \\ & \([0,p>2]\) & \(0\) & \(0\) \\ \hline II & \([0,0]\) & any & any \\ & \([0,1]\) & any & any \\ & \([1,0]\) & any & any \\ \hline III & \([-1,0]\) & any & any \\ & \([0,2]\) & any & \(\neq 0\) \\ \hline \end{tabular} \end{table} Table 1: Values of \([n,p]\) compatible with indicial equations following from (32)–(34). Certain further cases allowing only for discrete nonzero values of \(\Lambda\) are not included in the table. See [16, 17, 19, 20] for details in the case \(\epsilon=+1\). \(a_{1}>0\),6 near and across the horizon \(\bar{r}\) is monotonically increasing with \(r\), while for \(a_{1}<0\), \(\bar{r}\) is monotonically decreasing with \(r\). Therefore \(\partial_{r}\) is outward/inward according to the sign of \(a_{1}\). Footnote 6: This can be always achieved using the gauge transformation (10). The special case \(a_{1}=0\) leads to fractional steps in the expansion of the metric functions \(f(\bar{r})\) and \(h(\bar{r})\) in the physical coordinates (3), and will not be considered in the following (see [19, 20] for the case \(\epsilon=+1\)). ### General solution The lowest nontrivial order of the trace equation (30) gives \[a_{1}=\frac{a_{0}}{3c_{0}}\left[2\Lambda a_{0}^{2}-(\epsilon+c_{1})\right]\,, \tag{39}\] and then the lowest nontrivial order of (33) implies \[c_{2}=\frac{1}{6kc_{0}}\left[2k(c_{1}^{2}-\epsilon^{2})+a_{0}^{2}(2\epsilon-c _{1}-\Lambda a_{0}^{2})\right]\,. 
\tag{40}\] At any arbitrary higher order, one finds that the \([0,1]\) solution to eqs. (32), (33) is given by the recurrent formulas \[c_{l+2} = \frac{3}{k\,(l+3)(l+2)(l+1)l}\,\sum_{i=0}^{l}a_{i}\,a_{l+1-i}(l+1- i)(l-3i)\qquad\forall\ l\geq 1\,, \tag{41}\] \[a_{l} = \frac{1}{l^{2}c_{0}}\Bigg{[}\tfrac{2}{3}\Lambda\sum_{j=0}^{l-1}a_ {l-1-j}\sum_{i=0}^{j}a_{i}\,a_{j-i}-\tfrac{1}{3}\,\epsilon a_{l-1}-\sum_{i=1}^ {l}c_{i}\,a_{l-i}\left[l(l-i)+\tfrac{1}{6}i(i+1)\right]\Bigg{]}\quad\forall\ l\geq 2\,. \tag{42}\] The three parameters \(a_{0}\), \(c_{0}\), and \(c_{1}\) remain arbitrary and can be thought of as integration constants. It is also useful to observe that the Bach and Weyl invariants (24), (25) at \(r=r_{h}\) read \[B_{ab}\,B^{ab}(r_{h}) = \left(\frac{\epsilon^{2}-c_{1}^{2}+3c_{0}c_{2}}{3a_{0}^{4}} \right)^{2}=\left(\frac{c_{1}-2\epsilon+\Lambda a_{0}^{2}}{6ka_{0}^{2}}\right) ^{2}=\frac{b^{2}}{4k^{2}a_{0}^{4}}\,, \tag{43}\] \[C_{abcd}\,C^{abcd}(r_{h}) = \frac{4}{3a_{0}^{4}}(\epsilon+c_{1})^{2}\,, \tag{44}\] where we have introduced a dimensionless Bach parameter \(b\) by \[b\equiv\frac{1}{3}\left(c_{1}-2\epsilon+\Lambda a_{0}^{2}\right)\,, \tag{45}\] which measures the strength of the Bach tensor at the horizon (it is proportional to \({\cal B}_{2}(r_{h})\), see (26)). For definiteness, using eqs. (39), (40) and the recurrent relations (41), (42), the first few coefficients expressed in terms of free parameters \(a_{0}\), \(c_{0}\), and \(b\) read \[a_{1}=-\frac{a_{0}}{c_{0}}\left[(\epsilon-\Lambda a_{0}^{2})+b \right]\,, \tag{46}\] \[a_{2}=+\frac{a_{0}}{c_{0}^{2}}\left[(\epsilon-\Lambda a_{0}^{2}) ^{2}+b\left(2\epsilon+a_{0}^{2}\left(\frac{1}{8k}-\frac{7\Lambda}{3}\right) \right)+b^{2}\right]\,,\] (47) \[c_{1}=2\epsilon-\Lambda a_{0}^{2}+3b\,,\] (48) \[c_{2}=-\frac{1}{6kc_{0}}\left[2k\left(\epsilon^{2}-\left(3b-a_ {0}^{2}\Lambda+2\epsilon\right)^{2}\right)+3ba_{0}^{2}\right]\,,\] (49) \[c_{3}=\frac{a_{0}^{4}b(3-8k\Lambda)}{96k^{2}c_{0}^{2}}\,. \tag{50}\] To summarize, the above solution contains three integration constants \(a_{0}\), \(c_{0}\), and \(b\), along with the expansion radius \(r_{0}=r_{h}\), the sign of the curvature of the transverse space \(\epsilon\), and the constants of the theory \(k\) and \(\Lambda\). However, thanks to the gauge freedom (10), the number of physical parameters boils down to _two_ - essentially, the mass and the Bach parameter. We will show below in section 4.3 that if \(b=0\), then \(B_{ab}=0\) everywhere (i.e., not just at the horizon), in which case the solution becomes Einstein (cf. also section 2.1). The parameter \(b\) thus measures how the solution departs from a (topological) Schwarzschild-(A)dS black hole and plays the role of a gravitational (Bachian) "hair" (see also section 4.4.1 below). Because of (36), (38) and (46), at an outer black-hole horizon, it will be constrained by \[\epsilon-\Lambda a_{0}^{2}+b>0\,. 
\tag{51}\] From (12), using (46)-(48), the first two leading terms in the expansion of the metric functions \(h(\bar{r})\), \(f(\bar{r})\) around any horizon \(\bar{r}_{h}=a_{0}\) read (assuming \(a_{1}\neq 0\), i.e., \(\epsilon-\Lambda a_{0}^{2}+b\neq 0\)) \[h=\frac{a_{0}c_{0}^{2}}{\epsilon-\Lambda a_{0}^{2}+b}\bar{ \Delta}-\frac{c_{0}^{2}}{(\epsilon-\Lambda a_{0}^{2}+b)^{3}}\left[(\epsilon- \Lambda a_{0}^{2})\epsilon+b\left(3\epsilon-\frac{7}{3}\Lambda a_{0}^{2}+\frac {1}{8k}a_{0}^{2}\right)+2b^{2}\right]\bar{\Delta}^{2}+\ldots\,, \tag{52}\] \[f=\frac{\epsilon-\Lambda a_{0}^{2}+b}{a_{0}}\bar{\Delta}-\frac{1 }{a_{0}^{2}(\epsilon-\Lambda a_{0}^{2}+b)}\left[(\epsilon-\Lambda a_{0}^{2}) \epsilon+b\left(3\epsilon-\Lambda a_{0}^{2}-\frac{3}{8k}a_{0}^{2}\right)+2b^{2 }\right]\bar{\Delta}^{2}+\ldots\,, \tag{53}\] where \(\bar{\Delta}=\bar{r}-\bar{r}_{h}\). Note that in the gauge \(a_{1}=a_{0}^{2}\) (then \(c_{0}=-(\epsilon-\Lambda a_{0}^{2}+b)/a_{0}\) from (46)), \(f\) and \(h\) coincide at the leading order, and \(f=h\) in the limit \(b\to 0\). ### Identifying the background Einstein spacetimes (\(b=0\)) Let us now discuss the subclass of solutions for which the Bach parameter \(b\) vanishes. As we will show below, this consists of two families of Einstein's spacetimes, for which the Bach tensor is necessarily zero. Because of (51), solutions with a non-extremal black hole horizon must now satisfy \[\epsilon-\Lambda a_{0}^{2}>0\,. \tag{54}\] In particular, for \(\Lambda>0\), only spherical black holes (\(\epsilon=+1\)) will be possible. With \(b=0\), the coefficients (41), (42) reduce to \[a_{i}=a_{0}\left(-\,\frac{\epsilon-\Lambda a_{0}^{2}}{c_{0}} \right)^{i}\qquad\forall\ i\geq 0\,, \tag{55}\] \[c_{1}=2\epsilon-\Lambda a_{0}^{2},\qquad c_{2}=\frac{1}{3c_{0}}( \epsilon-\Lambda a_{0}^{2})(3\epsilon-\Lambda a_{0}^{2})\,,\qquad c_{i}=0 \quad\forall\ i\geq 3\,. \tag{56}\] Note, in particular, that the \(c_{i}\) sequence is truncated as \[\mathcal{H}(r)=c_{0}(r-r_{h})+c_{1}(r-r_{h})^{2}+c_{2}(r-r_{h})^{3}\,, \tag{57}\] with (56). There appear two possibilities depending on whether \(\epsilon-\Lambda a_{0}^{2}=0\) or not. #### 4.3.1 Generic case \(\epsilon-\Lambda a_{0}^{2}\neq 0\): (A)dS-Schwarzschild metric Assuming \(\epsilon-\Lambda a_{0}^{2}\neq 0\), the \(a_{i}\) sequence is a geometric series, giving rise to \[\Omega(r)=a_{0}\,\sum_{i=0}^{\infty}\,\Big{(}-(\epsilon-\Lambda a_{0}^{2})\frac {\Delta}{c_{0}}\Big{)}^{i}=\frac{a_{0}\,c_{0}}{c_{0}+(\epsilon-\Lambda a_{0}^{2 })\Delta}=\frac{a_{0}\,c_{0}}{(\epsilon-\Lambda a_{0}^{2})(r-r_{h})+c_{0}}\,. \tag{58}\] Employing the gauge freedom (10), we can set \[a_{0}=-\frac{1}{r_{h}}\,,\qquad c_{0}=r_{h}\epsilon-\frac{\Lambda}{r_{h}} \tag{59}\] (which also means \(a_{1}=a_{0}^{2}\)), reducing the metric functions to \[\bar{r}=\Omega(r)=-\frac{1}{r}\,,\qquad\mathcal{H}(r)=\frac{\Lambda}{3}- \epsilon r^{2}-\left(\frac{\Lambda}{3}-\epsilon r_{h}^{2}\right)\frac{r^{3}}{ r_{h}^{3}}\,. \tag{60}\] Using (3), (12), it is easy to see that this solution corresponds to the well-known [38, 39] (A)dS-Schwarzschild solution with a transverse space of arbitrary curvature, for which the only integration constant is usually rewritten as \(2m=\left(\frac{\Lambda}{3}-\epsilon r_{h}^{2}\right)\frac{1}{r_{h}^{3}}\), and the metric functions in the physical coordinates (3) take the form \[h(\bar{r})=f(\bar{r})^{-1}=\epsilon-\frac{2m}{\bar{r}}-\frac{\Lambda}{3}\bar{ r}^{2}. 
\tag{61}\] In addition to the standard spherical Kottler metric (for \(\epsilon=+1\), cf., e.g., [40, 41]), it describes topological black holes in Einstein's gravity [3, 4, 5, 6, 7, 8]. Here, the condition (54) implies, for example, the known fact that black holes with \(\epsilon\leq 0\) require \(\Lambda<0\). When equality holds (i.e., \(\epsilon-\Lambda a_{0}^{2}=0\)) one obtains the standard extremality condition for Einstein black holes [7, 8, 40].7 In the case \(\epsilon-\Lambda a_{0}^{2}<0\), the metric is time-dependent in the vicinity of the horizon in the exterior region (cf. (52), (53) with \(b=0\)) so that the hypersurface \(\bar{r}=\bar{r}_{h}\) cannot be an outer black hole horizon (nevertheless, the spacetime can contain an outer black-hole horizon located elsewhere when \(\epsilon\Lambda>0\), cf. [7, 8, 40] for details). Footnote 7: Note indeed that, although we have obtained the closed form (60) of the solution under the assumption \(\epsilon-\Lambda a_{0}^{2}\neq 0\), the metric defined by (60) admits also the possibility \(\epsilon-\Lambda a_{0}^{2}=0\), giving rise to a spacetime with a degenerate horizon at \(r=r_{h}\) when \(\Lambda\neq 0\), and to a flat spacetime when \(\Lambda=0\). In the usual form of the solution expressed in terms of \(m\) (cf. above), one can further set \(\Lambda=0=\epsilon\) while keeping \(m\neq 0\), which corresponds to a naked singularity. #### 4.3.2 Special case \(\epsilon-\Lambda a_{0}^{2}=0\): direct product Einstein spacetimes When \(\epsilon-\Lambda a_{0}^{2}=0\) one simply obtains \(a_{i}=0\) for all \(i>0\), \(c_{1}=\epsilon\) and \(c_{i}=0\) for all \(i>1\), i.e., \[\Omega=a_{0},\qquad\mathcal{H}=c_{0}(r-r_{h})+\epsilon(r-r_{h})^{2}, \tag{62}\] which describes an Einstein spacetime of the Kundt class in the form of a direct product metric of the (anti-)Nariai type (namely dS\({}_{2}\times\)S\({}^{2}\), flat space, or AdS\({}_{2}\times\)H\({}^{2}\), depending on the value of \(\epsilon\)). The horizon at \(r=r_{h}\) is a Killing horizon, but not a black hole one (cf. (54)). With a coordinate transformation (cf. [38]), one can always set \(c_{0}=0=r_{h}\). This class of metrics is related to the near-horizon geometry of the extremal limits of the black holes of section 4.3.1 (cf., e.g., the review [42] and references therein). One might wonder why, although the field equations (32), (33) can be solved exactly in the Einstein limit, we have not recovered here all Einstein spacetimes of the form (9) (or (3)) satisfying \(\epsilon-\Lambda a_{0}^{2}=0\) For example, in the \(\epsilon=0=\Lambda\) case, such a solution is the AIII metric of [43] (see [38, 39] for further references and a physical interpretation) for which \(\Omega\propto 1/r\) and \({\cal H}\propto r^{3}\), which represents a naked singularity located at \(r\to\infty\) (cf. also footnote 7). Such a type of solutions does not appear here because, in this section, we have considered the Einstein limit only of the [0,1] class - which, by construction, contains a Killing horizon - and thus horizonless metrics belonging to other cases in Table 1 may not appear. ### More general solutions: black holes with nonvanishing Bach tensor #### 4.4.1 Generic case \(\epsilon-\Lambda a_{0}^{2}\neq 0\) As mentioned above, the general quadratic-gravity solution of section 4.2 is non-Einstein when \(b\neq 0\). In this section, we will study a subset of black holes obeying (51) which admit the (A)dS-Schwarzschild metric (60) as a \(b\to 0\) limit (cf. section 4.3.1). 
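As a quick consistency check of this \(b\to 0\) background (an illustrative sketch, not part of the original derivation), one can verify with sympy that the function \(\mathcal{H}\) of (60) makes both Bach components \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) of (26) vanish identically, so that \(B_{ab}=0\) and the limiting metric is indeed Einstein:

```python
import sympy as sp

r, rh, eps, Lam = sp.symbols('r r_h epsilon Lambda', real=True)

# Kottler-type metric function of eq. (60)
H = Lam / 3 - eps * r**2 - (Lam / 3 - eps * rh**2) * r**3 / rh**3

# Independent Bach components of eq. (26)
B1 = sp.simplify(H * sp.diff(H, r, 4))
B2 = sp.simplify(sp.diff(H, r) * sp.diff(H, r, 3)
                 - sp.Rational(1, 2) * sp.diff(H, r, 2)**2 + 2 * eps**2)

print(B1, B2)   # both simplify to 0, hence B_ab = 0 and the b -> 0 limit is Einstein
```

Since \(\mathcal{H}\) in (60) is cubic in \(r\), \(\mathcal{B}_{1}=\mathcal{H}\mathcal{H}''''\) vanishes trivially; the nontrivial part of the check is \(\mathcal{B}_{2}=0\).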
See the end of this section for a discussion of the Einstein limit for the distinct cases \(\epsilon>\Lambda a_{0}^{2}\) and \(\epsilon<\Lambda a_{0}^{2}\). In order to express the metric functions \(\Omega(r)\) and \({\cal H}(r)\) as the (A)dS-Schwarzschild background plus a quadratic-gravity correction (cf. (67), (68)), we reparametrize the series coefficients \(a_{i}\) and \(c_{i}\) by introducing coefficients \(\alpha_{i},\gamma_{i}\) as in [16, 17, 19, 35]. Using again the gauge (59), one obtains \[a_{i} \equiv \tilde{a}_{i}-\frac{b}{r_{h}}\,\frac{\alpha_{i}}{(-r_{h}\rho)^{i }}\,,\qquad\mbox{where}\quad\tilde{a}_{i}\equiv a_{i}(b=0)=\frac{1}{(-r_{h})^{ 1+i}}\quad i\geq 0\,, \tag{63}\] \[c_{1} \equiv 2\epsilon-\frac{\Lambda}{r_{h}^{2}}+3b\,\gamma_{1},\quad c_{2} \equiv\frac{3\epsilon r_{h}^{2}-\Lambda}{3r_{h}^{3}}+3b\,\frac{r_{h}\gamma_{2 }}{\epsilon r_{h}^{2}-\Lambda},\quad\ c_{i}\equiv 3b\,\frac{\gamma_{i}\,r_{h}^{i- 1}}{(\epsilon r_{h}^{2}-\Lambda)^{i-1}}\,\quad\ i\geq 3, \tag{64}\] where \[\rho\equiv\epsilon-\frac{\Lambda}{r_{h}^{2}}\,, \tag{65}\] and \[\alpha_{0}\equiv 0\,,\quad\alpha_{1}=1\,,\quad\gamma_{1}=1\,,\quad \gamma_{2}=\frac{1}{3}\Big{[}4\epsilon-\frac{1}{r_{h}^{2}}\Big{(}2\Lambda+ \frac{1}{2k}\Big{)}+3b\Big{]}\,. \tag{66}\] The remaining coefficients \(\alpha_{i}\) and \(\gamma_{i}\) will be specified shortly. The functions \(\Omega\) and \({\cal H}\) then read \[\Omega(r) =-\frac{1}{r}-\frac{b}{r_{h}}\sum_{i=1}^{\infty}\alpha_{i}\Big{(} \,\frac{r_{h}-r}{\rho\,r_{h}}\Big{)}^{i}\,, \tag{67}\] \[{\cal H}(r) =(r-r_{h})\bigg{[}\,\epsilon\frac{r^{2}}{r_{h}}-\frac{\Lambda}{3r _{h}^{3}}\,\big{(}r^{2}+rr_{h}+r_{h}^{2}\big{)}+3b\,\rho\,r_{h}\sum_{i=1}^{ \infty}\gamma_{i}\Big{(}\,\frac{r-r_{h}}{\rho\,r_{h}}\Big{)}^{i}\,\bigg{]}\,, \tag{68}\] indeed explicitly expressing the Bachian part of the metric as a correction to the (A)dS-Schwarzschild background, as desired. Using (41) and (42), the coefficients \(\alpha_{l},\gamma_{l+1}\) for \(l\geq 2\) are given by the recurrent relations \[\alpha_{l}= \ \frac{1}{l^{2}}\Bigg{[}-\frac{2\Lambda}{3r_{h}^{2}}\,\sum_{j=0}^{ l-1}\sum_{i=0}^{j}\Big{[}\alpha_{l-1-j}\rho^{j}+\big{(}\rho^{l-1-j}+b\,\alpha_{l-1-j} \big{)}\big{(}\alpha_{i}\rho^{j-i}+\alpha_{j-i}(\rho^{i}+b\,\alpha_{i})\big{)} \Big{]}\] \[\ \ \ \ -\frac{1}{3}\alpha_{l-2}(2\epsilon+\rho)\rho(l-1)^{2}+ \alpha_{l-1}\left[\frac{\epsilon}{3}+(\epsilon+\rho)\big{(}l(l-1)+\frac{1}{3} \big{)}\right]\] \[\ \ \ \ -3\sum_{i=1}^{l}(-1)^{i}\,\gamma_{i}\,(\rho^{l-i}+b\, \alpha_{l-i})\left[l(l-i)+\frac{1}{6}i(i+1)\right]\Bigg{]}\,,\] \[\gamma_{l+1}= \ \frac{(-1)^{l}}{kr_{h}^{2}\,(l+2)(l+1)l(l-1)}\sum_{i=0}^{l-1} \big{[}\alpha_{i}\rho^{l-i}+\alpha_{l-i}\big{(}\rho^{i}+b\,\alpha_{i}\big{)} \big{]}(l-i)(l-1-3i)\quad\forall\quad l\geq 2\,. 
\tag{69}\] Explicitly, the first few terms then read \[\alpha_{2} =2\epsilon-\left(\frac{7}{3}\Lambda-\frac{1}{8k}\right)\frac{1}{ r_{h}^{2}}+b\,,\] \[\alpha_{3} =\frac{1}{9}\left[25\epsilon^{2}+\left(\frac{29}{8k}-\frac{179}{ 3}\Lambda\right)\frac{\epsilon}{r_{h}^{2}}+\left(\frac{1}{16k^{2}}-\frac{77}{ 24k}\Lambda+\frac{298}{9}\Lambda^{2}\right)\frac{1}{r_{h}^{4}}\right]\] \[\ \ \ +\frac{1}{9}\left[23\epsilon+\left(\frac{35}{8k}-\frac{104}{ 3}\Lambda\right)\frac{1}{r_{h}^{2}}\right]\,b+\frac{7}{9}\,b^{2}\,,\quad\ldots\,, \tag{70}\] \[\gamma_{3} =\frac{1}{96k^{2}r_{h}^{4}}\left(1-\frac{8k}{3}\Lambda\right)\,,\] \[\gamma_{4} =\frac{1}{18kr_{h}^{2}}\left[\frac{\epsilon^{2}}{5}+\left(-\frac{ 1}{4k}+\frac{4}{15}\Lambda\right)\frac{\epsilon}{r_{h}^{2}}-\frac{1}{160k^{2}r _{h}^{4}}-\frac{1}{45r_{h}^{4}}\left(14\Lambda^{2}-\frac{75}{8k}\Lambda\right)\right]\] \[\ \ \ +\frac{1}{720kr_{h}^{2}}\left[16\epsilon+\left(-\frac{13}{k}+ \frac{56}{3}\Lambda\right)\frac{1}{r_{h}^{2}}\right]\,b+\frac{1}{90kr_{h}^{2}} \,b^{2}\,,\quad\ldots\,, \tag{71}\] which leads to \[\Omega(r) =-\frac{1}{r}+b\frac{(r-r_{h})}{\rho\,r_{h}^{2}}-\frac{b}{r_{h}} \left[2\rho+\frac{1}{24kr_{h}^{2}}(3-8k\Lambda)+b\right]\Big{(}\,\frac{r-r_{h} }{\rho\,r_{h}}\Big{)}^{2}+\cdots\,,\] (72) \[\mathcal{H}(r) =(r-r_{h})\bigg{\{}\,\epsilon\frac{r^{2}}{r_{h}}-\frac{\Lambda}{3 \,r_{h}^{3}}\left(r^{2}+rr_{h}+r_{h}^{2}\right)\] \[\ \ * **Case \(\epsilon-\Lambda a_{0}^{2}>0\): (topological) Schwarzschild-Bach-(A)dS black holes** In this case, in order to satisfy (51), \(b\) is bounded from below by \(\tilde{b}\equiv\Lambda a_{0}^{2}-\epsilon<0\). Thus we can approach \(b\to 0\) from both sides while keeping the inequality (51) satisfied. Therefore, in this limit, the quadratic-gravity black-hole horizon reduces to the (A)dS-Schwarzschild black-hole horizon and we will refer to them as _(topological) Schwarzschild-Bach-(A)dS black holes_. * at some point, one reaches a critical value \(b_{0}=\Lambda a_{0}^{2}-\epsilon>0\) corresponding to \(a_{1}=0\) and therefore the metric functions \(h(\bar{r})\) and \(f(\bar{r})\) cannot be expressed as power series in \(\bar{\Delta}\) with integer powers, see [19, 20] for the case \(\epsilon=+1\). Nevertheless, in the Kundt coordinates, the expressions (67), (68) still hold even for \(b=b_{0}\) and the limiting procedure can be performed. For all \(0\leq b<b_{0}\), the horizon at \(\bar{r}=\bar{r}_{h}\) is now a cosmological or an inner horizon. The \(b\to 0\) limit (cf. section 4.3.1) gives (A)dS-Schwarzschild metric with a cosmological/inner horizon at \(\bar{r}=\bar{r}_{h}\) which may or may not admit another (black-hole) horizon depending on the values of parameters \(\epsilon\), \(\Lambda\), and \(r_{h}\). More precisely, the black-hole horizon can appear either for \(\Lambda>0\) and \(\epsilon>0\) or \(\Lambda<0\) and \(\epsilon<0\) with additional conditions on the parameters given in [40] and [7, 8], respectively. Einstein limits of these quadratic-gravity black holes are thus either (A)dS-Schwarzschild black holes or naked singularities. If the limit is the (A)dS-Schwarzschild black hole, we will refer to this black hole as a _(topological) Schwarzschild-Bach-(A)dS black hole_. If the limit is a naked singularity, we will refer to this black hole a _purely Bachian (topological) black hole_. 
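Before turning to explicit examples, note that the recurrences (39)-(42) are straightforward to implement. The following Python sketch is illustrative only (it is not the code used to produce the figures): it generates the coefficients \(a_{i}\), \(c_{i}\) for given \(a_{0}\), \(c_{0}\) and Bach parameter \(b\), and evaluates the truncated series (34) near the horizon. The sample parameters are those of the purely Bachian toroidal example presented next (\(\epsilon=0\), \(\Lambda=0.2\), \(r_{h}=-1\), \(k=0.5\), \(b=0.3\)), with \(a_{0}\) and \(c_{0}\) fixed by the gauge (59).

```python
import numpy as np

def series_coefficients(a0, c0, b, eps, Lam, k, N=100):
    """Coefficients a_i, c_i of the [0,1] expansion (34), from eqs. (39)-(42).
    The Bach parameter enters through c_1 = 2*eps - Lam*a0**2 + 3*b, cf. eq. (48)."""
    a = np.zeros(N + 2)
    c = np.zeros(N + 3)
    a[0], c[0] = a0, c0
    c[1] = 2 * eps - Lam * a0**2 + 3 * b                                   # eq. (48)
    a[1] = a0 / (3 * c0) * (2 * Lam * a0**2 - (eps + c[1]))                # eq. (39)
    c[2] = (2 * k * (c[1]**2 - eps**2)
            + a0**2 * (2 * eps - c[1] - Lam * a0**2)) / (6 * k * c0)       # eq. (40)
    for l in range(2, N + 1):
        # eq. (42): a_l from a_0..a_{l-1} and c_1..c_l
        s1 = sum(a[l - 1 - j] * sum(a[i] * a[j - i] for i in range(j + 1)) for j in range(l))
        s2 = sum(c[i] * a[l - i] * (l * (l - i) + i * (i + 1) / 6.0) for i in range(1, l + 1))
        a[l] = (2.0 / 3.0 * Lam * s1 - eps * a[l - 1] / 3.0 - s2) / (l**2 * c0)
        # eq. (41) with index m = l-1: c_{l+1} from a_0..a_l
        m = l - 1
        s3 = sum(a[i] * a[m + 1 - i] * (m + 1 - i) * (m - 3 * i) for i in range(m + 1))
        c[m + 2] = 3.0 * s3 / (k * (m + 3) * (m + 2) * (m + 1) * m)
    return a[:N + 1], c[:N + 1]

def Omega_H(r, rh, a, c):
    """Truncated series (34) with [n,p] = [0,1]."""
    D = r - rh
    return (sum(ai * D**i for i, ai in enumerate(a)),
            D * sum(ci * D**i for i, ci in enumerate(c)))

# Purely Bachian toroidal example: eps = 0, Lam = 0.2, r_h = -1, k = 0.5, b = 0.3,
# with a0 = -1/r_h = 1 and c0 = r_h*eps - Lam/r_h = 0.2 fixed by the gauge (59).
a, c = series_coefficients(a0=1.0, c0=0.2, b=0.3, eps=0.0, Lam=0.2, k=0.5, N=100)
print(Omega_H(-1.1, -1.0, a, c))   # a point in the static exterior region near the horizon
```

Truncations of this kind, with an increasing number of terms, are what is compared against a direct numerical solution of (32) and (33) in Figure 1.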
Let us conclude this section by presenting an example of a purely Bachian (toroidal) black hole with \(\epsilon=0\), \(\Lambda=0.2\), \(r_{h}=-1\), \(k=0.5\), \(b=0.3\) in Figure 1, where the series approximation is depicted together with a numerical solution. Note that black holes of the form (3) with \(\epsilon=0\) and \(\Lambda>0\) are not allowed in Einstein's gravity. Figure 2 shows that, within a certain continuous range of parameters, black holes of this section with \(\Lambda<0\) are asymptotically AdS (here depicted for the spherical case \(\epsilon=1\)). This is in agreement with previous numerical results of [25] obtained in the case of Einstein-Weyl gravity (see section 5 for further comments). #### 4.4.2 Special case \(\epsilon-\Lambda a_{0}^{2}=0\) In the special case \(\epsilon-\Lambda a_{0}^{2}=0\), solutions \([0,1]\) with series expansions (34) and coefficients \(a_{i}\) and \(c_{i}\) given in section 4.2 describe purely Bachian black holes provided condition (51) is satisfied, i.e., \(b>0\). This will be assumed in what follows. In the physical coordinates (3), the first few orders of the metric functions \(h\) and \(f\) are given by (52) and (53), respectively. In particular, within this special class, one can have black holes with \(\epsilon=0=\Lambda\). By contrast, recall that Einstein's gravity with \(\Lambda=0\) does not allow for flat (\(\epsilon=0\)) black-hole horizons in vacuum (3) (cf. (54)). This is thus a new feature of quadratic gravity and planar horizons (and compactifications thereof) with \(\Lambda=0\) are now allowed. For the special case of Weyl conformal gravity (i.e., \(\gamma=0=\beta\)), this was noted already in [31]. Coefficients \(a_{i}\) and \(c_{i}\) for these black holes can be obtained by substituting \(\epsilon=0=\Lambda\) in (41) and (42). An example of such a solution is given in Figure 3. However, similarly as in the case of spherical horizons with \(\Lambda=0\)[11, 12, 13, 15, 16, 18, 19, 21], it turns out that, for generic values of the parameters \(a_{0}\) and \(b\), the metric functions \(h\) and \(f\) diverge as \(\bar{r}\to+\infty\). In order to remedy this one needs to fine-tune the parameter \(b\) (for any given \(a_{0}\)), such that in the weak gravity regime, one recovers a solution of general relativity. We have obtained the evidence that this is indeed possible by performing fine-tuning using the first 100 terms in the series expansion of the solution (instead of using a numerical solution, as was done in [11, 12, 13, 15, 18, 21]) - this is shown in Figure 4. The asymptotic Ricci-flat spacetime (i.e., an AIII metric [38, 39, 43]) is characterized by \(h(\bar{r})\propto f(\bar{r})=2m/\bar{r}\) (the parameter \(m\) is determined by the parametres of the quadratic-gravity solution, \(a_{0}\) and \(b\); the equality \(h=f\) could be achieved by an appropriate gauge transformation \(t\to\sigma t\)). A more precise approach to fine-tuning would be to match the expansion in the vicinity of the horizon with an asymptotic expansion in the form of logarithmic-exponential transseries (cf. [15]) in the physical coordinates (3). The Einstein limit of these spacetimes belongs to the Kundt class in the form of a direct product metric of the (anti-)Nariai type (namely dS\({}_{2}\times\)S\({}^{2}\), flat space, or AdS\({}_{2}\times\)H\({}^{2}\), depending on the value of \(\epsilon\), see section 4.3.2. 
## 5 Conclusions We have studied static black-hole solutions of the most general four-dimensional quadratic-gravity theory (1) with a non-zero Einstein term (\(\gamma\neq 0\)), under the assumption \(R=\)const (motivated by [12]). We have presented a solution representing black holes possessing a non-extremal compact horizon of arbitrary topology. The solution is given in terms of an infinite power-series expansion (based on a Frobenius-like approach) around the horizon. Several different branches of the solution have been identified, which admit different Einstein limits - accordingly, they can thus be interpreted as either higher-derivative "corrections" to Einstein black holes, or as purely Bachian black holes (for which the horizon does not survive the Einstein limit).

Figure 1: Series (67) and (68) for the metric functions \({\cal H}(r)\) (left) and \(\Omega(r)\) (right) for the first 20 (red), 50 (orange), 100 (green), and 200 (blue) terms and a numerical solution (black) of (32) and (33) for values \(\epsilon=0\), \(\Lambda=0.2\), \(r_{h}=-1\), \(k=0.5\), \(b=0.3\) (recall the gauge (59) is used here). From further analysis (not represented in the above figures, cf. [17, 20] in the case \(\epsilon=+1\)), the coefficients \(\alpha_{i}\) and \(\gamma_{i}\) seem to be approaching geometric series for large values of \(i\). This allows us to estimate the radius of the convergence and the interval of convergence indicated, in both graphs, by the two vertical dashed lines. Note that, in the interval of convergence, \(\bar{r}=\Omega(r)\) decreases as \(r\) grows. The horizon at \(r=-1\) separates a static, outer region (\(r<-1\), \({\cal H}<0\)) from a time-dependent, inner one (\(r>-1\), \({\cal H}>0\)). While it is straightforward to estimate the lower bound of the interval of convergence in the physical coordinate \(\bar{r}=\Omega(r)\) (using the right figure), it is difficult to estimate the upper bound, since it is determined by a (possible) intersection of \(\Omega(r)\) with the left vertical dashed line. The numerical solution blows up precisely in the vicinity of the left vertical dashed line, thus making an accurate estimate impossible from such a graph.

Figure 3: Series (41) and (42) for the metric functions \(\mathcal{H}(r)\) (left) and \(\Omega(r)\) (right) for the first 20 (red), 50 (orange), 100 (green), and 200 (blue) terms for values \(\epsilon=0\), \(\Lambda=0\), \(r_{h}=-1\), \(k=1/2\), \(c_{0}=-1\), \(a_{0}=1\), \(b=1/5\). Similarly as in Fig. 1, the coefficients \(a_{i}\) and \(c_{i}\) seem to be approaching geometric series for large values of \(i\), which can be used to estimate the interval of convergence, denoted by the vertical dashed lines. The horizon at \(r=-1\) separates a static, outer region (\(r>-1\), \(\mathcal{H}<0\)) from a time-dependent, inner one (\(r<-1\), \(\mathcal{H}>0\)).

Figure 2: Functions \(f(\bar{r})/\bar{r}^{2}\) and \(h(\bar{r})/\bar{r}^{2}\) for spherical Schwarzschild-Bach-AdS black holes obtained from the first 200 terms of series (41) and (42) for parameters \(r_{h}=-1\), \(k=1/3\), \(b=1/5\), \(\Lambda=-3\), \(\epsilon=1\) (recall the gauge (59) is used here where, in general, \(h\neq f\) asymptotically; however, this could be remedied by a gauge transformation \(t\to\sigma t\)). Further calculations suggest that, also within a certain range of parameters of the solution, fine-tuning of the Bach parameter \(b\) is not necessary to obtain an asymptotically AdS spacetime (see section 5 for further comments).
This observation for the quadratic-gravity black holes is in agreement with previous numerical results of [25] obtained in the case of Einstein-Weyl gravity. In contrast, in the \(\Lambda=0\) case, fine-tuning is necessary (see Figure 4 and the numerical results of [11, 12, 13, 15, 18, 21]). Although the general solution contains two independent integration constants (i.e., the black hole radius and the Bach parameter), for the special case of toroidal black holes with \(\Lambda=0\), we have given evidence that solutions of physical interest (i.e., matching asymptotically an Einstein solution) need to be fine-tuned, such that there is in fact only one free parameter. This resembles corresponding results obtained using numerical methods in the case of spherical, asymptotically flat black holes [11, 12, 13, 15, 18, 21]. Further investigation in this direction will be of interest and will play an important role in the study of thermodynamics of these topological black holes. On the other hand, for theories with \(\Lambda<0\) and \(k>-3/(4\Lambda)\), numerical results of [25] indicate that (for a given radius) there exists an open interval of values of \(b\) such that these black holes are asymptotically AdS, with no need for any fine-tuning - while the behaviour becomes asymptotically Lifshitz at the extremes of that interval. This behaviour is also preserved in full quadratic gravity, provided the Ricci scalar is constant (cf. eqs. (2), (8), and Figure 2). The first law of thermodynamics for these black holes was studied in [26] and contrasted with the asymptotically flat case [11, 12, 13, 15, 18, 21]. Finally, it is worth emphasizing that the simplified form of the field equations and the summary of classes of solutions provided in section 3 are of interest also in a broader context. Based on this, other types of solutions (such as extremal black holes, naked singularities, and wormholes) exist and will be studied elsewhere. ## Acknowledgments This work has been supported by the Czech Academy of Sciences (RVO 67985840) and research grant GACR 19-09659S.
2308.13722
Time-to-Pattern: Information-Theoretic Unsupervised Learning for Scalable Time Series Summarization
Data summarization is the process of generating interpretable and representative subsets from a dataset. Existing time series summarization approaches often search for recurring subsequences using a set of manually devised similarity functions to summarize the data. However, such approaches are fraught with limitations stemming from an exhaustive search coupled with a heuristic definition of series similarity. Such approaches affect the diversity and comprehensiveness of the generated data summaries. To mitigate these limitations, we introduce an approach to time series summarization, called Time-to-Pattern (T2P), which aims to find a set of diverse patterns that together encode the most salient information, following the notion of minimum description length. T2P is implemented as a deep generative model that learns informative embeddings of the discrete time series on a latent space specifically designed to be interpretable. Our synthetic and real-world experiments reveal that T2P discovers informative patterns, even in noisy and complex settings. Furthermore, our results also showcase the improved performance of T2P over previous work in pattern diversity and processing scalability, which conclusively demonstrate the algorithm's effectiveness for time series summarization.
Alireza Ghods, Trong Nghia Hoang, Diane Cook
2023-08-26T01:15:32Z
http://arxiv.org/abs/2308.13722v1
# Time-to-Pattern: Information-Theoretic Unsupervised Learning for Scalable Time Series Summarization ###### Abstract Data summarization is the process of generating interpretable and representative subsets from a dataset. Existing time series summarization approaches often search for recurring subsequences using a set of manually devised similarity functions to summarize the data. However, such approaches are fraught with limitations stemming from an exhaustive search coupled with a heuristic definition of series similarity. Such approaches affect the diversity and comprehensiveness of the generated data summaries. To mitigate these limitations, we introduce an approach to time series summarization, called Time-to-Pattern (T2P), which aims to find a set of diverse patterns that together encode the most salient information, following the notion of minimum description length. T2P is implemented as a deep generative model that learns informative embeddings of the discrete time series on a latent space specifically designed to be interpretable. Our synthetic and real-world experiments reveal that T2P discovers informative patterns, even in noisy and complex settings. Furthermore, our results also showcase the improved performance of T2P over previous work in pattern diversity and processing scalability, which conclusively demonstrate the algorithm's effectiveness for time series summarization. Autoencoders Pattern Mining Time Series ## 1 Introduction The rapid proliferation of IoT sensors and online data collection mechanisms [1] has led to remarkable growth in the amount and availability of vast, diverse time series data. Consequently, extracting meaningful patterns from complex time series data is increasingly crucial for effectively interpreting these resources [2]. Data summarization aims to produce a simplifying report that comprehensively describes the data [3, 4, 5]. A helpful summary contains a set of descriptive patterns. For example, a pattern can be defined as either a direct subsequence of the time series or its latent embedding, which summarizes its most salient information and can be decoded back to the original space of the time series. In this view, each pattern is expected to accurately reflect an important part of the original data (pattern _fidelity_) either as a direct subsequence of data or via its decoded representation. In addition, patterns must also be sufficiently diverse in content to represent all aspects of the time series, rather than just the most frequent ones (pattern _diversity_) [4, 5, 6]. These are all important desiderata as time series summarization plays a critical role in numerous tasks, including data management [7], data interpretation [8, 9], and boosting the performance of machine learning algorithms [8] with increased computational efficiency [10]. Despite the numerous approaches that have been attempted [8, 9, 11, 12, 13], limitations still must be addressed in identifying informative subsequences. Owing to their exhaustive search, these methods grapple with scalability issues, and their dependence on a similarity function inevitably ushers in a degree of bias. Instead, we propose a new approach that is based on a neural network method that does not rely on exhaustive search and benefits from GPU and TPU hardware for better scalability. Furthermore, we tackle this problem by identifying patterns within the data
2310.09530
Ergodicity, lack thereof, and the performance of reservoir computing with memristive networks
Networks composed of nanoscale memristive components, such as nanowire and nanoparticle networks, have recently received considerable attention because of their potential use as neuromorphic devices. In this study, we explore the connection between ergodicity in memristive and nanowire networks, showing that the performance of reservoir devices improves when these networks are tuned to operate at the edge between two global stability points. The lack of ergodicity is associated with the emergence of memory in the system. We measure the level of ergodicity using the Thirumalai-Mountain metric, and we show that in the absence of ergodicity, two memristive systems show improved performance when utilized as reservoir computers (RC). In particular, we highlight that it is also important to let the system synchronize to the input signal in order for the performance of the RC to exhibit improvements over the baseline.
Valentina Baccetti, Ruomin Zhu, Zdenka Kuncic, Francesco Caravelli
2023-10-14T08:19:59Z
http://arxiv.org/abs/2310.09530v1
# Ergodicity, lack thereof, and the performance of ###### Abstract Networks composed of nanoscale memristive components, such as nanowire and nanoparticle networks, have recently received considerable attention because of their potential use as neuromorphic devices. In this study, we explore the connection between ergodicity in memristive and nanowire networks, showing that the performance of reservoir devices improves when these networks are tuned to operate at the edge between two global stability points. The lack of ergodicity is associated with the emergence of memory in the system. We measure the level of ergodicity using the Thirumalai-Mountain metric, and we show that in the absence of ergodicity, two memristive systems show improved performance when utilized as reservoir computers (RC). In particular, we highlight that it is also important to let the system synchronize to the input signal in order for the performance of the RC to exhibit improvements over the baseline. ## I Introduction Memristive networks are electronic circuits that use memristive devices, a type of resistive switching memory element. These networks store and recall information by modifying the device's resistance. They find applications in computer memory, neuromorphic computing, and signal processing [1]. Memristive networks offer benefits such as high density, low power consumption, and potential for high-speed operation. Simultaneously, there is growing interest in alternative approaches to computation and optimization in response to the rapidly increasing demands on computing [2]. Various proposals have emerged to address this challenge, some of which involve the use of oscillators or frequency domain encoding [3; 4; 5; 6; 7], leveraging near- or in-memory computation [8; 9; 10; 11; 12; 13], and exploring memcomputing [12; 13]. These innovative techniques aim to provide more efficient solutions for complex optimization problems [3; 4; 5; 12; 14; 15; 16; 17; 18]. However, understanding large assemblies of memristive devices in non-equilibrium statistical mechanics remains a challenge. Recent research has focused on exploring the geometric and statistical properties of nanowire networks, where electrical junctions exhibit memristive behavior due to the interplay between tunneling and filament formation phenomena[19; 20]. Conductive nano-filament formation at nanowire-nanowire junctions creates a memristive device, while quantum tunneling contributes to additional nonlinearity in switching behavior, making the dynamics more complex [21]. As most of the voltage drop occurs at the junctions, a basic model for the conductance evolution of these networks involves an assembly of memristive devices with voltage or current generators. The dynamic behavior of these systems is currently being studied, especially regarding bias-induced conductance transitions [21; 22; 23; 24]. Various conductance transitions have been observed in memristive devices, often characterized by transient unstable dynamics of the memristive components and their internal memory parameter. One well-known transition, which has been defined as "rumbling", has been analytically identified in the simplest model of memristive networks [23]. A similar transition has also been observed in nanowire networks [21]. This transition involves the system effectively moving between different minima of an effective potential and is marked by bursts of transient positive Lyapunov exponents. 
It arises from the coexistence of multiple low-dimensional equilibrium points in the dynamics. The mean-field theory for such systems can be ensured by mapping them to a PEDS (projective embedding of dynamical systems) [25]. This is relevant to our study as we aim to identify non-ergodic behavior associated with these transitions. In certain memristive systems, as the applied voltage increases, an effective mean potential develops with multiple minima. As one minimum becomes dominant, the system undergoes a rapid chaotic transition towards this emerging stable fixed point. This transition is distinct from the conventional Landau picture of symmetry breaking with bifurcation, as it arises from the competition between two minima. While the understanding of these transitions is clearer in simplified memristive device models, it becomes less evident in more realistic systems. In this paper, we aim to describe the dynamic behavior of these transitions in two systems using ergodicity measures. Ergodicity, a concept in statistical mechanics and thermodynamics, pertains to the long-term behavior of a system. It states that the time average of a system over an extended period is equivalent to the ensemble average of the system. In essence, it implies that the system reaches a steady state, and its long-term average behavior aligns with the average behavior across many instances of the same initial conditions. Ergodicity plays a crucial role in various natural sciences, such as physics and chemistry, enabling an understanding of system evolution, equilibrium attainment, and the prediction of long-term behavior based on short-term observations. In the context of memory-dependent systems, the lack of ergodic behavior signifies (hard) ergodicity breaking, which occurs when symmetry breaking transpires in thermodynamic systems [26]. Typically, physical systems operate within a regime where ergodicity holds for a subset of possible phase space states. Quantifying ergodicity and gaining insight into the dynamical state of a physical system is vital for comprehending the system's operational regime. One of the goals of this paper will also be to test the hypothesis that computation near a transition point improves. To test this hypothesis, we will use reservoir computing (RC) as our computational model [27], which has been recently shown to be universal [28]. It has been reported for instance that the "edge of chaos" may be important for the performance of RC [29], but it is important to stress that critical states have been reported both for biological neuronal networks in the brain [30] and in other artificial neural networks [31]. This article is organized as follows: in section II we introduce the general definition of ergodicity and how it is related to memory; in section III we introduce the two memristive systems we are considering in this study; in section IV we give the definition of the TM metric for the two memristive systems we are considering, and test the RC task results for both memristive systems in terms of ergodicity breaking. Conclusions follow. ## II Memory, ergodic convergence and the Thirumalai-Mountain metric Ergodicity and the emergence of memory are closely intertwined concepts within the realms of statistical physics and complexity science. Ergodicity refers to a system's property of uniformly exploring all possible states in its phase space over an extended period. In essence, it characterizes a system's behavior as it traverses its accessible states in a time-averaged manner. 
On the other hand, memory denotes the influence of past states on the current state of a system. In many systems, the emergence of memory is closely connected to the violation of ergodicity. When a system is ergodic, its behavior remains independent of its past history, exhibiting no memory effect. However, when ergodicity is violated, the system may display persistent or long-term correlations, resulting in the emergence of memory. A notable example of the relationship between ergodicity and memory can be observed in spin-glasses, which are disordered magnetic systems that exemplify complex dynamics with persistent or long-term correlations that foster memory emergence. The behavior of spin glasses is often described through two-time correlation functions, which measure correlations between spin configurations at different points in time. Studying two-time correlation functions in spin glasses sheds light on system dynamics, including the emergence of memory and ageing phenomena. These functions measure correlations between spin configurations at different times, while the auto-correlation function reveals details about the system's relaxation dynamics. Slow decays observed in these functions signify the violation of ergodicity and the emergence of memory, while the dependence on both \(t\) and the waiting time \(t_{w}\) highlights the ageing phenomenon in spin glasses [32] and other frustrated systems [33; 34]. Ergodicity refers to a system's property where the long-term behavior can be deduced from a single, extended observation of the system, rather than relying on multiple independent realizations. According to Boltzmann's hypothesis, the trajectories of any dynamical system in its phase space eventually evolve into regions where macroscopic properties reach thermodynamic equilibrium [35]. The ergodic hypothesis states that ensemble averages and time averages coincide as time progresses. This means that the ensemble-averaged value of an observable, denoted as \(\langle g\rangle\), can be obtained by averaging its values over time during the observable's evolution. Mathematically for the observable \(g(t)\) we have: \[\langle g\rangle=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}g(t)dt. \tag{1}\] It is important to note that various definitions of ergodicity exist in the Physics and Mathematics literature [36; 37; 38], with significant distinctions. For example, in Markov chains, ergodic behavior requires a strongly connected transition graph, which is rarely the case. In practice, a system can exhibit ergodicity within a specific subset of its phase space. The concept of "effective ergodicity" was introduced to describe scenarios where the system rapidly samples coarse-grained regions [37]. A measure of effective ergodic convergence is based on the observation that certain components of a system exhibit identical average characteristics at thermal equilibrium. These characteristics are determined by an observable defined on the system's phase space, denoted as \(\Gamma\). To assess the effective ergodic behavior of the observable, it is necessary to estimate its average value using an ensemble approach, such as a thermal ensemble. This estimation is typically performed using the Thirumalai-Mountain (TM) \(g\)-fluctuating metric, denoted as \(\Omega_{g}(t)\). 
The TM \(g\)-fluctuating metric, introduced by Thirumalai and Mountain [37; 39; 40], quantifies the difference between the ensemble-averaged value of the observable, \(g(\Gamma)\), and the sum of the instantaneous values of \(g(t)\) for each component of the system. At a given time \(t\), the TM \(g\)-fluctuating metric is expressed as: \[\Omega_{g}(t)=\frac{1}{N}\sum_{j=1}^{N}\big{[}\bar{g}_{j}(t)-\langle g(t)\rangle \big{]}^{2}, \tag{2}\] Here, \(\bar{g}_{j}(t)\) represents the time-averaged value per component, and \(\langle g(t)\rangle\) denotes the instantaneous ensemble average, defined as: \[\bar{g}_{j}(t)=\frac{1}{t}\sum_{i=0}^{t}g_{j}(t_{i}),\ \langle g(t)\rangle=\frac{1}{N} \sum_{j=1}^{N}g_{j}(t). \tag{3}\] This definition assumes that \(g(\Gamma)\) serves as a suitable physical order parameter, effectively characterizing the system's behavior. The definition of instantaneous ensemble average will depend on the considered system, as it will be shown in Sec. IV. The rate of ergodic convergence can be quantified using the derivative of the effective ergodic convergence, denoted as \(\Omega_{g}^{{}^{\prime}}\). This quantity is given by: \[\Omega_{g}^{{}^{\prime}}=\frac{\Omega_{G}(t)}{\Omega_{G}(0)}\to\frac{1}{tD_{g }}, \tag{4}\] in the diffusive regime. When the system is instead in a sub-diffusive or super-diffusive regime, the power of the relaxation in time changes from 1 to a different exponent. Coarse graining of the phase space leads to the clustering of the system's accessible states, making the concept of effective ergodicity more applicable. Effective ergodicity is achieved when the system uniformly explores the coarse-grained regions within a finite time [39]. Here, \(D_{G}\) represents the diffusion coefficient associated with the property being studied, and \(\Omega_{g}\) refers to the effective ergodic convergence. This definition aligns with other notions of ergodicity in cases where diffusion follows a power law [41]. The rate of ergodic convergence, determined using the TM metric, provides an estimate of the system's ergodicity. The behavior of the rate, as described by Eq. (4), indicates the system's attainment of effective ergodicity. For example, if the inverse of the rate scales linearly with time, the system reaches ergodicity in a diffusive manner: \(1/\Omega_{g}\to D_{G}t\), where \(D_{G}=\Omega_{G}(0)\) represents the diffusion coefficient of the property \(G\). It is generally expected that the rate scales with time as \(\Omega_{G}^{{}^{\prime}}(t)\sim t^{-p}\). When the inverse of \(\Omega_{g}^{{}^{\prime}}\) exhibits a linear relationship with time, it indicates that all points in the phase space are equally likely, resembling the behavior of Brownian motion. This approach has been applied in various contexts, such as simple liquids [37], earthquake fault networks [42; 43], and the Ising model [44]. ## III Models of Memristive Devices We would like to introduce the notion of memristive device that we will use in the following. We will consider two models of memristive device, a resistive current-controlled memristive device, which can be described by the equations \[V(t) = R\big{(}x\big{)}I(t), \tag{5}\] \[\frac{dx}{dt} = f(I,x).\] The second model we consider is a conductance based device, of the form \[I(t) = G\big{(}\lambda\big{)}V(t), \tag{6}\] \[\frac{d\lambda}{dt} = f(V,\lambda). \tag{7}\] Both models satisfy the pinched hysteresis property for the \(I-V\) curve, characteristic of memristive devices [45]. 
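To make the preceding definitions concrete, the following minimal sketch (illustrative only, not the authors' code) integrates a small ensemble of current-controlled devices of the form of Eq. (5), with an assumed simple choice of \(R(x)\) and \(f(I,x)\), and evaluates the Thirumalai-Mountain metric of Eqs. (2)-(3) on the internal state variable; all parameter values are placeholders.

```python
# Illustrative sketch: TM metric (Eqs. 2-3) for an ensemble of simple
# current-controlled memristive devices V = R(x) I (Eq. 5).
# The choices f(I, x) = I - alpha*x and R(x) = R_on*(1-x) + R_off*x are
# assumptions made only for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 50, 20000, 1e-3          # ensemble size, steps, time step (placeholders)
alpha, R_on, R_off = 1.0, 1.0, 10.0

x = rng.uniform(0.0, 1.0, N)         # internal states
time_sum = np.zeros(N)               # running sums for the time averages g_bar_j(t)
omega = np.zeros(T)                  # TM metric Omega_g(t)

for t in range(T):
    V = 1.0 + 0.5 * np.sin(2 * np.pi * 50 * t * dt)   # applied voltage (illustrative)
    I = V / (R_on * (1.0 - x) + R_off * x)            # Ohm's law with state-dependent R
    x = np.clip(x + dt * (I - alpha * x), 0.0, 1.0)   # illustrative f(I, x)

    time_sum += x
    g_bar = time_sum / (t + 1)        # time average per component, Eq. (3)
    g_ens = x.mean()                  # instantaneous ensemble average, Eq. (3)
    omega[t] = np.mean((g_bar - g_ens) ** 2)          # Eq. (2)

# In an effectively ergodic (diffusive) regime, omega[t]/omega[0] ~ 1/(D t), cf. Eq. (4).
print(omega[::T // 10])
```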
### Memristive network toy model In [46], the dynamical equation for a circuit of memristors was derived under the assumption of a resistance current-controlled device. The resistance function used, \(R(x)=R_{on}(1-x)+xR_{off}\), approximates \(TiO_{2}\) memristors, where \(R_{on}\) and \(R_{off}\) represent the limiting resistances, and \(x\in[0,1]\) is _internal memory parameter_, the state variable describing the size of the oxygen-deficient conducting layer. At the lowest order, the evolution of the internal memory parameter can be described by a simple equation with hard boundaries: \[\frac{dx}{dt}=\frac{R_{off}}{\beta}I-\alpha x=\frac{R_{off}}{\beta}\frac{V}{R (x)}-\alpha x \tag{8}\] Here, \(\alpha\) and \(\beta\) are the decay constant and the effective activation voltage per unit of time, respectively, which determine the timescales of the system. While the model presented above provides a simple description of a polar resistive device, various extensions have been explored in the literature. For instance, to account for diffusive effects near the boundaries, some models remove the hard boundaries and introduce a window function [47; 48]. Although these models better capture the detailed IV curves of physical devices, they still exhibit the fundamental pinched hysteresis behavior observed in the linear model. In adimensional units (\(\tau=\alpha t\)), the equation for \(x(t)\) in a single memristor device under an applied voltage \(S\) can be derived using Ohm's law (\(S=RI\)), [23; 49]. The resulting equation is: \[\frac{d}{d\tau}x=\frac{S}{\alpha\beta}\frac{1}{1-\chi x}-x=-\partial_{x}V(x,s), \tag{9}\] Here, \(\chi=\frac{R_{off}-R_{on}}{R_{off}}\) and \(s=\frac{S}{\alpha\beta}\), with \(0\leq\chi\leq 1\) in relevant physical cases. The dynamics of the system, represented by a single memristor device, are fully characterized by the gradient descent in the effective potential given by [23]: \[V(x,s)=\frac{1}{2}x^{2}+\frac{s}{\chi}\log(1-\chi x), \tag{10}\] The potential exhibits two minima separated by a barrier, as depicted in Fig. 1 (top), for \(s=0.15\) and \(s=0.24\) with \(\chi=0.9\). The range of \(s\) for the existence of a barrier is limited, and when \(\chi\) approaches 1, the local minimum can move inside the domain \([0,1]\), leading to the emergence of an unstable fixed point (i.e., the peak of the barrier). Consequently, two basins of attraction and locally stable minima are formed. A pictorial phase diagram illustrating this behavior is shown in Fig. 1 (bottom), highlighting the critical voltage points at which such behavior occurs. Below a critical voltage point \(V_{c}\), only one fixed point is present. At the \(V_{c}\) value, a new fixed point emerges, but it becomes metastable in the presence of noise (dashed red line). At an intermediate point \(Vc<V_{m}<Vc^{*}\), the two metastable points have equal energy, representing the switching point where the original stable fixed point becomes metastable. At higher values of \(V=Vc^{*}\), the first fixed point disappears by merging with the barrier and becoming flat. Ultimately, only one fixed point remains at higher values. For a network of memristors of this type, the mean-field potential resembles exactly the single memristor defined above. For details, see Supp. Mat. A. The key difference is that the mean-field potential is only an approximation when the system is high dimensional, e.g. now there is an effective metastable region, as shown in Fig. 
1, instead of two stable minima, when one of the minima is lower than the other. Such effective "tunneling" can be explained using the theory developed in [25], i.e. the fact that local mean field maxima become saddle points in high dimensions. Here we will use this fact to study effective ergodicity breaking in memristive networks. In fact, these two regimes can be characterized via the Thirumalai-Mountain (TM) metric introduced before. In Fig. 2 we show the TM metric as a function of time for \(\bar{s}=0.18\) (top), \(\bar{s}=0.24\) (center), and \(\bar{s}=0.25\) (numerically calculated for a circuit of N = 50 memristors with parameters \(\alpha=\beta=1\) and \(\chi=0.9\)). As we can see, in the former case the TM metric relaxes as a power law with an approximate intermediate exponent of \(p\in[-2,-1.5]\), typical of a diffusive regime. For \(\bar{s}=0.24\), we see instead a typical behavior of weak ergodicity breaking, e.g. a transient non-monotonicity of the TM metric, associated with the trajectories effectively "tunneling" through the mean field barrier. This behavior is indeed transient, as for \(\bar{s}=0.25\) the TM metric relaxes again as a power law. Using the toy model, we can then pinpoint such non-monotonicity of the TM metric to a transient transition between two stable asymptotic states and the effective symmetry breaking of the potential. Thus, we can immediately identify the symmetry breaking of the potential as the culprit of such transient non-ergodic behavior. Figure 1: _Top:_ Evolution of the mean-field potential for the toy model as a function of voltage. _Bottom:_ Phase diagram of the toy model, and the appearance and disappearance of stable points. The intermediate region in voltage between \(Vc\) and \(Vc^{*}\) is characterized by the coexistence of two stable points. The buffer region corresponds to a metastable state which acts as the boundary between the attracting stable points for a single memristive device, while the shading represents the tunneling region for the network of devices. ### Nanowire networks We now consider a more realistic model of memristive networks, which can be associated with self-assemblies of silver nanowires. Via established bottom-up self-assembly techniques, one can readily synthesize nanowire networks (NWNs) [22; 50]. These NWNs typically have a 2D spatial distribution of randomly oriented nanowires that are interconnected by cross-point MIM junctions. The NWNs we consider have densities of 10 junctions/\(\mu\)m\({}^{2}\) and 0.5 nanowires/\(\mu\)m\({}^{2}\). Device electrodes can be deposited onto the substrate using a mask, from which the conductance measurements can be readily performed. This bio-inspired structure is difficult to design and fabricate using top-down techniques. Self-assembled NWNs exhibit topological properties, such as small-world propensity and modularity, that are similar to biological neural networks and distinct from random and grid-like networks [51]. Unlike fully connected bipartite networks in artificial neural networks, small-world networks have local connectivity and short path lengths, making them relatively sparse. Although small-worldness is necessary for important functional properties, such as synchronizability and information flow, it alone cannot explain the diverse range of dynamics across networks that exhibit this structural property. A model for the simulation of realistic NWNs has been introduced previously in the literature [21; 24; 52]. Fig.
3 (a) shows a visualization of a simulated nanowire network containing 1000 nanowires and 6877 junctions. Self-assembly is modeled by distributing nanowires on a \(3\times 3\,\mu\)m\({}^{2}\) 2D plane, with their centers uniformly sampled from \([0,3]\) and orientation uniformly sampled from \([0,\pi]\). The lengths of the nanowires are sampled from a gamma distribution (mean = 100 nm, standard deviation 10 nm), based on experimental measurements [22]. In theoretical studies, as illustrated in Fig. 3 (b), the NWN is transformed to the corresponding graphical representation, where the nodes represent nanowires and the edges are the cross-point junctions. In this work, all simulation results for nanowire networks are generated using a network comprised of 1000 nanowires and 6877 junctions. All variables, except the adjacency matrix \(A\), are time-dependent. A model for the conductance of a single junction, associated with the filament length is provided in the supplementary material, see Sec. A. Each junction evolves as voltage bias is continuously applied to the network. The modified nodal analysis approach is applied to the graphical representation to solve Kirchhoff's voltage and current conservation laws at each time step [53]. This is equivalent to the method used for the derivation of the exact network equation for the toy model, eqn. (A1). Although the NWN model is based on polymer-coated Ag nanowires, with memristive junction internal dynamics that differ from that of the toy model (based on metal-oxide memristors), the network dynamics are similar and one should think of the two models as equivalent from a physical perspective. For the purpose of using a NWN as a reservoir, a Mackey-Glass time-series signal with delay parameter \(\tau=17\) is delivered to a source electrode as the input voltage signal. Before implementing the time-series prediction task using RC, a DC input of varying duration is applied to the NWN to initialize the internal state of the network and prepare it for RC. We refer to this pre-initialization protocol as "priming the system" [54]. Fig. 4(a) shows the reservoir's conductance (blue curve) as a function of the DC input length \(T_{0}\). The shaded region represents the general conductance transition regime, identified from previous studies [21; 24], and the dashed line at \(T_{0}=2.17\,\)s represents when the first conductance pathways form between the source and drain nodes. The internal state of the network for different \(T_{0}\) is visualized in Fig. 4(b). In cases where the reservoir is under-activated (\(T_{0}<2\,\)s), the majority of memristive components remain inactive, resulting in insufficient dynamics from the network. When the reservoir is over-activated (\(T_{0}>8\,\)s), the internal dynamics of the system become saturated, limiting the system's capacity to process additional information. The conductance transition regime from Fig. 4(a) corresponds to an intermediate dynamical state of the reservoir, where conductance paths first span the network and the internal state of the system produces dynamical features that are more diverse than at other activation times. Figure 3: Example of a nanowire network generated with the random wire model. (a) Simulated NWN with 1024 nanowires and 6877 junctions. (b) Graphical representation of the NWN in (a). 
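The random-wire construction described above can be sketched in a few lines; the following is only an illustration (the wire density, length scale and junction model of the actual simulations are not reproduced here), in which wire centres and orientations are drawn uniformly, lengths from a gamma distribution, and every pairwise crossing of two wire segments is taken as a junction, yielding the graphical representation of panel (b).

```python
# Illustrative sketch: generate a random nanowire network and its junction graph.
# Units and parameter values are placeholders; see the text for the ones used here.
import numpy as np

rng = np.random.default_rng(1)
n_wires, box = 200, 3.0                         # number of wires, plane size
centers = rng.uniform(0.0, box, size=(n_wires, 2))
angles = rng.uniform(0.0, np.pi, size=n_wires)
mean_len, std_len = 1.0, 0.1                    # gamma-distributed lengths (illustrative)
k, theta = (mean_len / std_len) ** 2, std_len ** 2 / mean_len
lengths = rng.gamma(k, theta, size=n_wires)

d = 0.5 * lengths[:, None] * np.stack([np.cos(angles), np.sin(angles)], axis=1)
p1, p2 = centers - d, centers + d               # wire end points


def segments_cross(a1, a2, b1, b2):
    """True if segments a1-a2 and b1-b2 intersect (orientation test)."""
    def orient(p, q, r):
        return np.sign((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))
    return (orient(a1, a2, b1) != orient(a1, a2, b2) and
            orient(b1, b2, a1) != orient(b1, b2, a2))


# Nodes are wires; edges are cross-point junctions.
edges = [(i, j) for i in range(n_wires) for j in range(i + 1, n_wires)
         if segments_cross(p1[i], p2[i], p1[j], p2[j])]
print(f"{n_wires} nanowires, {len(edges)} junctions")
```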
Figure 2: Thirumalai-Mountain metric as a function of time and amplitude, calculated for the stochastic memristive model with noise, with the sample average calculated using the mean field value introduced in (13). We observe that, in the regime in which the potential has a single minimum (\(V<Vc\)), the TM metric decays rapidly. Instead, for higher voltages in which two minima are present (\(Vc<V<Vc^{\ast}\)), the TM metric does not decay. This is a symptom of non-ergodic behavior. ## IV Ergodicity, Reservoir Computing and Bistability ### Toy model Reservoir computing using [55] was studied for the first time in [56], among other passive circuits. A brief recap of the procedures behind reservoir computing is provided in Supp. Mat. B. Here, we have used a similar scheme for the reservoir, using a memristive circuit comprising \(N=50\) idealized memristors. We have considered parameter values \(\alpha=\beta=1\), and \(\chi=0.9\), for which the system experiences symmetry breaking, and where we expect its dynamics to become nonergodic, see Sec. III.1. This is the regime where the reservoir is at the _edge of stability_, as defined in [29], where in general reservoirs can have optimal performance, although this is not always guaranteed. Weak ergodicity breaking in a dynamical system is associated with a strongly chaotic regime [41]. We will come back to this point later. We have also added a small noise value \(\sigma=0.01\). The equations (30) were numerically integrated with a time step \(dt=0.1\). For the input signal \(V(t)\) we have used a Mackey-Glass time series with parameters \(r=0.2\), \(\gamma=0.1\), \(n=10\), and delay \(\tau=17\). Our input signal was given by \[S(t)=\mathrm{b}+\mathrm{a}V(t), \tag{11}\] with \(a\in[0,10]\) a multiplicative factor of the input \(V(t)\) and \(b\in[-0.4,1]\) a parameter bias. An example of the input signal is shown in Fig. 6 (top). The parameter \(b\) represents the bias input to the network, while \(a\) is the amplitude. For each value of \(a\) and \(b\), we performed a Mackey-Glass reconstruction task using the internal memory states as the dynamical variables for the RC. The rMSE, which we use as a measure of the performance of the RC, is shown in Fig. 6. We can see from the figure that varying \(a\) leads to a reduced value of the rMSE right where the rumbling transition is present, approximately at \(b=0.2\). The difference in the quality of the task can be observed, for the Mackey-Glass time series, in Fig. 5, for the toy model described in App. A. At a value of \(b=0.2\), a plot of the rMSE is shown in Fig. 7. As we can see, the value of the rMSE decreases as a function of \(a\), meaning that the larger the magnitude of the input signal the better the performance, but this happens only when \(b\) is carefully chosen near the transition point. The reason why this occurs can be inferred by analyzing the response of the internal memory values, which are reported in the Appendix, as a function of the parameter \(b\) and for \(a=0.2\), below, near, and above the transition point. When \(b\) is such that the potential has a single minimum, the value of the memory oscillates in the vicinity of that minimum. When however \(b\) is such that the potential has two minima, i.e. in the symmetry-breaking regime, at sufficiently large values of \(a\) the memory values start to oscillate between the two minima. Figure 4: (a) Collective conductance of the network between the source and drain nodes with a DC input of varying pulse width. Dashed red line indicates the edge of formation of a high conductance current pathway between the source and drain nodes.
(b) Visualization of network activation levels for different pre-initialization pulse widths \(T_{0}\). Figure 5: Reservoir computing prediction task with the toy model, for \(b=0.2\), and \(a=0.01\) (a) and \(a=10\) (b), respectively. As we can see, the regime of optimal prediction corresponds to the regime in which the system's Thirumalai-Mountain metric does not converge to zero. This implies that the time evolution of the system is much more dynamic in the symmetry-broken phase, and this effectively results in an improvement in the performance of the RC. As we can see, increasing the value of the amplitude leads to the internal memory values fluctuating more prominently between the two stable states. This is also shown in Fig. 7 (b), where we plot the TM metric evaluated on the response of the system in the small \(Amp=1\) and large \(Amp=10\) amplitude regimes. In the first case, the system is still ergodic, while in the case \(Amp=10\) the TM metric does not converge to zero. Furthermore, we have calculated the \(TM_{x}\) metric near the transition bias \(b=0.2\) for varying values of the amplitude \(a\) as \[\Omega_{x}(t)=\frac{1}{N}\sum_{j}^{N}\left[\bar{x}_{j}(t)-\langle x(t)\rangle \right]^{2}. \tag{12}\] As the ensemble average \(\langle x(t)\rangle\) we have considered the mean field value \(x_{cg}\) defined as [23] \[x_{cg}=\frac{1}{N}\sum_{ij}^{N}\mathcal{P}_{ij}x_{j}. \tag{13}\] We have chosen the mean field value \(x_{cg}\) as a natural order parameter that resembles, in definition, the general meaning of ensemble average. This is actually one of the advantages of studying this model first, as in this case, we know the details of the order parameter for the whole memristive network. ### Nanowire networks The nanowire connectome dynamics have been extensively explored across various studies. Prior research has highlighted that nanowire networks (NWNs) showcase brain-like dynamics, demonstrating their optimal information storage and processing capabilities at conductance transition points [21; 23; 24]. More recently, a dynamical mean-field theoretical technique for polymer-coated Ag nanowires has uncovered emergent dynamical features [57] such as transitions. In the context of the two-terminal setup, these transitions are commonly observed within a regime termed the 'edge of formation' [21]. This 'edge of formation' is delineated by the activation of memristive components, but precedes an exponential surge in the formation of parallel paths. As depicted in Fig. 4, this regime establishes a select few high-conductance current paths between the two electrodes (a phenomenon known as 'winner-takes-all') [58]. Within this state, the internal dynamics of the NWN intricately map the input signal to a diverse feature space. In the case of the toy model, previous analytical studies provide a comprehensive understanding of both the system dynamics and the order parameters to be employed [23; 49], which allows us to apply a simplification for the TM metric. Nevertheless, the same technique cannot be utilized for the nanowire model since it is more realistic and cannot be characterized in the same way.
For that reason, the collective conductance of the nanowire network is used as the order parameter, which is determined by the conductances of the individual memristive junctions and the underlying circuitry shaped by the connectivity. In the meantime, the findings derived from the toy model will play an important role in interpreting the results of nanowires. Figure 6: MSE of the Mackey-Glass reconstruction as a function of the parameters \(a\) (Amp) and \(b\) (bias). Figure 7: _(a)_: Reservoir computing prediction error as a function of the amplitude of the input signal, while the bias is fixed at the transition point. We can see that rMSE decreases as a function of the amplitude. _(b)_: Thirumalai-Mountain metric as a function of time for various amplitudes. We see that for increasing amplitudes, the metric ceases to converge to zero, indicating an effective non-ergodic behavior. In the intermediate regime, we see oscillations due to the fact that the system is jumping from one minimum to the other. #### Thirumalai-Mountain metric We now wish to link computational performance and ergodicity also in the case of NWNs; to this end, we used the TM metric to understand the effective ergodicity of these dynamical systems. In particular, eqn. 12 is also used to calculate the TM metric for NWNs. However, the meaning associated with each quantity is closer to the definition of the metric used in the original work on supercooled liquids [37]. The overall observable of interest is the two-probe conductance, defined earlier for nanowires, which is a scalar. Then, the time average is calculated using the global conductance, while the collective conductance is employed as an ensemble average across various realizations of the initial conditions: \[\bar{g}(t) =\frac{1}{T}\int_{0}^{T}G^{i}(t)dt, \tag{14}\] \[\langle g(t)\rangle =\frac{1}{N}\sum_{i}G^{i}(t), \tag{15}\] where \(G^{i}(t)\) is the two-probe conductance of a particular realization at time \(t\) and \(G(t)\) is the collective and effective conductance of the network between the two points. Note that, for the time-averaged quantity, we select a single element of the ensemble. The ensemble of different realizations is generated by randomly perturbing the filament levels of junctions in the network: \[\Lambda^{i}=(1+\delta)\Lambda_{0}, \tag{16}\] where \(\delta\) is randomly sampled from a flat distribution, \(\Lambda^{i}\) is the parameter controlling the length of the filaments for all junctions, and \(\Lambda_{0}\) is the initial condition of the filament. The random variable \(\delta\) is sampled from a uniform distribution over \((-0.1,0.1)\). Thus, effectively we are sampling over the initial conditions of the system. The TM metric, as a function of the bias, is shown in Fig. 8 (b). As we can see, and consistently with the case of the toy model, for small and large values of the bias, the metric decays. At intermediate values of the bias, at the edge of the transition between low and high conductance states, where the conductance synchronizes with the input, the TM metric fails to decay. This is exactly the regime in which the conductance transition occurs and is a signature of a special state for the nanowire network, highly synchronized with the input. #### Reservoir Computing For the realistic nanowire network model above, the implementation of RC involved the two-probe conductance, with designated input and readout nodes.
Such construction has already been considered in the literature [54; 59]. The fitting task of the Mackey-Glass time series was performed using NWNs under the RC framework, with two nanowire nodes in the network selected as the source and the drain. The MG signal was linearly transformed as described in Eq. 11 and delivered to the network as input, while the same MG signal 5 steps ahead was employed as the target. The task can be broken down into three phases: 1. _Priming:_ A \(2\,V\) DC input of varying length \(T_{0}\) was applied to drive the internal state of the network. The first 1000 data points of the MG time-series were delivered subsequently to wash out the influence from the initial state. 2. _Training:_ The effective conductance of the network corresponding to \(t=1000-4000\,\mathrm{ms}\) was measured and multiplexed using the virtual node technique [60] to provide training features (see details in Appendix). The readout layer was trained using a linear regression with a ridge parameter \(r=0.01\). 3. _Testing:_ The effective conductance during \(t=4000-5000\,\mathrm{ms}\) was measured and multiplexed in the same fashion as the training phase, while the trained readout layer was applied to make predictions. The performance of the reservoir can thus be evaluated accordingly. We consider two regimes: one in which the system is initialized away from the voltage transition point between the high and low conductance states, and one in which the system sits at the boundary between the two. Figure 8: (a) Nanowire system simulated conductance, using the Mackey-Glass time series as input, for various values of the bias voltage \(b\). As we can see, for small values and larger values of the bias, the conductance decays and grows to the asymptotic value. At intermediate values of the bias, the conductance oscillates. (b) Thirumalai-Mountain metric for the effective conductance of the nanowire network. As we can see from Fig. 9 (top), the RMSE of the RC model has a behavior very similar to the one observed for the toy model of Fig. 6. At values of the amplitude close to the transition point, in which the Thirumalai-Mountain metric fails to converge to zero, the physical RC performance peaks. Our intuition is that a system tuned at the edge of a transition is more prone to synchronization [29] with the input signal. This, combined with existing knowledge [54] on the average "priming" time that it takes for the system to reach synchronization with the signal, shows that the combination of input time and choice of input voltage leads to optimal performance. We can see the difference between the tuned and non-tuned network in Fig. 10. ## V Conclusions Memory effects are a critical aspect of many physical and biological systems, and they have been shown to play a vital role in the behavior of complex systems. Meanwhile, ergodicity is a property of systems that describes the degree to which they explore their phase space. In recent years, in particular, physical systems with memory, such as nanowire or nanoparticle connectomes, memristive devices, and other nanoscale devices, have become increasingly important candidates as substrates for synthetic intelligent devices, e.g. brain-like physical materials. In particular, there are strong indications, both in theoretical models and experiments, that these devices exhibit conductance transitions both of the first and second order.
These transitions have been linked to the edge of chaos, a concept that refers to the boundary between ordered and chaotic behavior in complex systems. The present study explores the interplay between ergodicity breaking and memory in two models of memristive devices. The first model we studied was a toy model introduced in the literature to understand analytically the properties of purely memristive networks, which has been instrumental in understanding their non-equilibrium properties such as Lyapunov functions [61] or conductance transitions [23]. The second is a more realistic model for memristive networks composed of nanowires [21; 24; 54; 62], used to understand the properties of polymer-coated Ag nanowires [22; 63; 64]. In both cases, we studied the Thirumalai-Mountain [37] metric to understand how the systems relax when driven by different inputs, and in particular the ergodic properties of these systems. Both in the case of the toy model, for which the conductance transition has been studied analytically using a variety of techniques [65, 66, 23, 67, 68], and in the case of the nanowire model [21], we found that the Thirumalai-Mountain metric signals a lack of ergodicity near the voltage bias value where the conductance transitions are expected to be. Figure 10: Fitting result for different biases, for amp = 4.9 V, \(T_{0}\) = 2 s. As we can see, near the transition the performance of the RC prediction task increases dramatically. The red dashed line represents the divide between training and test sets. Figure 9: RMSE of RC prediction task with the simulated nanowire system as a function of driving. _Top_: RMSE as a function of the bias and the amplitude of the input signal. As we can see, the minimum is located exactly at the numerically observed transition point in bias. This is, in particular, interesting in view of the fact that it has been recently observed that the edge of instability can be linked to a computational advantage [68, 29]. A similar result is observed in this paper. We did observe that, in particular, this is not necessarily true unless the dynamical system under study is synchronized to the input signal, as suggested in [29]. In fact, this improved performance occurs only after a transient period ("priming") which allows the nanowire network to synchronize to the input signal, as shown in Fig. 9. The results described above are consistent across two different models: the toy model in which conductance transitions can be understood quantitatively and analytically, and the more realistic nanowire model able to capture the experimentally observed conductance in Ag nanowires. Thus, these results suggest that there might be an underlying common theory to explain these transitions. In conclusion, by connecting memory effects, ergodicity, and the edge of chaos, we have identified a set of principles that can be used to create more effective computational models. Our research has shown for the first time that non-ergodic behavior can be linked to the effectiveness of reservoir computing, leading to new approaches for developing more advanced and efficient computational tools. We believe that our findings will inspire further investigations into the connections between memory effects, ergodicity, and the edge of chaos, and will lead to new and exciting developments in the field of machine learning and beyond.
In particular, in future work we will discuss the application of the ideas developed in this paper to meta-plasticity with memristive systems, [70, 69], in particular in view of the recent results on the meta-plasticity of nanowire networks shown in [64]. ###### Acknowledgements. The work of FC was carried out under the auspices of the NNSA of the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396, and in particular support from LDRD via 20230338ER and 20230627ER. RZ is supported by a Postgraduate Research Excellence Award scholarship from the University of Sydney. VB acknowledges funding through the RMIT Vice-Chancellor's Research Fellowship.
2304.05295
A Comprehensive Study on Object Detection Techniques in Unconstrained Environments
Object detection is a crucial task in computer vision that aims to identify and localize objects in images or videos. The recent advancements in deep learning and Convolutional Neural Networks (CNNs) have significantly improved the performance of object detection techniques. This paper presents a comprehensive study of object detection techniques in unconstrained environments, including various challenges, datasets, and state-of-the-art approaches. Additionally, we present a comparative analysis of the methods and highlight their strengths and weaknesses. Finally, we provide some future research directions to further improve object detection in unconstrained environments.
Hrishitva Patel
2023-04-11T15:45:03Z
http://arxiv.org/abs/2304.05295v1
# A Comprehensive Study on Object Detection Techniques in Unconstrained Environments ###### Abstract Object detection is a crucial task in computer vision that aims to identify and localize objects in images or videos. The recent advancements in deep learning and Convolutional Neural Networks (CNNs) have significantly improved the performance of object detection techniques. This paper presents a comprehensive study of object detection techniques in unconstrained environments, including various challenges, datasets, and state-of-the-art approaches. Additionally, we present a comparative analysis of the methods and highlight their strengths and weaknesses. Finally, we provide some future research directions to further improve object detection in unconstrained environments. Keywords:object detection, unconstrained environments, deep learning, convolutional neural networks, computer vision + Footnote †: journal: Computer Vision ## 1 Introduction and Background Object detection is a fundamental problem in computer vision, with numerous applications spanning fields such as surveillance, robotics, autonomous vehicles, augmented reality, and human-computer interaction. The primary goal of object detection is to recognize and localize instances of objects belonging to predefined classes in images or videos. In recent years, significant progress has been made in the development of object detection algorithms, mainly due to the emergence of deep learning and Convolutional Neural Networks (CNNs). These advancements have led to impressive performance improvements in various benchmark datasets, such as PASCAL VOC, ImageNet, and MS COCO. Despite these successes, object detection in unconstrained environments remains a challenging task. Unconstrained environments are characterized by variations in lighting conditions, viewpoint changes, occlusions, object deformations, scale changes, and the presence of cluttered backgrounds. These factors can severely affect the performance of object detection algorithms, making it difficult to achieve high detection accuracy and robustness. In recent years, significant progress has been made in object detection, particularly in the area of deep learning and Convolutional Neural Networks (CNNs) [1]. These techniques have significantly improved the performance of object detection algorithms, particularly in unconstrained environments where objects may appear at different scales, angles, and orientations. Region-based object detectors, such as Region-based Convolutional Neural Networks (R-CNN) [2], operate by first generating region proposals using a selective search algorithm, which generates around 2000 regions per image. Each region is then passed through a CNN to generate a fixed-length feature vector, which is fed into a support vector machine (SVM) [3] to classify the region and predict its bounding box coordinates. Finally, non-maximum suppression is applied to eliminate redundant detections. While R-CNN was a significant breakthrough in object detection, it has several limitations, such as slow training and inference times. To address these issues, researchers have proposed several variants of R-CNN, such as Fast R-CNN [4], which shares convolutional features across region proposals, and Faster R-CNN, which introduces a Region Proposal Network (RPN) to generate region proposals in an end-to-end manner. These variants significantly improve the speed and accuracy of R-CNN, making it a popular choice for object detection in unconstrained environments. 
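Since non-maximum suppression is the common post-processing step of essentially all the detectors discussed below, a short sketch of the standard greedy procedure may be useful; the boxes, scores and the IoU threshold used here are illustrative.

```python
# Illustrative sketch of greedy non-maximum suppression (NMS).
# Boxes are (x1, y1, x2, y2); scores are class confidences.
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 160, 150]], float)
scores = np.array([0.9, 0.8, 0.75])
print(nms(boxes, scores))   # -> [0, 2]
```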
The purpose of this paper is to provide a comprehensive overview of object detection techniques in unconstrained environments, addressing the challenges, datasets, and state-of-the-art approaches. The paper is organized as follows: Section 2 discusses the challenges encountered in object detection in unconstrained environments, highlighting the factors that contribute to the complexity of the problem. Section 3 presents a review of the commonly used datasets for evaluating object detection techniques in unconstrained environments. Section 4 presents state of the art Objection detection techniques. Section 5 presents a comparative analysis of the surveyed methods, emphasizing their strengths and weaknesses in terms of accuracy, computational complexity, and robustness to variations in the unconstrained environment. Section 6 concludes the paper by highlighting some of the open research questions and future directions in the field of object detection in unconstrained environments. ## 2 Challenges in Object Detection in Unconstrained Environments ### Illumination Changes Variations in lighting conditions, such as shadows and overexposure, can significantly impact the appearance of objects, making it difficult for detection algorithms to identify and localize them accurately. Variations in lighting conditions, such as shadows, overexposure, or underexposure, can significantly impact the appearance of objects in images [5]. These changes can make it difficult for detection algorithms to identify and localize objects accurately. To address this issue, several approaches have been proposed, including color constancy techniques [6] and deep learning-based methods that can learn illumination invariant features [7]. ### Viewpoint Variation Changes in the viewpoint or camera angle can alter the object's appearance, causing the detection algorithm to fail in recognizing the object or produce inaccurate bounding boxes [8]. Several methods have been proposed to tackle this issue, such as viewpoint invariant features and multi-view object detectors [8]. ### 2.3 Occlusion Objects in the scene may be partially or entirely occluded by other objects, making it challenging for the detection algorithm to identify and localize them correctly [9]. To address occlusion, some methods employ part-based models [10] or leverage context information from surrounding regions. ## 3 Datasets Object detection is a vital task in computer vision that involves identifying the presence and location of objects in an image or video. To evaluate the performance of object detection techniques in unconstrained environments, several benchmark datasets have been created. These datasets provide a standardized set of images with labeled objects, enabling researchers to compare the accuracy and speed of different algorithms. Some popular datasets include: ### Pascal VOC The PASCAL VOC (Visual Object Classes) dataset is one of the oldest and most popular datasets for object detection. It contains 17,125 images with 20 object classes, such as person, car, and dog. The dataset provides bounding box annotations for each object in the image. PASCAL VOC has been used as a benchmark dataset for several years, and many state-of-the-art object detection techniques have been evaluated on this dataset. ### ImageNet The ImageNet dataset is a massive dataset that contains 1.2 million images with 1,000 object classes. Unlike PASCAL VOC, ImageNet does not provide annotations for object detection. 
However, many researchers have used this dataset to pre-train their models on a large amount of data before fine-tuning them on smaller object detection datasets. ### COCO The COCO (Common Objects in Context) dataset is a newer dataset that contains 330,000 images with 80 object classes. COCO provides more detailed annotations than PASCAL VOC, including segmentation masks for each object in the image. This makes COCO a more challenging dataset for object detection algorithms to perform well on. ### Open Images The Open Images dataset is another large-scale dataset that contains 1.7 million images with 600 object classes. It provides both bounding box and segmentation mask annotations and has been used as a benchmark for object detection algorithms that require large amounts of training data. These datasets vary in size, number of classes, and annotation types, allowing researchers to test their algorithms on a wide range of scenarios. The following table summarizes some key information about the four popular benchmark datasets used for evaluating object detection techniques: \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Dataset Name** & **Number of Images** & **Number of Classes** & **Annotation Type** \\ \hline PASCAL VOC [11] & 17,125 & 20 & Bounding Boxes \\ \hline ImageNet [12] & 1.2 million & 1,000 & Bounding Boxes \\ \hline COCO [13] & 330,000 & 80 & Bounding Boxes, Segmentation Masks \\ \hline Open Images [14] & 1.7 million & 600 & Bounding Boxes, Segmentation Masks \\ \hline \end{tabular} \end{table} Table 1: Summary of key information about benchmark datasets for object detection ## 4 State-of-the-art Object Detection Techniques We categorize the state-of-the-art object detection techniques into two main groups: two-stage detectors and single-stage detectors. Figure 1: Milestones of object detection [15]. ### Two-stage detectors Two-stage detectors consist of a region proposal stage followed by a classification stage. Some prominent two-stage detectors include: #### 4.1.1 R-CNN R-CNN (Region-based Convolutional Neural Networks) is an object detection model that was proposed in 2014 by Ross Girshick et al. R-CNN is a two-stage object detection framework that uses a region proposal mechanism to generate potential object regions in an image and then applies a convolutional neural network (CNN) to classify and refine these regions. The R-CNN framework consists of the following steps: 1. Region Proposal: The first stage of R-CNN generates potential object regions by using a selective search algorithm that combines low-level features, such as color and texture, with high-level cues, such as edges and corners. Selective search generates around 2,000 region proposals for each image. 2. Feature Extraction: In the second stage, each region proposal is warped to a fixed size and fed through a pre-trained CNN, such as AlexNet or VGG, to extract a feature vector for that region. 3. Object Classification and Refinement: The feature vector for each region proposal is then fed into a set of fully connected layers that perform object classification and bounding box regression. The classification layer outputs the probability of each region proposal containing a particular object class, while the regression layer outputs the refined bounding box coordinates for that object class.
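The bounding-box regression of step 3 is commonly implemented, in the R-CNN family, through a parameterization of the offsets between a proposal \(P\) and a ground-truth box \(G\); the sketch below shows this standard encoding and its inverse (the \((x_{1},y_{1},x_{2},y_{2})\) box format and the example values are assumptions made for illustration).

```python
# Illustrative sketch of the standard R-CNN box-regression parameterization:
# t_x = (Gx - Px)/Pw, t_y = (Gy - Py)/Ph, t_w = log(Gw/Pw), t_h = log(Gh/Ph),
# where (x, y, w, h) are box centres and sizes.
import numpy as np

def to_cxcywh(box):
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1

def encode(proposal, gt):
    px, py, pw, ph = to_cxcywh(proposal)
    gx, gy, gw, gh = to_cxcywh(gt)
    return np.array([(gx - px) / pw, (gy - py) / ph, np.log(gw / pw), np.log(gh / ph)])

def decode(proposal, t):
    px, py, pw, ph = to_cxcywh(proposal)
    cx, cy = px + t[0] * pw, py + t[1] * ph
    w, h = pw * np.exp(t[2]), ph * np.exp(t[3])
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

proposal = np.array([10.0, 10.0, 50.0, 40.0])
gt = np.array([12.0, 8.0, 56.0, 42.0])
t = encode(proposal, gt)          # regression targets the network is trained to predict
print(np.allclose(decode(proposal, t), gt))   # True: decoding recovers the ground truth
```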
#### 4.1.2 Faster R-CNN Faster R-CNN (Region-based Convolutional Neural Networks) is a two-stage object detection model that uses a Region Proposal Network (RPN) to generate object proposals and a Fast R-CNN network to classify and refine the proposals. The RPN generates region proposals by sliding a small network over the convolutional feature map and predicting objectness scores and bounding box offsets. Faster R-CNN is known for its accuracy and has been widely used in object detection tasks. ### Single-stage detectors Single-stage detectors directly predict object bounding boxes and class probabilities from an image. Some popular single-stage detectors include: #### 4.2.1 YOLO YOLO (You Only Look Once) is a one-stage object detection model that predicts object class scores and bounding box offsets directly from the entire image. YOLO divides the image into a grid of cells and predicts the class and bounding box for each cell. YOLO uses a single neural network to make predictions and is known for its speed and real-time performance. #### 4.2.2 SSD The Single Shot MultiBox Detector (SSD) extends the concept of YOLO by predicting bounding boxes and class probabilities at multiple scales, which improves the detection of objects with varying sizes. SSD uses a feature extractor to generate convolutional feature maps and applies a set of convolutional filters to predict class scores and offsets for each default box. SSD is known for its speed and efficiency and has been used in real-time object detection applications. #### 4.2.3 RetinaNet RetinaNet introduces the Focal Loss, which addresses the issue of class imbalance by down-weighting the contribution of easy examples and focusing on hard examples during training. This results in improved detection performance, particularly for small objects. RetinaNet uses a novel focal loss function that assigns higher weights to hard examples and reduces the effect of easy examples during training. RetinaNet also uses a Feature Pyramid Network (FPN) to handle objects at different scales and has achieved state-of-the-art performance on several object detection benchmarks. Figure 2: One-stage vs. two-stage object detection. The table below summarizes some key features of these state-of-the-art object detection techniques: ## 5 Comparative Analysis In this section, we compare the performance of various object detection techniques on the COCO dataset [5]. The results are summarized in Table 1. The results in Table 1 show that two-stage detectors, such as Faster R-CNN, generally achieve higher average precision (AP) compared to single-stage detectors like YOLOv3 and SSD. However, single-stage detectors are faster in terms of frames per second (fps), making them more suitable for real-time applications. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline **Method** & **Average Precision (AP)** & **Speed (fps)** \\ \hline **R-CNN** & 53.3 & 0.5 \\ \hline **Fast R-CNN** & 70.0 & 5 \\ \hline **Faster R-CNN** & 73.2 & 7 \\ \hline **YOLOv3** & 57.9 & 45 \\ \hline **SSD** & 72.1 & 19 \\ \hline **RetinaNet** & 74.8 & 12 \\ \hline \end{tabular} \end{table} Table 1: Comparison of object detection techniques on the COCO dataset Figure 3: Comparison of object detection techniques on the COCO dataset ## 6 Conclusion and Future Directions In this paper, we have presented a comprehensive study on object detection techniques in unconstrained environments.
We have discussed the challenges associated with object detection in such environments, presented popular datasets, and provided an overview of the state-of-the-art techniques. Additionally, we have compared the performance of various methods and highlighted their strengths and weaknesses. Despite the significant progress made in recent years, object detection in unconstrained environments remains a challenging problem. Future research directions could focus on the following aspects: * Developing more robust algorithms capable of handling occlusions, lighting variations, and background clutter. * Investigating techniques for efficient and accurate detection of small-scale objects. * Exploring the integration of other sensor modalities, such as LiDAR or depth information, to enhance object detection performance. * Developing unsupervised or weakly supervised object detection techniques to reduce the reliance on large-scale annotated datasets. By addressing these challenges and exploring new approaches, we believe that object detection in unconstrained environments can be further improved, paving the way for more reliable and efficient applications in various domains, such as autonomous vehicles, robotics, and surveillance systems.
2301.00533
Analytical comparison between X(3) and X(5) models of the Bohr Hamiltonian
The 3-D Bohr-Mottelson Hamiltonian for $\gamma$-rigid prolate isotopes, known as $X(3)$, is solved via inverse square potential having only one free parameter, $\beta_{0}$. The exact form of the wave functions and the energy spectra are obtained as a function of the free parameter of the potential that determines the changes in the spectra ratios and the $B(E2)$. Since $X(3)$ is an exactly separable $\gamma$-rigid version of $X(5)$, the solutions are compared with the $X(5)$ model and some new set of equations that show the relationships between the two models are stated. In other to show the dynamical symmetry nature of the solutions, the entire solutions from $\beta_{0}=0$ to $\beta_{0}=\infty$ are compared with $U(5)$, $X(5)$ and $SU(3)$. The solutions spread from the region around $U(5)$ over $X(5)$ and approach $SU(3)$ at $\beta_{0}=\infty$. The exact solutions obtained via variational procedure are compared favourably with some existing $X(3)$ models found in the literature. The strong agreement between the present model and $X(3)$ via infinite square well potential is discussed. Twelve best critical point isotopes, $^{102}$Mo, $^{104-108}$Ru, $^{120-126}$Xe, $^{148}$Nd, $^{184-188}$Pt are chosen for experimental realization of the model and moderate agreements are recorded. An excellent agreement which appears in the first $\beta$-excited state in the comparison of the present model with three $N=90$ isotones: $^{150}$Nd, $^{154}$Gd, and $^{156}$Dy, known to be $X(5)$ candidates, suggests that the present model compensates the $X(5)$ models whose predictions are excellent in the ground states but moderately bad in the first $\beta$-excited states.
Kayode Richard Ajulo, Kayode John Oyewumi
2023-01-02T05:53:32Z
http://arxiv.org/abs/2301.00533v2
###### Abstract ###### Abstract Via the inverse square potential, the solutions of the \(X(3)\) model which is a \(\gamma\)-rigid form of the \(X(5)\) critical point symmetry have been achieved. The paper presents \(X(3)\), through the variational technique, as another "window" through which the "pictures" of \(X(5)\longrightarrow SU(3)\) symmetry region can be seen. The analytical solutions of the \(X(3)\) are compared with the solutions of the \(X(5)\) model. Some new and unique equations connecting the two models in the: critical order, energy bands, spectra ratios, \(R_{L/2}\), and \(B(E2)\) transitional probabilities are presented. These equations should hold in other potentials with one-parameter such as Kratzer potential, Davidson potential etc. The spectra ratios and the \(B(E2)\) transitional probabilities are optimized via the optimization procedure. The experimental data of some selected isotopes are placed accordingly for the theoretical predictions. The deviations from the experiments are found to be quite small. **Analytical comparison between \(X(3)\) and \(X(5)\) models of the Bohr Hamiltonian** **K.R. Ajulo1**; **K.J. Oyewumi2** Footnote 1: E-Mail: [email protected] Footnote 2: E-Mail: [email protected] \({}^{1,2}\)University of Ilorin, Ilorin, Nigeria. **Keywords**: Bohr Hamiltonian, \(X(3)\), \(X(5)\), variation technique, optimization procedure, \(\beta\)-variable, \(\gamma\)-rigid. ## 1 Introduction \(X(3)\) which has been presented in [1-4] is said to be an exactly separable \(\gamma\)-rigid form of the \(X(5)\) critical point symmetry [5]. The \(X(3)\) model is defined by the collective coordinate \(\beta\) and two Euler angles since the \(\gamma\) is assumed to be zero unlike the case of \(X(5)\), where \(\gamma\) is varied around \(\gamma^{0}=0\) value in the harmonic oscillator potential [5]. This implies that, only three variables: \(\beta\) and \(\theta_{i}\) are involved in the \(X(3)\) model. An exact separation of the \(\beta\) variable from the Euler angles is quite easily achievable. In the Bohr Hamiltonian model [6-9], \(X(5)\) critical point symmetry is one of the two critical point symmetries: it is a phase transition of the first order shape, which were originally proposed in the works of Iachello [10], while \(E(5)\) is the phase transition of the second order shape [10]. In the present work, the nuclei are taken to be \(\gamma\)-rigid, with the axially symmetric prolate shape obtained at \(\gamma^{0}=0\). The work presents the usefulness of a one-sided bound inverse square potential with one parameter. The one-parameter inverse square potential chosen is of the form \[V(\beta)=\begin{cases}\dfrac{\beta_{0}}{\beta^{2}},\text{ if }0\leq\beta\leq \beta_{0},\\ \infty,\text{ if }\beta>\beta_{0},\end{cases} \tag{1}\] where \(\beta_{0}\) is a variation parameter that changes the signatures of the nuclei, as it changes. It is expected that the solutions should shift forward as the \(\beta_{0}\) shifts forward and solutions should shift backward as the \(\beta_{0}\) shifts backward. A typical inverse square potential is bound on the left and unbound on the right, and it has a minimum at some positive values of \(\beta_{0}\) that forces the particles to infinity as \(\beta_{0}\to 0\). As a result, the particle's energy states is one-sided, with energies escaping through the unbound side. The work is structured as follows: Section 2. presents the methodology and the solutions of \(X(3)\) model via inverse square potential. 
These solutions are: the wave functions, the normalization constants and the energy eigenvalues. The \(B(E2)\) transition rates are presented in Section 3. The analytical results, the numerical results, and their applications to certain isotopes are presented and discussed in Section 4. The work is concluded and summarized in Section 5. ## 2 Methodology of the \(X(3)\) model with the inverse square potential In the \(X(3)\) model, the Bohr Hamiltonian operator is written as [1,2] \[\hat{H}=-\frac{\hbar^{2}}{2B}\left[\frac{1}{\beta^{2}}\frac{\partial}{\partial\beta}\beta^{2}\frac{\partial}{\partial\beta}+\frac{1}{3\beta^{2}}\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}}\right)\right]+V(\beta), \tag{2}\] where the term, \[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}}, \tag{3}\] inside the bracket, represents the angular part of the Laplacian [1,2]. \(B\), \(\beta\) and \(V(\beta)\) are respectively the mass parameter, the collective coordinate and the \(\beta\)-dependent potential. The wave equation of Eq.(2) is \[\hat{H}\Psi(\beta,\theta,\phi)=E\Psi(\beta,\theta,\phi). \tag{4}\] By the usual method of separation of variables, \[\Psi(\beta,\theta,\phi)=\chi(\beta)Y_{L,M}(\theta,\phi), \tag{5}\] where \(Y_{L,M}(\theta,\phi)\) are the spherical harmonics and \(\chi(\beta)\) is the radial part of Eq.(4). The separated angular part obtained reads [1,2] \[-\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}}\right)Y_{L,M}(\theta,\phi)=L(L+1)Y_{L,M}(\theta,\phi), \tag{6}\] where \(L\) is the angular momentum quantum number. The simplified form of the radial part equation [1,2], \[\left(\frac{1}{\beta^{2}}\frac{d}{d\beta}\beta^{2}\frac{d}{d\beta}-\frac{L(L+1)}{3\beta^{2}}+\frac{2B}{\hbar^{2}}[E-V(\beta)]\right)\chi(\beta)=0, \tag{7}\] reads \[\frac{d^{2}}{d\beta^{2}}\chi(\beta)+\frac{2}{\beta}\frac{d}{d\beta}\chi(\beta)-\frac{L(L+1)}{3\beta^{2}}\chi(\beta)-[v(\beta)-\epsilon]\chi(\beta)=0, \tag{8}\] where \(\epsilon=\frac{2B}{\hbar^{2}}E\) and \(v(\beta)=\frac{2B}{\hbar^{2}}V(\beta)\) are the reduced energy and reduced potential respectively [5]. ### Determination of the wave functions By substituting Eq.(1) for \(v(\beta)\) in Eq.(8) and solving the simplified equation using MAPLE software, the eigenfunctions obtained read \[\chi_{s,\nu,L}(\beta)=\beta^{-1/2}\left[C_{1,L}J_{\nu}(\sqrt{\epsilon}\beta)+C_{2,L}Y_{\nu}(\sqrt{\epsilon}\beta)\right], \tag{9}\] where \(C_{1,L}\) and \(C_{2,L}\) are the normalization constants associated with the Bessel functions of the first kind, \(J_{\nu}\), and second kind, \(Y_{\nu}\), respectively. In the domain of Eq.(1), the critical order associated with the \(X(3)\) model in Eq.(9) is \[\nu^{X(3)}=\sqrt{\frac{L}{3}(L+1)+\beta_{0}+\frac{1}{4}}. \tag{10}\] If a boundary condition \(\chi_{s,\nu,L}(\beta_{0})=0\) is considered, then \(C_{2,L,n}Y_{\nu}(\sqrt{\epsilon}\beta)\) vanishes and the wave functions become \[\chi_{s,\nu,L}(\beta)=\beta^{-1/2}\left[C_{1,L}J_{\nu}(\sqrt{\epsilon}\beta)\right]. \tag{11}\] ### Determination of the energy eigenvalues and the spectral ratio The procedure for finding the eigenvalues is written in ref. [11].
If the first condition of the listed procedure is considered, then the acceptable expression for the energy eigenvalues is written as: \[E_{s,L,n_{\beta}}=\frac{\hbar^{2}}{2B}k_{s,\nu,n_{\beta}}^{2},\quad k_{s,\nu,n_{\beta}}^{2}=\epsilon_{s,\nu,n_{\beta}},\quad k_{s,\nu,n_{\beta}}^{2}=\frac{x_{n_{\beta},s,\nu}}{\beta_{0}}, \tag{12}\] where \(s=n_{\beta}+1\), \(x_{s,\nu,n_{\beta}}\) is the \(s\)-th zero of the Bessel function of order \(\nu\). The energy eigenvalues of the \(\beta\)-part in \(\hbar\omega=1\) unit read: \[\epsilon_{s,L,n_{\beta}}=2n_{\beta}+1+\nu^{X(3)}=2n_{\beta}+1+\sqrt{\frac{L}{3}(L+1)+\beta_{0}+\frac{1}{4}}:\quad n_{\beta}=0,1,2,... \tag{13}\] For the \(X(3)\) model, the ground state energy levels are defined with \(s=1\), the quasi-\(\beta_{1}\) levels are defined with \(s=2\) and the quasi-\(\beta_{2}\) levels are defined with \(s=3\). \(L=0,2,4,6...\) There exist no \(\gamma\)-bands in the \(X(3)\) model because \(\gamma^{0}=0\). Eq.(13) is similar to the energy eigenvalues obtained in the \(\beta\)-part of the \(X(5)\) model [12]; the difference is observed in their critical orders, where \[\nu^{X(5)}=\sqrt{\frac{L}{3}(L+1)+\beta_{0}+\frac{9}{4}}. \tag{14}\] Since \(s=n_{\beta}+1\), \(\epsilon_{s,L,n_{\beta}}\) can be reduced to \(\epsilon_{s,L}\), then the spectra ratios can be written as \[R_{L/2}=\frac{\epsilon_{s,L}-\epsilon_{1,0}}{\epsilon_{1,2}-\epsilon_{1,0}}. \tag{15}\] ### Determination of the normalization constants and the complete wave functions The normalization condition for the Hamiltonian operator in Eq.(2) is written as [1,2] \[\int_{0}^{\beta_{0}}\beta^{2}\mid\chi_{s,\nu,L,n_{\beta}}(\beta)\mid^{2}d\beta=1, \tag{16}\] such that \[\mid\chi_{s,\nu,L,n_{\beta}}(\beta)\mid^{2}\to 0\quad\mbox{for}\quad\beta\to 0;\quad\mid\chi_{s,\nu,L,n_{\beta}}(\beta)\mid^{2}\beta^{2}\to 0\quad\mbox{for}\quad\beta\to\infty. \tag{17}\] If these conditions are satisfied, then \(\int_{0}^{\beta_{0}}\beta^{2}\mid\chi_{s,\nu,L,n_{\beta}}(\beta)\mid^{2}d\beta<\beta_{0}\). Using the identity [13-14] \[J_{\nu}(\sqrt{\epsilon}\beta)J_{\nu}(\sqrt{\epsilon}\beta)=\sum_{n_{\beta}=0}^{\infty}\frac{\left(\frac{1}{2}\sqrt{\epsilon}\beta\right)^{2\nu+2n_{\beta}}(2\nu+n_{\beta}+1)_{n_{\beta}}}{n_{\beta}![\Gamma(\nu+n_{\beta}+1)]^{2}} \tag{18}\] in Eq.(16), the simplified normalization constants read \[C_{1,L,n_{\beta}}=\left[\sum_{n_{\beta}=0,1,2,3...}\frac{(\eta)_{n_{\beta}}\left(\frac{k_{s,\nu,n_{\beta}}}{2}\right)^{\xi-2}\beta_{0}^{(\xi)}}{n_{\beta}!\quad\xi\quad\left[\Gamma\left(\frac{\xi}{2}\right)\right]^{2}}\right]^{-1/2}, \tag{19}\] where \[\xi=2\nu+2n_{\beta}+2,\quad\eta=2\nu+n_{\beta}+1\quad\mbox{and}\quad(\eta)_{n_{\beta}}=\eta(\eta+1)(\eta+2)...(\eta+n_{\beta}-1), \tag{20}\] with \((\eta)_{0}=1\). Hence, Eq.(11) becomes \[\chi_{s,\nu,L,n_{\beta}}(\beta)=\left[\sum_{n_{\beta}=0,1,2,3...}\frac{(\eta)_{n_{\beta}}\left(\frac{k_{s,\nu,n_{\beta}}}{2}\right)^{\xi-2}\beta_{0}^{(\xi)}}{n_{\beta}!\quad\xi\quad\left[\Gamma\left(\frac{\xi}{2}\right)\right]^{2}}\right]^{-1/2}\beta^{-1/2}J_{\nu}(\sqrt{\epsilon}\beta). \tag{21}\] ## 3 \(B(E2)\) transition rates The electric quadrupole operator is written as [1,2] \[T_{\mu}^{E2}=t\beta\left[D_{\mu,0}^{(2)}(\theta_{i})\cos\gamma+\frac{1}{\sqrt{2}}\left(D_{\mu,2}^{(2)}(\theta_{i})+D_{\mu,-2}^{(2)}(\theta_{i})\right)\sin\gamma\right], \tag{22}\] where \(D(\theta_{i})\) are the Wigner functions of the Euler angles and \(t\) is known as a scale factor.
For \(\gamma^{0}=0\), \[T_{\mu}^{E2}=t\beta\sqrt{\frac{4\pi}{5}}Y_{2\mu}(\theta,\phi). \tag{23}\] The \(B(E2)\)[1,2,5,15] is written as \[B(E2;sL\longrightarrow s^{\prime}L^{\prime})=\frac{1}{2sL+1}|\left<s^{\prime} L^{\prime}||T^{E2}||sL\right>|^{2}, \tag{24}\] \[=\frac{2s^{\prime}L^{\prime}+1}{2sL+1}B(E2;s^{\prime}L^{\prime} \longrightarrow sL). \tag{25}\] Eq.(24) or Eq.(25) has been solved in ref. [1] as: \[B(E2;sL\longrightarrow s^{\prime}L^{\prime})=t^{2}\left(C_{L0,20}^{L^{\prime}0 }\right)^{2}I_{sL;s^{\prime}L^{\prime}}^{2}, \tag{26}\] where the coefficients, \(C_{L0,20}^{L^{\prime}0}\) are the Clebsch-Gordan coefficients, and \[I_{sL;s^{\prime}L^{\prime}}=\int_{0}^{\beta_{0}}\beta\chi_{s,\nu,L,n_{\beta}} (\beta)\chi_{s^{\prime},\nu^{\prime},L^{\prime},n^{\prime}_{\beta}}(\beta) \beta^{2}d\beta, \tag{27}\] are the integrals over \(\beta\). ## 4 Numerical results, analytical results, applications and discussion Some important solutions for the collective model of Eq.(2) are the energy levels, the spectra ratios and the \(B(E2)\) transitions. Their theoretical predictions are important when energy spectra are assigned to the states for which experimental data are not available. The numerical calculations, the analytical comparisons and how the search for the experimental realizations of the model was achieved are discussed accordingly in this section. Both the \(X(3)\) and the \(X(5)\) have their critical orders, \(\nu(L,\beta_{0})\), from their Bessel functions which describes their energy spectra. Firstly, in the comparison of the Eq.(10) and Eq.(14), it can be deduced from the numerical computation of \(\nu\), shown in Table 1., that \[\nu^{X(3)}(\beta_{0}=c+2)=\nu^{X(5)}(\beta_{0}=c):\quad c=0,1,2,... \tag{28}\] In both cases, it increases with increase in the angular momentum, \(L\), and with increase in the variation parameter, \(\beta_{0}\). These effects of \(L\) and \(\beta_{0}\) in \(\nu\) are also seen in the energy values of Eq.(13). 
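Eq.(28) and the ratio definition of Eq.(15) can be checked numerically from Eqs.(10), (13) and (14). A short sketch (assuming Python with NumPy; the helper names are ours) that reproduces entries of Tables 1, 3 and 4 reads:

```python
import numpy as np

def nu_x3(L, beta0):
    return np.sqrt(L * (L + 1) / 3.0 + beta0 + 0.25)        # Eq.(10)

def nu_x5(L, beta0):
    return np.sqrt(L * (L + 1) / 3.0 + beta0 + 2.25)        # Eq.(14)

def eps(L, beta0, n_beta=0):
    return 2 * n_beta + 1 + nu_x3(L, beta0)                  # Eq.(13), hbar*omega = 1

def R(L, beta0, n_beta=0):
    # spectral ratio of Eq.(15), normalised to the ground band
    return (eps(L, beta0, n_beta) - eps(0, beta0)) / (eps(2, beta0) - eps(0, beta0))

print(nu_x3(4, 2.0), nu_x5(4, 0.0))   # both 2.986, illustrating Eq.(28) (cf. Table 1)
print(round(R(4, 0.0), 3))            # 2.130, the gsb R_{4/2} at beta_0 = 0 (cf. Table 3)
print(round(R(0, 0.0, n_beta=1), 3))  # 2.000, the quasi-beta_1 band head at beta_0 = 0 (cf. Table 4)
```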
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline & \multicolumn{10}{c}{\(\nu(L)\)} \\ \hline \(L\) & \(\beta_{0}=0\) & & \(\beta_{0}=2\) & & \(\beta_{0}=4\) & & \(\beta_{0}=6\) & & \(\beta_{0}=102\) & \(\beta_{0}=100\) \\ \hline & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) \\ \hline 0 & 0.500 & 1.500 & 1.500 & 2.062 & 2.062 & 2.500 & 2.500 & 2.062 & 10.112 & 10.112 \\ 2 & 1.500 & 2.062 & 2.062 & 2.500 & 2.500 & 2.872 & 2.872 & 2.500 & 10.210 & 10.210 \\ 4 & 2.630 & 2.986 & 2.986 & 3.304 & 3.304 & 3.594 & 3.594 & 3.304 & 10.436 & 10.436 \\ 6 & 3.775 & 4.031 & 4.031 & 4.272 & 4.272 & 4.500 & 4.500 & 4.272 & 10.782 & 10.782 \\ 8 & 4.924 & 5.123 & 5.123 & 5.315 & 5.315 & 5.500 & 5.500 & 5.315 & 11.236 & 11.236 \\ 10 & 6.076 & 6.238 & 6.238 & 6.397 & 6.397 & 6.551 & 6.551 & 6.397 & 11.786 & 11.786 \\ \hline & \(\beta_{0}=1\) & & \(\beta_{0}=3\) & & \(\beta_{0}=5\) & & \(\beta_{0}=7\) & & \(\beta_{0}=101\) & \(\beta_{0}=103\) \\ \hline 0 & 1.118 & 1.803 & 1.803 & 2.291 & 2.291 & 2.693 & 2.693 & 3.041 & 10.062 & 10.259 \\ 2 & 1.803 & 2.291 & 2.291 & 2.693 & 2.693 & 3.041 & 3.041 & 3.354 & 10.161 & 10.356 \\ 4 & 2.814 & 3.149 & 3.149 & 3.452 & 3.452 & 3.731 & 3.731 & 3.990 & 10.388 & 10.579 \\ 6 & 3.905 & 4.153 & 4.153 & 4.387 & 4.387 & 4.610 & 4.610 & 4.823 & 10.735 & 10.920 \\ 8 & 5.025 & 5.220 & 5.220 & 5.408 & 5.408 & 5.590 & 5.590 & 5.766 & 11.191 & 11.369 \\ 10 & 6.158 & 6.318 & 6.318 & 6.474 & 6.474 & 6.627 & 6.627 & 6.776 & 11.747 & 11.913 \\ \hline \end{tabular} \end{table} Table 1: The comparison in the critical order, \(\nu\), of the \(X(5)\)[12], with the \(\nu\) of Eq.(10). Figure 1: (a) Comparison in the energy levels of the \(X(3)\) and \(X(5)\) models [15] at \(\beta_{0}=2\) from the \(gsb\) up to the quasi-\(\beta_{2}\) band. (b): the variation of the critical order, \(\nu\), of the \(X(5)\) as a function of \(\beta_{0}\), is compared with \(\nu\) of the \(X(3)\) at constant angular momenta, \(L=0,2\) and \(L=4\). \begin{table} \begin{tabular}{c c c c c c c c c} \hline \multicolumn{2}{c}{\(\beta_{0}=2\)} & \multicolumn{2}{c}{\(\beta_{0}=3\)} & \multicolumn{2}{c}{\(\beta_{0}=4\)} & \multicolumn{2}{c}{\(\beta_{0}=15\)} \\ \hline \(L\) & \multicolumn{8}{c}{\(n_{\beta}=0\);} \\ & \multicolumn{8}{c}{\(s=1\)} \\ \hline & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 2: Ground state energies, the energies of the quasi-\(\beta_{1}\) and the quasi-\(\beta_{2}\) denoted by \(n_{\beta}=0,s=1\); \(n_{\beta}=1,s=2\); and \(n_{\beta}=2,s=3\) respectively for the \(X(3)\) and \(X(5)\) symmetry [12] in \(\hbar\omega=1\) unit. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \(L_{s,n_{g}}\) & \(\beta_{0}=0\) & \(\beta_{0}=0\) & \(\beta_{0}=2\) & \(\beta_{0}=2\) & \(\beta_{0}=4\) & \(\beta_{0}=4\) & \(\beta_{0}=\infty\) & \(\beta_{0}=\infty\) \\ \hline & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) & \(X(3)\) & \(X(5)\) \\ \hline \(gsb\) & & & & & & & & \\ \(0_{1,0}\) & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ \(2_{1,0}\) & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ \(4_{1,0}\) & 2.130 & 2.646 & 2.646 & 2.834 & 2.834 & 2.938 & 3.296 & 3.296 \\ \(6_{1,0}\) & 3.275 & 4.507 & 4.507 & 5.042 & 5.042 & 5.372 & 6.806 & 6.808 \\ \(8_{1,0}\) & 4.424 & 6.453 & 6.453 & 7.421 & 7.421 & 8.508 & 11.413 & 11.423 \\ \(10_{1,0}\) & 5.576 & 8.438 & 8.438 & 9.887 & 9.887 & 11.881 & 16.991 & 17.013 \\ \(12_{1,0}\) & 6.728 & 10.445 & 10.445 & 12.404 & 12.404 & 15.686 & 23.409 & 23.450 \\ \(14_{1,0}\) & 7.882 & 12.465 & 12.465 & 14.951 & 14.951 & 19.740 & 30.544 & 30.611 \\ \hline & \(\beta_{0}=1\) & \(\beta_{0}=1\) & \(\beta_{0}=3\) & \(\beta_{0}=3\) & \(\beta_{0}=5\) & \(\beta_{0}=5\) & \(\beta_{0}=15\) & \(\beta_{0}=15\) \\ \hline \(0_{1,0}\) & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ \(2_{1,0}\) & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ \(4_{1,0}\) & 2.476 & 2.756 & 2.756 & 2.893 & 2.893 & 2.946 & 3.128 & 3.148 \\ \(6_{1,0}\) & 4.070 & 4.812 & 4.812 & 5.224 & 5.224 & 5.529 & 6.058 & 6.136 \\ \(8_{1,0}\) & 5.706 & 6.995 & 6.995 & 7.767 & 7.767 & 8.638 & 9.508 & 9.690 \\ \(10_{1,0}\) & 7.360 & 9.243 & 9.243 & 10.424 & 10.424 & 11.915 & 13.297 & 13.620 \\ \(12_{1,0}\) & 9.024 & 11.525 & 11.525 & 13.145 & 13.145 & 15.854 & 17.307 & 17.800 \\ \(14_{1,0}\) & 10.694 & 13.829 & 13.829 & 15.907 & 15.907 & 19.899 & 21.468 & 22.152 \\ \hline \end{tabular} \end{table} Table 3: Comparison of the ground state spectra ratios, defined in Eq.(15), of the inverse square potential in the \(X(3)\) model at different values of the \(\beta_{0}\), compared with the \(X(5)\)[12]. It can be seen that \(X(3)(\beta_{0}=\infty)\approx X(5)(\beta_{0}=\infty)\). Figure 2: (a) The plots showing the values of \(\beta_{0}\) at which energies are minimum. (b) The rate of energy with respect to \(\beta_{0}\), showing non stationary property of \(\beta_{0}\). 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \(L_{s,n_{\beta}}\) & \(\beta_{0}=0\) & \(\beta_{0}=1\) & \(\beta_{0}=2\) & \(\beta_{0}=3\) & \(\beta_{0}=4\) & \(\beta_{0}=15\) & \(\beta_{0}=\infty\) \\ \hline quasi-\(\beta_{1}\) & & & & & & & \\ \(0_{2,1}\) & 2.000 & 2.921 & 3.562 & 4.094 & 4.562 & 8.058 & 20.124 \\ \(2_{2,1}\) & 3.000 & 3.921 & 4.562 & 5.094 & 5.562 & 9.058 & 21.124 \\ \(4_{2,1}\) & 4.130 & 5.397 & 6.208 & 6.850 & 7.395 & 11.187 & 23.420 \\ \(6_{2,1}\) & 5.275 & 6.991 & 8.069 & 8.906 & 9.603 & 14.115 & 26.929 \\ \(8_{2,1}\) & 6.424 & 8.626 & 10.014 & 11.090 & 11.982 & 17.567 & 31.537 \\ \(10_{2,1}\) & 7.576 & 10.281 & 11.999 & 13.337 & 14.449 & 21.356 & 37.116 \\ \(12_{2,1}\) & 8.728 & 11.945 & 14.007 & 15.619 & 16.965 & 25.366 & 43.534 \\ \(14_{2,1}\) & 9.882 & 13.615 & 16.027 & 17.923 & 19.513 & 29.526 & 50.668 \\ quasi-\(\beta_{2}\) & & & & & & & \\ \(0_{3,2}\) & 4.000 & 5.842 & 7.123 & 8.188 & 9.123 & 16.117 & 40.249 \\ \(2_{3,2}\) & 5.000 & 6.842 & 8.123 & 9.188 & 10.123 & 17.117 & 41.249 \\ \(4_{3,2}\) & 6.130 & 8.318 & 9.769 & 10.944 & 11.957 & 19.245 & 43.545 \\ \(6_{3,2}\) & 7.275 & 9.912 & 11.630 & 13.000 & 14.165 & 22.174 & 47.054 \\ \(8_{3,2}\) & 8.424 & 11.547 & 13.576 & 15.184 & 16.544 & 25.625 & 51.662 \\ \(10_{3,2}\) & 9.576 & 13.201 & 15.561 & 17.431 & 19.010 & 29.414 & 57.240 \\ \(12_{3,2}\) & 10.728 & 14.866 & 17.568 & 19.713 & 21.527 & 33.424 & 63.658 \\ \(14_{3,2}\) & 11.882 & 16.536 & 19.589 & 22.018 & 24.074 & 37.584 & 70.792 \\ \hline \end{tabular} \end{table} Table 4: \(R_{L/2}\) ratios, defined in Eq.(15), of the quasi-\(\beta_{1}\) and quasi-\(\beta_{2}\) bands of the inverse square potential in the \(X(3)\) model at different values of the \(\beta_{0}\). Figure 4: (a) The comparison in the \(R_{4/2}\) of the \(X(3)\) and \(X(5)\) for \(\beta_{0}=\infty\) and for different values of the \(\beta_{0,max}\) labelled as \(X(3)-\)var and \(X(5)-\)var respectively, peculiar to each angular momentum. (b): the comparison in the \(R_{0/2}\) of the \(X(3)\) and \(X(5)\) for \(\beta_{0}=\infty\) and for different values of the \(\beta_{0,max}\) labelled as \(X(3)-\)var and \(X(5)-\)var respectively, peculiar to each angular momentum. Figure 5: (a) and (b) The visual plots of the potentials correspond to \(R_{4/2}\) and \(R_{0/2}\) respectively. The values of \(\beta_{0}\) used correspond to \(X(3)\)-var and \(X(5)\)-var in the \(gsb\) and quasi-\(\beta_{1}\) bands. Figure 6: (a) and (b) present the \(R_{L/2}\) ratios for the ground state and the quasi-\(\beta_{1}\) bands of the \(X(3)\) model of inverse square potential respectively, at different values of \(\beta_{0}\) compared with \(X(3)\)-IW and \({}^{162}\)Dy. (c): the \(R_{L/2}\) ratios for the quasi-\(\beta_{2}\) bands of the \(X(3)\) model of inverse square potential at different values of \(\beta_{0}\) compared with \(X(3)\)-IW [1]. It appears that the \(gsb\) solutions of \(X(3)\) at \(\beta_{0}=\infty\) lie on the experimental data of \({}^{162}\)Dy, which is a typical \(SU(3)\) candidate. The available data on the first exited state lie very close to one another. Figure 7: (a) and (b) present the \(R_{L/2}\) ratios for the ground state and the quasi-\(\beta_{1}\) bands of the \(X(3)\) and the \(X(5)\) models of inverse square potentials respectively, obtained at different values of \(\beta_{0,max}\), labeled \(X(3)\)-var and \(X(5)\)-var, are compared with the \({}^{172-180}\)Os chain. 
Figure 8: The Neutron-\(\beta_{0}\) distribution is employed to show the relative positions of \({}^{104-108}\)Ru, \({}^{120-126}\)Xe, \({}^{184-188}\)Pt and \({}^{172-180}\)Os along their common chain. Figure 9: The \(B(E2)\) transition rates of the \(X(3)\) normalized to the \(B(E2:2_{1,0}\to 0_{1,0})=100\) units within: (a) the ground state bands at \(\beta_{0}=0,1,2,\infty\) and \(B(E2)\)-var compared with the \(X(3)\)-IW [1], \(X(5)\) experimental data [34] and \({}^{158}\)Gd [35], which is a typical \(SU(3)\) candidate. (b): the \(\beta_{1}\) state bands at \(\beta_{0}=0,1,2,\infty\) and \(B(E2)\)-var compared with the \(X(3)\)-IW [1] and \({}^{158}\)Gd. (c): the \(\beta_{2}\) state bands at \(\beta_{0}=0,1,2,\infty\) and \(B(E2)\)-var compared with the \(X(3)\)-IW [1]. [Note:-IW denotes infinite well potential.] \begin{table} \begin{tabular}{c c c c} \hline \(L_{s,n_{\beta}}\) & \(\beta_{0,max}\) & \(X(3)\)-var & \(X(3)\)-IW \\ \hline \(gsb\) & & & \\ \(0_{1,0}\) & \(\beta_{0}\) & 0.000 & 0.000 \\ \(2_{1,0}\) & \(\beta_{0}\) & 1.000 & 1.000 \\ \(4_{1,0}\) & 0.844 & 2.440 & 2.440 \\ \(6_{1,0}\) & 1.576 & 4.244 & 4.230 \\ \(8_{1,0}\) & 2.033 & 6.383 & 6.350 \\ \(10_{1,0}\) & 2.143 & 8.666 & 8.780 \\ \(12_{1,0}\) & 2.695 & 11.421 & 11.520 \\ \(14_{1,0}\) & 3.643 & 14.573 & 14.570 \\ quasi-\(\beta_{1}\) & & & \\ \(0_{2,1}\) & 0.815 & 2.703 & 2.870 \\ \(2_{2,1}\) & 2.101 & 4.619 & 4.830 \\ \(4_{2,1}\) & 3.729 & 7.255 & 7.370 \\ \(6_{2,1}\) & 5.213 & 10.327 & 10.290 \\ \(8_{2,1}\) & 6.098 & 13.493 & 13.570 \\ \(10_{2,1}\) & 6.855 & 16.908 & 17.180 \\ \(12_{2,1}\) & 8.106 & 21.009 & 21.140 \\ quasi-\(\beta_{2}\) & & & \\ \(0_{3,2}\) & 2.524 & 7.701 & 7.650 \\ \(2_{3,2}\) & 4.497 & 10.553 & 10.560 \\ \(4_{3,2}\) & 6.523 & 14.088 & 14.190 \\ \(6_{3,2}\) & 8.567 & 18.172 & 18.220 \\ \(8_{3,2}\) & 10.438 & 22.613 & 22.620 \\ \(10_{3,2}\) & 11.932 & 25.999 & - \\ \(12_{3,2}\) & 13.011 & 28.928 & - \\ \hline \end{tabular} \end{table} Table 5: The \(R_{L/2}\) ratios, defined in Eq.(15), for the \(X(3)\) version of inverse square potential, labelled \(X(3)\)-var, calculated at different values of \(\beta_{0,max}\), are compared with the \(X(3)\)-IW [1]. [Note: IW denotes infinite well potential]. 
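As an illustration of how the transition rates of Eqs.(26)-(27) can be evaluated numerically, the sketch below (assuming Python with SciPy and SymPy) computes one \(B(E2)\) ratio within the ground state band. The choice \(\beta_{0}=2\) is arbitrary and illustrative, the wave functions are normalized by direct quadrature of Eq.(16) rather than through the closed form of Eq.(19), and the printed number is therefore only indicative and need not coincide exactly with the published Table 7 values.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad
from sympy.physics.quantum.cg import CG

BETA0 = 2.0                                   # illustrative value of the variation parameter

def nu_x3(L, beta0=BETA0):
    return np.sqrt(L * (L + 1) / 3.0 + beta0 + 0.25)          # Eq.(10)

def chi(L, n_beta=0, beta0=BETA0):
    """Radial wave function of Eq.(11), normalised numerically via Eq.(16)."""
    nu = nu_x3(L, beta0)
    k = np.sqrt(2 * n_beta + 1 + nu)                           # sqrt(eps) from Eq.(13)
    raw = lambda b: b ** -0.5 * jv(nu, k * b)
    norm2, _ = quad(lambda b: b ** 2 * raw(b) ** 2, 1e-9, beta0)
    N = 1.0 / np.sqrt(norm2)
    return lambda b: N * raw(b)

def be2(Li, Lf, beta0=BETA0):
    """B(E2; Li -> Lf) within the gsb from Eqs.(26)-(27), up to the scale factor t^2."""
    cg = float(CG(Li, 0, 2, 0, Lf, 0).doit())                  # Clebsch-Gordan coefficient
    wi, wf = chi(Li, beta0=beta0), chi(Lf, beta0=beta0)
    I, _ = quad(lambda b: b ** 3 * wi(b) * wf(b), 1e-9, beta0) # overlap integral, Eq.(27)
    return (cg * I) ** 2

# normalised to B(E2; 2_{1,0} -> 0_{1,0}) = 100 units, as in Table 7
print(round(100.0 * be2(4, 2) / be2(2, 0), 3))
```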
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \(L_{s,n_{\beta}}^{(i)}\) & \(L_{s,n_{\beta}}^{(f)}\) & \(\beta_{0}=0\) & \(\beta_{0}=1\) & \(\beta_{0}=2\) & \(\beta_{0}=\infty\) & \(\beta_{0,max}^{(i)}\rightarrow\beta_{0,max}^{(f)}\) & \(B(E2)-var\) & \(X(3)\)-IW & \({}^{176}\)Os-Exp \\ \(2_{1,0}\) & \(0_{1,0}\) & \(100.000\) & \(100.000\) & \(100.000\) & \(100.000\) & \(\beta_{0}\rightarrow\beta_{0}\) & \(100.000\) & \(100.00\) & \(100.00\) \\ \(4_{1,0}\) & \(2_{1,0}\) & \(237.513\) & \(190.935\) & \(178.005\) & \(143.992\) & \(0.844\rightarrow\beta_{0}\) & \(189.495\) & \(189.90\) & \(193.00\) \\ \(6_{1,0}\) & \(4_{1,0}\) & \(380.702\) & \(286.006\) & \(270.996\) & \(167.292\) & \(1.576\to 0.844\) & \(250.995\) & \(248.90\) & \(267.00\) \\ \(8_{1,0}\) & \(6_{1,0}\) & \(523.695\) & \(384.599\) & \(369.090\) & \(185.328\) & \(2.033\to 1.576\) & \(293.038\) & \(291.40\) & \(297.00\) \\ \(10_{1,0}\) & \(8_{1,0}\) & \(667.003\) & \(486.036\) & \(469.991\) & \(202.099\) & \(2.143\to 2.033\) & \(324.599\) & \(323.80\) & \(352.50\) \\ \(12_{1,0}\) & \(10_{1,0}\) & \(810.954\) & \(587.658\) & \(559.744\) & \(229.986\) & \(2.695\to 2.143\) & \(350.710\) & \(349.50\) & - \\ \(14_{1,0}\) & \(12_{1,0}\) & \(954.746\) & \(690.364\) & \(671.484\) & \(253.007\) & \(3.643\to 2.695\) & \(371.992\) & \(370.70\) & - \\ \(2_{2,1}\) & \(0_{2,1}\) & \(166.813\) & \(160.292\) & \(152.428\) & \(69.929\) & \(2.011\to 0.815\) & \(78.922\) & \(80.60\) & - \\ \(4_{2,1}\) & \(2_{2,1}\) & \(320.619\) & \(260.000\) & \(243.186\) & \(123.888\) & \(3.729\to 2.101\) & \(139.471\) & \(140.10\) & - \\ \(6_{2,1}\) & \(4_{2,1}\) & \(470.001\) & \(355.240\) & \(343.376\) & \(156.031\) & \(5.213\to 3.729\) & \(181.730\) & \(182.40\) & - \\ \(8_{2,1}\) & \(6_{2,1}\) & \(617.075\) & \(456.792\) & \(439.962\) & \(189.542\) & \(6.098\to 5.213\) & \(213.899\) & \(215.50\) & - \\ \(10_{2,1}\) & \(8_{2,1}\) & \(763.927\) & \(557.317\) & \(542.129\) & \(215.763\) & \(6.855\to 6.098\) & \(242.026\) & \(242.40\) & - \\ \(12_{2,1}\) & \(10_{2,1}\) & \(899.983\) & \(656.999\) & \(639.677\) & \(233.499\) & \(8.106\to 6.855\) & \(268.543\) & \(265.10\) & - \\ \(14_{2,1}\) & \(12_{2,1}\) & \(1009.079\) & \(759.642\) & \(736.660\) & \(251.643\) & \(9.441\to 8.106\) & \(281.320\) & - & - \\ \(2_{3,2}\) & \(0_{3,2}\) & \(233.504\) & \(221.942\) & \(209.888\) & \(56.684\) & \(4.497\to 2.524\) & \(72.090\) & \(73.50\) & - \\ \(4_{3,2}\) & \(2_{3,2}\) & \(401.982\) & \(327.461\) & \(311.072\) & \(82.831\) & \(6.523\to 4.497\) & \(118.990\) & \(120.50\) & - \\ \(6_{3,2}\) & \(4_{3,2}\) & \(559.801\) & \(422.880\) & \(408.564\) & \(116.085\) & \(8.567\to 6.523\) & \(154.892\) & \(154.20\) & - \\ \(8_{3,2}\) & \(6_{3,2}\) & \(708.989\) & \(523.436\) & \(512.997\) & \(139.859\) & \(10.438\to 8.567\) & \(183.019\) & \(181.20\) & - \\ \(10_{3,2}\) & \(8_{3,2}\) & \(858.095\) & \(624.555\) & \(609.096\) & \(166.646\) & \(11.932\to 10.438\) & \(202.222\) & - & - \\ \(12_{3,2}\) & \(10_{3,2}\) & \(1003.933\) & \(727.909\) & \(715.990\) & \(182.910\) & \(13.011\to 11.932\) & \(218.753\) & - & - \\ \(14_{3,2}\) & \(12_{3,2}\) & \(1151.239\) & \(832.003\) & \(819.115\) & \(202.421\) & \(14.629\to 13.011\) & \(229.986\) & - & - \\ \hline \end{tabular} \end{table} Table 7: The \(B(E2)\) transition rates of the \(X(3)\) model at \(\beta_{0}=0,1,2,\infty\) and its values obtained at \(\beta_{0,max}\) peculiar to each angular momentum, normalized to the \(B(E2;2_{1,0}\to 0_{1,0})=100\) units are compared with the \(X(3)\)-IW model [1] and with the 
experimental data of \(X(5)\)[34]. [Note: -IW denotes infinite well potential.] Secondly, the exact relationship between the \(\nu^{X(3)}\) and the \(\nu^{X(5)}\) stated in Eq.(28) does not reflect in the exact comparison of their energy levels. That is, it can be inferred from the results that \[\epsilon^{X(3)}(\beta_{0}=c+2)\neq\epsilon^{X(5)}(\beta_{0}=c), \tag{29}\] because the total energy of the \(X(5)\) contains the \(\gamma\)-part solutions. However, the relation \[\epsilon_{gs,L}=2+\epsilon_{\beta_{1},L}=4+\epsilon_{\beta_{2},L}, \tag{30}\] holds in all the levels for both \(X(3)\) and the \(\beta\)-part of \(X(5)\): this third remark is shown in the Table 2. Another significant remark is such that, the values of \(\nu\), for the case of \(X(5)\) at \(L=2\), correspond to those of \(X(3)\), at \(L=0\). This is shown in Table 1. and the visual comparison is shown with the lines in Figure 1(b). Analytically, the behaviour of the energies of the \(X(5)\) and the \(X(3)\) at constant value of variation parameter, \(\beta_{0}\), is shown in the Figure 1(a). The critical orders, \(\nu(L,\beta_{0})\), of the \(X(5)\) and that of the \(X(3)\), which define their energy levels, are plotted against the variation parameter, \(\beta_{0}\), at constant angular momenta and shown in the Figure 1(b): it is shown, with the numerical values of \(\nu\), in the Table 1., that \[\nu^{X(5)}(L=0)=\nu^{X(3)}(L=2)\quad\forall\quad\beta_{0}. \tag{31}\] The derivatives of \(\nu\) with respect to the \(\beta_{0}\) are shown in Figures 2(a) and 2(b). The first and the second derivatives are carried out in order to show the stationary properties of \(\beta_{0}\) and the values of \(\beta_{0}\) at which the energy is minimum. The variation of the ratio \(\frac{\epsilon_{s,L}}{\epsilon_{1,2}}\) with respect to the variation parameter, \(\beta_{0}\), for both \(X(3)\) and \(X(5)\) are respectively shown in the Figures 3(a) and 3(b). For all values of \(\beta_{0}\), its values increase at \(L=0\), are constant at \(L=2\), that is \(\frac{\epsilon_{s,L}}{\epsilon_{1,2}}\)=1 and decrease at \(L>2\). The ground state bands (\(gsb\)) are defined with \(s=1;\quad n_{\beta}=0\). The quasi-\(\beta_{1}\) bands and the quasi-\(\beta_{2}\) bands are defined by \(s=2;\quad n_{\beta}=1\) and \(s=3;\quad n_{\beta}=2\) respectively. The \(\gamma\) bands do not exist for \(X(3)\) model because, \(\gamma^{0}=0\). The increase in the angular momentum, \(L\), at constant value of \(\beta_{0}\), increases the energies, in all energy levels. Also, at constant values of the angular momentum, the increase in the \(\beta_{0}\) increases the energy levels. The Table 2. shows the numerical solutions of Eq.(13) obtained for the ground states and the \(\beta\)-bands at \(\beta_{0}=2,3,4\) and at \(\beta_{0}=15\). The Figure 4(a) shows the comparison, in the \(R_{4/2}\), of the \(X(3)\) with \(X(5)\) at \(\beta_{0}=\infty\) and at \(\beta_{0,max}\) unique to each angular momentum, labelled as \(X(3)-\)var and \(X(5)-\)var respectively. The comparison in the \(R_{0/2}\) of the \(X(3)\) with \(X(5)\), at \(\beta_{0}=\infty\) and at different values of the \(\beta_{0,max}\) peculiar to each angular momentum, labelled as \(X(3)-\)var and \(X(5)-\)var respectively is shown in Figure 4(b). The 'nature' of critical point symmetry transitions for different isotopes, constrained to one-parameter potentials, can be investigated using a variational technique. This technique was used in ref. 
[11] to retrieve the \(U(5)\) and \(O(6)\) ground state bands from the \(E(5)\) within the domain of the one-parameter inverse square potential. The technique has also been used in ref. [12] and in ref. [16] to construct an 'image' of the \(X(5)\) critical symmetry and to construct the \(Z(5)\) critical symmetry, respectively. The forward variation of the 'control parameter', \(\beta_{0}\), drives the transition of nuclei from \(X(5)\) to \(SU(3)\) (i.e. the \(X(5)\longrightarrow SU(3)\) transition symmetry). The nature of the critical symmetry or of the nuclear shape phase region under investigation determines the direction of the variation, forward or backward, and also depends on the potential's boundary conditions. With this approach, the rate of change of \(R_{L/2}(\beta_{0})\) is maximized for each \(L\). As shown in Table 3, each angular momentum is considered and treated separately in terms of the variation parameter, \(\beta_{0}\), since the critical values of \(R_{L/2}\) are distinct. Each value of \(\beta_{0}\) implies a distinct potential with which the energy is maximized. The method is comparable to the "normal" variational principle used in some quantum mechanics texts, in which trial wave functions are chosen and the energy is minimized. The ground state spectral ratios, defined in Eq.(15), are compared with the \(X(5)\) model [11] at different values of \(\beta_{0}\); the corresponding potentials are displayed in Figures 5(a) and 5(b), the ratios are listed in Table 3, and the visual comparison is shown in Figures 6(a), 6(b) and 6(c). It can be observed that \(X(3)(\beta_{0}=\infty)\approx X(5)(\beta_{0}=\infty)\). The forward variation of \(\beta_{0}\) shifts the solutions to \(X(3)\); the solutions then leave \(X(3)\) and approach \(SU(3)\) as \(\beta_{0}\) tends to \(\infty\). The available experimental data of \({}^{162}\)Dy [17], which is a typical \(SU(3)\) candidate, are placed for comparison in Figures 6(a) and 6(b). This further indicates that isotopes with \(X(3)\) signatures must lie within the \(U(5)\longrightarrow SU(3)\) symmetry plane. Two other important relations can be deduced from the comparison: \[R_{L/2}(gsb)=2+R_{L/2}(\beta_{1})=4+R_{L/2}(\beta_{2}), \tag{32}\] at \(\beta_{0}=0\), as shown numerically in Table 3 and Table 4. This is an observable signature of Eq.(30), while the effect of Eq.(28) is observed in the spectral ratios of \(X(3)\) and \(X(5)\) such that \[R_{L/2}^{X(3)}(\beta_{0}=c+2)=R_{L/2}^{X(5)}(\beta_{0}=c):\quad c=0,1,2,... \tag{33}\] In order to obtain exact solutions for the \(R_{L/2}\) ratios, rather than simply varying \(\beta_{0}\), the technique of optimizing \(\beta_{0}\) employed in refs. [11,12,15,16] and others has been used to obtain the solutions of \(R_{L/2}\) at certain values of \(\beta_{0}\) peculiar to the angular momenta. These special values of \(\beta_{0}\) are labelled \(\beta_{0,max}\) and they produce the exact solutions labelled \(X(3)\)-var, shown in Table 5. The values obtained at the different \(\beta_{0,max}\) are compared with the \(X(3)\)-IW solutions. For all \(\beta_{0,max}\), the \(0_{1,0}\) and \(2_{1,0}\) levels yield \(0.000\) and \(1.000\) respectively. \(\beta_{0,max}\) increases with increasing angular momentum, and its values are obtained at the points where the increase with \(\beta_{0}\) becomes steep: the maximum of \(\frac{d}{d\beta_{0}}R_{L/2}\) is located via a numerical procedure, at the point where \(\frac{d^{2}}{d\beta_{0}^{2}}R_{L/2}\) vanishes.
The \(R_{L/2}\) ratios for the ground state and the quasi-\(\beta_{1}\) bands of the \(X(3)\) and the \(X(5)\) models of inverse square potentials obtained at different values of \(\beta_{0,max}\), labeled \(X(3)\)-var and \(X(5)\)-var, are compared with the experimental data of \({}^{172,176,178,180}\)Os [18-21] chain, as shown in Figures 7(a) and 7(b). The ground state solutions of the \(X(3)\) for \(L=0\) up to \(L=10\) are in good agreement with \({}^{172}\)Os while those of \(X(5)\) are seen lying closer to \({}^{176}\)Os than \({}^{178}\)Os and \({}^{180}\)Os: the generalized comparison is moderate in the first excited state. This suggests that \({}^{172}\)Os is a good candidate for \(X(3)\) model while \({}^{176}\)Os shows a signature of \(X(5)\) model. The \(R_{L/2}\) theoretical predictions of the \(X(3)\) model are compared with the experimental data of some selected isotopes: \({}^{102}\)Mo [22], \({}^{104-108}\)Ru [23-25], \({}^{120-126}\)Xe [26-29], \({}^{148}\)Nd [30] and \({}^{184-188}\)Pt [31-33] as shown in Table 6. Each energy level is normalized to the particular \(2_{0,0}\) state. The energy obtained in Eq.(13) is fitted with the experimental energy of each of the isotopes considered. The equivalent values of the \(\beta_{0}\) for the isotopes are recorded. The quality factor, \(\sigma\), used is obtained from \[\sigma=\sqrt{\frac{\sum_{i}^{m}[(R_{s,L})_{i}^{Exp}-(R_{s,L})_{i}^{Theor}]^{2 }}{m-1}}, \tag{34}\] where \(m\) is the number of available experimental states, \((R_{s,L})_{i}^{Exp}\) and \((R_{s,L})_{i}^{Theor}\) represent the experimental and the theoretical spectral ratios of the \(i^{th}\) levels normalized to the ground state with \(L=2\), \(s=1\) and \(n_{\beta}=0\) respectively. Against the neutron numbers, \(N\), of the chains of the isotopes: \({}^{104-108}\)Ru, \({}^{120-126}\)Xe, \({}^{184-188}\)Pt, \({}^{172-180}\)Os, considered for the comparison, the neutron-\(\beta_{0}\) distribution, showing the relative positions of the isotopes, is shown in the Figure 8. The comparison in the ground state, the quasi-\(\beta_{1}\) bands and the quasi-\(\beta_{2}\) bands of the \(B(E2)\) transition probabilities at \(\beta_{0}=0,1,2\) and \(\beta_{0}=\infty\), normalized to the \(B(E2:2_{1,0}\to 0_{1,0})=100\) units with the \(X(3)\)-IW [1] and experimental data on \(X(5)\) [34] are presented in the Table 7. The values of \(\beta_{0,max}\) peculiar to each angular momentum, obtained from the optimization of \(\beta_{0}\), in Table 5., are employed to compute the optimized \(B(E2)\) transition probabilities, labelled \(B(E2)\)-var. The visuals of these comparisons are shown in the Figures 9(a), 9(b) and 9(c). In order to show the nature of the solutions along the \(X(5)\longrightarrow SU(3)\) symmetry region, the experimental data on the \({}^{158}\)Gd [35], which is a typical \(SU(3)\) candidate, are placed for comparison in Figures 9(a) and 9(b): the solutions at \(\beta_{0}\rightarrow\infty\) are seen lying close to the \({}^{158}\)Gd [35]. The values of the \(B(E2)\) transition probabilities decrease as the variation parameter, \(\beta_{0}\), increases: they increase as the angular momentum increases. The forward variation, as the \(\beta_{0}\) increases, pushes the solutions to \(X(5)\) and the solutions tend to the \(SU(3)\) as \(\beta_{0}\) tends to \(\infty\). ## 5 Conclusion The \(X(3)\) solutions of the Bohr Hamiltonian are obtained by solving the radial function of the Hamiltonian with an inverse square potential with the aid of MAPLE software. 
Analytically, an expression for the energy levels is determined from the zeros of the Bessel functions. Through the variational approach and the optimization procedure, the spectral ratios and the \(B(E2)\) transition probabilities are computed. The analytical solutions of the \(X(3)\) model are compared with those of the \(X(5)\) model for the inverse square potential. It is worth noting that the \(X(3)\) model is another "window" through which the \(X(5)\) and \(SU(3)\) "pictures" can be seen: \(X(3)\) lies between \(U(5)\) and \(SU(3)\), and \(X(5)\) lies between \(X(3)\) and \(SU(3)\). It has been shown via the variational procedure that the solutions shift from \(X(3)\) to \(X(5)\) and approach \(SU(3)\) as the variation parameter is shifted forward. The theoretical predictions for \(R_{L/2}\) and \(B(E2)\), compared with the experimental data for some selected isotopes, are found to be in good agreement in the \(gsb\) and moderate in the other levels, as the theoretical deviations from experiment are quite small. In the same manner in which the Davidson potential is employed in ref. [4], the one-parameter-dependent inverse square potential in the form of Eq.(1) proves efficient in the variational procedure. Eq.(1) is thus also a good choice of potential for the description of nuclear transitions at the critical points. For the comparison of the \(X(3)\) and \(X(5)\) models of the Bohr Hamiltonian, with the same formalism employed in this work, it is expected that Equations (28), (29), (30), (31), (32) and (33) should hold in any one-parameter-dependent potential domain, such as the Kratzer potential, the Davidson potential and others. ## Data availability statement All the sources of data included in this article for comparison purposes are cited and referenced accordingly in the article. ## Funding Information No funding of any form was received for this work.
2303.16528
Building a Knowledge Graph of Distributed Ledger Technologies
Distributed ledger systems have become more prominent and successful in recent years, with a focus on blockchains and cryptocurrency. This has led to various misunderstandings about both the technology itself and its capabilities, as in many cases blockchain and cryptocurrency is used synonymously and other applications are often overlooked. Therefore, as a whole, the view of distributed ledger technology beyond blockchains and cryptocurrencies is very limited. Existing vocabularies and ontologies often focus on single aspects of the technology, or in some cases even just on one product. This potentially leads to other types of distributed ledgers and their possible use cases being neglected. In this paper, we present a knowledge graph and an ontology for distributed ledger technologies, which includes security considerations to model aspects such as threats and vulnerabilities, application domains, as well as relevant standards and regulations. Such a knowledge graph improves the overall understanding of distributed ledgers, reveals their strengths, and supports the work of security personnel, i.e. analysts and system architects. We discuss potential uses and follow semantic web best practices to evaluate and publish the ontology and knowledge graph.
Lukas König, Sebastian Neumaier
2023-03-29T08:34:01Z
http://arxiv.org/abs/2303.16528v1
# Building a Knowledge Graph of Distributed Ledger Technologies ###### Abstract Distributed ledger systems have become more prominent and successful in recent years, with a focus on blockchains and cryptocurrency. This has led to various misunderstandings about both the technology itself and its capabilities, as in many cases blockchain and cryptocurrency is used synonymously and other applications are often overlooked. Therefore, as a whole, the view of distributed ledger technology beyond blockchains and cryptocurrencies is very limited. Existing vocabularies and ontologies often focus on single aspects of the technology, or in some cases even just on one product. This potentially leads to other types of distributed ledgers and their possible use cases being neglected. In this paper, we present a knowledge graph and an ontology for distributed ledger technologies, which includes security considerations to model aspects such as threats and vulnerabilities, application domains, as well as relevant standards and regulations. Such a knowledge graph improves the overall understanding of distributed ledgers, reveals their strengths, and supports the work of security personnel, i.e. analysts and system architects. We discuss potential uses and follow semantic web best practices to evaluate and publish the ontology and knowledge graph. **URI:** [https://w3id.org/DLTontology](https://w3id.org/DLTontology) **DOI:** 10.5281/zenodo.6497619 Keywords: Distributed Ledger Technology, Blockchain Security, Ontology, Knowledge Graph ## 1 Introduction While the success of blockchains and especially cryptocurrencies has helped to spread the word about distributed ledger technology (DLT), it has lead to misunderstandings about what the technology actually consists of and what it is capable of. In [1] this is highlighted very well with a comparison to facial tissues and the brand product Kleenex, which often get used interchangeably, even though the latter is just one implemented variant of the former. Such a misunderstanding of the technology can hinder its adoption. Each variant of distributed ledger technology operates in a slightly different way with regard to distinct requirements, which therefore means that there are also different usage scenarios and implementations that can be realized with distributed ledgers. Limiting the broad field of the technology to just one single variant of it leads to a distorted over-representation and use of that one variant, because the valid alternatives are simply not emphasized in the same way. There are observable attempts of reverting this trend by stating the technicalities and comparing different types of distributed ledger systems to one another, with new research and reviews stating up to 5 different types that are regularly seen as equally valid implementations of a distributed ledger system [2, 3].12 One way to unify the attempts of providing clarity on distributed ledgers is to use a knowledge graph to capture different technical capabilities and the wider ecosystem. In recent years, the concept of knowledge graphs has become increasingly popular, the idea being to use a graph-based data model to collect and convey knowledge about the real world [4]. To define the semantics of the terms used in the graph, an ontology, i.e. a formal representation of the relationships and classes in the graph, is used [5]. As we point out in detail in Section 5, an all-inclusive approach towards a distributed ledger ontology is currently still lacking. 
Existing ontologies and vocabularies typically focus on a limited sub-group of distributed ledger systems, or build an ontology for a single distributed ledger product or application. Blockchains and blockchain-based systems are often the focus of such works, which leaves a large part of distributed ledgers unaccounted for. This work therefore establishes a modular ontology, which encompasses and relates the following concepts and areas: Distributed ledger technology with a broader perspective than just a limited focus on blockchains, broader security considerations and technical implications of such systems, as well as organizational boundaries and real world applicability. Additionally, this ontology and the respective knowledge graph aims to clarify the distinctions between different types and systems of distributed ledgers, as it includes a broader spectrum than merely the view on blockchain as the paragon of the entire technology. The model and subsequent knowledge graph created in this work can be used in a multitude of ways: Besides the already mentioned perspective of the entire ecosystem, it also allows for a more distinct observation of the included subjects. For instance, it can be used for threat analysis and risk assessments by identifying attacks and vulnerabilities of a system. In particular, security analysts will benefit from such a model, as it enables them to store and retrieve critical information about systems they might use, and to highlight further measures where they are needed to secure a system. The main contributions of this work can be summarized as follows: * a survey of literature and information on distributed ledger technologies, use-cases, fields of application, standardizations and legal issues; * a set of competency questions that cover the main aspects of the collected information; * an RDFS-based ontology that allows to model the surveyed aspects; * a knowledge graph of relevant entities and relations, extracted from the literature; * an evaluation of the model via SPARQL queries; an online interface to execute the queries. The remainder of this paper is structured as follows: Section 2 provides an overview about distributed ledger technologies, as well as a short introduction to Ontologies, Knowledge Graphs and their terminologies. Section 3 details the methods, i.e., the information collection, the competency questions and evaluation of the ontology. Section 4 describes the ontology in detail, and explains the elements and contents of the knowledge graph. In Section 5 we discuss related work and conclude in Section 6. ## 2 Background ### DLT Data Structures Blockchain.The most well-known example of a distributed ledger technology is a blockchain system. Especially the specific blockchain application Bitcoin sparked a hype around cryptocurrencies that lead to an often misunderstood terminology where Bitcoin and Blockchain are used interchangeably [6; 7], which is not true however. There are many more blockchain applications; cryptocurrencies are merely one aspect of the broader picture. Other examples of blockchain technologies are Ethereum and its renewed introduction of smart contracts, which have become increasingly popular over the last years, as well as the Hyperledger project by the Linux Foundation and IBM [8]. At its technological core, a blockchain is a data structure that (cryptographically) links blocks of data into a chain of blocks. 
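To make this concrete, a minimal and purely illustrative sketch of such a hash-linked structure (in Python; not tied to any particular DLT product) could look as follows. Changing any earlier block invalidates every later link, which is what makes retroactive manipulation detectable.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, prev_block=None):
    """Each block stores the hash of its predecessor, forming the chain."""
    return {
        "index": 0 if prev_block is None else prev_block["index"] + 1,
        "timestamp": time.time(),
        "data": data,
        "prev_hash": None if prev_block is None else block_hash(prev_block),
    }

genesis = new_block("genesis")
b1 = new_block({"tx": "A pays B"}, genesis)
b2 = new_block({"tx": "B pays C"}, b1)

genesis["data"] = "tampered"                      # retroactive change ...
print(b1["prev_hash"] == block_hash(genesis))     # ... False: the link no longer verifies
```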
Blockchains are usually further separated into either public, private or federated blockchains, according to the degree of access that is implemented where a public blockchain is accessible to anyone and a private blockchain is restricted and usually used internally [2; 9]. Directed Acyclic Graph.Directed Acyclic Graphs (DAG) as a data structure used for distributed ledger systems started to emerge with the applications NXT, and IOTA with its _Tangle_, which is especially focused on IoT, micro transactions and devices with lower computational power [2; 3]. Instead of having data blocks hashed into one single line, DAGs are used where connections between transactions in the graph show where a valid transaction is present. Since each transaction is processed individually and does not require the formation of blocks as with a regular blockchain, there is technically no transaction limit for DAGs and therefore they are very scaleable. Hashgraph.The most prominent product for Hashgraphs is _Hedera Hashgraph_ as a next generation DLT that focuses on fairness and transaction speed with regard to the order of events as they unfold. It is very efficient to use as it uses virtual voting for reaching consensus and information about transactions are shared with a gossip protocol. Technically, the data structure used for Hashgraphs is similar to a DAG, which is bound to the sequence of time and the nodes participating in the network. By that they fundamentally differentiate themselves from regular DAGs [2; 3]. Holochain.Contrary to the previous types of DLTs, the major change with _Holochain_ is the shift from a focus on data itself towards an agent-centric structure which some consider the purest form of distributed ledger, as every node virtually is responsible for its own ledger. Validation is done by the introduction of a ruleset to the Holochain network. This ruleset is referred to as DNA and can be used to verify a node and to spot malicious actors. The changes introduced with Holochain make mining obsolete, which makes it a very energy efficient alternative [3]. ### Security Challenges, Opportunities and Use Cases for Blockchains & Distributed Ledger Systems As with regular ICT systems, distributed ledger technologies are not immune to threats. While the introduction of distributed ledgers by itself can mitigate certain risks already, it also introduces new dangers that need to be considered. Especially since there are a number of myths revolving around the technology that make it seem almost invincible [6; 8]. Organizational Challenges.One of the challenges of using blockchains for supply chains or as a mean of ensuring transparency and auditability is that if manual input is required, the people involved are still prone to errors and false input [10]. Additionally, the knowledge about and familiarity with distributed ledgers is still limited generally speaking. This lack of knowledge also hampers the willingness to become an early adopter of the technology [6; 7; 10]. Another challenge would be a lack of standardization for distributed ledgers or the integration into existing standardization frameworks or questions about governance, legal requirements, and regulations, especially for cyber security [11; 12]. This challenge is only aggravated by the fact that there is no unified terminology or vocabulary as well [13]. Technical Challenges.So far there has been intensive research on the vulnerabilities and attacks on distributed ledger systems. 
A major part of these focuses on blockchain systems where _Bitcoin_ and _Ethereum_ are often times the center of attention. Examples for blockchain attacks are 51%-Attacks, Fork Attacks, and Selfish Mining [14]. Additionally, blockchains are often affected by scalability issues and low transaction rates due to mining and consensus mechanisms [15; 16]. For other forms of distributed ledger, like a DAG, there is a strong focus on IoT and IIoT [17], where security features that are present in a blockchain structure are sacrificed for an increase in performance. Opportunities and Use CasesThere are plenty of other fields of use for distributed ledger systems outside of the world of cryptocurrencies. Since that is where the hype started, the introduction of other financial applications that use the technology came to no surprise. This includes not only payment services but also the financial infrastructure and international finance for example [9; 18]. However, distributed ledgers are also already extensively used in supply chains around the globe as well. Instances of that would be shipping and logistics, ethical supply chains, or food safety [6; 10]. Another opportunity for distributed ledgers is their use for government services, be it for voting, record-keeping, transparency or combating fraudulent activities [9; 18]. Other recommended and proposed fields for the use of distributed ledger technology are for example healthcare, identity management, the internet of things, notary and insurance services, and (intellectual) property ownership [6; 19]. ## 3 Methodology Methodologically, our ontology creation process follows the recommendations by Noy and McGuiness [5]. The modelling and creation of the ontology followed an iterative process and was done using Protege. The applied steps can be summarised as follows: 1. In the first step we determine the _domain and scope_ of the ontology. 2. We gather literature about distributed ledger technologies, their components and existing standards, application domains, security vulnerabilities and threats (cf. Sec. 3.2). 3. We define a set of questions "that a knowledge base based on the ontology should be able to answer." [5] We derive use cases and create _competency questions_ based on the collected information (cf. Section 3.3). 4. We create an ontology to express the collected information (cf. Section 4). 5. We evaluate the ontology by providing SPARQL queries to answer the collected competency questions. 6. Finally, we instantiate the developed ontology with named entities and relations extracted from the literature (cf. Section 4.1). ### Ontology Scope The scope of the ontology is a model for a holistic, modular approach on the field of distributed ledger technology that does not focus on a single or a subset of technologies, but rather the wider distributed ledger ecosystem. The goal is to include factors which affect the implementation and operation of distributed ledgers, including existing standards and its usage across the industries. As this ontology does not serve as an overview of one single product or technology stack, the technological composition of a system is merely a starting point for a broader observation that includes its strengths and weaknesses, as well as the general applicability of a system for specific use cases. It therefore serves as an application-neutral knowledge base for the broader field of distributed ledger technology. 
However, to achieve this general applicability it was necessary to focus on the broader picture, which means that detailed technical implementations and processes like message flows and triggers, and involved accounts are out of scope. ### Information Collection The core concept of this ontology encompasses the three major areas _technology_, _business & market use_, and _legal & standardization_. In Section 2 we already discussed major structural differences between distributed ledger systems, including challenges and threats. The research in this mentioned section forms the basis for the technical part of the ontology, with additional information from further comparisons3. Additionally, technological differences between variants of blockchain/distributed ledger systems or smart contract platforms have been discussed and compared in [20; 21], from where we drew additional input for the ontology. Further information on the technological composition of distributed ledgers and possible technology stacks can be found in [22; 23] and from technical reviews4. Footnote 3: [https://f.pinimg.com/originals/9d/d/1/lb9ddl1bbaf5f025f3bd345e6816a4fc16.png](https://f.pinimg.com/originals/9d/d/1/lb9ddl1bbaf5f025f3bd345e6816a4fc16.png) Footnote 4: [https://101blockchains.com/web-3-0-blockchain-technology-stack/](https://101blockchains.com/web-3-0-blockchain-technology-stack/) For use cases and the real-world application of blockchains and distributed ledgers, we build upon existing assessments and research on the possibilities and likelihood of distributed ledgers being used as a tool in different business sectors, which can be found in e.g. [6; 9]56. On top of that, these business sectors can further be split up into specific use cases, with a majority of these being mentioned in Section 2.2 as well. Footnote 5: [https://101blockchains.com/blockchain-digital-transformation/](https://101blockchains.com/blockchain-digital-transformation/) When it comes to organizational standardization of distributed ledger technology, finished work that is entirely focused on it is scarce. Reports like [7] offer a prediction and assessment of when and what to expect regarding standardization areas, others see it as a hurdle which is responsible for slow adoption rates of the technology, as mentioned in 2.2. However, there are many standards in the works and several standardization organizations have set up focus groups or committees to create new normative standardization reference material specifically catering towards distributed ledger technology and blockchain [11]. Legal issues on the other hand are still mostly unresolved and require further considerations [24]. Especially when it comes to governance and (shared) responsibility or liability when using blockchains. The introduction of specified laws and a legal basis, in particular for governance and adoption of the technology, is seen as a crucial factor by the authors of [25]. With ever rising numbers in cyber crime and based on the existing catalogues of laws on technology, it can be expected that distributed ledgers will receive their own set of laws in the future. ### Evaluation The competency questions in Table 1 are based on the collected information (see section 3.2) and grouped into three main categories: * _Technology and Security_ involves questions regarding the components of a DLT system, as well as technical threats and vulnerabilities of systems and components. 
From this category we derive relations between vulnerabilities of the respective systems and components, and attacks that exploit these vulnerabilities. * _Industry and Application_ involves questions about applications, business sectors, and use cases of DLT systems. Relevant relations in this category are the applications of DLT systems in specific use cases, as well as the mapping of use cases to a business sector. * _Standardization and Regulation_ involves questions about standards, technical controls, standardization organizations, and relevant laws. From this category we derive relations regarding compliance to standards, as well as the technical control of specific DLT components. In Listing 1, 2, and 3 we give three SPARQL translations of the competency questions. In 1 we ask for existing technical threats that potentially threaten the consensus algorithm of a system; in 2 we translate question I4, which asks for specific use cases of a DLT system; the query in Listing 3 lists standardization organizations that actively publish DLT standards. \begin{table} \begin{tabular}{l l} \hline \hline ID & Competency Question \\ \hline \multicolumn{3}{l}{**Technology and Security**} \\ T1 & Which components are part of the distributed ledger system? \\ T2 & Which technical threats have to be considered regarding the consensus algorithm of the system? \\ T3 & What are known smart contract vulnerabilities? \\ T4 & What are the data structures of the system? \\ T5 & Which types of DLT attacks could be used against a system or its components? \\ \hline \multicolumn{3}{l}{**Industry and Application**} \\ I1 & Which use cases can be realised with distributed ledger technology? \\ I2 & Which industries could use distributed ledger systems? \\ I3 & Which distributed ledger systems are used for public transportation and smart cars? \\ I4 & Which types of record keeping could be realized with distributed ledger technology? \\ \hline \multicolumn{3}{l}{**Standardization and Regulation**} \\ S1 & Which standardization organizations are active in regards to distributed ledger technology? \\ S2 & Which normative references do exist for distributed ledger systems? \\ S3 & Which industry standards do exist? \\ S4 & What are relevant laws in regards to distributed ledger systems? \\ S5 & What are organizational controls and mitigations for a distributed ledger system and/or component? \\ S6 & Is there an industry initiative that directs and regulates the used distributed ledger system? \\ \hline \hline \end{tabular} \end{table} Table 1: Competency Questions used to evaluate and validate the ontology. ``` ``` PEETIX:<[https://w3id.org/DLTontology#](https://w3id.org/DLTontology#)> SELECT?threatWHERE{ ?threat:threatens[a:ConsensusAlgorithm] } ``` Listing 1 T2 - Which technical threats have to be considered regarding the consensus algorithm of the system? ``` SELECT?idtsystem?usecaseWHERE{ ?dltsystem:isSpecializedFor?usecase. ?usecasea:RecordKeeping. } ``` Listing 2 I4 - Which types of record keeping could be realized with distributed ledger technology? ``` SELECT?stdorg?standardWHERE{ ?stdorga:StandardizationOrganization; :creates?standard. ?dltsystem:compliantTo?standard. } ``` Listing 3 S1 - Which standardization organizations are active in regards to distributed ledger technology? The complete list of questions as SPARQL queries for all competency questions, including the respective classes and properties, can be found in the online documentation: [https://w3id.org/DLTOntology](https://w3id.org/DLTOntology). 
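To illustrate how such queries operate over instance data, the following self-contained sketch (assuming Python with rdflib; the ledger, component and attack names are hypothetical, and only the classes and properties are taken from the ontology) runs the query of Listing 1 against a minimal hand-written graph:

```python
from rdflib import Graph

# Hypothetical instance data; only the vocabulary terms stem from the DLT Ontology.
ttl = """
@prefix : <https://w3id.org/DLTontology#> .

:ExampleLedger    a :DLTSystem ;
                  :hasComponent :ExampleConsensus .
:ExampleConsensus a :ConsensusAlgorithm .
:MajorityAttack   a :Attack ;
                  :threatens :ExampleConsensus .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# Competency question T2 (Listing 1): which threats target the consensus algorithm?
query = """
PREFIX : <https://w3id.org/DLTontology#>
SELECT ?threat WHERE { ?threat :threatens [ a :ConsensusAlgorithm ] }
"""
for row in g.query(query):
    print(row.threat)      # -> https://w3id.org/DLTontology#MajorityAttack
```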
## 4 DLT Ontology and Knowledge Graph Based on the collected information and competency questions we developed _DLT Ontology_. The ontology covers the technical setup and components of a DLT system, security aspects, as well as its applications and use cases. It covers 115 classes and 15 properties, and consists of a total of 571 triples. The DLT ontology provides classes and properties which are used to describe various aspects of a distributed ledger ecosystem. The core concepts are displayed in Figure 1: _DLTSystem_ and _DLTComponent_ describe the essential parts of a DLT system; _Vulnerability_ and _Attack_ connect potential security aspects of the system; _UseCase_, and _BusinessSector_ link existing applications of the described systems. The relations between the included classes are described in detail in Table 2, where a property-centric view of the core concepts, including domains and ranges of the properties of the ontology is provided. ### Knowledge Graph The knowledge graph uses the above introduced DLT Ontology, and is based on the information collection described in Section 3.2. It consists of three parts: (i) standards and legal authorities [11], (ii) technical details, vulnerabilities and security aspects [14], and (iii) use cases and business sectors [26]. Additionally, we reviewed white papers for further information about the technical collocation of DLT systems [27].7 Footnote 7: [https://www.r3.com/reports/corda-technical-whitepaper/](https://www.r3.com/reports/corda-technical-whitepaper/), [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/) In total, the knowledge graph consists of 746 triples; Table 3 lists the number of entities for the core classes. ### Availability of the Resources **DLT Ontology and Knowledge Graph.** The knowledge graph and respective documentation are available online at [https://w3id.org/DLTOntology](https://w3id.org/DLTOntology) under the CC-by-4.0 license; a DOI is provided via the Zenodo repository.8 The \begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}} \hline **Property** & **Description** \\ \hline **canExploit** & The _canExploit_ property links an attack on a distributed ledger system or component to the respective vulnerability, which could be used by an attacker to enable further malicious activity in an effort to manipulate or gain control of the system. \\ \hline **threatens** & This property indicated that there is an existing attack on a distributed ledger system or one of its components. Attacks can manipulate specific components and data, or generally weaken the structural integrity of the entire system. \\ \hline **hasVulnerability** & This property indicates that one or multiple of the components used in a distributed ledger system carry with them one or an array of vulnerabilities. Having an overview of existing vulnerabilities is crucial for securing the system. \\ \hline **mitigates** & Once disaster strikes, it is important to have a series of mitigations in place, to reduce the impact of an attack, malicious event or generally in case of the system breaking down. This property links a technical or organizational mitigation measure to a distributed ledger system or specific component. \\ \hline **hasComponent** & It links an individual component to the DLT system. It is used to show what a system is comprised of and to get a better understanding of the elements and building blocks. 
\\ \hline **Range:** DLTComponent & Such an overview allows for an analysis or the implementation of measures for a single component specifically and to tailor solutions to certain requirements. \\ \hline **isCompliantTo** & When it comes to using distributed ledger systems in an enterprise setting, national and/or international standardization can become a necessity. This property expresses compliance to existing standardization or normative references on a system or component level. \\ \hline **controls** & Implemented controls can secure components and the entire system itself. This property links a technical or organizational control measure to a distributed ledger system or individual component. \\ \hline **createsStandard** & Behind every normative reference material and standardization there is an organization with the needed authority to publish such material, which can be either a national institute or an international organization. This property links a standard to its organization. \\ \hline **isRegulatedBy** & As with regular ICT-systems, a distributed ledger system will have to comply with the legal status of the respective country it is used in by any organization. This property \\ \hline **_Range:_ Law & connects existing legislation for ICT and DLT systems. \\ \hline **isSpecializedFor** & Not all implementations of distributed ledger systems are the same or similar in their nature and complexion. This property links existing DLT systems which are tailored for the delivery of a specific use case. \\ \hline **hasUseCase** & This property is used to show which use cases could be used in a selected industry. Not all forms of usage of distributed ledger systems are fit for all types of business. \\ \hline **isUsedFor** & There are certain distributed ledger systems that are specialized for the use in one specific field or industry. This property is used to highlight this exact relation. \\ \hline **senseFrom** & It is entirely possible for several organizations of an industry to form an initiative to steer and develop the future of distributed ledger technology within that industry in a common effort. This property shows the origins of an industry initiative with regards to the industry of involved parties. \\ \hline **hasBusinessSector** & Distributed ledger technology can be used to realize a variety of different use cases. However, not all of them are a fit for any field or industry. This property highlights the connection between a use case and the specific business sector/industry it is used in. \\ \hline \end{tabular} \end{table} Table 2: Main Properties of the DLT Ontology. online documentation consists of a comprehensive overview, examples of how to use the ontology, and SPARQL queries for the respective competency questions. **SPARQL Interface.** We provide a SPARQL query interface at [https://w3id.org/DLTOntology](https://w3id.org/DLTOntology) which allows to execute the competency questions provided in Section 3.3. The interface is based on the Comunica SPARQL Widget [28]. ## 5 Related Work Glossaries and Vocabularies for Distributed Ledgers.There have been made various attempts and proposals on the creation of terminologies, glossaries and vocabularies regarding blockchains and distributed ledgers. Most importantly, there is the ISO working group TC 307 [29], the ITU-T _Focus Group on Application of Distributed Ledger Technology_[30], and the German Institute for Standardization (DIN) [31]. 
On top of that there is a multitude of private companies and blockchain enthusiasts that offer their own glossaries for blockchains and distributed ledgers like Blockchainhub Berlin9, 101 Blockchains10, and ConsenSys [32]. Footnote 9: [https://blockchainhub.net/blockchain-glossary/](https://blockchainhub.net/blockchain-glossary/) Footnote 10: [https://101blockchains.com/blockchain-definitions/](https://101blockchains.com/blockchain-definitions/) _Ontologies for Distributed Ledgers.EthOn_ (the Ethereum Ontology) [33] is an approach to model the major concepts of the Ethereum blockchain, e.g., Blocks, Accounts, Transactions, etc. The ontology itself is based on OWL [34] and there exist extensions that can be used to describe ERC20 compliant tokens11 and smart contracts.12 While EthOn focuses on modelling the technical details of a particular blockchain, we propose an approach that allows to model DLT systems, their specific use cases, and their potential security threats, independent of the concrete implementation. Figure 1: Overview of the core concepts of the DLT Ontology. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline Triples & Standard & Std.Orga. & DLTComponent & UseCase & BusinessSector & Vuln. & Attack \\ \hline 746 & 18 & 8 & 9 & 55 & 9 & 7 & 11 \\ \hline \hline \end{tabular} \end{table} Table 3: Number of triples and entities in the knowledge graph. Also related to our efforts is the BLONDiE ontology (Blockchain Ontology with Dynamic Extensibility) [35]. BLONDiE provides classes and properties to describe the structure and related information of the three most prominent blockchain projects - Bitcoin, Ethereum and Hyperledger. However, while our goal is to cover the overall ecosystem and landscape of existing DLT technologies, BLONDiE describes concepts that specifically relate to the implementation of the three technologies (e.g., "Ethereum Payload", and "Hyperledger Transaction"). ## 6 Conclusion Although distributed ledger technology has seen an enormous increase in usage and spread over the last decade, there are still many misconceptions and misunderstandings revolving around it. First and foremost is the equalization of distributed ledger technology with blockchains, where the latter is simply one of many manifestations of the former. Furthermore, there is a lack of holistic vocabularies and conceptualization of the technology as a whole, rather than just one of its manifestations or even a single product. To tackle and overcome present issues, we have contributed the following: * We developed an ontology for distributed ledgers. This ontology considers various aspects of DLT systems, including threats, vulnerabilities, and the legal situation and standardization of the technology; an overview of the market situation and real-world applications is also included. * We have demonstrated the use of the ontology by building a knowledge graph containing entities and relations for all the core classes of the ontology, i.e. (i) standards and legal authorities, (ii) components and technical details of DLT systems, as well as corresponding vulnerabilities and (iii) exemplary use cases. * We have validated the ontology based on a pre-defined set of competency questions. On top of that, a set of SPARQL queries is provided to evaluate the competency questions and thus the capabilities of the knowledge graph. 
* The ontology and the knowledge graph are available for public use under an open license; the documentation and downloads can be found at [https://w3id.org/DLTOntology](https://w3id.org/DLTOntology). One limitation of our work in this regard is the lack of existing data that can be readily integrated into a knowledge graph: while it was possible to gather information on the technical components of DLT systems, existing standardization documents, and potential use cases in different industries, doing so requires intensive research, extensive manual work, and expert knowledge in the domain.

## Acknowledgements

This research was funded by the Josef Ressel Center for Blockchain Technologies & Security Management (BLOCKCHAINS). Sebastian Neumaier received funding through the Austrian Research Promotion Agency (FFG) Bridge project 880592 "SecDM - Sichere und vertrauenswürdige on-premise data markets". The financial support by the FFG and the Christian Doppler Research Association is gratefully acknowledged.
2302.00063
Quantum energy inequalities in integrable models with several particle species and bound states
We investigate lower bounds to the time-smeared energy density, so-called quantum energy inequalities (QEI), in the class of integrable models of quantum field theory. Our main results are a state-independent QEI for models with constant scattering function and a QEI at one-particle level for generic models. In the latter case, we classify the possible form of the stress-energy tensor from first principles and establish a link between the existence of QEIs and the large-rapidity asymptotics of the two-particle form factor of the energy density. Concrete examples include the Bullough-Dodd, the Federbush, and the $O(n)$-nonlinear sigma models.
Henning Bostelmann, Daniela Cadamuro, Jan Mandrysch
2023-01-25T17:45:28Z
http://arxiv.org/abs/2302.00063v2
# Quantum energy inequalities in integrable models with several particle species and bound states ###### Abstract We investigate lower bounds to the time-smeared energy density, so-called quantum energy inequalities (QEI), in the class of integrable models of quantum field theory. Our main results are a state-independent QEI for models with constant scattering function and a QEI at one-particle level for generic models. In the latter case, we classify the possible form of the stress-energy tensor from first principles and establish a link between the existence of QEIs and the large-rapidity asymptotics of the two-particle form factor of the energy density. Concrete examples include the Bullough-Dodd, the Federbush, and the \(O(n)\)-nonlinear sigma models. ## 1 Introduction It is well known that the energy operator in quantum field theory (QFT) is positive, while the energy density \(T^{00}\) may be locally negative. However, for physically reasonable theories, bounds on this negativity are expected when local averages are taken: quantum energy inequalities (QEIs). They may, for example, take the form \[\langle\varphi,\int dtg(t)^{2}T^{00}(t,x)\varphi\rangle\geq-c_{g}\|\varphi\|^{ 2}, \tag{1.1}\] where the constant \(c_{g}\) does not depend on \(\varphi\), and the inequality holds for a suitably large set of vectors \(\varphi\). Without these bounds, accumulation of negative energy might lead to violations of the second law of thermodynamics [10]. They also have significant importance in semiclassical gravity, where the expectation value of \(T^{\mu\nu}\) appears on the right-hand side of the Einstein equations. In this context, QEIs can yield constraints on exotic spacetime geometries and lead to generalized singularity theorems extended from classical results in general relativity; see [21, Sec. 5] for a review. QEIs have been established quite generically in linear QFTs, including QFTs on curved spacetimes; see [14] for a review. They are also known in 1+1-dimensional conformal QFTs [15]. However, their status is less clear in self-interacting models, i.e., models with a nontrivial scattering matrix between particles. Some generic results, weaker than (1.1), can be obtained from operator product expansions [16]. Concrete results in models with self-interaction are rare, though. The situation is somewhat better in 1+1-dimensional integrable models. In these models, the scattering matrix is constrained to be factorizing but nonetheless allows for a large class of interactions; see, e.g., [17, 18, 19]. A QEI in this context was first established in the Ising model [1]. Also, a QEI at one-particle level (i.e., where (1.1) holds for one-particle states \(\varphi\)) has been obtained more generally for models with one scalar particle type and no bound states [1]. The class of integrable models is much richer, though - they can also describe several particle species with a more complicated scattering matrix between them or particles with inner degrees of freedom; further, these particles may form bound states1. This article aims to generalize the results of [1, 1] to these cases. Footnote 1: Bound states are understood as poles of the scattering matrix within the so-called physical strip. See [11, Sec. 2.2] for further details. As an a priori problem, one may ask what form the energy density operator \(T^{00}\) takes in these models, even at one-particle level. 
The classical Lagrangian is often used as heuristic guidance; however, if one takes an inverse scattering approach to integrable models, starting by prescribing the two-particle scattering function, then a classical Lagrangian may not even be available in all cases. Instead, we will restrict the possible form of the energy density starting from generic physical assumptions (such as the continuity equation, but initially disregarding QEIs); see Theorem 3.2 below. We then ask whether QEIs can hold for these energy densities. Our main results are as follows: For a class of models with \(\mathit{rapidity}\)-\(\mathit{independent}\) scattering function, with a canonical choice of energy density, we establish a QEI in states of arbitrary particle number (Theorem 4.3). For generic scattering functions, we give necessary and sufficient criteria for QEIs to hold _at one-particle level_ (Theorem 5.1); it turns out that the existence of QEIs critically depends on the large-rapidity behaviour of the two-particle form factor \(F_{2}\) of the energy density. We apply our results to several concrete examples, namely, to the Bullough-Dodd model (Sec. 7.1) which has bound states, to the Federbush model (Sec. 7.2) as an interacting model with rapidity-independent scattering function, and to the \(O(n)\) nonlinear sigma model (Sec. 7.3) which features several particle species. In particular, we investigate how QEIs further restrict the choice of the stress-energy tensor in these models, sometimes fixing it uniquely. In short, the remainder of this article is organized as follows. We recall some background on integrable QFTs in Section 2 and discuss the possible form of the energy density in Section 3. Section 4 establishes a QEI in models with constant scattering function, and Section 5 for more generic scattering functions but only at one-particle level. For controlling the large-rapidity asymptotics of \(F_{2}\), critically important to our results in Section 5, we first explain the relation between the scattering function and the so-called "minimal solution" in Section 6, with technical details given in the appendix (which contains known facts as well as original results). This is then applied to examples in Section 7. Conclusion and outlook follow in Section 8. ## 2 Preliminaries ### General notation We will work on \(1+1\)-dimensional Minkowski space \(\mathbb{M}\). The Minkowski metric \(g\) is conventionally chosen to be \(\mathrm{diag}(+1,-1)\) and the Minkowski inner product will be denoted by \(p.x=g_{\mu\nu}p^{\mu}x^{\nu}\). A single parameter, called _rapidity_, conveniently parametrizes the mass shell on \(\mathbb{M}\). In this parameterization, the momentum at rapidity \(\theta\) is given by \(p^{0}(\theta;m):=m\operatorname{ch}\theta\) and \(p^{1}(\theta;m):=m\operatorname{sh}\theta\), where \(m>0\) denotes the mass. We will use \(\theta,\eta,\lambda\) to denote real and \(\zeta\) to denote complex rapidities. Introducing the open and closed strips, \(\mathbb{S}(a,b):=\mathbb{R}+i(a,b)\) and \(\mathbb{S}[a,b]:=\mathbb{R}+i[a,b]\), respectively, the region \(\mathbb{S}[0,\pi]\) will be of particular significance and is referred to as the _physical strip_. In the following, let \(\mathcal{K}\) be a finite-dimensional complex Hilbert space with inner product \((\cdot,\cdot)\), linear in the second position. 
We denote its extension to \(\mathcal{K}^{\otimes 2}\) as \((\cdot,\cdot)_{\mathcal{K}^{\otimes 2}}\) and the induced norm as \(\|\cdot\|_{\mathcal{K}^{\otimes 2}}\); i.e., for \(v_{i},w_{i}\in\mathcal{K}\), \(i=1,2\) we have \((v_{1}\otimes v_{2},w_{1}\otimes w_{2})_{\mathcal{K}^{\otimes 2}}=(v_{1},w_{1})(v_{2 },w_{2})\). For computations, it will be convenient to choose an orthonormal basis \(\{e_{\alpha}\},\alpha\in\{1,\ldots,\dim\mathcal{K}\}\). In this basis, we denote \(v\in\mathcal{K}^{\otimes m}\) and \(w\in\mathcal{B}(\mathcal{K}^{\otimes m},\mathcal{K}^{\otimes n})\) in vector and tensor notation by \[v^{\boldsymbol{\alpha}}:=(e_{\boldsymbol{\alpha}},v),\quad w^{\boldsymbol{ \alpha}}_{\boldsymbol{\beta}}:=(e_{\boldsymbol{\alpha}},we_{\boldsymbol{ \beta}}). \tag{2.1}\] Operators on \(\mathcal{K}\) or \(\mathcal{K}^{\otimes 2}\) will be denoted by uppercase Latin letters. This also applies to vectors in \(\mathcal{K}^{\otimes 2}\), which are identified with operators on \(\mathcal{K}\) as follows: For an antilinear involution \(J\in\mathcal{B}(\mathcal{K})\) (to be fixed later), the map \(A\mapsto\hat{A}\) defined by \[\forall u,v\in\mathcal{K}:\quad(u,\hat{A}v):=(u\otimes Jv,A)_{\mathcal{K}^{ \otimes 2}} \tag{2.2}\] yields a vector space isomorphism between \(\mathcal{K}^{\otimes 2}\) and \(\mathcal{B}(\mathcal{K})\). In particular, we consider the special element \(I_{\otimes 2}\in\mathcal{K}^{\otimes 2}\) defined by \(\widehat{I_{\otimes 2}}=\mathbb{1}_{\mathcal{K}}\). For an arbitrary orthonormal basis \(\{e_{\alpha}\}_{\alpha}\) of \(\mathcal{K}\) it is explicitly given by \[I_{\otimes 2}=\sum_{\alpha}e_{\alpha}\otimes Je_{\alpha}. \tag{2.3}\] _Remark 2.1_.: \(I_{\otimes 2}\) is invariant under the action of \(U^{\otimes 2}\) for any \(U\in\mathcal{B}(\mathcal{K})\) with \(U\) unitary or anti-unitary and \([U,J]=0\). ### One-particle space and scattering function **Definition 2.2**.: _A **one-particle little space (with a global symmetry)**\((\mathcal{K},V,J,M)\) is given by a finite-dimensional Hilbert space \(\mathcal{K}\), a unitary representation \(V\) of a compact Lie group \(\mathcal{G}\) on \(\mathcal{K}\), an antiunitary involution \(J\) on \(\mathcal{K}\), and a linear operator \(M\) on \(\mathcal{K}\) with strictly positive spectrum. We further assume that \(M\) commutes with \(V(g)\) and \(J\)._ Given such a little space \((\mathcal{K},V,J,M)\), we define the _one-particle space_\(\mathcal{H}_{1}:=L^{2}(\mathbb{R},\mathcal{K})\cong L^{2}(\mathbb{R})\otimes \mathcal{K},\) on which we consider the (anti-)unitary operators, \(\varphi\in\mathcal{H}_{1}\), \[(U_{1}(x,\lambda)\varphi)(\theta) :=e^{ip(\theta;M).x}\varphi(\theta-\lambda),\quad(x,\lambda)\in \mathcal{P}_{+}^{\uparrow} \tag{2.4}\] \[(U_{1}(j)\varphi)(\theta) :=J\varphi(\theta),\] (2.5) \[(V_{1}(g)\varphi)(\theta) :=V(g)\varphi(\theta),\quad g\in\mathcal{G}. \tag{2.6}\] This defines a unitary strongly continuous representation of the proper Poincare group \(\mathcal{P}_{+}\) and of \(\mathcal{G}\), where the antiunitary \(U_{1}(j)\) is the PCT operator, representing spacetime reflection. We will denote the spectrum of the mass operator \(M\) as \(\mathfrak{M}\subset(0,\infty)\) and its spectral projections as \(E_{m},m\in\mathfrak{M}\). 
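As a small computational illustration of the conventions above (a minimal sketch; the helper names `p` and `boost` and the numerical test values are ours, not part of the construction), the following snippet checks that the rapidity parametrization stays on the mass shell, \(p^{0}(\theta;m)^{2}-p^{1}(\theta;m)^{2}=m^{2}\), and that a boost by \(\lambda\) acts on momenta as the rapidity shift \(\theta\mapsto\theta+\lambda\), which is the transformation law underlying Eq. (2.4).

```python
import numpy as np

def p(theta, m):
    """Two-momentum p(theta; m) = (m ch(theta), m sh(theta)) on the mass shell."""
    return np.array([m * np.cosh(theta), m * np.sinh(theta)])

def boost(lam):
    """Lorentz boost Lambda(lam) in 1+1 dimensions."""
    return np.array([[np.cosh(lam), np.sinh(lam)],
                     [np.sinh(lam), np.cosh(lam)]])

m, theta, lam = 1.7, 0.3, -1.1      # arbitrary test values (our choice)
q = p(theta, m)

# on-shell condition: (p^0)^2 - (p^1)^2 = m^2
assert np.isclose(q[0]**2 - q[1]**2, m**2)

# a boost acts as a rapidity shift: Lambda(lam) p(theta; m) = p(theta + lam; m)
assert np.allclose(boost(lam) @ q, p(theta + lam, m))
print("mass-shell and boost-as-rapidity-shift checks passed")
```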
Moreover, introduce the _total energy-momentum operator_\(P^{\mu}\) on \(\mathcal{H}_{1}^{\otimes 2}\) by \[(P^{\mu}\varphi)(\boldsymbol{\theta}):=P^{\mu}(\boldsymbol{\theta})\varphi( \boldsymbol{\theta}),\quad P^{\mu}(\theta_{1},\theta_{2}):=p^{\mu}(\theta_{1} ;M)\otimes\mathbb{1}_{\mathcal{K}}+\mathbb{1}_{\mathcal{K}}\otimes p^{\mu}( \theta_{2};M),\quad\varphi\in\mathcal{H}_{1}^{\otimes 2}, \tag{2.7}\] as well as the _flip operator_\(\mathbb{F}\in\mathcal{B}(\mathcal{K}^{\otimes 2})\) given by \(\mathbb{F}(u_{1}\otimes u_{2})=u_{2}\otimes u_{1}\) (\(u_{1,2}\in\mathcal{K}\)). **Definition 2.3**.: _Let \((\mathcal{K},V,J,M)\) be a one-particle little space. A meromorphic function \(S:\mathbb{C}\to\mathcal{B}(\mathcal{K}^{\otimes 2})\) with no poles on the real line is called **S-function** iff for all \(\zeta,\zeta^{\prime}\in\mathbb{C}\) the following holds:_ 1. _Unitarity:_ \(S(\bar{\zeta})^{\dagger}=S(\zeta)^{-1}\)_._ 2. _Hermitian analyticity:_ \(S(\zeta)^{-1}=S(-\zeta)\)_._ 3. _CPT invariance:_ \(J^{\otimes 2}\mathbb{F}S(\zeta)\mathbb{F}J^{\otimes 2}=S(\zeta)^{\dagger}\)_._ 4. _Yang-Baxter equation:_ \((S(\zeta)\otimes\mathbb{1}_{\mathcal{K}})(\mathbb{1}_{\mathcal{K}}\otimes S( \zeta+\zeta^{\prime}))(S(\zeta^{\prime})\otimes\mathbb{1}_{\mathcal{K}})=( \mathbb{1}_{\mathcal{K}}\otimes S(\zeta^{\prime}))(S(\zeta+\zeta^{\prime}) \otimes\mathbb{1}_{\mathcal{K}})(\mathbb{1}_{\mathcal{K}}\otimes S(\zeta))\)_._ 5. _Crossing symmetry:_ \(\forall\,u_{i},v_{i}\in\mathcal{K},i=1,2:\)__ \((u_{1}\otimes u_{2},S(i\pi-\zeta)\,v_{1}\otimes v_{2})_{\mathcal{K}^{\otimes 2 }}=(Jv_{1}\otimes u_{1},S(\zeta)\,v_{2}\otimes Ju_{2})_{\mathcal{K}^{\otimes 2 }}\)_._ 6. _Translational invariance:_ \((E_{m}\otimes E_{m^{\prime}})S(\zeta)=S(\zeta)(E_{m^{\prime}}\otimes E_{m}), \quad m,m^{\prime}\in\mathfrak{M}\)_._ 7. \(\mathcal{G}\) _invariance:_ \(\forall\,g\in\mathcal{G}:\quad[S(\zeta),V(g)^{\otimes 2}]=0\)_._ _An S-function is called **regular** iff_ 1. _Regularity:_ \(\exists\kappa>0:\)__\(S\restriction_{(-\kappa,\kappa)}\) _is analytic and bounded. In this case,_ \(\kappa(S)\) _denotes the supremum of such_ \(\kappa\)_'s._ _Remark 2.4_.: The S-function (also referred to as auxiliary scattering function [1, Eq. (2.7)]) is the central object to define the interaction of the model. It is closely related to the two-to-two-particle scattering matrix of the model, differing from it only by a "statistics factor", namely \(-1\) on a product state of two fermions and \(+1\) on fermion-boson- or boson-boson-vectors. The full scattering matrix is given as a product of two-to-two-particle scattering matrices of all participating combinations of one-particle states (see, e.g., [1, Sec. 2] and [1, Secs. 5, 6, 7]). _Remark 2.5_.: In examples below, we will choose a basis of \(\mathcal{K}\) such that \(J\) is given by \((Jv)^{\alpha}=\overline{v^{\alpha}}\) for \(v\in\mathcal{K}\); here \(\alpha\mapsto\bar{\alpha}\) is an involutive permutation on \(\{1,\ldots,\dim\mathcal{K}\}\), i.e., \(\overline{\overline{u}}=\alpha\). Then the relations (S1), (S2), (S5), and (S3) amount to unitarity plus the following conditions: \[S_{\alpha\beta}^{\gamma\delta}(\zeta)=\overline{S_{\gamma\delta}^{\alpha\beta} (-\bar{\zeta})}=S_{\delta\bar{\gamma}}^{\bar{\beta}\bar{\alpha}}(\zeta),\quad S _{\alpha\beta}^{\gamma\delta}(i\pi-\zeta)=S_{\beta\bar{\delta}}^{\bar{\alpha} \gamma}(\zeta). 
\tag{2.8}\] ### Integrable models, form factors, and the stress-energy tensor From the preceding data - one-particle little space \((\mathcal{K},V,J,M)\) and S-function \(S\) - it is well-known how to construct an integrable model of quantum field theory (inverse scattering approach). This can be done at the level of \(n\)-point functions of local fields [16, 17] or more rigorously in an operator algebraic setting, at least provided that \(S\) is regular, analytic in the physical strip, and satisfies an intertwining property [1]. We give a brief overview of the construction here, focussing only on aspects that will be relevant in the following. The interacting state space \(\mathcal{H}\), on which our local operators will act, is an \(S\)-symmetrized Fock space generated by \(S\)-twisted creators \(z^{\dagger}\) and annihilators \(z\) known as _ZF operators_[13, 14]. They are defined as operator-valued distributions \(h\mapsto z^{\sharp}(h)\), \(h\in\mathcal{H}_{1}=L^{2}(\mathbb{R},\mathcal{K})\) with \(z(h):=(z^{\dagger}(h))^{\dagger}\) and \[(z^{\dagger}(h)\Psi)_{n}:=\sqrt{n}\operatorname{Symm}_{S}(h\otimes\Psi_{n-1}). \tag{2.9}\] Here \(\Psi_{n}\) is the \(n\)-particle component of \(\Psi\in\mathcal{H}\), and \(\operatorname{Symm}_{S}\) denotes \(S\)-symmetrization: For \(n=2\) (other cases will not be needed here) and a \(\mathcal{K}^{\otimes 2}\)-valued function in two arguments, it can be defined as \[\operatorname{Symm}_{S}f:=\tfrac{1}{2}(1+S_{\leftarrow})f,\qquad S_{ \leftarrow}f(\zeta_{1},\zeta_{2}):=S(\zeta_{2}-\zeta_{1})f(\zeta_{2},\zeta_{ 1}). \tag{2.10}\] Products of \(z^{\dagger}\) and \(z\) can be linearly extended to arguments in tensor powers of \(\mathcal{H}_{1}\). With \(h_{1},h_{2}\in\mathcal{H}_{1}\) and \(S^{i\pi}:=S(i\pi+\zeta)\) the _ZF algebra_ relations amount to \[z^{\dagger}z^{\dagger}((1-S_{\leftarrow})(h_{1}\otimes h_{2})) =0, \tag{2.11}\] \[zz(J^{\otimes 2}(1-S_{\leftarrow})(h_{1}\otimes h_{2})) =0,\] (2.12) \[zz^{\dagger}(h_{1}\otimes h_{2})-z^{\dagger}z((1\otimes J)S_{ \leftarrow}^{i\pi}(Jh_{1}\otimes h_{2})) =\langle h_{1},h_{2}\rangle\,\mathbb{1}. \tag{2.13}\] Now any local operator \(A\) of the model can be expanded into a series of the form \[A=\sum_{n=0}^{\infty}\mathcal{O}_{n}[F_{n}^{[A]}]. \tag{2.14}\] (see [1] for the case \(\dim\mathcal{K}=1\)). Here the \(F_{n}^{[A]}\) are meromorphic functions of \(n\) variables depending linearly on \(A\) which are known as the _form factors_ of \(A\); they satisfy a number of well-known properties, the _form factor equations_[17]. In line with the literature, we will call \(F_{n}\) the _n-particle_ form factor, though note that expectation values in _n_-particle _states_ generically have contributions from all zero- to \(2n\)-particle form factors. The symbols \(\mathcal{O}_{n}\) are given by \[\mathcal{O}_{0}[F_{0}] =F_{0}\mathbb{1}, \tag{2.15}\] \[\mathcal{O}_{1}[F_{1}] =z^{\dagger}(F_{1})+z(JF_{1}(\cdot+i\pi)),\] (2.16) \[\mathcal{O}_{2}[F_{2}] =\frac{1}{2}z^{\dagger}z^{\dagger}(F_{2})+z^{\dagger}z((1\otimes J )F_{2}(\cdot,\cdot+i\pi))+\frac{1}{2}zz(J^{\otimes 2}F_{2}(\cdot+i\pi,\cdot+i\pi)), \tag{2.17}\] and analogously for higher \(n\), but only \(n\leq 2\) will be needed in the following. Conversely, given \(F_{n}\) that fulfill the form factor equations and suitable regularity conditions, (2.14) defines a local operator \(A\). 
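To make the \(S\)-symmetrization (2.10) concrete, here is a minimal numerical sketch for a scalar S-function (\(\dim\mathcal{K}=1\)); the function names and the test function are ours, and for illustration we take the constant Ising-type choice \(S\equiv-1\), although any scalar \(S\) with \(S(\zeta)S(-\zeta)=1\) would serve. It checks that \(\operatorname{Symm}_{S}f\) satisfies the exchange relation \(f(\theta_{1},\theta_{2})=S(\theta_{2}-\theta_{1})f(\theta_{2},\theta_{1})\) and that \(\operatorname{Symm}_{S}\) is a projection.

```python
import numpy as np

def symm_S(f, S):
    """S-symmetrization, Eq. (2.10):
    (Symm_S f)(t1, t2) = (f(t1, t2) + S(t2 - t1) f(t2, t1)) / 2,
    for a scalar S-function satisfying S(z) S(-z) = 1."""
    return lambda t1, t2: 0.5 * (f(t1, t2) + S(t2 - t1) * f(t2, t1))

S = lambda z: -1.0                                 # constant Ising-type S-function
f = lambda t1, t2: np.exp(-t1**2) * np.cosh(t2)    # a generic non-symmetric test function

g = symm_S(f, S)
t1, t2 = 0.4, -1.3

# exchange relation (S-symmetry of the symmetrized function): g(t1, t2) = S(t2 - t1) g(t2, t1)
assert np.isclose(g(t1, t2), S(t2 - t1) * g(t2, t1))

# Symm_S is a projection: applying it twice changes nothing
assert np.isclose(symm_S(g, S)(t1, t2), g(t1, t2))
print("S-symmetry and projection checks passed")
```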
The series (2.14) is to be read in the sense of quadratic forms on \(\mathcal{D}\times\mathcal{D}\) with a dense domain \(\mathcal{D}\subset\mathcal{H}\), which we can take to consist of elements \(\Psi=(\Psi_{n})\in\mathcal{H}\), where each \(\Psi_{n}\) is smooth and compactly supported and \(\Psi_{n}=0\) for large enough \(n\). With suitably chosen \(F_{n}\), we can also regard each \(\mathcal{O}_{n}[F_{n}]\) as an operator on \(\mathcal{D}\), for example for \(n=1\) if \(F_{1}\) and \(F_{1}(\cdot+i\pi)\) are square-integrable. In the following, we are interested in the stress-energy operator \[A=T^{\mu\nu}(g^{2})=\int dt\,g(t)^{2}\,T^{\mu\nu}(t,0), \tag{2.18}\] averaged in time with a non-negative test function \(g^{2}\), \(g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\), and at \(x^{1}=0\) without loss of generality; the integral is to be read weakly on \(\mathcal{D}\times\mathcal{D}\). Also, we will focus on its two-particle coefficient \(F_{2}^{[A]}\); this is because: 1. In some models, the energy density has only these coefficients, i.e., \(F_{n}^{[A]}=0\) for \(n\neq 2\) (see Sec. 4). 2. One-particle expectation values, which will partly be our focus, are determined solely by the coefficients \(F_{n}^{[A]}\) for \(n\leq 2\). 3. The coefficients with \(n<2\) are not important for QEI results since the zero-point energy is expected to vanish (\(F_{0}^{[A]}=0\)) and the coefficient \(F_{1}^{[A]}\) yields only bounded contributions to the expectation values of \(A\) (see Remark 5.3 below). Under suitable regularity conditions, one has from (2.18), \[F_{2}^{[A]}(\mathbf{\zeta})=\int dt\,g(t)^{2}F_{2}^{\mu\nu}(\mathbf{\zeta};t,0),\quad \text{where }F_{2}^{\mu\nu}(\mathbf{\zeta};x):=F_{2}^{[T^{\mu\nu}(x)]}(\mathbf{\zeta}). \tag{2.19}\] Assuming \(F_{0}^{[A]}=0\), the expectation value of the (time-smeared) stress-energy tensor in one-particle states \(\varphi\in\mathcal{H}_{1}\cap\mathcal{D}\) is then given by \[\langle\varphi,T^{\mu\nu}(g^{2})\varphi\rangle=\int d\theta d\eta\,dt\,g(t)^{ 2}\left(\varphi(\theta),\widehat{F}_{2}^{\mu\nu}(\theta,\eta+i\pi;t,0)\varphi (\eta)\right) \tag{2.20}\] with \(\widehat{F_{2}}\) as in Eq. (2.2) for each \(\theta\), \(\eta\). The above analysis should apply whenever \(T^{\mu\nu}\) is given as a local Wightman field with sufficiently regular high-energy behaviour. In the present article, we will however proceed in the opposite way: We will select a suitable form factor \(F_{2}^{\mu\nu}(\mathbf{\zeta};x)\) for the stress-energy tensor, then use Eq. (2.20) to _define_\(T^{\mu\nu}(g^{2})\) as a quadratic form at one-particle level, i.e., on \(\mathcal{H}_{1}\cap\mathcal{D}\), or more generally, the expansion (2.14) to define it for arbitrary particle numbers, as a quadratic form on \(\mathcal{D}\). ## 3 The stress-energy tensor at one-particle level This section analyses what form the stress-energy tensor \(T^{\mu\nu}\) and, in particular, the energy density \(T^{00}\) can take in our setup. Since our models do not necessarily arise from a classical Lagrangian, we study the stress-energy tensor using a "bootstrap" approach: We require a list of physically motivated properties for \(T^{\mu\nu}\) and study which freedom of choice remains. Here we restrict our attention to the one-particle level, where the stress-energy tensor is determined by its form factor \(F_{2}^{\mu\nu}\), as explained in Section 2.3. For simplicity, we will list our axioms directly for the function \(F_{2}^{\mu\nu}\); see properties (T1)-(T12) in Definition 3.1 below. 
These are motivated by the features of the full stress-energy tensor as follows. First, \(T^{\mu\nu}(x)\) should be a local field, i.e., commute with itself at spacelike separation. This property is well-studied in the form factor program to integrable systems and is expected to be equivalent to the form factor equations [15, Sec. 2]. The same relations can be justified rigorously in an operator algebraic approach, at least for a single scalar field (\(\dim\mathcal{K}=1\)) without bound states [14], with techniques that should apply as well for more general \(\mathcal{K}\) and in the presence of bound states.2 At one-particle level, where the form factor equations simplify, this yields properties (T1)-(T4) below, with hermiticity of \(T^{\mu\nu}(x)\) implying (T5). The pole set \(\mathfrak{P}\) appearing below is directly connected to the bound state poles of the S-function[13, 14, 15, 16] and will be specified in the examples (Sec. 7). Footnote 2: K. Shedid Attifa, work in progress Further, \(T^{\mu\nu}\) should behave covariantly under proper Poincare transformations as a CPT-invariant symmetric 2-tensor (T6), (T7), (T8). It should be conserved, i.e., fulfill the continuity equation, \(\partial_{\mu}T^{\mu\nu}=0\) (T9), and integrate to the total energy-momentum operator, \(P^{\mu}=\int T^{0\mu}(x^{0},x^{1})dx^{1}\) (T10). Lastly, we demand that \(T^{\mu\nu}\) is invariant under the action of \(\mathcal{G}\) (T11) and, optionally, covariant under parity inversion (T12). **Definition 3.1**.: _Given a little space \((\mathcal{K},V,J,M)\), an S-function \(S\), and a subset \(\mathfrak{P}\subset\mathbb{S}(0,\pi)\), a **stress-energy tensor at one-particle level** (with poles \(\mathfrak{P}\)) is formed by functions \(F_{2}^{\mu\nu}:\mathbb{C}^{2}\times\mathbb{M}\to\mathcal{K}^{\otimes 2}\), \(\mu,\nu=0,1\), which for arbitrary \(\mathbf{\zeta}=(\zeta_{1},\zeta_{2})\in\mathbb{C}^{2}\), \(x\in\mathbb{M}\) satisfy_ 1. _Analyticity:_ \(F_{2}^{\mu\nu}(\zeta_{1},\zeta_{2};x)\) _is meromorphic in_ \(\zeta_{2}-\zeta_{1}\)_, where the poles within_ \(\mathbb{S}(0,\pi)\) _are all first-order and_ \(\mathfrak{P}\) _denotes the set of poles in that region._ 2. _Regularity:_ _There exist constants_ \(a,b,r\geq 0\) _such that for all_ \(|\Re(\zeta_{2}-\zeta_{1})|\geq r\) _and_ \(\Im(\zeta_{2}-\zeta_{1})\in[0,\pi]\) _it holds that_ \(\max_{\mu,\nu}||F_{2}^{\mu\nu}(\zeta_{1},\zeta_{2};x)||_{\mathcal{K}^{\otimes 2}} \leq a\exp b\left(|\Re\zeta_{1}|+|\Re\zeta_{2}|\right).\)__ 3. _S-symmetry:_ \(F_{2}^{\mu\nu}(\zeta;x)=S(\zeta_{2}-\zeta_{1})F_{2}^{\mu\nu}(\overleftarrow{ \mathbf{\zeta}};x).\)__ 4. _S-periodicity:_ \(F_{2}^{\mu\nu}(\mathbf{\zeta};x)=\mathbb{F}F_{2}^{\mu\nu}(\zeta_{2},\zeta_{1}+i2 \pi;x).\)__ 5. 
_Hermiticity:_ \(F_{2}^{\mu\nu}(\mathbf{\zeta};x)=\mathbb{F}J^{\otimes 2}F_{2}^{\mu\nu}( \overleftarrow{\mathbf{\zeta}}+i\mathbf{\pi};x).\)__ _._ * _Lorentz symmetry:_ \(\quad F_{2}^{\mu\nu}=F_{2}^{\mu\mu}\)_._ * _Poincare covariance:_ \(\quad\) _For all_ \(\lambda\in\mathbb{R}\) _and_ \(a\in\mathbb{M}\) _it holds that_ \[\Lambda(\lambda)^{\otimes 2}F_{2}(\boldsymbol{\zeta};\Lambda(\lambda)x+a)=e^{ iP(\boldsymbol{\zeta}).a}F_{2}(\boldsymbol{\zeta}-(\lambda,\lambda);x),\quad \Lambda(\lambda):=\begin{pmatrix}\mathrm{ch}(\lambda)&\mathrm{sh}(\lambda)\\ \mathrm{sh}(\lambda)&\mathrm{ch}(\lambda)\end{pmatrix}.\] * _CPT invariance:_ \(\quad F_{2}^{\mu\nu}(\boldsymbol{\zeta};x)=\mathbb{F}J^{\otimes 2}F_{2}^{\mu\nu}( \overset{\leftarrow}{\widetilde{\boldsymbol{\zeta}}};-x)\)_._ * _Continuity equation:_ \(\quad P_{\mu}(\boldsymbol{\zeta})F_{2}^{\mu\nu}(\boldsymbol{\zeta};x)=0\)_._ * _Normalization:_ \(F_{2}^{0\mu}(\zeta,\zeta+i\pi;x)=\frac{M^{\otimes 2}}{2\pi}\mathcal{L}^{0\mu} (P(\zeta,\zeta+i\pi))I_{\otimes 2}\) _with_ \[\mathcal{L}^{\mu\nu}(p):=\frac{-p^{\mu}p^{\nu}+g^{\mu\nu}p^{2}}{p^{2}}.\] (3.1) * \(\mathcal{G}\) _invariance:_ \(\quad F_{2}^{\mu\nu}(\boldsymbol{\zeta};x)=V(g)^{\otimes 2}F_{2}^{\mu\nu}( \boldsymbol{\zeta};x),\quad g\in\mathcal{G}\)_._ _It is called **parity-covariant** if, in addition,_ * _Parity covariance_ \[F_{2}^{\mu\nu}(\boldsymbol{\zeta};x^{0},x^{1})=\mathcal{P}_{\mu^{\prime}}^{ \mu}\mathcal{P}_{\nu^{\prime}}^{\nu}F_{2}^{\mu^{\prime}\nu^{\prime}}(- \boldsymbol{\zeta};x^{0},-x^{1}),\quad\mathcal{P}_{\nu}^{\mu}=\begin{pmatrix} 1&0\\ 0&-1\end{pmatrix}_{\nu}^{\mu}.\] Property (T7) implies that for any \(g\in\mathcal{S}(\mathbb{R})\), \[\int dt\,g^{2}(t)F_{2}^{\mu\nu}(\boldsymbol{\theta};t,0)=\widetilde{g^{2}}( P_{0}(\boldsymbol{\theta}))F_{2}^{\mu\nu}(\boldsymbol{\theta};0)\quad\text{ where }\widetilde{g^{2}}(p)=\int dtg(t)^{2}e^{ipt}. \tag{3.2}\] Such \(F_{2}^{\mu\nu}\) then defines the stress-energy tensor, \(T^{\mu\nu}\), as a quadratic form between one-particle vectors by Eq. (2.20). We are now in a position to characterize these one-particle stress-energy tensors. **Theorem 3.2**.: \(F_{2}\) _is a stress-energy tensor at one-particle level (with poles \(\mathfrak{P}\)) iff it is of the form_ \[F_{2}^{\mu\nu}(\zeta_{1},\zeta_{2};x)=\frac{M^{\otimes 2}}{2\pi}\mathcal{L}^{ \mu\nu}(P(\boldsymbol{\zeta}))e^{iP(\boldsymbol{\zeta}).x}F(\zeta_{2}-\zeta_ {1}),\quad\boldsymbol{\zeta}=(\zeta_{1},\zeta_{2})\in\mathbb{C}^{2}, \tag{3.3}\] _where \(F:\mathbb{C}\to\mathcal{K}^{\otimes 2}\) is a meromorphic function which satisfies for all \(\zeta\in\mathbb{C}\) that_ * \(F\uparrow\mathbb{S}[0,\pi]\) _has exactly the poles_ \(\mathfrak{P}\)_;_ * \(\exists a,b,r>0\,\forall|\Re\zeta|\geq r:\quad\|F(\zeta)\|_{\mathcal{K}^{ \otimes 2}}\leq a\exp(b|\Re\zeta|)\)_;_ * \(F(\zeta)=S(\zeta)F(-\zeta)\)_;_ * \(F(\zeta+i\pi)=\mathbb{F}F(-\zeta+i\pi)\)_;_ * \(F(\zeta+i\pi)=J^{\otimes 2}F(\bar{\zeta}+i\pi)\)_;_ * \(F=V(g)^{\otimes 2}F\) _for all_ \(g\in\mathcal{G}\)_;_ * \(F(i\pi)=I_{\otimes 2}\)_._ _It is parity covariant iff, in addition,_ * \(F(\zeta+i\pi)=F(-\zeta+i\pi)\quad\) _or, equivalently,_ \(\quad F=\mathbb{F}F\)_._ _Remark 3.3_.: As can be seen from the proof, it is sufficient to require (T10) for \(\mu=0\); the case \(\mu=1\) is automatic. _Proof of Theorem 3.2_.: Assume \(F_{2}\) to satisfy (T1)-(T12). 
By Poincare covariance (T7), it is given by \[F_{2}(\boldsymbol{\zeta};x)=e^{iP(\boldsymbol{\zeta}).x}\Lambda\left(-\tfrac{ \zeta_{1}+\zeta_{2}}{2}\right)^{\otimes 2}F_{2}(-\tfrac{\zeta_{2}-\zeta_{1}}{2}, \tfrac{\zeta_{2}-\zeta_{1}}{2};0). \tag{3.4}\] Define \(G^{\mu\nu}(\zeta):=F_{2}^{\mu\nu}(-\frac{\zeta}{2},\frac{\zeta}{2};0)\) and observe that the conditions (T1) to (T3), (T8), and (T11) imply that \(G\) is meromorphic with pole set \(\mathfrak{P}\) when restricted to \(\mathbb{S}[0,\pi]\) and that for all \(\mu,\nu=0,1\), \[\begin{split}\forall|\mathfrak{R}|\zeta|\geq r:\;\|G^{\mu\nu}( \zeta)\|_{\mathbb{K}^{\otimes 2}}\leq a\exp(b|\mathfrak{R}|),\quad\ G^{\mu\nu}( \zeta)=S(\zeta)G^{\mu\nu}(-\zeta),\\ G^{\mu\nu}(\zeta+i\pi)=\mathbb{F}J^{\otimes 2}G^{\mu\nu}(- \bar{\zeta}+i\pi),\quad\quad\quad G^{\mu\nu}(\zeta)=V(g)^{\otimes 2}G^{\mu\nu}( \zeta).\end{split} \tag{3.5}\] Omit the Minkowski indices for the moment. Then combining (T5) and (T8) we obtain \(F_{2}(\boldsymbol{\zeta};x)=F_{2}(\boldsymbol{\zeta}+i\pi;-x)\) and thus \(G(\zeta)=G^{\pi}(\zeta)\), where \(G^{\pi}(\zeta):=F_{2}(-\frac{\zeta}{2}+i\pi,\frac{\zeta}{2}+i\pi;0)\). Combining (T4) with the preceding equality, we obtain \(G(\zeta+i\pi)=\mathbb{F}G^{\pi}(-\zeta+i\pi)=\mathbb{F}G(-\zeta+i\pi)\). Moreover, by (T5), we have \(G(\zeta+i\pi)=\mathbb{F}J^{\otimes 2}G(-\bar{\zeta}+i\pi)=J^{\otimes 2}G( \bar{\zeta}+i\pi)\). If we demand (T12), this implies \(G(\zeta+i\pi)=G(-\zeta+i\pi)\) and with the preceding properties also \(G(\zeta)=\mathbb{F}G(\zeta)\). In summary, each \(G^{\mu\nu}(\zeta),\,\mu,\nu=0,1\) satisfies properties (a)-(f), and possibly (h), analogously. Due to the continuity equation (T9), we have \[(M_{1}+M_{2})G^{0\nu}(2\zeta)\operatorname{ch}\zeta+(M_{1}-M_{2})G^{1\nu}(2 \zeta)\operatorname{sh}\zeta=0,\quad\nu=0,1, \tag{3.6}\] where \(M_{1}:=M\otimes\mathbb{1}_{\mathcal{K}}\) and \(M_{2}:=\mathbb{1}_{\mathcal{K}}\otimes M\). Multiplying by the inverses of \(M_{1}+M_{2}\) and \(\operatorname{ch}\zeta\) (both are invertible) we find \[G^{0\nu}(2\zeta)=\frac{-M_{1}+M_{2}}{M_{1}+M_{2}}G^{1\nu}(2\zeta)\operatorname {th}\zeta,\quad\nu=0,1. \tag{3.7}\] Defining \(\operatorname{tr}G:=g_{\mu\nu}G^{\mu\nu}=G^{00}-G^{11}\), we obtain \[G^{\mu\nu}(\zeta)=\frac{1}{s(\zeta)^{2}-1}\begin{pmatrix}s(\zeta)^{2}&s(\zeta )\\ s(\zeta)&1\end{pmatrix}^{\mu\nu}\operatorname{tr}G(\zeta) \tag{3.8}\] with \(s(\zeta):=\frac{-M_{1}+M_{2}}{M_{1}+M_{2}}\operatorname{th}\frac{\zeta}{2}= \frac{P^{1}(-\zeta/2,\zeta/2)}{P^{0}(-\zeta/2,\zeta/2)}\). This yields \[G^{\mu\nu}(\zeta)=\mathcal{L}^{\mu\nu}(P(-\tfrac{\zeta}{2},\tfrac{\zeta}{2})) \operatorname{tr}G(\zeta). \tag{3.9}\] On the other hand, from (T10) we infer \(G^{00}(i\pi)=\frac{1}{2\pi}M^{\otimes 2}I_{\otimes 2}\); since \(\mathcal{L}^{00}(P(-\tfrac{\zeta}{2},\tfrac{\zeta}{2}))\to\delta(M_{1}-M_{2})\) as \(\zeta\to i\pi\), this yields \[\operatorname{tr}G(i\pi)=\tfrac{M^{\otimes 2}}{2\pi}I_{\otimes 2}. \tag{3.10}\] Define now \[F(\zeta):=\left(\tfrac{M^{\otimes 2}}{2\pi}\right)^{-1}\operatorname{tr}G( \zeta). \tag{3.11}\] Since \(M^{\otimes 2}\) commutes with all \(S(\zeta)\), \(\mathbb{F}\), \(J\) and \(V(g)\), we find that \(F\) satisfies properties (a)-(g), plus (h) in the parity-covariant case. We have thus shown (3.3) for arguments of the form \((-\zeta/2,\zeta/2;x)\). 
That (3.3) holds everywhere now follows from (T7) together with the identity \[\mathcal{L}^{\mu\nu}(P(\boldsymbol{\zeta}))=\Lambda\left(-\tfrac{\zeta_{1}+ \zeta_{2}}{2}\right)^{\mu}_{\mu^{\prime}}\Lambda\left(-\tfrac{\zeta_{1}+\zeta _{2}}{2}\right)^{\nu}_{\nu^{\prime}}\mathcal{L}^{\mu^{\prime}\nu^{\prime}}(P(- \tfrac{\zeta_{2}-\zeta_{1}}{2},\tfrac{\zeta_{2}-\zeta_{1}}{2})), \tag{3.12}\] which can be derived from the relation \(p(\theta+\lambda;m)=\Lambda(\lambda)p(\theta;m)\).--The converse direction, to show that (3.3) satisfies (T1) to (T11) (and (T12) provided that (h)) is straightforward. Let us call \(X\in\mathcal{K}^{\otimes 2}\)_diagonal in mass_ if \[(E_{m}\otimes E_{m^{\prime}})X=0\quad\text{for all }m\neq m^{\prime}. \tag{3.13}\] Equivalently, \(\hat{X}\) commutes with \(M\). On such \(X\), all of \(M_{1}\), \(M_{2}\) and \((M\otimes M)^{1/2}\) act the same and in a slight abuse of notation we will use \(M\) to denote any of these. If \(F\) has this property, i.e., \(F(\zeta)\) has it for all \(\zeta\in\mathbb{C}\), then the above result simplifies: **Corollary 3.4**.: _Assume that \(F\) is diagonal in mass, or equivalently, that \(\operatorname{tr}F_{2}(\cdot;x)\) is diagonal in mass for some \(x\). Then \(F_{2}^{\mu\nu}(\zeta_{1},\zeta_{2}+i\pi;0)=G_{\operatorname{free}}^{\mu\nu}( \tfrac{\zeta_{1}+\zeta_{2}}{2})F(\zeta_{2}-\zeta_{1}+i\pi)\) with_ \[G_{\operatorname{free}}^{\mu\nu}(\zeta):=\frac{M^{\otimes 2}}{2\pi}\begin{pmatrix} \operatorname{ch}^{2}\zeta&-\operatorname{sh}\zeta\operatorname{ch}\zeta\\ -\operatorname{sh}\zeta\operatorname{ch}\zeta&\operatorname{sh}^{2}\zeta\end{pmatrix}^ {\mu\nu}. \tag{3.14}\] _The energy density, in particular, becomes_ \[F_{2}^{00}(\theta,\eta+i\pi;x)=\frac{M^{\otimes 2}}{2\pi}\operatorname{ch}^{2} \left(\frac{\theta+\eta}{2}\right)e^{i(P(\theta)-P(\eta))\cdot x}F(\eta-\theta+i \pi). \tag{3.15}\] Proof.: On \(X\in\mathcal{K}^{\otimes 2}\) which is diagonal in mass we can simplify \[P(\zeta_{1},\zeta_{2}+i\pi)X=\big{(}p(\zeta_{1};M)-p(\zeta_{2};M)\big{)}X=M\, \mathrm{sh}\,\tfrac{\zeta_{1}-\zeta_{2}}{2}\begin{pmatrix}\mathrm{sh}&\tfrac{ \zeta_{1}+\zeta_{2}}{2}\\ \mathrm{ch}&\tfrac{\zeta_{1}+\zeta_{2}}{2}\end{pmatrix}X. \tag{3.16}\] A straightforward computation shows that \(\mathcal{L}^{\mu\nu}(P(\zeta_{1},\zeta_{2}+i\pi))X\) depends only on \(\tfrac{\zeta_{1}+\zeta_{2}}{2}\) and yields the proposed form of \(F_{2}\). _Remark 3.5_.: In some models, the one-particle form factor of the stress-energy tensor, \(F_{1}\), is non-zero; in particular in models with bound states, where \(F_{1}\) is linked to the residues of \(F_{2}\)[12, Sec. 3, Item d]. The general form of \(F_{1}^{\mu\nu}(\zeta;x):=F_{1}^{[T^{\mu\nu}(x)]}(\zeta)\) can be determined analogous to Theorem 3.2. In this case the continuity equation, \(P_{\mu}(\zeta)F_{1}^{\mu\nu}(\zeta;x)\), implies that \(F_{1}^{0\nu}(0;x)=0\). Poincare covariance yields that \(F_{1}(\zeta;x)=e^{ip(\zeta;M).x}\Lambda(-\zeta)^{\otimes 2}F_{1}(0;0)\). As a result, \[F_{1}^{\mu\nu}(\zeta;x)=e^{ip(\zeta;M).x}\begin{pmatrix}\mathrm{sh}^{2}\, \zeta&-\mathrm{sh}\,\zeta\,\mathrm{ch}\,\zeta\\ -\mathrm{sh}\,\zeta\,\mathrm{ch}\,\zeta&\mathrm{ch}^{2}\,\zeta\end{pmatrix}F_ {1}(0), \tag{3.17}\] where \(F_{1}(0)\in\mathcal{K}\) is constant. Hermiticity and \(\mathcal{G}\)-invariance imply \(F_{1}(0)=JF_{1}(0)=V(g)F_{1}(0)\) for all \(g\in\mathcal{G}\). The analogues of the other conditions in Theorem 3.2 are automatically satisfied. 
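Before specializing to examples, a quick numerical sanity check of the tensor structure singled out by Theorem 3.2 and Corollary 3.4 may be useful (a sketch with arbitrarily chosen mass and rapidities; the helper names are ours): the contraction \(P_{\mu}(\boldsymbol{\zeta})\mathcal{L}^{\mu\nu}(P(\boldsymbol{\zeta}))\) vanishes, as required by the continuity equation (T9), and in the equal-mass case \(\mathcal{L}^{00}(P(\theta,\eta+i\pi))=\operatorname{ch}^{2}\frac{\theta+\eta}{2}\), reproducing the prefactor of the energy density in Eq. (3.15).

```python
import numpy as np

g = np.diag([1.0, -1.0])                       # Minkowski metric diag(+1, -1)

def p(zeta, m):
    """Complex two-momentum p(zeta; m) = (m ch(zeta), m sh(zeta))."""
    return np.array([m * np.cosh(zeta), m * np.sinh(zeta)])

def L(P):
    """L^{mu nu}(P) = (-P^mu P^nu + g^{mu nu} P.P) / P.P, cf. Eq. (3.1)."""
    P2 = P[0]**2 - P[1]**2
    return (-np.outer(P, P) + g * P2) / P2

m, theta, eta = 1.3, 0.7, -0.4                 # arbitrary test values (our choice)
P = p(theta, m) + p(eta + 1j * np.pi, m)       # total momentum P(theta, eta + i*pi)

# continuity equation (T9): P_mu L^{mu nu}(P) = 0
P_lower = g @ P
assert np.allclose(P_lower @ L(P), 0.0)

# prefactor of the energy density, Eq. (3.15): L^{00}(P(theta, eta + i*pi)) = ch^2((theta+eta)/2)
assert np.isclose(L(P)[0, 0], np.cosh((theta + eta) / 2) ** 2)
print("continuity and energy-density prefactor checks passed")
```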
It is instructive to specialize the above discussion to free models: For a single free particle species of mass \(m\), either a spinless boson (\(S=1\)) or a Majorana fermion (\(S=-1\)), we have \(\mathcal{K}=\mathbb{C}\), \(Jz=\bar{z}\), \(M=m1_{\mathrm{C}}\), \(\mathcal{G}=\mathbb{Z}_{2}\), and \(V(\pm 1)=\pm 1_{\mathrm{C}}\). The canonical expressions for the stress-energy tensor at one-particle level are \[F_{2,\mathrm{free},+}^{\mu\nu}(\theta,\eta+i\pi;x) =G_{\mathrm{free}}^{\mu\nu}\left(\tfrac{\theta+\eta}{2}\right)e^{ i(p(\theta;m)-p(\eta;m)).x}, \tag{3.18}\] \[F_{2,\mathrm{free},-}^{\mu\nu}(\theta,\eta+i\pi;x) =\mathrm{ch}\,\tfrac{\theta-\eta}{2}F_{2,\mathrm{free},+}(\theta, \eta+i\pi;x) \tag{3.19}\] for the bosonic and the fermionic case, respectively; these conform to Definition 3.1, including parity covariance. Theorem 3.2 applies with \(F_{+}(\zeta)=1\) and \(F_{-}(\zeta+i\pi)=\mathrm{ch}\,\tfrac{\zeta}{2}\). Moreover, note that \(F_{n}^{[T^{\mu\nu}(x)]}=0\) for \(n\neq 2\) for these examples. ## 4 A state-independent QEI for constant scattering functions In this section, we treat scattering functions \(S\) which are constant, i.e., independent of rapidity. In this case, (S1) and (S2) imply that \(S\in\mathcal{B}(\mathcal{K}^{\otimes 2})\) is unitary and self-adjoint, hence has the form \(S=P_{+}-P_{-}\) in terms of its eigenprojectors \(P_{\pm}\) for eigenvalues \(\pm 1\). Further, we require that \(S\) has a parity-invariant diagonal, which is to be understood as \[[S,\mathbb{F}]I_{\otimes 2}=0. \tag{4.1}\] This setup yields two important simplifications. First, for constant \(S\) with parity-invariant diagonal, one easily shows that \[F(\zeta):=\left(P_{+}-i\,\mathrm{sh}\,\tfrac{\zeta}{2}P_{-}\right)I_{\otimes 2} \tag{4.2}\] satisfies the conditions (a) to (h) from Theorem 3.2 with respect to \(S\). Thus \(F_{2}^{\mu\nu}\) as given in Eq. (3.3) is a parity-covariant stress-energy tensor at one-particle level. Second, for constant \(S\), the form factor equations for \(F_{n}\), \(n>2\) simplify significantly; the residue formula connecting \(F_{n}\) with \(F_{n-2}\), see Item c in [12, Sec. 3], becomes trivial for even \(n\). As a consequence, the expression \[T^{\mu\nu}(x):=\mathcal{O}_{2}[F_{2}^{\mu\nu}(\cdot;x)], \tag{4.3}\] reducing the usually infinite expansion (2.14) to a single term, is a local operator after time-averaging. In fact, locality may be checked by direct computation from (T1)-(T4). Moreover, properties (T5)-(T12) mean that \(T^{\mu\nu}\) is hermitian, is a symmetric covariant two-tensor-valued field with respect to \(U_{1}(x,\lambda)\) (properly extended from Eq. (2.4) to the full state space), integrates to the total energy-momentum operator \(P^{\mu}=\int ds\,T^{\mu\partial}(t,s)\), and is conserved, \(\partial_{\mu}T^{\mu\nu}=0\). Hence \(T^{\mu\nu}\) is a good candidate for the stress-energy tensor of the interacting model. For this \(T^{\mu\nu}\), we aim to establish a QEI result. Our main technique is an estimate for two-particle form factors of a specific factorizing form, which can be stated as follows. **Lemma 4.1**.: _Let \(h:\mathbb{S}(0,\pi)\to\mathcal{K}\) be analytic with \(L^{2}\) boundary values at \(\mathbb{R}\) and \(\mathbb{R}+i\pi\). For_ \[f:=\operatorname{\mathrm{Symm}}_{S}h\otimes Jh(\bar{\cdot}+i\pi), \tag{4.4}\] _we have in the sense of quadratic forms on \(\mathcal{D}\times\mathcal{D}\),_ \[\mathcal{O}_{2}[f]\geq-\frac{1}{2}\|h(\cdot+i\pi)\|_{2}^{2}\mathbb{1}. 
\tag{4.5}\] Proof.: From the ZF algebra relations in (2.11)-(2.13), one verifies that \[\mathcal{O}_{1}[h]\mathcal{O}_{1}[h]^{\dagger}=2\mathcal{O}_{2}[f]+\|h(\cdot+ i\pi)\|_{2}^{2}\mathbb{1}. \tag{4.6}\] The left-hand side is positive as a quadratic form, implying the result. Our approach is to decompose \(F_{2}^{00}\) into sums and integrals over terms of the factorizing type (4.4) with positive coefficients, then applying the estimate (4.5) to each of them. To that end, we will call a vector \(X\in\mathcal{K}^{\otimes 2}\)_positive_ if \[\forall u\in\mathcal{K}:(u\otimes Ju,X)\geq 0. \tag{4.7}\] This is equivalent to \(X\) being a sum of mutually orthogonal vectors of the form \(e\otimes Je\) with positive coefficients.3 We also recall the notion of a vector _diagonal in mass_, Eq. (3.13). Now we establish our master estimate as follows: Footnote 3: Vectors of the form \(e\otimes Je\) are certainly positive since \((u\otimes Ju,e\otimes Je)=|(u,e)|^{2}\geq 0\) and remain positive when summed with positive coefficients. Conversely, given a positive \(X\), we note that \(\tilde{X}\in\mathcal{B}(\mathcal{K})\) is a positive matrix, \((u,\tilde{X}u)=(u\otimes Ju,X)\geq 0\), whose eigendecomposition is of the required form. **Lemma 4.2**.: _Fix \(n\in\{0,1\}\). Suppose that \(X\in\mathcal{K}^{\otimes 2}\) is positive, diagonal in mass, and that \(SX=(-1)^{n}X\). Let \(h:\mathbb{S}(0,\pi)\to\mathbb{C}\) be analytic with continuous boundary values at \(\mathbb{R}\) and \(\mathbb{R}+i\pi\) such that \(|h(\zeta)|\leq a\exp(b|\Re\zeta|)\) for some \(a,b>0\). Let \(g\in\mathcal{D}_{\mathbb{R}}(\mathbb{R})\). Set_ \[F_{2}:=\operatorname{\mathrm{Symm}}_{S}\left(\boldsymbol{\zeta}\mapsto h( \zeta_{1})\overline{h(\zeta_{2}+i\pi)}(\operatorname{ch}\zeta_{1}- \operatorname{ch}\zeta_{2})^{n}\widetilde{g^{2}}(P_{0}(\boldsymbol{\zeta}))X \right). \tag{4.8}\] _Then, in the sense of quadratic forms on \(\mathcal{D}\times\mathcal{D}\), it holds that_ \[\mathcal{O}_{2}[F_{2}]\geq-\int_{0}^{\infty}\frac{d\nu}{4\pi}(2\nu)^{n}\left( I_{\otimes 2},M\left(N_{+}(\nu,M)+N_{-}(\nu,M)\right)X\right)_{\mathcal{K}^{\otimes 2}} \mathbb{1}, \tag{4.9}\] _where the integral is convergent and where_ \[N_{\pm}(\nu,m)=\|h(\cdot+\tfrac{1\pm 1}{2}i\pi)\tilde{g}(p_{0}(\cdot;m)+m\nu)\|_{2} ^{2}. \tag{4.10}\] Proof.: Since \(X\) is diagonal in mass, we have \(X=\sum_{m\in\mathbb{R}}E_{m}^{\otimes 2}X\). Here, each \(E_{m}^{\otimes 2}X\) is positive, diagonal in mass and, by (S6), satisfies \(SE_{m}^{\otimes 2}X=E_{m}^{\otimes 2}SX=(-1)^{n}E_{m}^{\otimes 2}X\). As a consequence, we may assume without loss of generality that \(X=E_{m}^{\otimes 2}X\). Moreover, by positivity of \(X\), we may decompose \(X=\sum_{\alpha=1}^{r}c_{\alpha}\,e_{\alpha}\otimes Je_{\alpha}\) with \(r\in\mathbb{N}\), \(c_{\alpha}>0\) and orthonormal vectors \(e_{\alpha}\in\mathcal{K}\), \(\alpha=1,\ldots,r\). Let \[h^{+}_{\nu,\alpha}(\zeta)=h(\zeta)\tilde{g}(p_{0}(\zeta)-\nu)e_{\alpha},\qquad h ^{-}_{\nu,\alpha}(\zeta)=\overline{h^{+}_{-\nu,\alpha}(\bar{\zeta}+i\pi)} \tag{4.11}\] and let \(f^{\pm}_{\nu,\alpha}\) relate to \(h^{\pm}_{\nu,\alpha}\) as in Eq. (4.4). Further define \(f^{\pm}_{\nu}:=\sum_{\alpha=1}^{r}c_{\alpha}f^{\pm}_{\nu,\alpha}\). Since \(SX=(-1)^{n}X\) and \(g\) is real-valued, one finds \((-1)^{n}f^{+}_{-\nu}=f^{-}_{\nu}\) by a straightforward computation. 
Now, in (4.8) use the convolution formula \((n\in\{0,1\},\,p_{1},p_{2}\in\mathbb{C})\), \[(p_{1}-p_{2})^{n}\widetilde{g^{2}}(p_{1}+p_{2})=\int_{-\infty}^{\infty}\frac{d \nu}{2\pi}(2\nu)^{n}\tilde{g}(p_{1}-\nu)\widetilde{g}(-\bar{p}_{2}-\nu), \tag{4.12}\] then split the integration region into the positive and negative halflines, and obtain \[F_{2}(\boldsymbol{\zeta})=\int_{0}^{\infty}\frac{d\nu}{2\pi}(\tfrac{2\nu}{m})^{ n}\left(f^{+}_{\nu}(\boldsymbol{\zeta})+(-1)^{n}f^{+}_{-\nu}(\boldsymbol{ \zeta})\right)=\int_{0}^{\infty}\frac{d\nu}{2\pi}(\tfrac{2\nu}{m})^{n}\left(f^{ +}_{\nu}(\boldsymbol{\zeta})+f^{-}_{\nu}(\boldsymbol{\zeta})\right). \tag{4.13}\] Noting that the \(h^{\pm}_{\nu,\alpha}\) are square-integrable at the boundary of \(\mathbb{S}[0,\pi]\), we can now apply Lemma 4.1 to each \(f^{\pm}_{\nu,\alpha}\); then, rescaling \(\nu\to m\nu\) in the integral (4.13) yields the estimate (4.9). Note here that the integration in \(\nu\) can be exchanged with taking the expectation value \(\langle\Psi,\mathcal{O}_{2}[\cdot]\Psi\rangle\), since the integration regions in \(\mathbf{\zeta}\) are compact for \(\Psi\in\mathbb{D}\), and the series in (2.14) is actually a finite sum. Lastly, we show that the r.h.s of Eq. (4.9) is finite. By the Cauchy-Schwarz inequality, the integrand is bounded by a constant times \(\nu^{n}\) times \[N_{+}(\nu,m)+N_{-}(\nu,m)=\int d\theta(|h(\theta)|^{2}+|h(\theta+i\pi)|^{2})| \tilde{g}(m\operatorname{ch}\theta+\nu)|^{2}. \tag{4.14}\] By assumption \(|h(\theta)|\leq a(\operatorname{ch}\theta)^{b}\) for some \(a,b>0\), and the resulting integrand \(\nu^{n}(\operatorname{ch}\theta)^{2b}|\tilde{g}(m\operatorname{ch}\theta+m \nu)|^{2}\) can be shown to be integrable in \((\theta,\nu)\) over \(\mathbb{R}\times[0,\infty)\) by substituting \(s=\operatorname{ch}\theta+\nu\) (\(1\leq s<\infty,0\leq\nu\leq s-1\)), and using the rapid decay in \(s\) by the corresponding property of \(\tilde{g}\). In conclusion, the \(\theta\)- and \(\nu\)-integrals converge by Fubini-Tonelli's theorem. Now we can formulate: **Theorem 4.3** (QEI for constant S-functions).: _Consider a constant \(S\)-function \(S\in\mathcal{B}(\mathcal{K}^{\otimes 2})\) with a parity-invariant diagonal, i.e., \([S,\mathbb{F}]I_{\otimes 2}=0\) and denote its eigenprojectors with respect to the eigenvalues \(\pm 1\) by \(P_{\pm}\). Suppose that \(P_{\pm}I_{\otimes 2}\) are both positive. Then for the energy density \(T^{00}(x)\) in Eq. (4.3) and any \(g\in\mathcal{D}_{\mathbb{R}}(\mathbb{R})\), one has in the sense of quadratic forms on \(\mathcal{D}\times\mathcal{D}\):_ \[T^{00}(g^{2})\geq-\left(I_{\otimes 2},(W_{+}(M)P_{+}+W_{-}(M)P_{-})I_{\otimes 2 }\right)_{\mathcal{K}^{\otimes 2}}\mathbb{1}, \tag{4.15}\] _where_ \[W_{\pm}(m)=\frac{m^{3}}{4\pi^{2}}\int_{1}^{\infty}ds\,|\tilde{g}(ms)|^{2}w_{ \pm}(s)<\infty \tag{4.16}\] _and \(w_{\pm}(s)=s\sqrt{s^{2}-1}\pm\log(s+\sqrt{s^{2}-1})\)._ Proof.: We use Lemma 4.2 five times: with \(h_{1}(\zeta)=\operatorname{ch}\zeta\), \(h_{2}(\zeta)=\operatorname{sh}\zeta\), \(h_{3}(\zeta)=1\) (all with \(n=0\) and \(\psi=P_{+}I_{\otimes 2}\)) and \(h_{4}(\zeta)=\operatorname{ch}\frac{\zeta}{2}\), \(h_{5}(\zeta)=\operatorname{sh}\frac{\zeta}{2}\) (these with \(n=1\) and \(\psi=P_{-}I_{\otimes 2}\)); note that \(P_{\pm}I_{\otimes 2}\) are positive by assumption and diagonal in mass by (S6). Summation of Eq. 
(4.8) for all these five terms and multiplication with \(\frac{1}{4\pi}M^{\otimes 2}\) yields the expression \(\int dt\,g^{2}(t)F_{2}^{00}(\cdot;(t,0))\) for the energy density in Eq. (4.3). From Lemma 4.2 we obtain \[T^{00}(g^{2})\geq-\sum_{i=1}^{5}\sum_{\pm}\int_{0}^{\infty}\frac{d\nu}{16\pi^{ 2}}(2\nu)^{n_{i}}\big{(}I_{\otimes 2},M^{3}N_{\pm,i}(\nu,M)P_{s_{i}}I_{ \otimes 2}\big{)}_{\mathcal{K}^{\otimes 2}}\mathbb{1}. \tag{4.17}\] Here \(s_{i}:=(-1)^{n_{i}}\). Now we compute \[\begin{split}&\sum_{i=1}^{5}\sum_{\pm}\int_{0}^{\infty}\frac{d\nu}{16 \pi^{2}}\,(2\nu)^{n_{i}}M^{3}\|h_{i}(\cdot+\tfrac{1\pm 1}{2}i\pi)\tilde{g}(P_{0}( \theta)+M\nu)\|_{2}^{2}P_{s_{i}}\\ &=\frac{M^{3}}{8\pi^{2}}\int_{0}^{\infty}d\nu\int_{-\infty}^{ \infty}d\theta\,|\tilde{g}(P_{0}(\theta)+M\nu)|^{2}\left((1+\operatorname{ch}^ {2}\theta+\operatorname{sh}^{2}\theta)P_{+}+2\nu(\operatorname{ch}^{2}\tfrac {\theta}{2}+\operatorname{sh}^{2}\tfrac{\theta}{2})P_{-}\right)\\ &=\frac{M^{3}}{4\pi^{2}}\int_{0}^{\infty}d\nu\int_{-\infty}^{ \infty}d\theta\,|\tilde{g}(P_{0}(\theta)+M\nu)|^{2}\left(\operatorname{ch}^{2} \theta P_{+}+\nu\operatorname{ch}\theta P_{-}\right)\\ &=\frac{M^{3}}{4\pi^{2}}\int_{1}^{\infty}ds|\tilde{g}(Ms)|^{2}(w_{ +}(s)P_{+}+w_{-}(s)P_{-})\\ &=W_{+}(M)P_{+}+W_{-}(M)P_{-},\end{split} \tag{4.18}\] where we have substituted \(s=\operatorname{ch}\theta+\nu\) (\(1\leq s<\infty\), \(0\leq\nu\leq s-1\)), then solving explicitly the integral in \(\nu\). _Remark 4.4_.: The conditions of Theorem 4.3 are at least fulfilled in (constant) diagonal models, i.e., for S-functions of the form \(S=\sum_{\alpha\beta}c_{\alpha\beta}|e_{\alpha}\otimes e_{\beta})(e_{\beta} \otimes e_{\alpha}|\) for some choice of an orthonormal basis \(\{e_{\alpha}\}\) and coefficients \(c_{\alpha\beta}\), where we suppose \(Je_{\alpha}=e_{\bar{\alpha}}\) as indicated in Remark 2.5. The S-function has to satisfy \(S=S^{\dagger}=S^{-1}=\mathbb{F}J^{\otimes 2}SJ^{\otimes 2}\mathbb{F}\) which at the level of coefficients becomes \(|c_{\alpha\beta}|=1\), \(c_{\alpha\beta}=c_{\beta\alpha}^{-1}\) and \(c_{\alpha\beta}=c_{\bar{\alpha}\bar{\beta}}\). In particular, one has \(c_{\alpha\bar{\alpha}}=c_{\bar{\alpha}\alpha}\in\{\pm 1\}\). Together with \(P_{\pm}=\frac{1}{2}(1\pm S)\) this implies \(P_{\pm}I_{\otimes 2}=\sum_{\alpha:c_{\alpha\alpha}=\pm 1}|e_{\alpha}\otimes Je_{\alpha})\) which is clearly positive. Also, \([S,\mathbb{F}]I_{\otimes 2}=0\) by a straightforward computation using \(c_{\alpha\bar{\alpha}}=c_{\bar{\alpha}\alpha}\) and \(\mathbb{F}I_{\otimes 2}=I_{\otimes 2}\). Thus the QEI applies to all such models. This does not only include the known QEI results for the free Bose field [10], the free Fermi field [10], the Ising model [1], and combinations of those, but also the symplectic model, a fermionic variant of the Ising model (see, e.g., [11] or [1]). It also applies to the Federbush model (and generalizations of it as in [14]): Although the Federbush model's S-function is not parity invariant, it has a parity invariant diagonal and Eq. (4.2) yields a valid (parity covariant) candidate for the stress-energy tensor, i.e., it satisfies all the properties 1 to (h). The candidate is in agreement with [13, Sec. 4.2.3]. For further details on the Federbush model, see Section 7.2. _Remark 4.5_.: The QEI result is independent of the statistics of the particles; it depends only on the mass spectrum and the S-function. 
The aspect of particle statistics comes into play when computing the scattering function from the S-function (see Remark 2.4); it also enters the form factor equations for local operators (see, e.g., [10, Sec. 6]). However, in the equations for \(F_{2}\) relevant for our analysis, the "statistics factors" occur only in even powers, so that our assumptions on the stress-energy tensor - specifically, properties 1 and 4 in Def. 3.1 - are appropriate in both bosonic and fermionic cases. ## 5 QEI at one-particle level for general integrable models This section aims to give necessary and sufficient conditions for QEIs at one-particle level in general integrable models, including models with several particle species and bound states. The conditions are expressed in Theorem 5.1 in cases 1 and 2, respectively. Given a stress-energy tensor \(F_{2}^{\mu\nu}\) at one-particle level, including diagonality in mass, the expectation values of the averaged energy density are, combining Eq. (2.20) with Corollary 3.4, given by \[\langle\varphi,T^{00}(g^{2})\varphi\rangle=\int d\theta\,d\eta\,\mathrm{ch}^{2 }\,\frac{\theta+\eta}{2}\Big{(}\varphi(\theta),\frac{M^{2}}{2\pi}\widetilde{g }^{2}(p_{0}(\theta;M)-p_{0}(\eta;M))\hat{F}(\eta-\theta+i\pi)\varphi(\eta) \Big{)} \tag{5.1}\] for \(\varphi\in\mathcal{D}\cap\mathcal{H}_{1}\). We ask whether this quadratic form is bounded below. In fact, this can be characterized in terms of the asymptotic behaviour of \(\hat{F}\): **Theorem 5.1**.: _Let \(F_{2}^{\mu\nu}\) be a parity-covariant stress-energy tensor at one-particle level which is diagonal in mass and \(\hat{F}\) be given according to Corollary 3.4. Then:_ 1. _Suppose there exists_ \(u\in\mathcal{K}\) _with_ \(\|u\|_{\mathcal{K}}=1\)_, and_ \(c>\frac{1}{4}\) _such that_ \[\exists r>0\,\forall|\theta|\geq r:\quad|(u,\hat{F}(\theta+i\pi)u)|\geq c\exp |\theta|.\] (5.2) _Then for all_ \(g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\)_,_ \(g\neq 0\) _there exists a sequence_ \((\varphi_{j})_{j}\) _in_ \(\mathcal{D}(\mathbb{R},\mathcal{K})\)_,_ \(\|\varphi_{j}\|_{2}=1\)_, such that_ \[\langle\varphi_{j},T^{00}(g^{2})\varphi_{j}\rangle\xrightarrow{j\to\infty}-\infty.\] (5.3) 2. _Suppose there exists_ \(0<c<\frac{1}{4}\) _such that_ \[\exists\epsilon,r>0\,\forall|\Re\zeta|\geq r,|\Im\zeta|\leq\epsilon:\quad\| \hat{F}(\zeta+i\pi)\|_{\mathcal{B}(\mathcal{K})}\leq c\exp|\Re\zeta|.\] (5.4) _Then for all_ \(g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\) _there exists_ \(c_{g}>0\) _such that for all_ \(\varphi\in\mathcal{D}(\mathbb{R},\mathcal{K})\)_,_ \[\langle\varphi,T^{00}(g^{2})\varphi\rangle\geq-c_{g}\|\varphi\|_{2}^{2}.\] (5.5) Before we proceed to the proof, let us comment on the scope of the theorem. _Remark 5.2_.: We require parity-covariance of \(F_{2}^{\mu\nu}\). In absence of this property, at least the parity-covariant part \(F_{2,P}^{\mu\nu}\) of \(F_{2}^{\mu\nu}\), which is given by replacing \(F\) with \(F_{P}:=\frac{1}{2}(1+\mathbb{F})F\), has all features of a parity-covariant stress-energy tensor at one-particle level except possibly for S-symmetry 1, which requires the extra assumption \([S,\mathbb{F}]F=0\). In any case, since S-symmetry will not be used in the proof, Theorem 5.1 still applies to \(F_{2,P}^{\mu\nu}\). Now, if (5.2) holds for \(F\) with \(u\) satisfying \(Ju=\eta u\) with \(\eta\in\mathbb{C}\) and \(|\eta|=1\), it holds for \(F_{P}\) due to \((u,\widehat{F}(\theta)u)=(Ju,\widehat{F}(\theta)Ju)=(u,\widehat{\mathbb{F}} \widehat{F}(\theta)u)\). As a consequence, no QEI can hold for \(F_{2}^{\mu\nu}\). 
On the other hand, if (5.4) is fulfilled for \(F\) (hence for \(F_{P}\)), then a one-particle QEI for \(F_{2}^{\mu\nu}\) holds at least in parity-invariant one-particle states. _Remark 5.3_.: While Theorem 5.1 establishes a QEI only at one-particle level, the result usually extends to expectation values in vectors \(\Psi=c\,\Omega+\Psi_{1}\), \(c\in\mathbb{C},\Psi_{1}\in\mathcal{H}_{1}\). Namely, \[\langle\Psi,T^{00}(g^{2})\Psi\rangle=\langle\Psi_{1},T^{00}(g^{2})\Psi_{1} \rangle+2\Re\,c\int(\Psi_{1}(\theta),\widetilde{g}^{2}(p_{0}(\theta;M))F_{1}( \theta))d\theta, \tag{5.6}\] where \(F_{1}=F_{1}^{[T^{00}(0)]}\) is the one-particle form factor of the energy density. This \(F_{1}\) may be nonzero. However, due to Remark 3.5, it is of the form \(F_{1}(\zeta;0)=F_{1}(0)\,\mathrm{sh}^{2}\,\zeta\); thus the rapid decay of \(\widetilde{g}^{2}\) and the Cauchy-Schwarz inequality imply that the additional summand is bounded in \(\|\Psi_{1}\|_{2}\), hence in \(\|\Psi\|^{2}\). The rest of this section is devoted to the proof of Theorem 5.1, which we develop separately for the two parts 1 and 2. We first note that from Theorem 3.2, the operators \(\hat{F}(\zeta)\) fulfill \[\hat{F}(\zeta+i\pi) =\hat{F}(-\zeta+i\pi), \tag{5.7}\] \[\hat{F}(\zeta+i\pi) =\hat{F}(\bar{\zeta}+i\pi)^{\dagger},\] (5.8) \[\hat{F}(i\pi) =\mathbb{1}_{\mathcal{K}}. \tag{5.9}\] In more detail, these equations are implied by \(S\)-periodicity and parity-invariance for (5.7), by \(S\)-periodicity and CPT-invariance for (5.8), and by normalization for (5.9). Now the strategy for part 1 closely follows [1, Proposition 4.2], but with appropriate generalizations for matrix-valued rather than complex-valued \(\hat{F}\). Proof of Theorem 5.1(a).: Fix a smooth, even, real-valued function \(\chi\) with support in \([-1,1]\). Then for \(\rho>0\) define \(\chi_{\rho}(\theta):=\rho^{-1/2}\|\chi\|_{2}^{-1}\chi(\rho^{-1}\theta)\), so that \(\chi_{\rho}\) has support in \([-\rho,\rho]\) and is normalized with respect to \(\|\cdot\|_{2}\). Define \(\varphi_{j}(\theta):=\frac{1}{\sqrt{2}}(\chi_{\rho_{j}}(\theta-j)+s\,\chi_{ \rho_{j}}(\theta+j))M^{-1}u\), where \(s\in\{\pm 1\}\) and \((\rho_{j})_{j}\) is a null sequence with \(0<\rho_{j}<1\); both will be specified later. The \(\varphi_{j}\), thus defined, have norm of at most \(m_{-}^{-1}\), where \(m_{-}:=\min\mathfrak{M}\), and (5.1) yields \[\langle\varphi_{j},T^{00}(g^{2})\varphi_{j}\rangle=\frac{1}{4\pi}\big{(}u,(H_{ \chi,j,+}+sH_{\chi,j,-})u\big{)} \tag{5.10}\] with \(H_{\chi,j,\pm}:=\int d\theta d\eta\,\widetilde{g}^{2}(Mk_{j}(\theta,\eta))H_ {j,\pm}(\theta,\eta)\chi_{\rho_{j}}(\theta)\chi_{\rho_{j}}(\eta)\) and \[H_{j,+}(\theta,\eta) =\operatorname{ch}^{2}(j+\tfrac{\theta+\eta}{2})\hat{F}(\theta- \eta+i\pi),\] \[H_{j,-}(\theta,\eta) =\operatorname{ch}^{2}\tfrac{\theta-\eta}{2}\hat{F}(2j+\theta+ \eta+i\pi),\] \[k_{j}(\theta,\eta) =2\operatorname{sh}(j+\tfrac{\theta+\eta}{2})\operatorname{sh} \tfrac{\theta-\eta}{2}.\] We used here (5.7) and that \(\chi\) is an even function. For large \(j\) and for \(\theta,\eta\in[-\rho_{j},\rho_{j}]\), we establish the estimates \[(u,H_{j,+}(\theta,\eta)u) \leq\|H_{j,+}(\theta,\eta)\|_{\mathcal{B}(\mathcal{K})}\leq(\tfrac {1}{2}+2c)\left(1+\tfrac{1}{4}e^{2j}e^{2\rho_{j}}\right), \tag{5.11}\] \[s(u,H_{j,-}(\theta,\eta)u) \leq-ce^{2j}e^{-2\rho_{j}},\] (5.12) \[|k_{j}(\theta,\eta)| \leq 12e^{j}\rho_{j}. 
\tag{5.13}\] Namely for (5.11), due to (5.9) and continuity of \(\hat{F}\) restricted to \(\mathbb{R}\), we have \(\|F(\theta+i\pi)\|_{\mathcal{B}(\mathcal{K})}\leq 2c+\tfrac{1}{2}>1\) for \(\theta\in[-2\rho_{j},2\rho_{j}]\) and large \(j\). Also, \(\operatorname{ch}^{2}x\leq 1+\tfrac{1}{4}e^{2x}\). For (5.12) one uses \(\operatorname{ch}^{2}x\geq 1\) along with the estimate \(-s(u,\hat{F}(\theta+i\pi)u)\geq c\exp|\theta|\) for all \(|\theta|\geq r\), with suitable choice of \(s\in\{\pm 1\}\). The latter statement is implied by hypothesis (5.2) since \((u,\hat{F}(\theta+i\pi)u)\) is real-valued (due to 5.8) and continuous. For (5.13), see [1, Eq. (4.17)]. Now choose \(\delta>0\) so small that \(\widetilde{g}^{2}(m_{+}p)\geq\tfrac{1}{2}\widetilde{g}^{2}(0)>0\) for \(|p|\leq\delta\), where \(m_{+}:=\max\mathfrak{M}\). Choosing specifically the sequence \(\rho_{j}=\frac{\delta}{12}e^{-j}\), we can combine these above estimates in the integrands of \(H_{\chi,j,\pm}\) to give, cf. [1, Proof of Proposition 4.2], \[\big{(}u,(H_{\chi,j,+}+sH_{\chi,j,-})u\big{)}\leq\frac{\delta}{24}\widetilde{ g}^{2}(0)(ce^{-j}-c^{\prime}e^{j})\big{(}\rho_{j}^{-1/2}\|\chi_{\rho_{j}}\|_{1} \big{)}^{2}\xrightarrow{j\to\infty}-\infty \tag{5.14}\] with some \(c^{\prime}>0\), noting that \(\rho_{j}^{-1/2}\|\chi_{\rho_{j}}\|_{1}\) is independent of \(j\). For part 2, we follow [1, Theorem 5.1], but again need to take the operator properties of \(\hat{F}\) into account. Proof of Theorem 5.1(b).: For fixed \(\varphi\in\mathcal{D}(\mathbb{R},\mathcal{K})\) and \(g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\), we introduce \(X_{\varphi}:=\langle\varphi,T^{00}(g^{2})\varphi\rangle\). Our aim is to decompose \(X_{\varphi}=Y_{\varphi}+(X_{\varphi}-Y_{\varphi})\) with \(Y_{\varphi}\geq 0\) and \(|X_{\varphi}-Y_{\varphi}|\leq c_{g}\|\varphi\|_{2}^{2}\) in order to conclude \(X_{\varphi}\geq-c_{g}\|\varphi\|_{2}^{2}\). Since \([M,\hat{F}(\zeta)]=0\) from diagonality in mass, we have \(X_{\varphi}=\sum_{m\in\mathfrak{M}}X_{E_{m}\varphi}\) and can treat each \(E_{m}\varphi\), \(m\in\mathfrak{M}\), separately. Therefore in the following, we assume \(M=m\mathbb{1}_{\mathcal{K}}\) without loss of generality. We now express \(X_{\varphi}\) as in (5.1) and rewrite the integral as \[X_{\varphi}=\frac{m^{2}}{2\pi}\int_{0}^{\infty}\int_{0}^{\infty}d\theta d\eta\, \widetilde{g}^{2}(p_{0}(\theta)-p_{0}(\eta))\left(\underline{\varphi}(\theta)^ {t},\underline{\underline{X}}(\theta,\eta)\underline{\varphi}(\eta)\right), \tag{5.15}\] where \(\underline{\varphi}(\theta)=(\varphi(\theta),\varphi(-\theta))^{t}\) and \[\underline{\underline{X}}(\theta,\eta)=\begin{pmatrix}\operatorname{ch}^{2}\frac {\theta+\eta}{2}\hat{F}(-\theta+\eta+i\pi)&\operatorname{ch}^{2}\frac{\theta- \eta}{2}\hat{F}(-\theta-\eta+i\pi)\\ \operatorname{ch}^{2}\frac{\theta+\eta}{2}\hat{F}(\theta+\eta+i\pi)& \operatorname{ch}^{2}\frac{\theta+\eta}{2}\hat{F}(\theta-\eta+i\pi)\end{pmatrix}.\] Using (5.7) we find \(\underline{\underline{X}}=(\begin{smallmatrix}4&B\\ B&A\end{smallmatrix})\) with \[A(\theta,\eta)=\operatorname{ch}^{2}\frac{\theta+\eta}{2}\hat{F}(\theta- \eta+i\pi),\quad B(\theta,\eta)=\operatorname{ch}^{2}\frac{\theta-\eta}{2} \hat{F}(\theta+\eta+i\pi).\] Defining \(H_{\pm}=A\pm B\) and \(\varphi_{\pm}(\theta)=\varphi(\theta)\pm\varphi(-\theta)\) we obtain further that \[(\underline{\varphi}(\theta)^{t},\underline{\underline{X}}(\theta,\eta) \underline{\varphi}(\eta))=\sum_{\pm}(\varphi_{\pm}(\theta),H_{\pm}(\theta, \eta)\varphi_{\pm}(\eta)). 
\tag{5.16}\] Let us define \[K_{\pm}(\theta):=\sqrt{|H_{\pm}(\theta,\theta)|}\in\mathcal{B}(\mathcal{K}), \tag{5.17}\] where for \(O\in\mathcal{B}(\mathcal{K})\), \(|O|\) denotes the operator modulus of \(O\) and \(\sqrt{|O|}\) its (positive) operator square root. Now, analogous to \(X_{\varphi}\), introduce \(Y_{\varphi}\) (replacing \(H_{\pm}(\theta,\eta)\) with \(K_{\pm}(\theta)K_{\pm}(\eta)\)), \[Y_{\varphi}:=\frac{m^{2}}{2\pi}\sum_{\pm}\int_{0}^{\infty}d\theta d\eta\, \widetilde{g^{2}}(p_{0}(\theta)-p_{0}(\eta))\left(\varphi_{\pm}(\theta),K_{ \pm}(\theta)K_{\pm}(\eta)\varphi_{\pm}(\eta)\right). \tag{5.18}\] Using the convolution formula (4.12) with \(n=0\), \(p_{1}=p_{0}(\theta)\), \(p_{2}=p_{0}(\eta)\), noting that for real arguments it also holds for \(g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\), one finds that \[Y_{\varphi}=\frac{m^{2}}{2\pi}\sum_{\pm}\int\frac{d\nu}{2\pi}\left\|\int d \eta\,\psi_{\pm}(\eta,\nu)\right\|_{\mathcal{K}}^{2}\geq 0,\quad\text{ where }\psi_{\pm}(\eta,\nu):=\widetilde{g}(p_{0}(\eta)+\nu)K_{\pm}(\eta) \varphi_{\pm}(\eta). \tag{5.19}\] It remains to show that \(|X_{\varphi}-Y_{\varphi}|\leq c_{g}\|\varphi\|_{2}^{2}\) for some \(c_{g}\geq 0\). For this it suffices to prove that \[c_{g}:=\sum_{\pm}\int_{0}^{\infty}d\theta\int_{0}^{\infty}d\eta|\widetilde{g^ {2}}(p_{0}(\theta)-p_{0}(\eta))|^{2}\|H_{\pm}(\theta,\eta)-K_{\pm}(\theta)K_{ \pm}(\eta)\|_{\mathcal{B}(\mathcal{K})}^{2} \tag{5.20}\] is finite. To that end, let us introduce \(L_{\pm}(\rho,\tau):=H_{\pm}(\rho+\frac{\tau}{2},\rho-\frac{\tau}{2})\pm K_{\pm }(\rho+\frac{\tau}{2})K_{\pm}(\rho-\frac{\tau}{2})\), where \(\rho=\frac{\theta+\eta}{2}\), \(\tau=\theta-\eta\), and \(|\partial(\rho,\tau)/\partial(\theta,\eta)|=1\). In these coordinates, the integration region in (5.20) is given by \(\rho>0\), \(|\tau|<2\rho\). Let \(\rho_{0}\geq 1\) and \(\theta_{0}>0\) be some constants. The region \(\rho\leq\rho_{0}\) is compact; thus, the integral over this region is finite. The region \(\rho>\rho_{0},|\tau|>1\) also gives a finite contribution: Because of \[|p_{0}(\theta)-p_{0}(\eta)|=2m\operatorname{sh}\tfrac{|\tau|}{2}\operatorname{ sh}\rho\geq 2m(1-e^{-2\rho_{0}})\operatorname{sh}\tfrac{1}{2}\operatorname{ ch}\rho \tag{5.21}\] in this region, \(|\widetilde{g^{2}}(p_{0}(\theta)-p_{0}(\eta))|^{2}\) decays faster than any power of \(\operatorname{ch}\rho\), while \(\|L_{\pm}(\rho,\tau)\|_{\mathcal{B}(\mathcal{K})}^{2}\) cannot grow faster than a finite power of \(\operatorname{ch}\rho\) due to our hypothesis (5.4). The remaining region is given by \(\rho\geq\rho_{0}\) and \(|\tau|\leq 1\). By (5.4), there exists \(0<c<\frac{1}{4}\) and \(r>0\) such that \[\forall\theta\geq r:\,\|\hat{F}(2\theta+i\pi)\|_{\mathcal{B}(\mathcal{K})}\leq c \exp 2|\theta|\leq 4c\operatorname{ch}^{2}\theta. \tag{5.22}\] This implies, also using self-adjointness of \(\hat{F}\) (see (5.8)), that for all \(\theta\geq r\): \[H_{\pm}(\theta,\theta)=\operatorname{ch}^{2}\theta\,\hat{F}(i\pi)\pm\hat{F}(2 \theta+i\pi)\geq\operatorname{ch}^{2}\theta\,\mathbb{1}_{\mathcal{K}}-|\hat{F}(2 \theta+i\pi)|\geq(1-4c)\operatorname{ch}^{2}\theta\,\mathbb{1}_{\mathcal{K}}. \tag{5.23}\] Since \(c<\frac{1}{4}\), these \(H_{\pm}(\theta,\theta)\) are positive operators with a uniform spectral gap at \(0\). As a consequence, together with \(H_{\pm}(\theta,\theta)\), also the maps \(\theta\mapsto K_{\pm}(\theta)=\sqrt{H_{\pm}(\theta,\theta)}\) are analytic near \([r,\infty)\); see [11, 12, 13]. 
Correspondingly, \(L_{\pm}(\rho,\tau)\) is real-analytic in the region where \(\rho\geq\frac{|\tau|}{2}+r\). This contains the region \(\{(\rho,\tau):\rho\geq\rho_{0},|\tau|\leq 1\}\) if we choose \(\rho_{0}\geq\frac{1}{2}+r\). Now in this region, it can be shown that there exists \(a>0\) such that for any normalized \(u\in\mathcal{K}\), \[\big{|}\big{(}u,L_{\pm}(\rho,\tau)u\big{)}\big{|}\leq\tfrac{1}{2}\tau^{2}\sup_{| \xi|\leq 1}\big{|}(u,\tfrac{\partial^{2}}{\partial\xi^{2}}L_{\pm}(\rho,\xi)u \big{)}\big{|}\leq\tfrac{1}{2}a\tau^{2}\operatorname{ch}\rho. \tag{5.24}\] This estimate is based on the fact that \(L_{\pm}(\rho,\tau)=L_{\pm}(\rho,-\tau)\), and \(L_{\pm}(\rho,0)=0\) (which also uses positivity of \(H_{\pm}\)). The first inequality in (5.24) then follows from Taylor's theorem; the second is an estimate of the derivative by Cauchy's formula, using analyticity of \(\hat{F}(\cdot+i\pi)\) in a strip around \(\mathbb{R}\), and repeatedly applying the estimate (5.4), cf. [1, Proof of Lemma 5.3]. Since (5.4) is an estimate in operator norm, and the other parts of the argument are \(u\)-independent, one finds \(\|\frac{\partial^{2}}{\partial\xi^{2}}L_{\pm}(\rho,\tau)\|_{\mathcal{B}(\mathcal{K}) }\leq a\operatorname{ch}\rho\) with a constant \(a\). Finiteness of the integral (5.20) now follows from the estimate (5.24) together with \(|\widetilde{g^{2}}(p_{0}(\theta)-p_{0}(\eta))|\leq c^{\prime}(\tau^{4}\operatorname{ ch}^{4}\rho+1)^{-1}\) for some \(c^{\prime}>0\); cf. [1, Proof of Lemma 5.4]. The connection between the S-function and the minimal solution For the purpose of analyzing particular examples, it is helpful to introduce the _minimal solution_ of a model, a well-known concept in the form factor program [14] which plays an essential role in the description and classification of the observables of the model. We will here give a brief summary of necessary facts for the examples in Section 7 and a recipe for obtaining QEIs for other models. For technical details and full proofs we refer to Appendix A. Given an S-function, in generic cases including diagonal models and all our examples we can perform an eigenvalue decomposition into meromorphic complex-valued functions \(S_{i}\) and meromorphic projection-valued functions \(P_{i}\) such that \[S(\zeta)=\sum_{i=1}^{k}S_{i}(\zeta)P_{i}(\zeta) \tag{6.1}\] (see Proposition A.1). For each eigenfunction \(S\equiv S_{i}\) (omitting the index \(i\) for the moment), the _minimal solution_ is a meromorphic function \(F_{\min}:\mathbb{C}\to\mathbb{C}\) which is the most regular solution of the form factor equations at one-particle level (or Watson's equations), \[F_{\min}(\zeta)=S(-\zeta)F_{\min}(-\zeta),\quad F_{\min}(\zeta+i\pi)=F_{\min}(- \zeta+i\pi), \tag{6.2}\] subject to the normalization condition \(F_{\min}(i\pi)=1\) (see Appendix A.2). A general solution to (6.2) is then of the form \[F_{q}(\zeta)=q(\operatorname{ch}\zeta)F_{\min}(\zeta), \tag{6.3}\] where \(q\) is a rational function which is fixed by the pole- and zero-structure of \(F_{q}\), and \(q(-1)=1\) if \(F_{q}(i\pi)=1\) (Lemma A.5). Uniqueness of \(F_{\min}\) follows under mild growth conditions (Lemma A.2). Existence can be proven for a large class of (eigenvalues of) S-functions by employing a well-known integral representation. 
For this class, the function \[f[S]:\mathbb{R}\to\mathbb{R},\quad t\mapsto f[S](t):=-\tfrac{1}{\pi}\int_{0}^{ \infty}S^{\prime}(\theta)S(\theta)^{-1}\cos(\pi^{-1}\theta t)d\theta \tag{6.4}\] is well-defined and referred to as the _characteristic function_ of \(S\). In the case \(S(0)=1\), the minimal solution is then obtained from \(f=f[S]\) as the meromorphic continuation of \[F_{f}:\mathbb{R}\to\mathbb{C},\quad\theta\mapsto F_{f}(\theta):=\exp\left(2 \int_{0}^{\infty}f(t)\sin^{2}\frac{(i\pi-\theta)t}{2\pi}\,\frac{dt}{t\operatorname {sh}t}\right). \tag{6.5}\] For \(S(0)=-1\), an additional factor needs to be included (see Theorem A.6). For our analysis of QEIs, it will be crucial to control the large-rapidity behaviour of \(F_{\min}\) using properties of the characteristic function \(f[S]\). This is in fact possible as follows (Proposition A.11): For a continuous function \(f:[0,\infty)\to\mathbb{R}\), which is exponentially decaying at large arguments and second-order differentiable on some interval \([0,\delta],\delta>0\), and where \(f_{0}:=f(0)\), \(f_{1}:=f^{\prime}(0)\), the growth of \(F_{f}(\zeta)\) is bounded at large \(|\Re\zeta|\) as in \[\exists 0<c\leq c^{\prime},\,r>0:\,\forall|\Re\zeta|\geq r,\Im\zeta\in[0,2\pi]: \quad c\leq\frac{|F_{f}(\zeta)|}{|\Re\zeta|^{f_{1}}\exp|\Re\zeta|^{f_{0}/2}} \leq c^{\prime}. \tag{6.6}\] With this said, we have a recipe for a large class of models to determine whether a one-particle QEI in the sense of Theorem 5.1 holds, or no such QEI can hold: According to Theorem 3.2 and Corollary 3.4, we know that \[F_{2}^{\mu\nu}(\zeta;0)=G_{\text{free}}^{\mu\nu}(\tfrac{\zeta+\zeta^{\prime}} {2})F(\zeta^{\prime}-\zeta). \tag{6.7}\] Then \(F\) can be decomposed into the eigenbasis with respect to \(S\), namely \(F(\zeta):=\sum_{i=1}^{k}F_{i}(\zeta)\), where \(F_{i}(\zeta):=P_{i}(\zeta)F(\zeta)\). Let us restrict to parity-invariant \(F\) and constant eigenprojectors \(P_{i}\), i.e., having \(F=\mathbb{F}F\) and \(P_{i}=const\). Then (in some orthonormal basis) the components of each \(F_{i}\) will satisfy Watson's equations and take the form as in Eq. (6.3). Therefore, each \(F_{i}\) will be of the form \(F_{i}(\zeta)=Q_{i}(\operatorname{ch}\zeta)F_{i,\min}(\zeta)\), where \(Q_{i}\) is a rational function that takes values in \(\mathcal{K}^{\otimes 2}\) and \(F_{i,\min}\) is the minimal solution with respect to \(S_{i}\). In case of symmetries, the choice of \(Q_{i}\) is further restricted by \(\mathcal{G}\)-invariance. The asymptotic growth of the \(F_{i}\) will be bounded by the growth of the \(Q_{i}\) and the bound (6.6) for the \(F_{i,\min}\). In summary, depending on the growth of the \(Q_{i}\) and the \(F_{i,\min}\), we can determine the asymptotic growth of \(F\) and thus decide whether a one-particle QEI holds or not. QEIs in examples We now discuss some examples of integrable models which illustrate essential features of the abstract results developed in Sections 4 and 5. These include a model with bound states (Bullough-Dodd model, Sec. 7.1), an interacting model with a constant scattering function (Federbush model, Sec. 7.2), and a model with several particle species (\(O(n)\)-nonlinear sigma model, Sec. 7.3). As a first step, we review in our context the known results for models of one scalar particle type and without bound states [1]. 
That is, we consider \(\mathcal{K}=\mathbb{C}\), \(J\) the complex conjugation, \(\mathfrak{M}=\{m\}\) for the one-particle space, and \(\mathfrak{P}=\emptyset\) for the stress-energy tensor, with a scattering function of the form \[S(\zeta)=\epsilon\prod_{k=1}^{n}S(\zeta;b_{k}),\quad S(\zeta;b):=\frac{\operatorname {sh}\zeta-i\sin\pi b}{\operatorname{sh}\zeta+i\sin\pi b}, \tag{7.1}\] where \(\epsilon=\pm 1,n\in\mathbb{N}_{0}\), and \((b_{k})_{k\in\{1,\dots,n\}}\subset i\mathbb{R}+(0,1)\) is a finite sequence in which \(b_{k}\) and \(\overline{b_{k}}\) appear the same number of times. The minimal solution with respect to \(\zeta\mapsto S(\zeta;b)\) is known (see, e.g., [1, Eq. (2.5)] or [13, Eq. (4.13)]) and in our context given by \(F_{b,\min}(\zeta)=(-i\operatorname{sh}\frac{\zeta}{2})F_{f(\cdot;b)}(\zeta)\) with characteristic function \[f(t;b):=\frac{4\operatorname{sh}\frac{\zeta}{2}\operatorname{sh}\frac{(1-b)t }{2}\operatorname{sh}\frac{t}{2}-\operatorname{sh}t}{\operatorname{sh}t}. \tag{7.2}\] Since \(f(t;b)=-1+\mathcal{O}(t^{2})\) for \(t\to 0\), it follows that \(F_{b,\min}\) is uniformly bounded above and below on \(\mathbb{S}[0,2\pi]\) by Proposition A.11. More quantitatively, \(F_{b,\min}(\zeta+i\pi)\) converges uniformly to \[F_{b,\min}^{\infty}:=\lim_{\theta\to\pm\infty}F_{b,\min}(\theta+i\pi)=\exp \int_{\mathbb{R}}(t\operatorname{sh}t)^{-1}(1+f(t;b))dt<\infty \tag{7.3}\] for \(|\Re\zeta|\to\infty\) and \(|\Im\zeta|\leq\delta\) for any \(0<\delta<\pi\). This can be derived in the following way: Since \(g(t):=(t\operatorname{sh}t)^{-1}(1+f(t;b))\) is exponentially decaying and regular (in particular at \(t=0\)), it is integrable and \(F_{b,\min}^{\infty}\) is finite. As \(\log\operatorname{sh}\frac{\zeta}{2}=2\int_{0}^{\infty}(t\operatorname{sh}t)^ {-1}\sin^{2}\frac{\zeta\delta}{2\pi}dt\) for \(|\Im\zeta|<\pi\) one may write \(\log F_{b,\min}(\zeta+i\pi)=2\int_{\mathbb{R}}(t\operatorname{sh}t)^{-1}(1+f (t;b))\sin^{2}\frac{\zeta\pi}{2}dt\). In the limit \(|\Re\zeta|\to\infty\) the parts which are non-constant with respect to \(\zeta\) vanish due to the Riemann-Lebesgue lemma for \(|\Im\zeta|<\pi\); uniformity follows from \(g(t)\exp(\pm\frac{t\Im\zeta}{\pi})\) being uniformly \(L^{1}\)-bounded in \(|\Im\zeta|\leq\delta\) (see, e.g., proof of Thm. IX.7 in [13]). Next, according to Corollary A.4, the minimal solution with respect to \(S\) is given by \[F_{S,\min}(\zeta)=(i\operatorname{sh}\frac{\zeta}{2})^{-s(\epsilon,n)}\prod_{ k=1}^{n}F_{b_{k},\min}(\zeta) \tag{7.4}\] with \(s(+1,n)=2\lfloor\frac{n}{2}\rfloor\) and \(s(-1,n)=2\lfloor\frac{n-1}{2}\rfloor\). For the stress-energy tensor at one-particle level we obtain (using Corollary 3.4, Lemma A.5, and Corollary A.3) that \[F_{2}^{\mu\nu}(\zeta_{1},\zeta_{2}+i\pi)=G_{\operatorname{free}}^{\mu\nu} \left(\tfrac{\zeta_{1}+\zeta_{2}}{2}\right)F_{q}(\zeta_{1}-\zeta_{2}+i\pi), \quad F_{q}(\zeta)=q(\operatorname{ch}\zeta)F_{S,\min}(\zeta+i\pi) \tag{7.5}\] with \(q\) a polynomial having real-valued coefficients and \(q(-1)=1\). Let \(c:=2^{s(\epsilon,n)-\deg q}|c_{q}|\prod_{k=1}^{n}F_{b_{k},\min}^{\infty}\), where \(c_{q}\) is the leading coefficient of \(q\). 
By the preceding remarks we find that for some \(c^{\prime},c^{\prime\prime}\) with \(0<c^{\prime}<c<c^{\prime\prime}\) and \(\delta,r>0\): \[\forall|\Re\zeta|\geq r,|\Im\zeta|\leq\delta:\quad c^{\prime}\leq\frac{|F_{q}( \zeta+i\pi)|}{\exp((\deg q-\tfrac{1}{2}s(\epsilon,n))|\Re\zeta|)}\leq c^{ \prime\prime}, \tag{7.6}\] where \(c^{\prime}\) and \(c^{\prime\prime}\) can be chosen arbitrarily close to \(c\) for large enough \(r\). We can therefore conclude by Theorem 5.1 that a QEI of the form (5.5) holds if \(\deg q<\tfrac{1}{2}s(\epsilon,n)+1\) and cannot hold if \(\deg q>\tfrac{1}{2}s(\epsilon,n)+1\). In case that \(\deg q=\tfrac{1}{2}s(\epsilon,n)+1\), details of \(q\) become relevant. This can only occur if \(s(\epsilon,n)\) is even, i.e., \(\epsilon=+1\). If here \(c\) is less (greater) than \(\tfrac{1}{4}\) then a QEI holds (cannot hold). ### (Generalized) Bullough-Dodd model We now consider a class of integrable models which treat a single neutral scalar particle that is its own bound state. The presence of the bound state requires the S-function to have a specific "bound state pole" in the physical strip with imaginary positive residue and to satisfy a bootstrap equation for the self-fusion process. Such S-functions are classified in [11, Appendix A]. The Bullough-Dodd model itself (see [1, 10] and references therein) corresponds to the maximally analytic element of this class which is given by \(\zeta\mapsto S_{\mathrm{BD}}(\zeta;b)=S(\zeta;-\frac{2}{3})S(\zeta;\frac{b}{3})S( \zeta;\frac{2-b}{3})\) where \(b\in(0,1)\) is a parameter of the model. The full class allows for so-called CDD factors [10] and an exotic factor of the form \(\zeta\mapsto e^{ia\sinh\zeta},a>0\). In Lagrangian QFT, from a one-component field \(\varphi\) and a Lagrangian \[\mathcal{L}_{\mathrm{BD}}=\tfrac{1}{2}\partial_{\mu}\varphi\partial^{\mu} \varphi-\frac{m^{2}}{6g^{2}}(2e^{g\varphi}+e^{-2g\varphi}) \tag{7.7}\] one obtains as S-function \(S_{\mathrm{BD}}(\cdot;b)\) under the (perturbative) correspondence \(b=\frac{g^{2}}{2\pi}(1+\frac{g^{2}}{4\pi})^{-1}\)[12]. For more general elements of the described class no Lagrangian is known [10]. In our context, we will consider the generalized variant of the model, but for simplicity restrict to finitely many CDD factors and do not include the exotic factor: **Definition 7.1**.: _The **generalized Bullough-Dodd model** is specified by the mass parameter \(m>0\) and a finite sequence \((b_{k})_{k\in\{1,\ldots,n\}}\subset(0,1)+i\mathbb{R},n\in\mathbb{N},\) which has an odd number of real elements and where the non-real \(b_{k}\) appear in complex conjugate pairs. The one-particle little space is given by \(\mathcal{K}=\mathbb{C}\), \(\mathcal{G}=\{e\}\), \(V=1_{\mathcal{C}}\), and \(M=m1_{\mathcal{C}}\). \(J\) corresponds to complex conjugation. The \(S\)-function \(S_{\mathrm{gBD}}\) is of the form_ \[S_{\mathrm{gBD}}(\zeta)=S(\zeta;-\tfrac{2}{3})\prod_{k=1}^{n}S(\zeta;\tfrac{b_{ k}}{3})S(\zeta;\tfrac{2-b_{k}}{3}). \tag{7.8}\] Clearly, \(S_{BD}\) is obtained from \(S_{gBD}\) for \(n=1\) and \(b_{1}=b\). Since \(S_{\mathrm{gBD}}\) is defined as a product of a finite number of factors of the form \(S(\cdot;b)\), its minimal solutions exists and is given by, see Corollary A.4, \[F_{\mathrm{gBD,min}}(\zeta)=(-i\operatorname{sh}\tfrac{\zeta}{2})^{-2n}F_{-2/3,\min}(\zeta)\prod_{k=1}^{n}F_{b_{k}/3,\min}(\zeta)F_{(2-b_{k})/3,\min}(\zeta). \tag{7.9}\] It enters here that \(S_{\mathrm{gBD}}(0)=-1\). 
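For orientation, the following minimal numerical sketch (assuming NumPy and SciPy; it is not part of the analysis above, and all function names and the coupling value \(b=0.4\) are purely illustrative) evaluates the sinh-Gordon-type building blocks \(F_{b,\min}\) entering Eqs. (7.4) and (7.9) by quadrature of the integral representation (6.5), together with the plateau value \(F^{\infty}_{b,\min}\) of Eq. (7.3). Reading the first factor in the numerator of Eq. (7.2) as \(\operatorname{sh}\frac{bt}{2}\), consistent with \(f(t;b)=-1+\mathcal{O}(t^{2})\), the characteristic function simplifies to \(f(t;b)=-\operatorname{ch}((b-\tfrac{1}{2})t)/\operatorname{ch}\tfrac{t}{2}\), which is the numerically stable form used below.

```python
# Minimal numerical sketch (illustration only): the sinh-Gordon-type blocks F_{b,min}
# entering Eqs. (7.4) and (7.9), evaluated on the line Im(zeta) = pi.
import numpy as np
from scipy.integrate import quad

def f_char(t, b):
    # Characteristic function f(t;b) of S(.;b), Eq. (7.2), in the equivalent,
    # cancellation-free form f(t;b) = -ch((b - 1/2) t) / ch(t/2).
    return -np.cosh((b - 0.5) * t) / np.cosh(t / 2)

def F_f_line(theta, b, tmax=100.0):
    # F_{f(.;b)}(theta + i*pi) via Eq. (6.5); for zeta = theta + i*pi one has
    # sin^2((i*pi - zeta) t / (2*pi)) = sin^2(theta t / (2*pi)), so the integrand is real.
    integrand = lambda t: 2 * f_char(t, b) * np.sin(theta * t / (2 * np.pi))**2 / (t * np.sinh(t))
    val, _ = quad(integrand, 1e-10, tmax, limit=400)
    return np.exp(val)

def F_b_min_line(theta, b):
    # F_{b,min}(theta + i*pi) = (-i sh((theta + i*pi)/2)) F_f(theta + i*pi) = ch(theta/2) F_f.
    return np.cosh(theta / 2) * F_f_line(theta, b)

def F_b_min_inf(b, tmax=100.0):
    # Plateau value F^infty_{b,min} of Eq. (7.3); the integrand is even, so 2 * int_0^infty.
    integrand = lambda t: (1.0 + f_char(t, b)) / (t * np.sinh(t))
    val, _ = quad(integrand, 1e-10, tmax, limit=400)
    return np.exp(2.0 * val)

if __name__ == "__main__":
    b = 0.4  # illustrative coupling value
    for theta in (1.0, 5.0, 10.0, 20.0):
        print(theta, F_b_min_line(theta, b))
    print("plateau F^inf:", F_b_min_inf(b))  # the values above should approach this number
```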
The presence of bound states in the model implies the presence of poles in the form factors of local operators [1], in particular also for \(F_{2}^{\mu\nu}\). For \(F_{1}^{\mu\nu}\neq 0\) we expect a single first-order pole of \(F_{2}^{\mu\nu}(\zeta,\zeta^{\prime};x)\) at \(\zeta^{\prime}-\zeta=i\frac{2\pi}{3}\). In case that \(F_{1}^{\mu\nu}=0\) we expect \(F_{2}^{\mu\nu}(\zeta,\zeta^{\prime};x)\) to have no poles in \(\mathbb{S}[0,\pi]\). **Lemma 7.2** (Stress tensor in the generalized BD model).: _A tensor-valued function \(F_{2}^{\mu\nu}:\mathbb{C}^{2}\times\mathbb{M}\to\mathcal{K}^{\otimes 2}\) is a stress-energy tensor at one-particle level with respect to \(S_{\mathrm{gBD}}\) and \(\mathfrak{P}\subset\{i\frac{2\pi}{3}\}\) iff it is of the form_ \[F_{2}^{\mu\nu}(\theta,\eta+i\pi)=G_{\mathrm{free}}^{\mu\nu}\left(\tfrac{\theta +\eta}{2}\right)e^{i(p(\theta;m)-p(\eta;m))\cdot x}F(\eta-\theta+i\pi), \tag{7.10}\] _with_ \[F(\zeta)=q(\operatorname{ch}\zeta)(-2\operatorname{ch}\zeta-1)^{-1}F_{\mathrm{ gBD,min}}(\zeta), \tag{7.11}\] _where \(F_{\mathrm{gBD,min}}\) is the unique minimal solution with respect to \(S_{\mathrm{gBD}}\) and where \(q\) is a polynomial with real coefficients and \(q(-1)=1\)._ Proof.: By Theorem 3.2 and Corollary 3.4, \(F_{2}^{\mu\nu}\) is given by (7.10), where \(F:\mathbb{C}\to\mathbb{C}\) satisfies properties (a)-(g) of Theorem 3.2 with respect to \(S_{\mathrm{gBD}}\). According to Lemma A.5, \(F\) is of the form (7.11); the factor \((-2\operatorname{ch}\zeta-1)^{-1}\) takes the one possible first-order pole within \(S[0,\pi]\), namely at \(i\frac{2\pi}{3}\), into account. That \(q\) has real coefficients is a consequence of property (e) and Corollary A.3. Conversely, it is clear that \(F_{2}^{\mu\nu}\), respectively \(F\), as given above has the properties (a)-(g). **Theorem 7.3** (QEI for the generalized BD model).: _Let the stress-energy tensor at one-particle level be given by \(F_{2}^{\mu\nu}\) as in Eq. (7.10). Then a QEI of the form_ \[\forall g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\,\exists c_{g}>0\,\forall \varphi\in\mathcal{D}(\mathbb{R},\mathcal{K}):\quad\langle\varphi,T^{00}(g^{2}) \varphi\rangle\geq-c_{g}\|\varphi\|_{2}^{2} \tag{7.12}\] _holds if \(\deg q<n+1\) and cannot hold if \(\deg q>n+1\). In the case \(\deg q=n+1\), introduce_ \[c:=2^{2n-\deg q}|c_{q}|F_{-2/3,\min}^{\infty}\prod_{k=1}^{n}F_{b_{k}/3,\min}^{ \infty}F_{(2-b_{k})/3,\min}^{\infty}, \tag{7.13}\] _where \(c_{q}\) denotes the leading coefficient of \(q\). If here \(c\) is less (greater) than \(\frac{1}{4}\) then a QEI holds (cannot hold)._ Proof.: As the minimal solution \(F_{\text{gBD,min}}\) is given as a finite product of factors \(\zeta\mapsto(-i\operatorname{sh}\frac{\zeta}{2})\) and \(F_{b,\min}\), the asymptotic growth can be estimated analogously to the procedure in the introduction of Section 7. Similar to the estimate (7.6), one obtains for some \(c^{\prime}\) and \(c^{\prime\prime}\) with \(0<c^{\prime}<c<c^{\prime\prime}\) and some \(\epsilon,r>0\): \[\forall|\Re\zeta|\geq r,|\Im\zeta|\leq\epsilon:\quad c^{\prime}\leq\frac{|F_{ 9}(\zeta+i\pi)|}{\exp((\deg q-n)|\Re\zeta|)}\leq c^{\prime\prime}, \tag{7.14}\] where \(c^{\prime}\) and \(c^{\prime\prime}\) can be chosen arbitrarily close to \(c\) for large enough \(r\). Noting that parity covariance is trivial for \(\mathcal{K}=\mathbb{C}\) and applying Theorem 5.1 yields the conclusions from above depending on \(\deg q\) and \(c\). 
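To complement Theorem 7.3, here is a hedged sketch of its case distinction (again assuming NumPy/SciPy; the function names and parameter values are illustrative, and only real \(b_{k}\) are treated for simplicity): for \(\deg q\neq n+1\) the verdict is immediate, while in the borderline case \(\deg q=n+1\) the constant \(c\) of Eq. (7.13) is assembled from plateau values computed as in the previous sketch via Eq. (7.3), applying the same quadrature also to the \(b=-\tfrac{2}{3}\) factor, and compared with \(\tfrac{1}{4}\).

```python
# Hedged sketch of the QEI criterion of Theorem 7.3 for the generalized Bullough-Dodd model.
import numpy as np
from scipy.integrate import quad

def F_inf(b, tmax=100.0):
    # Plateau value F^infty_{b,min}, Eq. (7.3), with f(t;b) = -ch((b-1/2)t)/ch(t/2)
    # as in the sketch above; the even integrand is integrated as 2 * int_0^infty.
    f = lambda t: -np.cosh((b - 0.5) * t) / np.cosh(t / 2)
    integrand = lambda t: (1.0 + f(t)) / (t * np.sinh(t))
    val, _ = quad(integrand, 1e-10, tmax, limit=400)
    return np.exp(2.0 * val)

def bd_qei_criterion(b_list, deg_q, c_q_leading):
    # Case distinction of Theorem 7.3; in the borderline case the constant c of Eq. (7.13)
    # is compared with 1/4.
    n = len(b_list)
    if deg_q < n + 1:
        return "QEI holds (deg q < n + 1)"
    if deg_q > n + 1:
        return "no QEI possible (deg q > n + 1)"
    c = 2.0**(2 * n - deg_q) * abs(c_q_leading) * F_inf(-2.0 / 3.0)
    for bk in b_list:
        c *= F_inf(bk / 3.0) * F_inf((2.0 - bk) / 3.0)
    verdict = "QEI holds" if c < 0.25 else "no QEI possible" if c > 0.25 else "undecided"
    return f"borderline case deg q = n + 1: c = {c:.4f}, so {verdict}"

# Illustrative parameters: the plain Bullough-Dodd model (n = 1, b_1 = b = 0.4) with a
# degree-2 prefactor q of leading coefficient 0.05.
print(bd_qei_criterion([0.4], deg_q=2, c_q_leading=0.05))
```

In particular, for the plain Bullough-Dodd model a degree-\(2\) prefactor with a sufficiently small leading coefficient is still compatible with a one-particle QEI, in accordance with the borderline case of the theorem.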
### Federbush model The Federbush model is a well-studied integrable QFT model with a constant, but non-trivial, scattering function; see [11, 12, 13, 14, 15] and references therein. In Lagrangian QFT, the traditional Federbush model is described in terms of two Dirac fields \(\Psi_{1}\), \(\Psi_{2}\) by a Lagrangian density4 Footnote 4: The fields \(\Psi_{j}\) take values in \(\mathbb{C}^{2}\). \(\epsilon_{\mu\nu}\) denotes the antisymmetric tensor with \(\epsilon_{01}=-\epsilon_{10}=1\). Other standard notations are \(\tilde{\psi}_{j}:=\psi_{j}^{\dagger}\gamma_{0}\) and \(\not{\partial}=\gamma^{\mu}\partial_{\mu}\) with anticommuting matrices \(\gamma^{0},\gamma^{1}\in\operatorname{Mat}(2\times 2,\mathbb{C})\), \(\left[\gamma^{\mu},\gamma^{\nu}\right]_{+}=2g^{\mu\nu}\). \[\mathcal{L}_{\text{Fb}}=\sum_{j=1}^{2}\tfrac{1}{2}\bar{\Psi}_{j}(i\not{ \partial}-m_{j})\Psi_{j}-\lambda\pi\epsilon_{\mu\nu}J_{1}^{\mu}J_{2}^{\nu}, \quad J_{j}^{\mu}=\bar{\Psi}_{j}\gamma^{\mu}\Psi_{j}. \tag{7.15}\] The Federbush model obeys a global \(U(1)^{\oplus 2}\) symmetry since \(\mathcal{L}_{\text{Fb}}\) is invariant under \[\Psi_{j}(x)\mapsto e^{2\pi i\kappa}\Psi_{j}(x),\quad\Psi_{j}^{\dagger}(x) \mapsto e^{-2\pi i\kappa}\Psi_{j}^{\dagger}(x),\quad\kappa\in\mathbb{R},j=1,2. \tag{7.16}\] The stress-energy tensor of the model has been computed before [14] and its trace (Eq. (44) in the reference) is given by \[T_{\mu}^{\mu}=\sum_{j=1}^{2}m_{j}\colon\bar{\Psi}_{j}\Psi_{j}\colon \tag{7.17}\] which agrees with the (trace of the) stress-energy tensor of two free Dirac fermions. Note in particular that it is parity-invariant. In our framework, the model can be described in the following way: **Definition 7.4**.: _The **Federbush model** is specified by three parameters, the particle masses \(m_{1},m_{2}\in(0,\infty)\) and the coupling parameter \(\lambda\in(0,\infty)\). The symmetry group is \(\mathcal{G}=U(1)^{\oplus 2}\). The one-particle little space is given by \(L=(\mathcal{K},V,J,M)\) with \(L=L_{1}\oplus L_{2}\) and where for \(j=1,2\) we define \(\mathcal{K}_{j}=\mathbb{C}^{2}\) and_ \[V_{j}(\kappa)=\begin{pmatrix}e^{2\pi i\kappa}&0\\ 0&e^{-2\pi i\kappa}\end{pmatrix},\quad J_{j}=\begin{pmatrix}0&-1\\ -1&0\end{pmatrix},\quad M_{j}=m_{j}\begin{pmatrix}1&0\\ 0&1\end{pmatrix} \tag{7.18}\] _as operators on \(\mathcal{K}_{j}\) where \(J_{j}\) is antilinear and for the choice of basis \(\{e_{j}^{(+)}\equiv(1,0)^{t}\), \(e_{j}^{(-)}\equiv(0,1)^{t}\}\). The S-function is denoted by \(S_{\text{Fb}}\in\mathcal{B}(\mathcal{K}^{\otimes 2})\). Its only nonvanishing components, enumerated as \(\alpha,\beta=1+,1-,2+,2-\) corresponding to \(e_{1/2}^{(\pm)}\), are given by \(S_{\alpha\beta}:=(S_{\text{Fb}})_{\alpha\beta}^{\beta\alpha}\) with_ \[S=-\begin{pmatrix}1&1&e^{2\pi i\lambda}&e^{-2\pi i\lambda}\\ 1&1&e^{-2\pi i\lambda}&e^{2\pi i\lambda}\\ e^{-2\pi i\lambda}&e^{2\pi i\lambda}&1&1\\ e^{2\pi i\lambda}&e^{-2\pi i\lambda}&1&1\end{pmatrix}. \tag{7.19}\] Note that \(S_{\text{Fb}}\) is a constant diagonal S-function; e.g., \(S_{\alpha\beta}=S_{\beta\alpha}^{*}=S_{\beta\alpha}^{-1}\) imply that \(S_{\text{Fb}}\) is self-adjoint and unitary. Note also that, \(S_{\alpha\beta}=S_{\bar{\alpha}\bar{\beta}}\neq S_{\beta\alpha}\), where \(\bar{\alpha}\) corresponds to \(\alpha\in\{1+,1-,2+,2-\}\) by flipping plus and minus. These relations correspond to the fact that \(S_{\text{Fb}}\) is C-, PT- and CPT- but not P- or T-symmetric. However, \(S_{\text{Fb}}\) has a P-invariant diagonal (in the sense of Eq. 
(4.1)) due to \(S_{\alpha\bar{\alpha}}=S_{\bar{\alpha}\alpha}\) (or Remark 4.4). **Lemma 7.5** (Stress tensor for the Federbush model).: _A tensor-valued function \(F_{2}^{\mu\nu}:\mathbb{C}^{2}\times\mathbb{M}\to\mathcal{K}^{\otimes 2}\) is a stress-energy-tensor at one-particle level with respect to \(S_{\text{Fb}}\), is diagonal in mass (Eq. (3.13)), and has no poles, \(\mathfrak{P}=\emptyset\), iff it is of the form_ \[F_{2}^{\mu\nu}(\theta,\eta+i\pi;x)=G_{\text{free}}^{\mu\nu}\left(\tfrac{\theta+ \eta}{2}\right)e^{iP(\theta,\eta+i\pi).x}F(\eta-\theta+i\pi) \tag{7.20}\] _with_ \[F(\zeta)=\sum_{j=1}^{2}\left(-i\operatorname{sh}(\tfrac{\zeta}{2})q_{j}^{s}( \operatorname{ch}\zeta)\,e_{j}^{(+)}\otimes_{s}e_{j}^{(-)}+\operatorname{ch}( \tfrac{\zeta}{2})q_{j}^{\operatorname{as}}(\operatorname{ch}\zeta)\,e_{j}^{(+) }\otimes_{\operatorname{as}}e_{j}^{(-)}\right), \tag{7.21}\] _for \(e_{j}^{(+)}\otimes_{\operatorname{as}}e_{j}^{(-)}:=e_{j}^{(+)}\otimes e_{j}^{( -)}\pm e_{j}^{(-)}\otimes e_{j}^{(+)}\) and where each \(q_{j}^{\operatorname{s}/\operatorname{as}}\) is a polynomial with real coefficients and \(q_{j}^{\operatorname{s}}(-1)=1\)._ _The stress-energy tensor at one-particle level is parity-covariant iff \(q_{1}^{\operatorname{as}}=q_{2}^{\operatorname{as}}\equiv 0\)._ Proof.: By Theorem 3.2 and Corollary 3.4 we have that Eq. (7.20) holds with \(F\) satisfying properties 1-2. \(U(1)^{\oplus 2}\)-invariance, property 1, is equivalent to \[\forall\zeta\in\mathbb{C},\boldsymbol{\kappa}\in\mathbb{R}^{2},r,s\in\{\pm\},j,k\in\{1,2\}:\qquad\left(1-e^{2\pi i(r\kappa_{j}+\operatorname{as}_{k})} \right)(e_{j}^{(r)}\otimes e_{k}^{(s)},F(\zeta))=0.\] As a consequence, \((e_{j}^{(r)}\otimes e_{k}^{(s)},F(\zeta))=0\) unless \(j=k\) and \(r=-s\). On the remaining components, \(S\) acts like \(-\mathbb{F}\), thus \[F(\zeta)=-\mathbb{F}F(-\zeta)=\mathbb{F}F(2i\pi-\zeta), \tag{7.22}\] which implies \[F(\zeta)=\sum_{j=1}^{2}\left(-i\operatorname{sh}(\tfrac{\zeta}{2})f_{j}^{s}( \zeta)e_{j}^{(+)}\otimes_{s}e_{j}^{(-)}+\operatorname{ch}(\tfrac{\zeta}{2})f _{j}^{\operatorname{as}}(\zeta)e_{j}^{(+)}\otimes_{as}e_{j}^{(-)}\right) \tag{7.23}\] for some functions \(f_{j}^{\operatorname{s}/\operatorname{as}}\), where we have factored out the necessary zeroes due to the relations (7.22). Then from the properties of \(F\) we find \(f_{j}^{\operatorname{s}/\operatorname{as}}:\mathbb{C}\to\mathbb{C}\) to be analytic and to satisfy \[f_{j}^{\operatorname{s}/\operatorname{as}}(\zeta)=f_{j}^{\operatorname{s}/ \operatorname{as}}(-\zeta)=f_{j}^{\operatorname{s}/\operatorname{as}}(2\pi i -\zeta),\quad f_{j}^{\operatorname{s}}(i\pi)=1, \tag{7.24}\] and \(f_{j}^{\operatorname{as}}(i\pi)\) unconstrained. Moreover, \(f_{j}^{\operatorname{s}/\operatorname{as}}\) are regular in the sense of Eq. (A.4) of Lemma A.5; the lemma implies that \(f_{j}^{\operatorname{s}/\operatorname{as}}(\zeta)=q_{j}^{\operatorname{s}/ \operatorname{as}}(\operatorname{ch}\zeta)\) with \(q_{j}^{\operatorname{s}}(-1)=1\). Since \(J^{\otimes 2}F(\zeta+i\pi)=F(\bar{\zeta}+i\pi)\), \(Je_{j}^{(\pm)}=-e_{j}^{(\mp)}\), and by the antilinearity of \(J\), we find that \(q_{j}^{\operatorname{s}/\operatorname{as}}(\zeta+i\pi)=\overline{q_{j}^{ \operatorname{s}/\operatorname{as}}(\bar{\zeta}+i\pi)}\) such that \(q_{j}^{\operatorname{s}/\operatorname{as}}\) have real coefficients. 
Parity-invariance of \(F\), i.e., \(\mathbb{F}F=F\), is equivalent to \(q_{j}^{\operatorname{as}}=-q_{j}^{\operatorname{as}}\), thus \(q_{j}^{\operatorname{as}}=0\), because of \((1\mp\mathbb{F})\,e_{j}^{(+)}\otimes_{\operatorname{s}/\operatorname{as}}e_{j }^{(-)}=0\). We see that the stress-energy tensor does not need to be parity-covariant. Concerning QEIs we state: **Theorem 7.6** (QEI for the Federbush model).: _The parity-covariant part of the stress-energy tensor at one-particle level, given by \(F_{2}\) in Eq. (7.20) with \(q_{1}^{\operatorname{as}}=q_{2}^{\operatorname{as}}\equiv 0\), satisfies a one-particle-QEI of the form_ \[\forall g\in\mathcal{S}_{\mathbb{R}}(\mathbb{R})\,\exists c_{g}>0\,\forall \varphi\in\mathcal{D}(\mathbb{R},\mathcal{K}):\quad\langle\varphi,T_{P}^{00}( g^{2})\varphi\rangle\geq-c_{g}\|\varphi\|_{2}^{2} \tag{7.25}\] _iff \(q_{1}^{\operatorname{s}}=q_{2}^{\operatorname{s}}\equiv 1\)._ _The candidate stress-energy tensor given by Eq. (4.3) (i.e. for \(q_{1}^{\operatorname{s}}=q_{2}^{\operatorname{s}}=1,q_{1}^{\operatorname{as}}=q _{2}^{\operatorname{as}}=0\)) satisfies a QEI of the form_ \[T^{00}(g^{2})\geq-\left(\sum_{j=1}^{2}\frac{m_{j}^{3}}{2\pi^{2}}\int_{1}^{ \infty}ds|\widetilde{g}(m_{j}s)|^{2}w_{-}(s)\right)\mathbb{1} \tag{7.26}\] _with \(w_{-}(s)=s\sqrt{s^{2}-1}-\log(s+\sqrt{s^{2}-1})\) and in the sense of a quadratic form on \(\mathcal{D}\times\mathcal{D}\)._ Proof.: In case that one \(q_{j}^{\operatorname{s}}\neq 1\) we have for some \(c,r>0\) that \(|q_{j}^{\operatorname{s}}(\operatorname{ch}\zeta)\operatorname{sh}\tfrac{ \zeta}{2}|\geq c^{3|\mathbb{R}\zeta|/2}\) for all \(|\Re\zeta|\geq r\). Therefore, no QEI can hold due to Theorem 5.1(a) and Remark 5.2 with \(u=e_{j}^{(+)}\pm e_{j}^{(-)}\) for some \(j\in\{1,2\}\). For \(q_{1}^{\operatorname{s}}=q_{2}^{\operatorname{s}}\equiv 1\) (and \(q_{1}^{\operatorname{as}}=q_{2}^{\operatorname{as}}\equiv 0\)), Theorem 5.1(b) yields Eq. (7.25). In that case \(F(\zeta)=(-i\operatorname{sh}\tfrac{\pi}{2})I_{\otimes 2}\) which coincides with the expression in (4.2) due to \(P_{+}I_{\otimes 2}=0\) and \(P_{-}I_{\otimes 2}=I_{\otimes 2}\). Since \(S_{\operatorname{Pb}}\) is constant and diagonal by Remark 4.4, Theorem 4.3 applies and yields Eq. (7.26). We see that for the Federbush model, requiring a 1-particle QEI fixes a unique (parity-covariant part of the) stress-energy tensor at one-particle level that extends - since \(S_{\operatorname{Pb}}\) is constant - to a dense domain of the full interacting state space. The parity-covariant part is in agreement with preceding results for the stress-energy tensor at 1-particle level [14, Sec. 4.2.3]. This indicates that the parity-violating part of our expression is indeed not relevant for applications in physics. Our candidate for the full stress-energy tensor has the same trace as in [13]. That the respective energy density satisfies a generic QEI is no surprise after all, as the QEI results are solely characterized in terms of the trace of the stress-energy tensor which here agrees with that of two free Dirac fermions (as was indicated also by Eq. (7.17)). ### O(n)-nonlinear sigma model The \(O(n)\)-nonlinear sigma model is a well-studied integrable QFT model of \(n\) scalar fields \(\phi_{j},j=1,\ldots,n\), that obey an \(O(n)\)-symmetry. For a review see [1, Secs. 6-7] and references therein. 
In Lagrangian QFT it can be described by a combination of a free Lagrangian and a constraint \[\mathcal{L}_{\rm NLS}=\tfrac{1}{2}\partial_{\mu}\Phi^{t}\partial^{\mu}\Phi, \quad\Phi^{t}\Phi=\frac{1}{2g},\quad\Phi=(\phi_{1},\ldots,\phi_{n})^{t}, \tag{7.27}\] where \(g\in(0,\infty)\) is a dimensionless coupling constant. Clearly, \(\mathcal{L}_{\rm NLS}\) is invariant for \(\Phi\) transforming under the vector representation of \(O(n)\), i.e., \[\Phi(x)\mapsto O\Phi(x),\quad O\in\mathrm{Mat}_{\mathbb{R}}(n\times n),\quad O ^{t}=O^{-1}. \tag{7.28}\] Note that the model - other than one might expect naively from \(\mathcal{L}_{\rm NLS}\) - describes massive particles. This is known as dynamical mass transmutation; the resulting mass of the \(O(n)\)-multiplet can take arbitrary positive values depending on a choice of a mass scale and corresponding renormalized coupling constant; see, e.g., [1, Sec. 7.2.1] and [18]. In our framework, the model can be described in the following way: **Definition 7.7**.: _The \(O(n)\)**-nonlinear sigma model** is specified by two parameters, the particle number \(n\in\mathbb{N}\), \(n\geq 3\), and the mass \(m>0\). The one-particle little space \((\mathcal{K},V,J,M)\) is given by \(\mathcal{K}=\mathbb{C}^{n}\) with the defining/vector representation \(V\) of \(\mathcal{G}=O(n)\), \(M=m\mathbb{1}_{\mathbb{C}^{n}}\), and where \(J\) is complex conjugation in the canonical basis of \(\mathbb{C}^{n}\). The S-function is given by_ \[S_{\rm NLS}(\zeta):=(b(\zeta)\mathbb{1}+c(\zeta)\mathbb{F}+d(\zeta)\mathbb{K} )\mathbb{F}\,, \tag{7.29}\] _where in the canonical basis of \(\mathbb{C}^{n}\)_ \[\mathbb{1}_{\alpha\beta}^{\gamma\delta}=\delta_{\alpha}^{\gamma}\delta_{\beta }^{\delta},\quad\mathbb{F}_{\alpha\beta}^{\gamma\delta}=\delta_{\alpha}^{ \delta}\delta_{\beta}^{\gamma},\quad\mathbb{K}_{\alpha\beta}^{\gamma\delta}= \delta^{\gamma\delta}\delta_{\alpha\beta},\quad\alpha,\beta,\gamma,\delta=1, \ldots,n, \tag{7.30}\] \[b(\zeta)=s(\zeta)s(i\pi-\zeta),\quad c(\zeta)=-i\pi\nu\zeta^{-1}b(\zeta), \quad d(\zeta)=-i\pi\nu(i\pi-\zeta)^{-1}b(\zeta), \tag{7.31}\] _and_ \[\nu=\tfrac{2}{n-2},\quad s(\zeta)=\frac{\Gamma\left(\tfrac{\nu}{2}+\tfrac{ \zeta}{2\pi i}\right)\Gamma\left(\tfrac{1}{2}+\tfrac{\zeta}{2\pi i}\right)}{ \Gamma\left(\tfrac{1+\nu}{2}+\tfrac{\zeta}{2\pi i}\right)\Gamma\left(\tfrac{ \zeta}{2\pi i}\right)}. \tag{7.32}\] \(S_{\rm NLS}\) is the unique maximally analytic element of the class of \(O(n)\)-invariant S-functions [27]. Maximal analyticity means here that in the physical strip \(\mathbb{S}(0,\pi)\), the S-function has no poles and the minimal amount of zeroes which are compatible with the axioms for an S-function, i.e., (S1)-(S7). Its eigenvalue decomposition is given by \[S_{\rm NLS}(\zeta)=\left(S_{+}(\zeta)\tfrac{1}{2}\left(\mathbb{1}+\mathbb{F}- \tfrac{2}{n}\mathbb{K}\right)+S_{-}(\zeta)\tfrac{1}{2}\left(\mathbb{1}- \mathbb{F}\right)+S_{0}(\zeta)\tfrac{1}{n}\mathbb{K}\right)\mathbb{F}, \tag{7.33}\] with \(S_{\pm}=b\pm c\) and \(S_{0}=b+c+nd\). The S-function is P-, C-, and T-symmetric and satisfies \(S_{\rm NLS}(0)=-\mathbb{F}\). As a first step, we establish existence of the minimal solution with respect to \(S_{0}\) and an estimate of its asymptotic growth: **Lemma 7.8**.: _The minimal solution with respect to \(S_{0}\) exists and is given by \(F_{0,\min}(\zeta)=(-i\operatorname{sh}\tfrac{\zeta}{2})F_{f_{0}}(\zeta)\) with characteristic function_ \[f_{0}(t)=\frac{e^{-t}+e^{-\nu t}}{e^{t}+1}. 
\tag{7.34}\] _Moreover, there exist \(0<c\leq c^{\prime}\), \(r>0\) such that_ \[\forall|\Re\zeta|\geq r,\Im\zeta\in[0,2\pi]:\quad c\leq\frac{|F_{0,\min}( \zeta)|}{|\Re\zeta|^{-(1+\frac{\nu}{2})}\exp|\Re\zeta|}\leq c^{\prime}. \tag{7.35}\] Proof.: The characteristic function \(f_{0}=f[-S_{0}]\) is computed in Appendix B. Clearly, it is smooth and exponentially decaying. Applying Lemma A.2 (uniqueness) and Theorem A.6 (existence) we find that \(F_{f_{0}}\) is well-defined and that \(F_{0,\min}\) exists and agrees with the expression claimed. The estimate of Eq. (6.6) together with \[f_{0}(t)=1-(1+\tfrac{\nu}{2})t+\mathcal{O}(t^{2}),\quad t\to 0 \tag{7.36}\] and the estimate \[\forall|\Re\zeta|\geq r>0:\quad(1-e^{-2r})\exp|\Re\zeta|\leq|2\operatorname{sh }\zeta|\leq(1+e^{-2r})\exp|\Re\zeta| \tag{7.37}\] imply (7.35). **Lemma 7.9** (Stress tensor in NLS model).: _A tensor-valued function \(F_{2}^{\mu\nu}:\mathbb{C}^{2}\times\mathbb{M}\to\mathcal{K}^{\otimes 2}\) is a parity covariant stress-energy tensor at one-particle level with respect to \(S_{\mathrm{NLS}}\) with no poles, \(\mathfrak{P}=\emptyset\), iff it is of the form_ \[F_{2}^{\mu\nu}(\theta,\eta+i\pi;x)=G_{\mathrm{free}}^{\mu\nu}\left(\tfrac{ \theta+\eta}{2}\right)e^{i(p(\theta;m)-p(\eta;m))\cdot x}F(\eta-\theta+i\pi), \tag{7.38}\] _with_ \[F(\zeta)=q(\mathrm{ch}\,\zeta)F_{0,\min}(\zeta)I_{\otimes 2}, \tag{7.39}\] _where \(F_{0,\min}\) is the unique minimal solution with respect to the S-matrix eigenvalue \(S_{0}\) and \(q\) is a polynomial with real coefficients with \(q(-1)=1\)._ Proof.: By Corollary 3.4, \(F_{2}^{\mu\nu}\) has the form (7.38) with \(F\) satisfying properties (a)-(g) in Theorem 3.2. By (f), \(F(\zeta)\) is an \(O(n)\)-invariant \(2\)-tensor for each \(\zeta\). The general form of such a tensor is \(F(\zeta)=\lambda(\zeta)I_{\otimes 2}\) with \(\lambda:\mathbb{C}\to\mathbb{C}\)[3, Sec. 4, case (a)]. Consider now property (c), \(F(\zeta)=S(\zeta)F(-\zeta)\). Taking the scalar product of both sides with \(\tfrac{1}{n}I_{\otimes 2}\) in \((\mathbb{C}^{n})^{\otimes 2}\) yields \[\lambda(\zeta)=\tfrac{1}{n}(I_{\otimes 2},S(-\zeta)I_{\otimes 2})\lambda(- \zeta)=S_{0}(-\zeta)\lambda(-\zeta) \tag{7.40}\] by Eq. (7.29) and \(\mathbbm{1}I_{\otimes 2}=\mathbb{F}I_{\otimes 2}=\tfrac{1}{n}\mathbb{K}I_{ \otimes 2}\). Here we used that \(\mathbb{F}I_{\otimes 2}=J^{\otimes 2}I_{\otimes 2}=I_{\otimes 2}\) by Remark 2.1. In summary, Lemma A.5 can be applied to \(\lambda\), which implies that \(\lambda(\zeta)=q(\mathrm{ch}(\zeta))F_{0,\min}(\zeta)\) and thus \(F\) has the form (7.39). That \(q\) has real coefficients is a consequence of (e) and Corollary A.3. Conversely, it is clear that \(F_{2}^{\mu\nu}\) as in Eq. (7.38), respectively \(F\), has the properties (a)-(g). **Theorem 7.10** (QEI for the NLS model).: _The stress-energy tensor at one-particle level given by \(F_{2}\) in Eq. (7.38) satisfies_ \[\forall g\in\mathcal{S}_{\mathsf{R}}(\mathbb{K})\,\exists c_{g}>0\,\forall \varphi\in\mathcal{D}(\mathbb{R},\mathcal{K}):\quad\langle\varphi,T^{00}(g^{2 })\varphi\rangle\geq-c_{g}\|\varphi\|_{2}^{2} \tag{7.41}\] _iff \(q\equiv 1\)._ Proof.: Given \(F_{2}\) as in Lemma 7.9 and using \(\widehat{I_{\otimes 2}}=\mathbbm{1}_{\mathcal{K}}\), we have \(\|\hat{F}(\zeta)\|_{\mathcal{B}(\mathcal{K})}=|q(\mathrm{ch}\,\zeta)F_{0,\min }(\zeta)|\). 
Thus by Lemma 7.8 there exist \(r>0\) and \(0<c\leq c^{\prime}\) such that \[\forall\zeta\in|\Re\zeta|>r,\Im\zeta\in[0,2\pi]:\quad ct(\zeta)\exp|\Re\zeta| \leq\|\hat{F}(\zeta)\|_{\mathcal{B}(\mathcal{K})}\leq c^{\prime}t(\zeta)\exp| \Re\zeta| \tag{7.42}\] with \(t(\zeta)=|\Re\zeta|^{-(1+\frac{r}{2})}|q(\mathrm{ch}\,\zeta)|\). Note that for \(q\equiv 1\), \(t(\zeta)\) is polynomially decaying, whereas for non-constant \(q\), \(t(\zeta)\) is exponentially growing. Thus if \(q\) is constant (\(q\equiv 1\)), we have \(c^{\prime}t(\zeta)<\tfrac{1}{4}\) for large enough \(|\Re\zeta|\); and if \(q\) is not constant, then \(ct(\zeta)>\tfrac{1}{4}\) for large enough \(|\Re\zeta|\). We conclude by Theorem 5.1 that a QEI of the form (7.41) holds iff \(q\equiv 1\). ## 8 Conclusion and outlook We have established QEIs in a larger class of 1+1d integrable models than previously known in the literature. In particular, QEIs for generic states hold in a wide class of models with _constant_ scattering functions, including not only the Ising model, as known earlier, but also the Federbush model. Moreover, the class includes combinations and bosonic or fermionic variants of these models. In all of these situations, the form factor \(F_{2}\) of the energy density determines the entire operator. Furthermore, we have established necessary and sufficient conditions for QEIs to hold _at one-particle level_ in generic models, which may include bound states or several particle species. Also in this case, only \(F_{2}\) contributes to expectation values of the energy density, and the conditions for QEIs are based on the large-rapidity behaviour of \(F_{2}\). At the foundation of both results was a characterization by first principles of the form of the energy density. However, we found that those principles do not constrain polynomial prefactors (in \(\mathrm{ch}\,\zeta\)) added to a viable candidate for the energy density (at one-particle level). As seen in the case of the Bullough-Dodd, the Federbush, and the \(O(n)\)-nonlinear sigma model, one-particle QEIs can then fix the energy density at one-particle level partially or entirely, in analogy to [1]. Our results suggest a number of directions for further investigation, of which we discuss the most relevant ones: What is the nature of the freedom in the form of the stress-energy tensor?The factor \(q(\operatorname{ch}\zeta)\) in the energy density was partially left unfixed by our analysis. At least in the scalar case (\(\mathcal{K}=\mathbb{C}\)), it can be understood as a polynomial in the differential operator \(\Box=g^{\mu\nu}\partial_{\mu}\partial_{\nu}\) acting on \(T^{\mu\nu}\): Given a stress-energy tensor \(T^{\mu\nu}\), define \(\tilde{T}^{\mu\nu}:=q(-1-\frac{\Box}{2M^{2}})T^{\mu\nu}\) for some polynomial \(q\). Then at one-particle level \[F_{2}^{\tilde{T}^{\mu\nu}(x)]}(\mathbf{\zeta})=q(\operatorname{ch}(\zeta_{1}- \zeta_{2}))F_{2}^{[T^{\mu\nu}(x)]}(\mathbf{\zeta}),\] and, provided that \(q(-1)=1\), \(F_{2}^{\tilde{T}^{\mu\nu}(x)]}\) defines another valid candidate for the stress-energy tensor at one-particle level. However, for generic models, \(q\) may depend on the particle types and cannot be understood in terms of derivatives only. In the physics literature, given a concrete model, a few standard methods exist to check the validity of a specific choice of \(q\): In case the model admits a Lagrangian, perturbation theory checks are used, e.g., [1, 1, 2]. 
In case the model can be understood as a perturbation of a conformal field theory model, a scaling degree for the large-rapidity behaviour (conformal dimension) of the stress-energy tensor can be extracted, which fixes the large-rapidity behaviour of \(F_{2}\), e.g., [1, 15, 16]. The large-rapidity scaling degree is also related to momentum-space clustering properties, which were studied for some integrable models, e.g., [14, 15, 17]. But in the general case, none of these methods may be available, and other constraints - perhaps from QEIs in states of higher particle number - might need to take their place. Which other models can be treated with these methods?We performed our analysis of one-particle QEIs in a very generic setting; there are nevertheless some limitations. For one, we employed the extra assumption of parity covariance of the stress-energy tensor. While parity invariance of the scattering function (and therefore covariance of the stress-energy-tensor) is satisfied in many models, it is not fully generic. Nevertheless, a non-parity covariant stress-energy tensor is still subject to constraints by our results; in particular, the necessary condition we gave for a one-particle QEI to hold remains unmodified (see Remark 5.2). We expect a sufficient condition for a one-particle QEI, similar to the one presented in Theorem 5.1(b), to apply also in a parity-breaking situation. Some numerical tests indicate this; however, an analytic proof remained elusive to us. Another point is the decomposition of the two-particle form factor of the (trace of the) stress-energy tensor \(F\) into polynomials and factors which are fixed by the model (including the minimal solutions and pole factors). For generic models, multiple polynomial prefactors can appear (at least one for each eigenvalue of the S-function). In typical models, these are few to begin with, and symmetries exclude many of those prefactors (as was presented for the Federbush or the \(O(n)\)-nonlinear sigma model). In other situations, however, there might be too many unfixed factors for the QEI to meaningfully constrain them. Lastly, we should remark that also in the presence of higher-order poles in the scattering function, the poles in the form factors are expected to be of first-order[1, 2] so that such models should be tractable with our methods. This includes for instance the \(Z(n)\)-lsing, sine-Gordon, or Gross-Neveu model. Also generic Toda field theories don't seem to pose additional problems. Do QEIs hold in states with higher particle numbers?Apart from the case of constant S-functions, we treated only one-particle expectation values of the energy density in this paper. At \(n\)-particle level, generically the form factors \(F_{1},...,F_{2n}\) all enter the expectation values; these are more challenging to handle since the number of rapidity arguments increases and since additional ("kinematic") poles arise at the boundary of the analyticity region that were absent in the case \(n=1\). Therefore, higher particle numbers requires new methods, and we leave this analysis to future work. While we conducted a few promising numerical tests for the sinh-Gordon model at two-particle level, these can only serve as an indication. We do not expect to obtain numerical results at much higher particle numbers due to computational complexity scaling exponentially with \(n\). 
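To make the recipe at the end of Section 6 concrete, the following hedged sketch (assuming NumPy/SciPy; the coupling value and function names are illustrative) computes the characteristic function directly from a scattering eigenvalue of sinh-Gordon type by the quadrature of Eq. (A.6), rewritten in terms of \(r_{S}=iS^{\prime}/S\), compares it with the closed form used in Section 7, and reads off \(f(0)\) and \(f^{\prime}(0)\), which control the large-rapidity growth of the minimal solution through Eq. (6.6) and Proposition A.11.

```python
# Hedged numerical sketch of the characteristic function f[S], Eq. (A.6).
import numpy as np
from scipy.integrate import quad

def r_S(theta, b):
    # r_S = i S'(theta)/S(theta) for S(theta;b) of Eq. (7.1); a short computation gives
    # S'/S = 2 i sin(pi b) ch(theta) / (sh(theta)^2 + sin(pi b)^2), so r_S is real and negative.
    s = np.sin(np.pi * b)
    return -2.0 * s * np.cosh(theta) / (np.sinh(theta)**2 + s**2)

def f_from_S(t, b, cutoff=40.0):
    # Eq. (A.6) in terms of r_S: f[S](t) = (1/pi) * int_0^infty r_S(theta) cos(theta t / pi) d(theta);
    # r_S decays like exp(-theta), so a finite cutoff suffices.
    integrand = lambda th: r_S(th, b) * np.cos(th * t / np.pi)
    val, _ = quad(integrand, 0.0, cutoff, limit=400)
    return val / np.pi

def f_closed(t, b):
    # Closed form equivalent to Eq. (7.2): f(t;b) = -ch((b - 1/2) t) / ch(t/2).
    return -np.cosh((b - 0.5) * t) / np.cosh(t / 2)

b = 0.4  # illustrative coupling value
for t in (0.0, 0.5, 1.0, 2.0):
    print(t, f_from_S(t, b), f_closed(t, b))  # the two columns should agree

# Growth data entering Eq. (6.6): f(0) = -1 and f'(0) = 0, so F_f decays like exp(-|Re zeta|/2)
# and F_{b,min} = (-i sh(zeta/2)) F_f stays bounded above and below, as noted after Eq. (7.2).
```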
The minimal solution This appendix collects rigorous results on existence and uniqueness of minimal solutions for integrable models, as well as estimates for their asymptotic growth, which are central to the question whether a QEI holds in the model (see Sec. 5). Some of these results are also contained in [1], whereas a less rigorous but informative treatment can be found in [13]. Our existence result is based on an integral representation of the minimal solution which is well-known in principle and has been employed before in many concrete models, e.g., sinh-Gordon [14], \(SU(N)\)-Gross-Neveu [15], and \(O(N)\)-nonlinear-\(\sigma\)[15]. Existence of the integral representations was argued in [13], but without giving explicit assumptions. General results on the asymptotic growth of the minimal solution, based on this integral representation, are new to the best of the authors' knowledge. ### Eigenvalue decomposition of the S-function To begin with, we establish the eigenvalue decomposition of an S-function. Since \(S(\theta)\in\mathcal{B}(K^{\otimes 2})\) is unitary for real arguments, it is diagonalizable; this extends to complex arguments by analyticity: **Proposition A.1**.: _Let \(S\) be an S-function and \(D(S)\) its domain of analyticity. Then there exists \(k\in\mathbb{N}\) and a discrete set \(\Delta(S)\subset D(S)\) such that the number of distinct eigenvalues of \(S(\zeta)\) is \(k\) for all \(\zeta\in D(S)\setminus\Delta(S)\) and strictly less than \(k\) for all \(\zeta\in\Delta(S)\). Further, for any simply connected domain \(\mathcal{D}\subset D(S)\setminus\Delta(S)\) there exist analytic functions \(S_{i}:\mathcal{D}\to\mathbb{C}\), and analytic projection-valued functions \(P_{i}:\mathcal{D}\to\mathcal{B}(K^{\otimes 2})\), \(i=1,\ldots,k\) with_ \[S(\zeta)=\sum_{i=1}^{k}S_{i}(\zeta)P_{i}(\zeta),\quad\zeta\in\mathcal{D}\] (A.1) _such that for all \(\zeta\in\mathcal{D}\),_ 1. \(S_{1}(\zeta),\ldots,S_{k}(\zeta)\) _coincide with the eigenvalues of_ \(S(\zeta)\) _and_ \(P_{1}(\zeta),\ldots,P_{k}(\zeta)\) _coincide with the projectors onto the respective eigenspaces._ _In particular,_ \(P_{i}(\zeta)P_{j}(\zeta)=\delta_{ij}P_{i}(\zeta)\) _for_ \(i,j=1,\ldots,k\)_,_ 2. _if_ \(-\zeta\in\mathcal{D}\) _one has_ \(S_{i}(-\zeta)=S_{i}(\zeta)^{-1}\) _and_ \(P_{i}(-\zeta)=P_{i}(\zeta)\)_,_ 3. _if_ \(\bar{\zeta}\in\mathcal{D}\) _one has_ \(\overline{S_{i}(\bar{\zeta})}=S_{i}(\zeta)^{-1}\) _and_ \(P_{i}(\bar{\zeta})=P_{i}(\zeta)^{\dagger}\)_,_ 4. _each_ \(P_{i}\) _satisfies_ \(\mathcal{G}\)_-invariance,_ \([P_{i}(\zeta),V(g)^{\otimes 2}]=0,g\in\mathcal{G}\)_, CPT-invariance,_ \(P_{i}(\zeta)=J^{\otimes 2}\mathbb{F}P_{i}(\zeta)^{\dagger}\mathbb{F}J^{ \otimes 2}\)_, and translational invariance,_ \((E_{m}\otimes E_{m^{\prime}})P_{i}(\zeta)=P_{i}(\zeta)(E_{m^{\prime}}\otimes E _{m})\) _for all_ \(m,m^{\prime}\in\mathfrak{M}\)_._ _The decomposition is unique up to relabeling._ Proof.: For the eigenvalue decomposition of a matrix-valued analytic function, see e.g. [13, Theorem 4.8] or [16, Chapter 2]. 
Restricting \(S\) to its domain of analyticity \(\mathcal{D}(S)\) we can apply the theorem from the first-named reference: For some \(k\in\mathbb{N}\) and any simply connected domain \(\mathcal{D}\subset D(S)\) we obtain pairwise distinct analytic functions \(S_{i}:\mathcal{D}\to\mathbb{C}\) and \(P_{i},D_{i}:\mathcal{D}\to\mathcal{B}(K^{\otimes 2})\) for \(i=1,\ldots,k\) such that for each \(\zeta\in\mathcal{D}\) \[S(\zeta)=\sum_{i=1}^{k}S_{i}(\zeta)P_{i}(\zeta)+D_{i}(\zeta)\] is the unique Jordan decomposition of \(S(\zeta)\) with eigenvalues \(S_{i}(\zeta)\), eigenprojectors \(P_{i}(\zeta)\) and eigennilpotents \(D_{i}(\zeta)\), \(i=1,\ldots,k\). Let us enlarge \(\mathcal{D}\) to \(\bar{\mathcal{D}}\) within \(D(S)\setminus\Delta(S)\) such that \(\bar{\mathcal{D}}\cap\mathbb{R}\subset\mathbb{R}\) is open and non-empty and such that \(\bar{\mathcal{D}}\) is still simply connected; this is always possible since \(\mathbb{C}\setminus D(S)\) and \(\Delta(S)\) are discrete, i.e., countable and without finite accumulation points. Since \(S(\theta)\) for \(\theta\in\mathbb{R}\) is unitary and therefore semisimple we find that \(D_{i}\upharpoonright\bar{\mathcal{D}}\cap\mathbb{R}=0\). Since \(D_{i}\) is analytic, this implies \(D_{i}=0\). From the properties of the Jordan decomposition we further infer that \(P_{i}(\zeta)P_{j}(\zeta)=\delta_{ij}P_{i}(\zeta)\), \(i,j=1,\ldots,k\). The properties (b)-(d) are implied by the corresponding properties of \(S\). Note that the \(S_{i}\) (within any domain \(\mathcal{D}\) from above) satisfy all the properties of a scalar S-function except for crossing symmetry. Specifically, these are the properties (S1) and (S2), since (S3), (S4), (S6), and (S7) are trivially satisfied in the scalar setting. In typical examples, the decomposition in Eq. (A.1) can be extended to all of \(\mathbb{C}\) if one allows for meromorphic \(S_{i}\) and \(P_{i}\). This applies particularly to models with constant eigenprojectors (e.g., all models with constant or diagonal S-functions) but also the other examples treated in Section 7. ### Uniqueness of the minimal solution and decomposition of one-particle solutions Throughout the remainder of Appendix A, we intend to analyze eigenvalues \(S\) of some matrix-valued S-function; thus \(S\) will denote a \(\mathbb{C}\)-valued (not matrix-valued) function from now on. The content of the present section is taken from [1] with slight generalizations. Central to the section is: **Lemma A.2**.: _Let \(S:\mathbb{C}\to\mathbb{C}\) be a meromorphic function with no poles on the real line. Then there exists at most one meromorphic function \(F:\mathbb{C}\to\mathbb{C}\) such that_ 1. \(F\) _has no poles and no zeroes in_ \(\mathbb{S}[0,\pi]\)_, except for a first-order zero at_ \(0\) _in case that_ \(S(0)=-1\)_,_ 2. \(\exists a,b,r>0\,\forall|\Re\zeta|\geq r,\Im\zeta\in[0,\pi]:\quad|\log\lvert F (\zeta)\rvert|\leq a+b\lvert\Re\zeta\rvert\)_,_ 3. \(F(i\pi+\zeta)=F(i\pi-\zeta)\)_,_ 4. \(F(\zeta)=S(\zeta)F(-\zeta)\)_,_ 5. \(F(i\pi)=1\)_._ If such a function exists, we will refer to it as _the minimal solution_\(F_{S,\min}\) with respect to \(S\). Due to (d), a necessary condition for existence is the relation \(S(-\zeta)=S(\zeta)^{-1}\) for all \(\zeta\in\mathbb{C}\). Proof of Lemma a.2.: Assume that there are two functions \(F_{A}\), \(F_{B}\) with the stated properties. 
Then the meromorphic function \(G(\zeta):=F_{A}(\zeta)/F_{B}(\zeta)\) has neither poles nor zeroes in \(\mathbb{S}[0,2\pi]\) and satisfies \(G(\zeta)=G(-\zeta)=G(\zeta+2\pi i)\). These relations imply that \(q:=G\circ\mathrm{ch}^{-1}\) is well-defined and entire. The asymptotic estimates (b) for \(|\mathrm{log}\lvert F_{A/B}\rvert|\) imply an analogous estimate for \(|\mathrm{log}\lvert G\rvert|=|\mathrm{log}\lvert F_{A}|-\mathrm{log}\lvert F _{B}\rvert|\) by the triangle inequality. Thus \(q\) is polynomially bounded at infinity and therefore a polynomial. However, since \(q\) does not have zeroes, it must be a constant with \(q\equiv q(-1)=1\) due to \(G(i\pi)=1\). Hence \(F_{A}=F_{B}\). **Corollary A.3**.: _If in addition \(\overline{S(\bar{\zeta})}=S(\zeta)^{-1},\zeta\in\mathbb{C}\), then it holds that_ \[F_{\min}(\zeta)=\overline{F_{\min}(-\bar{\zeta})}.\] (A.2) Proof.: Since \(\overline{S(-\bar{\zeta})}=S(\zeta)\) it is clear that \(\zeta\mapsto\overline{F_{\min}(-\bar{\zeta})}\) satisfies the same properties (a)-(e) as \(F_{\min}\). By uniqueness they have to be equal. **Corollary A.4**.: _For \(n\in\mathbb{N}\) let \(S_{1},\ldots,S_{n}:\mathbb{C}\to\mathbb{C}\) be meromorphic functions such that their minimal solutions \(F_{j,\min}\) exist. Then the minimal solution with respect to \(\zeta\mapsto S_{\Pi}(\zeta)=\prod_{j=1}^{n}S_{j}(\zeta)\) exists and is given by_ \[\zeta\mapsto F_{\Pi,\min}(\zeta)=(-i\,\mathrm{sh}\,\tfrac{\zeta}{2})^{-2[ \frac{\pi}{2}]}\prod_{j=1}^{n}F_{j,\min}(\zeta),\] (A.3) _where \(s=|\{j:S_{j}(0)=-1\}|\)._ Proof.: One easily checks that Eq. (A.3) satisfies conditions (b)-(e) of Lemma A.2 with respect to \(S_{\Pi}\). Also, counting the order of zeroes at \(0\) on the r.h.s. yields \(s-2\lfloor\frac{\pi}{2}\rfloor\), which evaluates to \(1\) for odd \(s\) (when \(S_{\Pi}(0)=-1\)) and to \(0\) otherwise (when \(S_{\Pi}(0)=+1\)), thus establishing condition (a). We now apply theses results to classify "non-minimal" solutions, having more zeroes or poles than allowed by condition (a): **Lemma A.5**.: _Let \(F:\mathbb{C}\to\mathbb{C}\) be a meromorphic function which satisfies properties (c)-(e) of Lemma A.2 with respect to some meromorphic function \(S\), and suppose_ \[\exists a,b,r>0\,\forall|\Re\zeta|\geq r,\Im\zeta\in[0,\pi]:\quad|F(\zeta)| \leq a\exp b|\Re\zeta|.\] (A.4) _Assume further that the minimal solution \(F_{S,\min}\) with respect to \(S\) exists. Then there is a unique rational function \(q\) with \(q(-1)=1\) such that_ \[F(\zeta)=q(\mathrm{ch}\,\zeta)F_{S,\min}(\zeta).\] (A.5) _In particular, if \(F\) has no poles in \(S[0,\pi]\), then \(q\) is a polynomial._ Proof.: Since the pole set of the meromorphic function \(F\) has no finite accumulation points, and its intersection with \(\mathbb{S}[0,2\pi]\) must be located in a compact set due to (c) and the estimate (A.4), this intersection must be finite. Now, define \(\zeta\mapsto G(\zeta):=F(\zeta)/F_{S,\min}(\zeta)\) which satisfies \(G(\zeta)=G(-\zeta)=G(\zeta+2\pi i)\). Then, analogous to the proof of Lemma A.2, there exists a meromorphic function \(q=G\circ\mathrm{ch}^{-1}\) which is polynomially bounded at infinity and has finitely many poles. Thus it is a rational function. 
Lastly, note that \(q(-1)=1\) due to \(G(i\pi)=1\). ### Existence of the minimal solution and its asymptotic growth In this section we establish the existence of a common integral representation of the minimal solution for a large class of (eigenvalues of) regular S-functions, namely those satisfying the hypothesis of Theorem A.6 below. As a byproduct, but of crucial importance for our discussion in Section 7, we obtain an explicit formula for the asymptotic growth of the minimal solution (Proposition A.11). For \(\mathbb{C}\)-valued functions \(S\) and \(f\), the integral expressions of interest are formally given by \[f[S](t) :=\frac{i}{\pi}\int_{0}^{\infty}S^{\prime}(\theta)S(\theta)^{-1} \cos(\pi^{-1}\theta t)d\theta,\] (A.6) \[S_{f}(\zeta) :=\exp\left(-2i\int_{0}^{\infty}f(t)\sin\frac{\zeta t}{\pi}\, \frac{dt}{t}\right),\] (A.7) \[F_{f}(\zeta) :=\exp\left(2\int_{0}^{\infty}f(t)\sin^{2}\frac{(i\pi-\zeta)t}{2 \pi}\,\frac{dt}{t\,\mathrm{sh}\,t}\right);\] (A.8) we will give conditions for their well-definedness below. \(f[S]\) will be referred to as the _characteristic function_5 with respect to \(S\). For a large class of functions \(S\), the functions \(S_{f[S]}\) and \(F_{f[S]}\) will agree with \(S\) and \(F_{S,\min}\) respectively. Footnote 5: Differing conventions for \(f[S]\) are found in the literature. In the form factor program community, one mostly takes \(2f[S]\) as the characteristic function: Compare formulas (A.7)–(A.8) with, e.g., [10, Eq. (4.10)–(4.11)] or [11, Eq. (2.18)–(2.19)], but noting a typo in Eq. (2.19) there. For the following let us agree to call a function \(f\) on \(\mathbb{R}\)_exponentially decaying_ iff \[\exists a,b,r>0\,\forall|t|\geq r:\quad|f(t)|\leq a\exp(-b|t|);\] (A.9) analogously for functions on \([0,\infty)\). A function \(f\) on a strip \(\mathbb{S}(-\epsilon,\epsilon)\) will be called _uniformly_\(L^{1}\) if \(f(\cdot+i\lambda)\in L^{1}(\mathbb{R})\) for every \(\lambda\in(-\epsilon,\epsilon)\), with the \(L^{1}\) norm uniformly bounded in \(\lambda\). Now, we are ready to state the main result: **Theorem A.6**.: _Let \(S:\mathbb{C}\to\mathbb{C}\) be a meromorphic function with no poles on the real line, satisfying \(S(\zeta)^{-1}=S(-\zeta)\), and regularity (S8). Suppose that \(r_{S}(\zeta):=iS^{\prime}(\zeta)/S(\zeta)\) is uniformly \(L^{1}\) on some strip \(\mathbb{S}(-\epsilon,\epsilon)\). Then \(f[S]\in C([0,\infty),\mathbb{R})\) is exponentially decaying. If further \(f[S]\in C^{2}([0,\delta))\) for some \(\delta>0\), then the minimal solution with respect to \(S\) exists._ _In more detail, under these assumptions \(S_{f[S]}\) and \(F_{f[S]}\) are well-defined, non-vanishing and analytic on \(\mathbb{S}(-\epsilon,\epsilon)\) and \(\mathbb{S}(-\epsilon,2\pi+\epsilon)\), respectively. The meromorphic continuations of \(S_{f[S]}\) and \(F_{f[S]}\) to all of \(\mathbb{C}\) exist. In case that \(S(0)=1\), we have \(S_{f[S]}=S\) and \(F_{f[S]}=F_{S,\min}\). In case that \(S(0)=-1\), we have \(S_{f[S]}=-S\) and \(F_{f[S]}=F_{-S,\min}\); and \(F_{S,\min}(\zeta)=(-i\,\operatorname{sh}\tfrac{\zeta}{2})F_{-S,\min}(\zeta)\)._ The examples treated in Section 7 fulfill these conditions: In particular, for products of S-functions of sinh-Gordon type (see Eq. (7.1)), \(r_{S}\) is exponentially decaying (on a strip) and \(f[S]\) is actually smooth on \([0,\infty)\) (cf. Eq. (7.2)). 
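As a quick, self-contained illustration of the integral representations (A.7)-(A.8), the following numerical sketch (an editorial addition, not part of the original argument) evaluates \(S_f\) and \(F_f\) by straightforward quadrature for an arbitrary exponentially decaying test profile \(f\), not one of the model-specific characteristic functions of Section 7, and checks numerically the relations (A.13)-(A.14) established in Lemma A.9 below.

```python
import numpy as np

def f(t):
    # Arbitrary exponentially decaying, smooth test profile; purely illustrative,
    # NOT one of the model-specific characteristic functions of Section 7.
    return np.exp(-2.0 * t) * (1.0 + t)

# Quadrature grid on (0, T]; both integrands below stay finite as t -> 0.
t = np.linspace(1e-8, 40.0, 400_000)

def integrate(y, x):
    # Simple trapezoidal rule for (possibly complex-valued) samples y on grid x.
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def S_f(zeta):
    # Eq. (A.7)
    return np.exp(-2j * integrate(f(t) * np.sin(zeta * t / np.pi) / t, t))

def F_f(zeta):
    # Eq. (A.8)
    w = (1j * np.pi - zeta) * t / (2.0 * np.pi)
    return np.exp(2.0 * integrate(f(t) * np.sin(w) ** 2 / (t * np.sinh(t)), t))

z = 0.7 + 0.3j  # a sample point inside the strip of analyticity
print(abs(S_f(z) * S_f(-z) - 1.0))                     # ~ 0, cf. (A.13)
print(abs(F_f(z) - S_f(z) * F_f(-z)))                  # ~ 0, cf. (A.14)
print(abs(F_f(1j * np.pi + z) - F_f(1j * np.pi - z)))  # ~ 0, cf. (A.14)
print(abs(F_f(1j * np.pi) - 1.0))                      # ~ 0, normalisation
```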
For the eigenvalues of the S-function of the \(O(n)\)-NLS model, \(S\in\{S_{0},S_{+},S_{-}\}\), we find \(r_{S}(\theta+i\lambda)\lesssim\theta^{-2}\), \(|\theta|\to\infty\), uniformly in \(\lambda\in[-\epsilon,\epsilon]\) for some \(\epsilon>0\) and again, that \(f[S]\) is smooth on \([0,\infty)\) (cf. Appendix B). We give the proof of the theorem in several steps. To begin with, we have: **Lemma A.7**.: _Let \(r_{S}\) be uniformly \(L^{1}\) on a strip \(\mathbb{S}(-\epsilon,\epsilon)\). Then \(f[S]:\mathbb{R}\to\mathbb{R}\) is an even, bounded, and continuous function which decays exponentially._ Proof.: Since \(r_{S}\) restricted to \(\mathbb{R}\) is \(L^{1}\)-integrable, its Fourier transform \(\widetilde{r_{S}}\) is bounded, continuous and vanishes towards infinity. As \(r_{S}\) is even, \[r_{S}(-\theta)=iS^{\prime}(-\theta)S(\theta)=-i(S(\theta)^{-1})^{\prime}S( \theta)=iS^{\prime}(\theta)S(\theta)^{-1}=r_{S}(\theta),\] (A.10) also \(\widetilde{r_{S}}\) is even and we have \[f[S](t)=\tfrac{i}{\pi}\int_{0}^{\infty}r_{S}(\theta)\cos(\pi^{-1}\theta t)dt= \tfrac{i}{2\pi}\int r_{S}(\theta)e^{i\pi^{-1}\theta t}dt=\tfrac{i}{2\pi} \widetilde{r_{S}}(t).\] (A.11) Let now \(0<\lambda<\epsilon\) be arbitrary. By assumption, \(r_{S}(\cdot+i\lambda)\) is \(L^{1}\)-integrable as well; and by the translation property of the Fourier transform, \[\widetilde{r_{S}}(t)=e^{-\lambda t}\,\tfrac{1}{2\pi}\widetilde{r_{S}(\cdot+i \lambda)}(t),\] (A.12) where \(\widetilde{r_{S}(\cdot+i\lambda)}(t)\) vanishes for \(|t|\to\infty\) due to the Riemann-Lebesgue lemma. Thus \(\widetilde{r_{S}}\) is exponentially decaying towards \(+\infty\), and since it is even also towards \(-\infty\). We continue with an existence result for \(S_{f}\) and \(F_{f}\) and some of their basic properties: **Lemma A.8**.: _Let \(f\in C([0,\infty),\mathbb{R})\) be exponentially decaying. Then the functions \(S_{f}\) and \(F_{f}\) are well-defined by the integral expressions (A.7) and (A.8). Further they are non-vanishing and analytic on \(\mathbb{S}(-\epsilon,\epsilon)\) and \(\mathbb{S}(-\epsilon,2\pi+\epsilon)\), respectively, for some \(\epsilon>0\)._ Proof.: By assumption there exist positive constants \(a,r,\epsilon>0\) such that \(|f(t)|<a\exp(-\epsilon t)\) for all \(t\geq r\). By the triangle inequality, \(|\sin\pi^{-1}\zeta t|=\frac{1}{2}|e^{i\pi^{-1}\zeta t}+e^{-i\pi^{-1}\zeta t}| \leq\exp(\pi^{-1}|\Im\zeta|t)\) for all \(t\geq 0\). Thus for \(|\Im\zeta|<\epsilon\) one has that \(f(t)t^{-1}\sin(\pi^{-1}\zeta t)\) is exponentially decaying. Also, this function is continuous (including at \(t=0\) because of the first-order zero of the sine function at zero). By similar arguments, the same holds for its derivative with respect to \(\zeta\). In particular, \(t\mapsto f(t)t^{-1}\sin(\pi^{-1}\zeta t)\) and its \(\zeta\)-derivative are absolutely integrable for all \(\zeta\in\mathbb{S}(-\epsilon,\epsilon)\). As a consequence, \(S_{f}\) is well-defined and analytic on \(\mathbb{S}(-\epsilon,\epsilon)\). Since \(S_{f}\) is given by an exponential, it does not vanish. The argument for \(F_{f}\) runs analogously. The estimate from above gives \(|\sin^{2}(2\pi)^{-1}(i\pi-\zeta)t|\leq\exp(\pi^{-1}|\Im(i\pi-\zeta)|t)\) for all \(t\geq 0\). Thus \(t\mapsto f(t)(t\operatorname{sh}t)^{-1}\sin^{2}((2\pi)^{-1}(i\pi-\zeta)t)\) is exponentially decaying for \(|\Im\zeta-\pi|<\pi+\epsilon\). It is further continuous (including at \(t=0\) because of the second-order zero of the sine-function at zero). 
Together with similar properties of the \(\zeta\)-derivative, it follows that \(F_{f}\) is well-defined, analytic, and non-vanishing in the region \(\mathbb{S}(-\epsilon,2\pi+\epsilon)\). **Lemma A.9**.: _Let \(f\in C([0,\infty),\mathbb{R})\) be exponentially decaying. Then \(S_{f}(0)=1\) and \(F_{f}(i\pi)=1\). Moreover, there exists \(\epsilon>0\) such that for all \(\zeta\in\mathbb{S}(-\epsilon,\epsilon)\),_ \[S_{f}(\zeta)^{-1}=S_{f}(-\zeta)\] (A.13) _and_ \[F_{f}(\zeta)=S_{f}(\zeta)F_{f}(-\zeta),\quad F_{f}(i\pi+\zeta)=F_{f}(i\pi- \zeta).\] (A.14) Proof.: \(S_{f}(0)=1\) and \(F_{f}(i\pi)=1\) is immediate by definition. Next, take \(\zeta\in\mathbb{S}(-\epsilon,\epsilon)\) with the \(\epsilon\) from Lemma A.8. Then \[-\sin(\pi^{-1}\zeta t)=\sin(\pi^{-1}(-\zeta)t)\] (A.15) implies that \(S_{f}(\zeta)^{-1}=S_{f}(-\zeta)\). Similarly, \[\sin^{2}\frac{\zeta t}{2\pi}=\sin^{2}\frac{-\zeta t}{2\pi}\] (A.16) implies that \(F_{f}(i\pi+\zeta)=F_{f}(i\pi-\zeta)\). Lastly, the relation \[\sin^{2}\frac{(i\pi-\zeta)t}{2\pi}-\sin^{2}\frac{(i\pi+\zeta)t}{2\pi}=-i \operatorname{sh}(t)\sin\frac{\zeta t}{\pi}\] (A.17) implies that \[\begin{split}\log\frac{F_{f}(\zeta)}{F_{f}(-\zeta)}& =2\int_{0}^{\infty}f(t)\left(\sin^{2}\frac{(i\pi-\zeta)t}{2\pi}- \sin^{2}\frac{(i\pi+\zeta)t}{2\pi}\right)\,\frac{dt}{t\operatorname{sh}t}\\ &=-2i\int_{0}^{\infty}f(t)\sin(\pi^{-1}\zeta t)\,\frac{dt}{t}= \log S_{f}(\zeta),\end{split}\] (A.18) which concludes the proof. **Corollary A.10**.: _For arbitrary \(c\in\mathbb{R}\) and exponentially decaying \(f,g\in C([0,\infty),\mathbb{R})\), one has_ (a) \(S_{cf}=(S_{f})^{c},\quad F_{cf}=(F_{f})^{c}\), (b) \(S_{f+g}=S_{f}S_{g},\quad F_{f+g}=F_{f}F_{g}\), (c) \(S_{f}=S_{g}\Leftrightarrow f=g,\quad F_{f}=F_{g}\Leftrightarrow f=g\). Proof.: (a) and (b) are immediate since \(\log S_{f}\) and \(\log F_{f}\) are linear in \(f\) by definition. In (c), we only need to show "\(\Rightarrow\)", and by the previous parts we may assume \(g=0\), with \(S_{0}=F_{0}=1\). Now if \(S_{f}=1\), we compute from Eq. (A.7) for any \(\lambda\in\mathbb{R}\), \[0=\frac{d}{d\lambda}\log S_{f}(\lambda)=-2i\frac{d}{d\lambda}\int_{0}^{\infty} f(\pi t)\sin(\lambda t)\frac{dt}{t}=-2i\int_{0}^{\infty}f(\pi t)\cos(\lambda t)\,dt,\] (A.19) hence \(f=0\) by the inversion formula for the Fourier cosine transform. If \(F_{f}=1\), we use (A.14) to conclude that \(S_{f}=1\), which implies \(f=0\) as seen earlier. **Proposition A.11** (Asymptotic estimate).: _Let \(f\in C([0,\infty),\mathbb{R})\) be exponentially decaying and \(C^{2}([0,\delta))\) for some \(\delta>0\). Let \(f_{0}:=f(0)\) and \(f_{1}:=f^{\prime}(0)\), where \({}^{\prime}\) refers to the half-sided derivative. Then there exist constants \(0<c\leq c^{\prime}\) and \(r>0\) such that_ \[\forall|\Re\zeta|\geq r,\Im\zeta\in[0,2\pi]:\quad c\leq\frac{|F_{f}(\zeta)|}{|\Re\zeta|^{f_{1}}\,e^{f_{0}|\Re\zeta|/2}}\leq c^{\prime}.\] (A.20) Proof.: In the following let \(z:=(i\pi-\zeta)/2\pi\) with \(x:=|\Re z|>0\) and \(y:=|\Im z|\leq\frac{1}{2}\). Then \[\Re\log F_{f}(\zeta)=2\int_{0}^{\infty}\frac{f(t)}{t\operatorname{sh}t}\Re\sin^{2}(zt)dt=\int_{0}^{\infty}\frac{f(t)}{t\operatorname{sh}t}(1-\cos 2xt \operatorname{ch}2yt)dt=:I(z).\] (A.21) The aim is to show that \(|I(z)-f_{0}\pi x-f_{1}\log x|\) is uniformly bounded in \(z\in\mathbb{S}[-\frac{1}{2},\frac{1}{2}]\), as this implies the bound (A.20) by monotonicity of the exponential function. 
To begin with, note that the integrand of (A.21) for \(t\geq 1,y\leq\frac{1}{2}\), is uniformly bounded by \(f(t)(t\operatorname{sh}t)^{-1}(1+\operatorname{ch}t)\). This is integrable on \([1,\infty)\) by the exponential decay of \(f\). The integral over \([1,\infty)\) is thus bounded uniformly in \(z\) by a constant \(c_{0}\). As further preliminaries let us note that \[\left|\int_{0}^{1}((t\operatorname{sh}t)^{-1}-t^{-2})(1-\cos 2xt\operatorname{ch}2yt)f(t)dt\right| \leq c_{1},\] (A.22) \[\left|\int_{0}^{1}(\operatorname{ch}(2yt)-1)t^{-2}\cos 2xt\,f(t)dt\right| \leq c_{2},\] (A.23) where \(c_{1}\), \(c_{2}\) are constants independent of \(x\) and \(y\). This is implied by mean-value estimates using regularity of the functions \((t\operatorname{sh}t)^{-1}-t^{-2}\) and \((\operatorname{ch}(2yt)-1)t^{-2}\) also at \(t=0\), where \(t\) and \(y\) are evaluated on compact ranges while \(x\) appears only in the argument of the cosine-function. The same reasoning allows us to estimate \[\left|\int_{0}^{1}(f(t)-f_{0}-f_{1}t)t^{-2}(1-\cos 2xt)dt\right|\leq c_{3},\] (A.24) where we apply Taylor's theorem to \(f\in C^{2}([0,\delta))\) to argue regularity at \(t=0\). Applying Eqs. (A.22)-(A.24) and the triangle inequality yields \[|I(z)-J(x)|\leq c_{0}+c_{1}+c_{2}+c_{3},\] (A.25) where \[J(x):=\int_{0}^{1}(f_{0}+f_{1}t)\frac{1-\cos 2xt}{t^{2}}dt=f_{0}\big{(}-1+ \cos 2x+2x\operatorname{Si}(2x)\big{)}+f_{1}\operatorname{Cin}(2x)\] (A.26) in terms of the standard sine and cosine integral functions. Since these have the asymptotics \(\operatorname{Si}(x)=\frac{\pi}{2}+\mathcal{O}(x^{-1})\), \(\operatorname{Cin}(x)=\log x+\mathcal{O}(1)\) as \(x\to\infty\) [Nis, §6.12(ii)], one finds constants \(r,c>0\) such that \[\forall x\geq r:\quad|I(z)-f_{0}\pi x-f_{1}\log x|\leq c.\] (A.27) With the asymptotic estimate for \(F_{f}\) we can now prove the main result: Proof of Theorem A.6.: First, consider \(S(0)=1\). By Lemma A.7, \(f[S]\) is exponentially decaying and hence by Lemma A.8, \(S_{f[S]}\) is well-defined, and analytic and nonvanishing on a small strip. Combining (A.6) and (A.7) with the inversion formula for the Fourier cosine transform, we find for \(\lambda\in\mathbb{R}\), \[\frac{d}{d\lambda}\log S_{f[S]}(\lambda)=-\frac{2i}{\pi}\int_{0}^{\infty}f[S] (t)\cos\frac{\lambda t}{\pi}dt=\frac{S^{\prime}(\lambda)}{S(\lambda)}=\frac{ d}{d\lambda}\log S(\lambda).\] (A.28) Since also \(S_{f[S]}(0)=1=S(0)\), we conclude that \(S_{f[S]}=S\) first on the real line, and then as meromorphic functions. Further by Lemma A.9, \(F:=F_{f[S]}\) is analytic and non-vanishing on the physical strip \(\mathbb{S}[0,\pi]\), satisfies \(F(i\pi)=1\), and for some \(\epsilon>0\), \[F(\zeta)=S(\zeta)F(-\zeta),\quad F(i\pi+\zeta)=F(i\pi-\zeta),\quad\zeta\in \mathbb{S}(-\epsilon,\epsilon);\] (A.29) in fact, using these relations we can extend \(F\) as a meromorphic function to all of \(\mathbb{C}\). Also, Proposition A.11 yields the asymptotic estimate in Lemma A.2(b). In summary, Lemma A.2 applies to \(F\), hence \(F\) is the unique minimal solution with respect to \(S\). In the case \(S(0)=-1\), one finds \(S_{f[S]}=S_{f[-S]}=-S\) by the above; also, \(F_{f[S]}=F_{f[-S]}\) is the minimal solution with respect to \(-S\), and from Corollary A.4, we have \(F_{S,\min}(\zeta)=-i\operatorname{sh}\frac{\zeta}{2}F_{f[-S]}(\zeta)\).

## Appendix B Computing a characteristic function

In this appendix, we present a method to explicitly compute characteristic functions (as defined in Appendix A.3) for a certain class of \(S\). 
The method is known but only briefly described in [12]. We illustrate it here using the eigenvalues of the S-function of the \(O(n)\)-nonlinear sigma model, i.e., \(S_{i}\) for \(i=\pm,0\) (see Definition 7.7 and below). First, we present the general method; second, we check the examples \(f[S_{\pm}]\) against the literature; lastly, we compute \(f[S_{0}]\). The method applies to S-function eigenvalues which are given as a product of Gamma functions; see [1, Appendix C] for some typical examples. While this product can be infinite in general, we restrict here to finite products, which suffice for our purposes. Specifically, let \(S\) be of the form \[S(\theta)=\frac{\prod_{x\in A_{+}}\Gamma(x+\frac{\theta}{\lambda\pi i})\prod_{ x\in A_{-}}\Gamma(x-\frac{\theta}{\lambda\pi i})}{\prod_{x\in A_{+}}\Gamma(x- \frac{\theta}{\lambda\pi i})\prod_{x\in A_{-}}\Gamma(x+\frac{\theta}{\lambda \pi i})},\] (B.1) where \(\lambda>0\) and \(A_{\pm}\) are finite subsets of \((0,\infty)\) such that \(|A_{+}|=|A_{-}|\). It is straightforward to check that this indeed defines a regular \(\mathbb{C}\)-valued S-function, apart from crossing symmetry which can only be satisfied for \(A_{+}=A_{-}=\emptyset\) or infinite products. **Lemma B.1**.: _The characteristic function with respect to \(S\) as in Eq. (B.1) is_ \[t\mapsto f[S](t)=\frac{1}{1-e^{-\lambda t}}\left(\sum_{x\in A_{-}}-\sum_{x\in A _{+}}\right)e^{-\lambda xt}.\] (B.2) Proof.: Since \(f[S]\) is linear in \(\log S\), it suffices to consider the case \(A_{+}=\{x_{+}\}\), \(A_{-}=\{x_{-}\}\). Using Malmsten's formula (see, e.g., [1, Sec. 1.9]) \[\log\Gamma(z)=\int_{0}^{\infty}\left(z-1-\frac{1-e^{-(z-1)t}}{1-e^{-t}}\right) \frac{e^{-t}}{t}dt,\quad\Re z>0,\] (B.3) we find \[\frac{d}{d\theta}\log S(\theta)=\int_{0}^{\infty}\frac{\big{(}e^{\frac{\theta t }{\lambda\pi i}}+e^{\frac{-\theta t}{\lambda\pi i}}\big{)}\big{(}e^{-x_{+}t}- e^{-x_{-}t}\big{)}}{1-e^{-t}}\frac{dt}{\lambda\pi i}=-\frac{2i}{\pi}\int_{0}^{ \infty}\underbrace{\frac{e^{-\lambda x_{-}t}-e^{-\lambda x_{+}t}}{1-e^{- \lambda t}}}_{=y(t)}\cos\frac{\theta t}{\pi}dt.\] (B.4) By definition in (A.6), \(f[S]\) is given as the Fourier cosine transform of \(S(\theta)^{-1}\frac{d}{d\theta}S(\theta)=\frac{d}{d\theta}\log S(\theta)\); its inversion formula yields that \(f[S]=g\) since \(g\) is clearly integrable. **Example B.2** (Eigenvalues \(S_{\pm}\)).: _By definition, \(S_{\pm}(\theta)=(b\pm c)(\theta)=h_{\pm}(\theta)b(\theta)\) with_ \[b(\theta)=s(\theta)s(i\pi-\theta),\quad s(\theta)=\frac{\Gamma\left(\frac{\nu }{2}+\frac{\theta}{2\pi i}\right)\Gamma\left(\frac{1}{2}+\frac{\theta}{2\pi i }\right)}{\Gamma\left(\frac{1}{2}+\frac{\nu}{2}+\frac{\theta}{2\pi i}\right) \Gamma\left(\frac{\theta}{2\pi i}\right)}\] (B.5) _and_ \[h_{\pm}(\theta)=\frac{\theta\mp i\pi\nu}{\theta}=\mp\frac{\frac{\nu}{2}\mp \frac{\theta}{2\pi i}}{\frac{\theta}{2\pi i}}=\mp\frac{\Gamma(1+\frac{\nu}{2} +\frac{\theta}{2\pi i})\Gamma(\frac{\theta}{2\pi i})}{\Gamma(\frac{\nu}{2}+ \frac{\theta}{2\pi i})\Gamma(1+\frac{\theta}{2\pi i})},\] (B.6) _where we used \(z=\Gamma(z+1)/\Gamma(z)\) in order to represent \(h_{\pm}\) in terms of \(\Gamma\). 
As a result,_ \[S_{\pm}(\theta)=\mp\frac{\Gamma(\frac{1\mp 1}{2}+\frac{\nu}{2\pi i})\Gamma(\frac{1} {2}+\frac{\theta}{2\pi i})\Gamma(\frac{1}{2}+\frac{\nu}{2}-\frac{\theta}{2 \pi i})\Gamma(1-\frac{\theta}{2\pi i})}{\Gamma(\frac{1}{2}+\frac{\nu}{2}+ \frac{\theta}{2\pi i})\Gamma(1+\frac{\theta}{2\pi i})\Gamma(\frac{1}{2}+\frac {1}{2}+\frac{\theta}{2\pi i})\Gamma(\frac{1}{2}-\frac{\theta}{2\pi i})},\] (B.7) _which is of the form (B.1) for \(\lambda=2\), \(A_{+}=\{\frac{1}{2},\frac{1\mp 1}{2}+\frac{\nu}{2}\}\), and \(A_{-}=\{1,\frac{1}{2}+\frac{\nu}{2}\}\). Due to Lemma B.1 we find_ \[f[-S_{+}](t) =\frac{1}{1-e^{-2t}}\left(e^{-2t}+e^{-(\nu+1)t}-e^{-t}-e^{-\nu t} \right)=-\frac{1+e^{(1-\nu)t}}{e^{t}+1}\] (B.8) \[f[S_{-}](t) =\frac{1}{1-e^{-2t}}\left(e^{-2t}+e^{-(\nu+1)t}-e^{-t}-e^{-(\nu+2 )t}\right)=\frac{e^{-\nu t}-1}{e^{t}+1}.\] (B.9) _This agrees with [1, Eq. (2.11)] and [12, Eq. (5.7)]. We read off_ \[f[-S_{+}](t) =-1+\tfrac{\nu}{2}t+\mathcal{O}(t^{2}),\quad t\to 0;\] (B.10) \[f[S_{-}](t) =-\tfrac{\nu}{2}t+\mathcal{O}(t^{2}),\quad t\to 0.\] (B.11) **Example B.3** (Eigenvalue \(-S_{0}\)).: _Similarly to the preceding example, we write_ \[S_{0}(\theta)=h_{0}(\theta)b(\theta)\] (B.12) _with_ \[h_{0}(\theta)=\frac{\theta^{2}+i\pi(1+\nu)\theta-\nu\pi^{2}}{\theta(\theta-i\pi) }=-\frac{(\frac{1}{2}+\frac{\theta}{2\pi i})(\frac{\nu}{2}+\frac{\theta}{2\pi i })}{\frac{\theta}{2\pi i}(\frac{1}{2}-\frac{\theta}{2\pi i})}.\] (B.13) _Using again \(z=\Gamma(z+1)/\Gamma(z)\), we find_ \[S_{0}(\theta)=-\frac{\Gamma(1+\frac{\nu}{2}+\frac{\theta}{2\pi i})\Gamma( \frac{3}{2}+\frac{\theta}{2\pi i})\Gamma(\frac{1}{2}+\frac{\nu}{2}-\frac{ \theta}{2\pi i})\Gamma(1-\frac{\theta}{2\pi i})}{\Gamma(\frac{1}{2}+\frac{ \nu}{2}+\frac{\theta}{2\pi i})\Gamma(1+\frac{\theta}{2\pi i})\Gamma(1+\frac{ \nu}{2}-\frac{\theta}{2\pi i})\Gamma(\frac{3}{2}-\frac{\theta}{2\pi i})},\] (B.14) _which is of the form (B.1) for \(\lambda=2\), \(A_{+}=\{\frac{3}{2},1+\frac{\nu}{2}\}\), and \(A_{-}=\{1,\frac{1}{2}+\frac{\nu}{2}\}\). Due to Lemma B.1 we find_ \[f[-S_{0}](t)=\frac{1}{1-e^{-2t}}\left(e^{-2t}+e^{-(1+\nu)t}-e^{-3t}-e^{-(2+\nu )t}\right)=\frac{e^{-t}+e^{-\nu t}}{e^{t}+1}\] (B.15) _and conclude_ \[f[-S_{0}](t)=1-(1+\tfrac{\nu}{2})t+\mathcal{O}(t^{2}),\quad t\to 0.\] (B.16) ## Acknowledgements J.M. would like to thank Karim Sheldit Attifa and Markus Frob for many fruitful discussions. D.C. and J.M. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) within the Emmy Noether grant CA1850/1-1 and Grant No 406116891 within the Research Training Group RTG 2522/1. H.B. would like to thank the Institute for Theoretical Physics at the University of Leipzig for hospitality.
2308.15353
Detect, Augment, Compose, and Adapt: Four Steps for Unsupervised Domain Adaptation in Object Detection
Unsupervised domain adaptation (UDA) plays a crucial role in object detection when adapting a source-trained detector to a target domain without annotated data. In this paper, we propose a novel and effective four-step UDA approach that leverages self-supervision and trains source and target data concurrently. We harness self-supervised learning to mitigate the lack of ground truth in the target domain. Our method consists of the following steps: (1) identify the region with the highest-confidence set of detections in each target image, which serve as our pseudo-labels; (2) crop the identified region and generate a collection of its augmented versions; (3) combine these latter into a composite image; (4) adapt the network to the target domain using the composed image. Through extensive experiments under cross-camera, cross-weather, and synthetic-to-real scenarios, our approach achieves state-of-the-art performance, improving upon the nearest competitor by more than 2% in terms of mean Average Precision (mAP). The code is available at https://github.com/MohamedTEV/DACA.
Mohamed L. Mekhalfi, Davide Boscaini, Fabio Poiesi
2023-08-29T14:48:29Z
http://arxiv.org/abs/2308.15353v1
Detect, Augment, Compose, and Adapt: Four Steps for Unsupervised Domain Adaptation in Object Detection ###### Abstract Unsupervised domain adaptation (UDA) plays a crucial role in object detection when adapting a source-trained detector to a target domain without annotated data. In this paper, we propose a novel and effective four-step UDA approach that leverages self-supervision and trains source and target data concurrently. We harness self-supervised learning to mitigate the lack of ground truth in the target domain. Our method consists of the following steps: (1) identify the region with the highest-confidence set of detections in each target image, which serve as our pseudo-labels; (2) crop the identified region and generate a collection of its augmented versions; (3) combine these latter into a composite image; (4) adapt the network to the target domain using the composed image. Through extensive experiments under cross-camera, cross-weather, and synthetic-to-real scenarios, our approach achieves state-of-the-art performance, improving upon the nearest competitor by more than 2% in terms of mean Average Precision (mAP). The code is available at [https://github.com/MohamedTEV/DACA](https://github.com/MohamedTEV/DACA). ## 1 Introduction Domain Adaptation in object detection aims to adapt an object detection model, trained on a data distribution (source), to perform well on a different data distribution (target domain). The challenge of this process arises not only from domain shift but also from factors such as the high variability in object statistics within the target domain. For example, target objects with a different appearance from that of source objects are more difficult to adapt. Another challenge is the data bias across domains, such as the disparity in the quantity of source objects versus the abundance of objects in the target domain. When the target domain possesses a few ground-truth annotations, these can be used as initial seeds to enhance the model's generalization capabilities on the target, which has been shown to be effective in past research works [10, 53]. However, in real applications the target domain often lacks annotations, and this leads to Unsupervised Domain Adaptation (UDA) techniques [13]. We propose a novel approach for UDA in object detection. Our approach is based on four simple processing steps: _detect_, _augment_, _compose_, and _adapt_. We refer to our approach as DACA. DACA is inspired by the idea that adaptation can be made effective by generating high-quality pseudo-labels from the target and using them to supervise augmented versions of the target itself to align the model to the target data distribution. Specifically, we exploit the region of target images with the most confident object detections to gather pseudo-labels for self-training. We create a composite image by applying a collection of random augmentations to the original target image region. This mechanism allows the detector to self-train with confident pseudo-labels applied to augmented images under domain shift. We evaluate DACA on the popular benchmarks Cityscapes [], FoggyCityscapes [], Sim10K [], and KITTI []. DACA achieves state-of-the-art performance, improving upon the nearest competitor by more than 2% in terms of mean Average Precision (mAP). Fig. 1 illustrates the idea of DACA. 
To summarize, our contributions are: * Our approach is the first alternative to mix up approaches that does not mix images from different domains, but instead generates difficult and informative composite images only from the unsupervised target images. * We devise a novel approach for UDA based on self-supervision. DACA generates the composite image based on augmented versions of the target image region with the most confident detections, making the adaptation more effective. ## 2 Related Work ### UDA for object detection Domain adaptation has been an active area of research to compensate for the distribution mismatch between source and target data in classification and segmentation endeavours []. It has taken several forms, such as multi-source adaptation [3], few-shot adaptation [4], and source-free adaptation [4]. For instance, Inoue [5] tackle the problem of weakly-supervised domain adaptation where instance-level annotations are provided at the source and image-level annotations are envisioned at the target. Wang [5] leverage synthetic data to carry out the adaptation. Regarding UDA in object detection, Zhu [5] align features of object-prone regions across disjoint domains. Li [5] present a student-teacher approach, where the student model ensures cross-domain consistent features via adversarial training. SC-UDA [4] translates source images to an intermediate domain by CycleGAN to close the style gap with the target, while the content gap is reduced by fusing multiple pseudo-detections produced with embedded stochastic inference. Soviany [5] adopt curriculum learning, which aims to progressively adapt a source-trained model to a target, starting with easy images. Image difficulty was determined based on the number of objects and their size. CycleGAN was also trained to stylize source images to minimize style shift. In this context, several UDA for object detection works seem to focus on feature alignment across domains. Recent approaches mix up source and target images to conduct the adaptation procedure. To the best of our knowledge, ConfMix [4] is the first approach in this direction. ConfMix copy-pastes confident regions along with their detections from the target image to the source image, and supervises the model with a self-supervised loss on the mixed image, scaled by a confidence-induced factor, as well as a supervised loss on the source image. Although ConfMix is effective, we argue that this is a rather preliminary approach for UDA and there is significant room for improvement. For example, the source image is used redundantly during training: the network processes it nearly twice at each iteration, once for the "source pass", and once for the "target pass" in its mix up form (see Fig. 1). We believe that this may hinder the network from adapting its statistics to those of the target domain. Moreover, ConfMix does not leverage data augmentation on the target domain to make adaptation more effective. DACA trains a detector with confident target image regions only. We believe that the source data is unnecessary and potentially deleterious during the target pass. ### Mixing under domain shift The benefit of data augmentation is evident in prior art [4], [5]. Cutout [5] is a regularization strategy that models object occlusion. It simply masks out random square regions of training images and has been shown to improve the overall performance of deep models. Mixup [4] produces element-wise convex combinations of pairs of image examples and their labels. 
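For concreteness, a minimal sketch of the mixup operation just described (classification-style labels; the function name and the Beta prior are the customary choices, not details taken from the cited papers):

```python
import torch

def mixup(x_a, y_a, x_b, y_b, alpha=0.2):
    """Element-wise convex combination of two images and their (one-hot) label
    vectors, as described above; `alpha` parametrises the Beta distribution
    from which the mixing coefficient is drawn."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b
```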
These techniques have been common to address regularization issues in vision tasks so far []. Regarding augmentation-based UDA for object detection, the literature is still developing []. For instance, CutMix [] was devised primarily for classification purposes, and then tailored to object detection []. It replaces a random portion of an image with a randomly sampled portion from another image. However, this may override important source information (e.g. objects and their bounding boxes, as well as background context) for the adaptation procedure. Moreover, the region that is randomly cropped from the target may not carry detections to be used as pseudo-labels. ConfMix [] partially addresses the above problem by selecting a confident region from the target instead of a random one (see Fig. 1). See description of ConfMix and its limitations in the previous section. DACA aims to settle the shortcomings of mix up approaches at once. The target image combines random augmentations of a confident target region only. We empirically show that our approach achieves state-of-the-art results. ## 3 Our approach ### Overview Given a detector \(\Phi_{\Theta}\) pre-trained on source data, DACA performs adaptation by self-training \(\Phi_{\Theta}\) on challenging composite images obtained by combining different augmentations of the same crop extracted from target data. We adapt the detector to the target distribution by imposing consistency between the detections predicted on the composite image and the detections of the most confident image region transformed accordingly to the augmentations. Figure 2: Overview of our proposed approach (DACA). The detector maintains its knowledge on the source by supervision via \(\ell_{S}\), and jointly adapts towards the target via \(\ell_{T}\) using the augmented target image and its pseudo-detections. Keys: Comp.: composition. Augm.: augmentation. Bounding boxes colors: red: source detections, green: source ground-truth, orange: target pseudo-detections, blue: composite target detections. ### Detect, augment, compose, and adapt We define a pair of annotated source data as \((\mathbf{X}_{S},\mathbf{G}_{S})\), where \(\mathbf{X}_{S}\in\mathbb{R}^{H\times W\times 3}\) is an RGB image sampled from the source data distribution \(S\) and \(\mathbf{G}_{S}=\{(\mathbf{B}_{i},c_{i}):i=1,\ldots,N_{S}\}\) is the list of its \(N_{S}\) ground-truth bounding boxes, where \(\mathbf{B}_{i}\in\mathbb{R}^{4}\) contains the coordinates of two opposite corners of the \(i\)th bounding box, and \(c_{i}\in[1,\ldots,C]\) is a label representing the category of the object contained inside it. We denote an RGB image sampled from the target data distribution \(T\) as \(\mathbf{X}_{T}\in\mathbb{R}^{H\times W\times 3}\). Without loss of generality, we assume \(\mathbf{X}_{S}\) and \(\mathbf{X}_{T}\) to have the same size. A neural network model designed to perform object detection can be defined as a parametric function \(\Phi_{\Theta}\colon\mathbb{R}^{H\times W\times 3}\to\mathbb{R}^{M\times 4} \times[1,\ldots,C]^{M}\) with learnable parameters \(\Theta\) that takes as input an RGB image \(\mathbf{X}\) and produces as output a list of \(M\) detections of the objects of interest contained in it, i.e. \(\Phi_{\Theta}(\mathbf{X})=\mathbf{D}\). 
#### 3.2.1 Finding reliable pseudo-detections Given a target image \(\mathbf{X}_{T}\), we first feed it to the detector \(\Phi_{\Theta}\) (pre-trained on source data), to obtain the detections \(\mathbf{P}_{T}\in\mathbb{R}^{M_{T}\times 4}\times[1,\ldots,C]^{M_{T}}\). Because we want to use \(\mathbf{P}_{T}\) as pseudo-labels to self-train our model, it is convenient to find a way to retain good detections and filter out the ones with low confidence which are more likely to be false positives. To this end, we first divide the input image \(\mathbf{X}_{T}\) into \(S_{\text{row}}\) rows and \(S_{\text{col}}\) columns, creating a grid of \(S_{\text{row}}\times S_{\text{col}}\) cells. We then assign to each cell \(\mathbf{X}_{T}^{ij}\in\mathbb{R}^{H/S_{\text{row}}\times W/S_{\text{col}} \times 3}\), \(i=1,\ldots,S_{\text{row}}\), \(j=1,\ldots,S_{\text{col}}\), a confidence value obtained by averaging the confidence of the bounding boxes whose center belongs to it. The most confident cell \(\mathbf{\tilde{X}}_{T}\) is cropped from \(\mathbf{X}_{T}\) and the portions of its bounding box detections that fall outside the selected region are trimmed accordingly to obtain \(\mathbf{\tilde{P}}_{T}\) (Fig. 2, Crop). #### 3.2.2 Composing augmented versions Since the most reliable detections \(\mathbf{\tilde{P}}_{T}\) occupy only a small portion of \(\mathbf{X}_{T}\), we perform the following operations: (i) we fill the remaining cells with useful information to avoid wasting computation, (ii) we sample such information from the target domain to balance source and target data during adaptation, and (iii) we fill the remaining cells with augmented versions of \(\mathbf{\tilde{X}}_{T}\) and preserve pseudo-label reliability because we transform \(\mathbf{\tilde{P}}_{T}\) according to the augmentation function chosen, as opposed to estimating them over potentially highly distorted versions of \(\mathbf{\tilde{X}}_{T}\). To this end, we first perturb the pair \((\mathbf{\tilde{X}}_{T},\mathbf{\tilde{P}}_{T})\) by different data augmentations \(f_{1},f_{2},\ldots,f_{S_{\text{row}}\,S_{\text{col}}}\), randomly selected from a list of predefined ones (Fig. 2, Augm.). Throughout our experiments the list of predefined data augmentations includes horizontal flipping, cropping, blurring, color jittering, downscaling, and perturbing brightness and contrast (Tab. 1), but it can be expanded or tailored according to the downstream task. A composite image \(\mathbf{\hat{X}}_{T}=g\left(\mathbf{\tilde{X}}_{T}\right)\in\mathbb{R}^{H \times W\times 3}\) is then created by composing the augmented versions of \(\mathbf{\tilde{X}}_{T}\) in an \(S_{\text{row}}\times S_{\text{col}}\) grid as \[g\colon\mathbb{R}^{H/S_{\text{row}}\times W/S_{\text{col}}\times 3}\to\mathbb{R}^{H \times W\times 3},\quad g\left(\mathbf{\tilde{X}}_{T}\right)=\begin{bmatrix}f_{1} \left(\mathbf{\tilde{X}}_{T}\right)&\ldots&f_{S_{\text{col}}}\left(\mathbf{ \tilde{X}}_{T}\right)\\ \vdots&\ddots&\vdots\\ f_{(S_{\text{row}}-1)S_{\text{col}}+1}\left(\mathbf{\tilde{X}}_{T}\right)&\ldots &f_{S_{\text{row}}S_{\text{col}}}\left(\mathbf{\tilde{X}}_{T}\right)\end{bmatrix},\] obtaining an image of the same size as \(\mathbf{X}_{T}\) (Fig. 2, Comp.). Composite pseudo-detections are obtained in the same way, i.e. \(\hat{\mathbf{P}}_{T}=g(\tilde{\mathbf{P}}_{T})\) (Fig. 2, Comp.). 
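The two steps above can be summarised in a short, illustrative sketch (an editorial addition, not the authors' released code: the box format, helper names, and use of NumPy are assumptions, and only horizontal flipping is spelled out as an example augmentation):

```python
import numpy as np

def most_confident_cell(img, boxes, scores, s_row=2, s_col=2):
    """Step 3.2.1: grid the image, score each cell by the mean confidence of the
    detections whose centre falls inside it, and return the best crop together
    with its trimmed, crop-relative pseudo-boxes (format assumed [x1, y1, x2, y2])."""
    H, W = img.shape[:2]
    ch, cw = H // s_row, W // s_col
    cx, cy = (boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2
    best_ij, best_conf = (0, 0), -np.inf
    for i in range(s_row):
        for j in range(s_col):
            inside = (cy >= i * ch) & (cy < (i + 1) * ch) & \
                     (cx >= j * cw) & (cx < (j + 1) * cw)
            conf = scores[inside].mean() if inside.any() else -np.inf
            if conf > best_conf:
                best_ij, best_conf = (i, j), conf
    i, j = best_ij
    crop = img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
    inside = (cy >= i * ch) & (cy < (i + 1) * ch) & (cx >= j * cw) & (cx < (j + 1) * cw)
    kept = boxes[inside].copy()
    kept[:, [0, 2]] = np.clip(kept[:, [0, 2]] - j * cw, 0, cw)   # trim to the cell
    kept[:, [1, 3]] = np.clip(kept[:, [1, 3]] - i * ch, 0, ch)
    return crop, kept

def hflip(crop, boxes):
    """One example augmentation f_k from Tab. 1; each f_k must return the augmented
    crop together with consistently transformed boxes."""
    w = crop.shape[1]
    fb = boxes.copy()
    fb[:, [0, 2]] = w - boxes[:, [2, 0]]
    return crop[:, ::-1].copy(), fb

def compose(crop, boxes, augs, s_row=2, s_col=2):
    """Step 3.2.2: tile s_row * s_col augmented copies of the confident crop into a
    composite image of the original size and shift the pseudo-boxes accordingly."""
    h, w = crop.shape[:2]
    canvas = np.zeros((s_row * h, s_col * w) + crop.shape[2:], dtype=crop.dtype)
    out_boxes = []
    for k, f_k in enumerate(augs[: s_row * s_col]):
        i, j = divmod(k, s_col)
        a_crop, a_boxes = f_k(crop, boxes)
        canvas[i * h:(i + 1) * h, j * w:(j + 1) * w] = a_crop
        shifted = a_boxes.copy()
        shifted[:, [0, 2]] += j * w
        shifted[:, [1, 3]] += i * h
        out_boxes.append(shifted)
    return canvas, np.concatenate(out_boxes, axis=0)
```

In the actual method the \(S_{\text{row}}S_{\text{col}}\) augmentations are drawn at random from the transformation list of Tab. 1 rather than fixed.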
#### 3.2.3 Loss function The core idea behind DACA is to use \(\hat{\mathbf{P}}_{T}\) as pseudo-detections to self-train \(\Phi_{\Theta}\). To do so, we pass \(\hat{\mathbf{X}}_{T}\) as input to \(\Phi_{\Theta}\) to obtain the detections \(\hat{\mathbf{D}}_{T}\) and promote consistency between \(\hat{\mathbf{D}}_{T}\) and \(\hat{\mathbf{P}}_{T}\) by minimizing the loss \(\ell_{T}\). Performing adaptation by minimizing \(\ell_{T}\) alone may incur two problems: (i) catastrophic forgetting [], i.e. the knowledge learned by the detector on \(S\) during the pre-training phase can be quickly forgotten, and (ii) potential false positive pseudo-detections on target data can undermine the performance. To cope with both issues, during the adaptation phase we minimize the combined loss \(\ell=\ell_{S}+\ell_{T}\), where \(\ell_{S}\) promotes consistency between source detections \(\mathbf{D}_{S}=\Phi_{\Theta}(\mathbf{X}_{S})\) and its ground-truth annotations \(\mathbf{G}_{S}\) (Fig. 2, first row). ## 4 Experiments ### Datasets We evaluate and compare our method under three benchmark adaptation scenarios, which include four datasets: (i) Cityscapes (C) []: it contains urban images acquired from different cities. Annotations are provided for the eight object classes Person, Rider, Car, Truck, Bus, Train, Motorcycle, and Bicycle; (ii) FoggyCityscapes (F) []: it is a variation of Cityscapes obtained by applying a synthetic fog filter with three different intensity levels; (iii) Sim10K (S) []: it is a synthetic dataset produced by the GTA-V game engine; (iv) KITTI (K) []: it is a traffic scenes dataset. The first experimental scenario envisions a synthetic-to-real adaptation task on the Car category where \(S\) is Sim10K and \(T\) is Cityscapes (S\(\rightarrow\)C). The second experimental scenario is cross-camera adaptation on the Car category where \(S\) is KITTI and \(T\) is Cityscapes (K\(\rightarrow\)C). The last experimental scenario is weather-adaptation on Cityscapes' eight categories (C\(\rightarrow\)F). ### Implementation details We compare our method with previous works. We also report an oracle model trained with the target ground truth and tested on the test set of the target, which represents our upper bound of the performance range. We start the experiments with a 2\(\times\)2 target grid division layout in which the confident target region is augmented four times based on a list of typical image augmentations as listed in Tab. 1. Tab. 2 shows the quantitative results obtained on the S\(\rightarrow\)C, K\(\rightarrow\)C, and C\(\rightarrow\)F benchmarks for the Car category. DACA outperforms all the previous works. We observe gains of 4.4% and 2.6% in the S\(\rightarrow\)C and K\(\rightarrow\)C cases, respectively, compared to the closest competitor method (ConfMix), achieving a new state of the art on both scenarios. In particular, across the three scenarios, our method improves over ConfMix by 2.4% on average. Tab. 3 shows the quantitative results obtained on the C\(\rightarrow\)F benchmark. DACA improves by 0.3% over ConfMix, while both underperform some of the other methods that use different detectors and different backbones (e.g. MGA and SIGMA [10, 5]). Although this adaptation case is particularly challenging due to the high occlusion of objects and high domain shift owing to the presence of fog (confirmed also by the low mAP achieved by the baseline), the gap between DACA and the oracle score is reasonably narrow (3.3%). We display qualitative examples on the class Car for the K\(\rightarrow\)C and S\(\rightarrow\)C scenarios in Fig. 3 to highlight the fact that DACA produces fewer false positives compared to ConfMix. More results can be found in the Supplementary Material. 
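Since the transformation names and hyper-parameters in Tab. 1 (below) correspond to transforms of the albumentations library, the augmentation list can plausibly be assembled as follows; this is an editorial sketch under that assumption (1.x-style signatures), not the authors' released code:

```python
import albumentations as A

# Plausible reconstruction of the Tab. 1 augmentation list (library assumed).
augmentations = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.BBoxSafeRandomCrop(p=0.2),
        A.Blur(p=0.5),
        A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, p=0.5),
        A.Downscale(scale_min=0.5, scale_max=0.99, p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.1, contrast_limit=0.1, p=0.5),
    ],
    # Boxes are transformed together with the image so the pseudo-labels stay valid.
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

# One augmented copy of the confident crop and its pseudo-boxes:
# out = augmentations(image=crop, bboxes=pseudo_boxes, class_labels=pseudo_classes)
```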
\begin{table} \begin{tabular}{l l l l} \hline \hline Transformation & Acronym & Operation & Hyper-parameters \\ \hline HorizontalFlip & HF & Randomly flips the image horizontally & probability(\(p\))=0.5 \\ BBoxSafeRandomCrop & SRC & Randomly crops the image without compromising & p=0.2 \\ & the bounding boxes & & \\ Blur & B & Randomly bars out the image & p=0.5 \\ ColorFilter & CJ & Randomly applies color jiner & p=0.5, brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2 \\ Downscale & D & Randomly downscules the image & p=0.5, minimum scale=0.5, maximum scale=0.99 \\ RandomBrightnessContrast & BC & Randomly alters brightness/contrast of the image & p=0.5, brightness limit=0.1, contrast limit=0.1 \\ \hline \hline \end{tabular} \end{table} Table 1: List of image augmentation operations to build our composite input image. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Method & Detector & Backbone & S\(\rightarrow\)C & K\(\rightarrow\)C & C\(\rightarrow\)F & Avg. \\ \hline CTRP [5] & Faster R-CNN & VGG-16 & 44.5 & 43.6 & 50.1 & 46.1 \\ CDN [5] & Faster R-CNN & VGG-16 & 49.3 & 44.9 & 50.9 & 48.4 \\ FL-UDA [5] & Faster R-CNN & VGG-16 & 43.1 & 44.6 & 44.4 & 44.0 \\ SC-UDA [5] & Faster R-CNN & VGG-16 & 52.4 & 46.4 & 56.0 & 51.6 \\ SAPN [5] & Faster R-CNN & VGG-16 & 44.9 & 43.4 & 59.8 & 49.4 \\ MeGA-CDN [5] & Faster R-CNN & VGG-16 & 44.8 & 43.0 & 52.4 & 46.7 \\ MGA [5] & Faster R-CNN & VGG-16 & 54.6 & 48.5 & 60.6 & 54.6 \\ IRGG [5] & Faster R-CNN & ResNet-50 & 43.2 & 45.7 & 51.9 & 46.9 \\ GIPA [5] & Faster R-CNN & ResNet-50 & 47.6 & 47.9 & 54.1 & 49.9 \\ SSOD [5] & Faster R-CNN & ResNet-50 & 49.3 & 47.6 & 57.2 & 51.4 \\ \hline EPM [5] & FCOS & ResNet-101 & 51.2 & 45.0 & 57.1 & 51.1 \\ SCAN [5] & FCOS & VGG-16 & 52.6 & 45.8 & 57.3 & 51.9 \\ SIGMA [5] & FCOS & VGG-16 & 53.7 & 45.8 & 63.7 & 54.4 \\ \hline Baseline (Source only) & YOLOv5 & CSP-Darknet53 & 50.4 & 42.9 & 54.9 & 49.4 \\ Oracle (Target only) & YOLOv5 & CSP-Darknet53 & 69.5 & 69.5 & 67.9 & 69.0 \\ ConfMix [5] & YOLOv5 & CSP-Darknet53 & 56.2 & 51.6 & **63.0** & 56.9 \\ DACA (Ours) & YOLOv5 & CSP-Darknet53 & **60.6** & **54.2** & **63.0** & **59.3** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of Car detection performance (AP) on the S\(\rightarrow\)C, K\(\rightarrow\)C, and C\(\rightarrow\)F adaptation benchmarks. Keys: C: Cityscapes. F: FoggyCityscapes. S: Sim10K. K: KITTI. Avg.: average across the three adaptation scenarios. ### Ablation study #### 4.4.1 Effect of different augmentation types Tab. 4 (left) shows an ablation study on the six augmentation types listed in Tab. 1 and their combinations. We can observe that horizontal flipping (HF) performs better than the other augmentations on average. Downscale (D) and blurring (B) are the next best performing augmentations, they improve the model's robustness to scale and occlusion. Combining all the augmentations together performs the best in all benchmarks. Interestingly, turning off all the augmentations and using raw images (the None case in Tab. 4 (left)) lowers the mAP significantly, which underlines the benefit of using data augmentations for UDA. #### 4.4.2 Effect of grid layout Tab. 4 (top right) shows an ablation on the grid layout used to find reliable detections for our best setting (the All case in Tab. 4 (left)). 
We can see that a 3\(\times\)3 grid scores the lowest mAP, we believe this happens because it produces a small confident region and consequently few Figure 3: Qualitative instances of DACA as compared to the ground truth and ConfMix []. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Method & Detector & Backbone & Person & Rider & Car & Truck & Bus & Train & Motorcycle & Bicycle & mAP \\ \hline IRGG [] & Faster R-CNN & ResNet-50 & 37.4 & 45.2 & 51.9 & 24.4 & 39.6 & 25.2 & 31.5 & 41.6 & 37.1 \\ GIPA [] & Faster R-CNN & ResNet-50 & 32.9 & 46.7 & 54.1 & 24.7 & 45.7 & 41.1 & 32.4 & 38.7 & 39.5 \\ SSOD [] & Faster R-CNN & ResNet-50 & 38.8 & 45.9 & 57.2 & 29.9 & 50.2 & 51.9 & 31.9 & 40.9 & 43.3 \\ CTRP [] & Faster R-CNN & VGG-16 & 32.7 & 44.4 & 50.1 & 21.7 & 45.6 & 25.4 & 30.1 & 36.8 & 35.9 \\ CDN [] & Faster R-CNN & VGG-16 & 35.8 & 45.7 & 50.9 & 30.1 & 42.5 & 29.8 & 30.8 & 36.5 & 36.6 \\ FL-UDA [] & Faster R-CNN & VGG-16 & 30.4 & 51.9 & 44.4 & 34.1 & 25.7 & 30.3 & 37.2 & 41.8 & 37.0 \\ SC-UDA [] & Faster R-CNN & VGG-16 & 38.5 & 43.7 & 56.0 & 27.1 & 43.8 & 29.7 & 31.2 & 39.5 & 38.7 \\ SAPN [] & Faster R-CNN & VGG-16 & 40.8 & 46.7 & 59.8 & 24.3 & 46.8 & 37.5 & 30.4 & 40.7 & 40.9 \\ MeGA-CDA [] & Faster R-CNN & VGG-16 & 37.7 & 49.0 & 52.4 & 25.4 & 49.2 & 46.9 & 34.5 & 39.0 & 41.8 \\ MGA [] & Faster R-CNN & VGG-16 & 43.9 & 49.6 & 60.6 & 29.6 & 50.7 & 39.0 & 38.3 & 42.8 & 44.3 \\ \hline EPM [] & FCOS & ResNet-101 & 41.5 & 43.6 & 57.1 & 29.4 & 44.9 & 39.7 & 29.0 & 36.1 & 40.2 \\ SCAN [] & FCOS & VGG-16 & 41.7 & 43.9 & 57.3 & 28.7 & 48.6 & 48.7 & 31.0 & 37.3 & 42.1 \\ SIGMA [] & FCOS & ResNet-50 & 44.0 & 43.9 & 60.3 & 31.6 & 50.4 & 51.5 & 31.7 & 40.6 & 44.2 \\ \hline Baseline (Source only) & YOLOv5 & CSP-Darknet53 & 39.2 & 38.0 & 54.9 & 12.4 & 33.1 & 6.2 & 19.9 & 33.6 & 29.7 \\ Oracle (Target only) & YOLOv5 & CSP-Darknet53 & 45.6 & 43.0 & 67.9 & 30.2 & 48.0 & 39.4 & 30.3 & 37.5 & 42.7 \\ ConfMix [] & YOLOv5 & CSP-Darknet53 & 44.0 & 43.3 & 63.0 & 30.1 & 43.0 & 29.6 & 25.5 & 34.4 & 39.1 \\ DACA (Ours) & YOLOv5 & CSP-Darknet53 & 41.9 & 40.8 & 63.0 & 29.4 & 42.2 & 37.2 & 27.8 & 33.0 & 39.4 \\ \hline \hline \end{tabular} \end{table} Table 3: Object detection performance (APs) on the C\(\rightarrow\)F scenario. The last column reports the mean average precision (mAP) across all the object categories. DACA improves slightly over its direct competitor ConfMix [] while both score the highest AP on the Car class. objects and small spatial context. On the other hand, a 3\(\times\)2 grid is slightly better than a 2\(\times\)3 one, suggesting that the horizontal context is more informative than the vertical one due to the presence of cars and roads in the target datasets. Finally, a 2\(\times\)2 grid performs best, being a good trade-off of the previous layouts. #### 4.4.3 Does the number of augmented regions matter? Tab. 4 (bottom right) shows an ablation on the number of augmented regions for our best setting (the All case in Tab. 4 (left)). We can observe that lowering the number of augmented regions that are used to build the composite image incurs a drastic performance decay. Fig. 4 shows some qualitative results obtained in the S\(\rightarrow\)C adaptation scenario when using only one augmented region versus when four regions are used in a 2\(\times\)2 grid layout. Using only one region compromises the robustness of the detector and yields more false positive detections. ## 5 Conclusions We presented a four-step approach for unsupervised domain adaptation in object detection. 
We exploit regions of target images that have high-confidence object detections to collect pseudo-labels for self-training. We generate a new image through the composition of different augmented versions of the selected image region and use its pseudo-labels to supervise the training of this composite image, which allows us to adapt the detector towards the \begin{table} \begin{tabular}{l c c c c} \hline \hline Transf. & C\(\rightarrow\)F & K\(\rightarrow\)C & S\(\rightarrow\)C & Avg. \\ \hline None & 33.5 & 52.8 & 57.4 & 47.9 \\ \hline HF & 38.0 & 52.9 & 59.7 & 50.2 \\ RC & 34.3 & 52.2 & 59.4 & 48.7 \\ B & 35.9 & 53.2 & 58.4 & 49.2 \\ CI & 34.6 & 52.5 & 58.2 & 48.5 \\ D & 35.3 & 54.1 & 59.8 & 49.8 \\ BC & 33.9 & 52.6 & 57.5 & 48.0 \\ \hline HF+B & 38.9 & 53.9 & 59.3 & 50.7 \\ HF+D & 35.4 & 53.3 & 56.7 & 48.5 \\ D+B & 36.8 & 53.5 & 59.9 & 50.0 \\ \hline HF+D+B & 37.8 & 54.0 & 57.1 & 49.6 \\ \hline All & **39.4** & **54.2** & **60.6** & **51.4** \\ \hline \hline \end{tabular} \begin{tabular}{c c c c} \hline \hline Grid layout & C\(\rightarrow\)F & K\(\rightarrow\)C & S\(\rightarrow\)C & Avg. \\ \hline 3\(\times\)3 & 37.8 & 51.2 & 57.7 & 48.9 \\ 2\(\times\)3 & 38.5 & 51.7 & 58.7 & 49.6 \\ 3\(\times\)2 & 38.6 & 53.6 & 59.9 & 50.7 \\ 2\(\times\)2 & **39.4** & **54.2** & **60.6** & **51.4** \\ \hline \hline \end{tabular} \begin{tabular}{c c c c} \hline \hline Num. regions & C\(\rightarrow\)F & K\(\rightarrow\)C & S\(\rightarrow\)C & Avg. \\ \hline 1 & 35.4 & 51.5 & 56.6 & 47.8 \\ 2 & 38.3 & 52.2 & 58.5 & 49.7 \\ 3 & 39.1 & 53.1 & 60.2 & 50.8 \\ 4 & **39.4** & **54.2** & **60.6** & **51.4** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation results in terms of mAP for the three adaptation benchmarks. Left: Effect of augmentation types. Top right: Effect of grid layout. Bottom right: Effect of the number of augmented regions (in a 2\(\times\)2 grid) to build the composite image. C: Cityscapes, F: FoggyCityscapes, S: Sim10K, and K: KITTI. Avg. Average across the three adaptation scenarios Figure 4: Qualitative instances depicting the effect of the number of augmented regions used to build the composite image. target domain in a self-supervised fashion. The adaptation of the detectors is carried out simultaneously with the training of source images to prevent knowledge drift of the detector. We evaluated our approach on several benchmarks for UDA and compared it against alternative approaches. Results show that our approach can outperform the other approaches. As future work, one can explore style transfer techniques [] to improve DACA's augmentation step, and consider confident bounding boxes instead of confident regions to improve DACA's composition step by reducing the number of false positives negatively affecting the self-training process. ## Acknowledgements We are very grateful to the support by European Union's Horizon Europe research and innovation programme under grant agreement No. 101092043, project AGILEHAND (Smart Grading, Handling and Packaging Solutions for Soft and Deformable Products in Agile and Reconfigurable Lines).
2304.12931
SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators
To meet the growing need for computational power for DNNs, multiple specialized hardware architectures have been proposed. Each DNN layer should be mapped onto the hardware with the most efficient schedule, however, SotA schedulers struggle to consistently provide optimum schedules in a reasonable time across all DNN-HW combinations. This paper proposes SALSA, a fast dual-engine scheduler to generate optimal execution schedules for both even and uneven mapping. We introduce a new strategy, combining exhaustive search with simulated annealing to address the dynamic nature of the loop ordering design space size across layers. SALSA is extensively benchmarked against two SotA schedulers, LOMA and Timeloop on 5 different DNNs, on average SALSA finds schedules with 11.9% and 7.6% lower energy while speeding up the search by 1.7x and 24x compared to LOMA and Timeloop, respectively.
Victor J. B. Jung, Arne Symons, Linyan Mei, Marian Verhelst, Luca Benini
2023-04-20T12:00:08Z
http://arxiv.org/abs/2304.12931v2
# SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators ###### Abstract To meet the growing need for computational power for DNNs, multiple specialized hardware architectures have been proposed. Each DNN layer should be mapped onto the hardware with the most efficient schedule, however, SotA schedulers struggle to consistently provide optimum schedules in a reasonable time across all DNN-HW combinations. This paper proposes SALSA, a fast dual-engine scheduler to generate optimal execution schedules for both even and uneven mapping. We introduce a new strategy, combining exhaustive search with simulated annealing to address the dynamic nature of the loop ordering design space size across layers. SALSA is extensively benchmarked against two SotA schedulers, LOMA [1] and Timeloop [2] on 5 different DNNs, on average SALSA finds schedules with 11.9% and 7.6% lower energy while speeding-up the search by 1.7\(\times\) and 24\(\times\) compared to LOMA and Timeloop, respectively. DNN, accelerator, scheduling, energy-efficiency, combinatorial optimization, simulated annealing ## I Introduction Convolutional Neural Networks (CNNs) [3] are a very successful class of machine learning (ML) models. This type of Deep Neural Network (DNN) consists of a stack of convolutional layers and reaches state-of-the-art (SotA) performance in the fields of image recognition, classification, segmentation, etc. A wide range of specialized hardware (HW) emerged to accelerate DNN execution [4]. These DNN accelerators vary from datacenter-class [5] to embedded systems. The efficiency of a DNN Accelerator is mainly based on the memory hierarchy, the spatial unrolling, and it heavily relies on efficient schedulers to find optimal temporal mappings [6] of DNN layers onto hardware resources. As previous work has demonstrated, the scheduling of a NN onto such HW accelerators impacts energy and latency up to orders of magnitude [7]. A subtle change in the characteristics of the NN-HW combination can completely modify the optimal schedule. For example, a change in on-chip memory resources can alter the optimal data allocation scheme and even the most efficient workload execution order to minimize energy or latency. As a result, many design space exploration (DSE) schedulers [2, 8, 9, 1], have been proposed to automatically find the optimal schedule given a DNN workload and an accelerator HW architecture. However, the above-mentioned schedulers fail to reach near-optimal mappings in a reasonable time. The contributions of this paper are the following: 1. We introduce SALSA, a novel scheduler that never shrinks or prunes the schedule search space while having an execution time of a few seconds. Using a dual-engine strategy, SALSA consistently reaches near-optimal schedules with an average error margin of \(0.007\)%. 2. To prove its superiority, we extensively compare SALSA with 2 SotA schedulers, LOMA [1] and Timeloop [2]. SALSA always finds schedules with higher or equal quality than Timeloop and LOMA while consequently reducing the search time. We tested SALSA on 5 commonly used DNNs, benchmarked against Timeloop and LOMA, and evaluated using the SotA cost model ZigZag [10]. **In both benchmarks, SALSA achieves a consequent reduction of the search time, we report 1.7\(\times\) and 24\(\times\) faster search than LOMA and Timeloop. 
Most importantly, SALSA reaches superior schedules leading to a reduction of the energy needed to execute the model by 7.6% and 11.9% compared to LOMA and Timeloop, respectively.** ## II Background ### _DNNs, Accelerators & Schedules_ A single convolutional layer consists of 7 nested for-loops, as can be seen in the top-left of Figure 1. The loop dimension sizes determine the tensor size of the three operands; Input (I), Weight (W), and Output (O). Other NN layer topologies (fully connected, pointwise convolutional, matrix-matrix multiplication, etc.) can use the same representation by fixing specific loop dimension sizes to 1. In order to speed up the DNN inference or increase its energy efficiency, various Application-Specific Integrated Circuit (ASIC) DNN accelerators have been proposed both in academia and by the industry. Such accelerators typically include a spatially unrolled array of Processing Elements (PE) that consist of a Multiply-Accumulate (MAC) unit and local memories to store the operand data. The PEs are connected to larger memories higher up in the memory hierarchy stack Fig. 1: Overview of the SALSA implementation. through fixed interconnections or a flexible Network-on-Chip (NoC) [4]. These connections allow the multicasting of operand data to multiple PEs, consequently parallelizing the computation. Unrolling a for-loop onto multiple PEs will turn it into a parallel for-loop (parfor-loop). When executing a DNN onto an Accelerator, the set of parfor-loops is named spatial unrolling and indicates the parallelization pattern. Usually, the number of PEs is lower than the dimension of the original for-loops; thus, it is common to split them in order to turn a part of the original for-loop into parfor-loops. On top of the spatial unrolling, an optimized temporal execution schedule is crucial to extract the full potential of DNN Accelerators. More specifically, a schedule can be decomposed into two elements: 1.) the _loop ordering_, which describes the temporal processing order of the for-loops, and 2.) the _memory allocation_, which assigns the operands of each loop to a specific memory resource. A detailed description of these elements follows later. ### _Loop Prime Factor Decomposition_ The loop ordering of the original nested for-loop representation would not result in an optimal schedule. By decomposing the large loop dimensions into multiple smaller loops, and subsequently re-ordering those smaller loops, better schedules can be found. At the finest level of granularity, each loop is decomposed into the number of prime factors of its loop dimension. The resulting indivisible for-loops are referred to as Loop Prime Factors (LPF). An example of the decomposition of an originally nested for-loop to an LPF ordering is shown in Fig.2 steps A to B. ### _Loop Ordering Search Space_ A loop ordering \(o\) can be seen as a permutation of a finite set of elements, where each element represents a for-loop (Fig.2 step B). The loop ordering search space is thus represented by the Symmetric Group \(S_{n}\) with \(n\) the number of loops in \(o\). The order (number of elements) of \(S_{n}\) is \(n!\) if every element is unique. Due to the LPF decomposition, \(n=20\) is not uncommon for modern DNN layers. This would require the evaluation of \(O(10^{18})\) orderings. Therefore, exhaustively going through all elements in \(S_{n}\) is only tractable for small NN layers where \(n<11\). 
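To make the size of this permutation space concrete, the following illustrative helper (not taken from the SALSA code base) decomposes a layer's loop dimensions into LPFs and counts the distinct orderings, i.e. \(n!\) divided by the factorial of the multiplicity of each repeated LPF, as formalized later in Eq. (1); the example layer dimensions are fictional:

```python
from collections import Counter
from math import factorial

def prime_factors(n):
    """Loop Prime Factor (LPF) decomposition of a single loop dimension."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    return fs + ([n] if n > 1 else [])

def ordering_space_size(loop_dims):
    """Number of distinct loop orderings after LPF decomposition: identical LPFs
    (same loop type and same prime) are interchangeable, hence the division by
    the factorial of each multiplicity."""
    lpfs = [(name, p) for name, size in loop_dims.items() for p in prime_factors(size)]
    n, count = len(lpfs), factorial(len(lpfs))
    for k in Counter(lpfs).values():
        count //= factorial(k)
    return n, count

# A fictional convolutional layer: K=128, C=64, OX=OY=56, FX=FY=3.
n, count = ordering_space_size({"K": 128, "C": 64, "OX": 56, "OY": 56, "FX": 3, "FY": 3})
print(n, count)   # 23 LPFs -> far more orderings than can be evaluated exhaustively
```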
### _Memory Allocation_ Loop ordering has to be combined with the allocation of the data attributed to these loops to specific memory resources in the memory hierarchy (Fig.2 step D). Most mapping representations store the 3 operands (I/W/O) associated with a for-loop at the same memory level. Such mappings are referred to as 'even memory mappings'. A more complex mapping strategy has been proposed recently [10], named **'uneven memory mapping'**. This strategy allows to unevenly distribute of operand data of the nested for-loops within shared memories in the hierarchy in order to more efficiently exploit the data reuse at the cost of drastically enlarging the mapping search space. To reduce the resulting large mapping search space, LOMA [1] proposed a bottom-up memory allocation strategy independent of the loop ordering. This is possible due to the fact that for a single loop ordering \(o\), the optimal memory allocation \(m\) can be inferred with a one-to-one relationship in a bottom-up fashion. ### _Cost Model_ The energy, latency, or any other performance metric of the inference of a CNN layer on an accelerator depends on four aspects: 1.) the DNN workload \(w\) (size of the 7 loop dimensions); 2.) the accelerator characteristics \(a\) (PE array size, memory organization, memory size, etc.); 3.) the spatial unrolling \(s\) (parallelization strategy across PE array); 4.) the schedule or temporal mapping \(m\). This work focuses on temporal DNN mapping optimization, where the inputs \(w\), \(a\), and \(s\) are provided by the user or by an upper-level search engine. The optimization objective, returned by the cost model, is noted \(V\) and can represent the energy, latency, Energy-Delay Product (EDP), etc. ## III Related Work In recent years, a plethora of tools has been proposed to generate high-quality schedules. Some constrain the search space like CoSA [9], and Pluto [11] to speed up the search. Others, like Interstellar [8] and ZigZag [10] prune some part of the search space during the search through heuristics. LOMA [1] combines an exhaustive search with optional user-provided constraints, providing both unconstrained and constrained search. Timeloop [2] embeds a random search engine in an unconstrained space, failing to consistently provide near-optimum schedules in fast search time. Alternatively, Mind Mappings [12] trains a DNN to substitute the cost model and make the search space smooth and differentiable in order to apply Stochastic Gradient Descent. Fig. 2: Detailed example of SALSA’s Simulated Annealing path. The workload used in this figure is fictional for the purpose of demonstration, and the Memory Hierarchy is composed of three levels: DRAM, Shared Buffer, and Registers. It is important to highlight that, besides the search strategy, the representation of a schedule varies between frameworks. This makes it hard to extensively compare results and performances. All the above-mentioned frameworks implement an even mapping representation (see Section II-D). ZigZag's and LOMA's representation also allows uneven mappings. Consequently, its mapping search space becomes more complex. SALSA overcomes these bottlenecks by implementing a flexible and fast scheduler that allows for both even and uneven mappings generation by separating loop ordering and loop memory allocation in two independent processes. This also allows one to use SALSA with other scheduling representations, e.g., plug in another memory allocation strategy or cost model. 
As SALSA's loop ordering algorithm doesn't use expert knowledge of the cost model or memory allocation, it is robust to drastic changes in the search space. ## IV SALSA Scheduling Approach To cope with the changing size of the search space from one layer to another, SALSA implements a dual search strategy, as shown in Figure 1. The simulated annealing path is shown in detail in Figure 2. ### _Runtime Approximation and Search Method Selection_ To decide which of the Exhaustive or Simulated Annealing paths is the fastest (Fig.1), we evaluate and compare their runtime. The Simulated Annealing path's runtime is constant (depends on a fixed hyperparameter) while the Exhaustive path's execution time \(T\) is evaluated as follows: \[T(n,k)=\tau\frac{n!}{\prod_{i=1}^{m}k_{i}!} \tag{1}\] Where \(n\) is the number of elements in the loop ordering, \(k_{i}\) is the multiplicity of the i-th element, \(m\) is the number of unique elements in the loop ordering, and \(\tau\) is an HW-dependent constant. Figure 3 shows how the exhaustive search time exponentially increases with the number of LPFs in a loop ordering while the simulated annealing search time remains constant. We will demonstrate that, even though more LPFs imply a larger permutation space, simulated annealing performs well across all DNN-HW combinations in a constant time. ### _Exhaustive Search_ The exhaustive search branch is implemented using LOMA's scheduler [1]. After the exhaustive loop ordering generation, each unique ordering undergoes a bottom-up memory allocation and, finally, a cost model evaluation (both explained next). Most importantly, this exhaustive search engine guarantees to find the global optimum for any preferred optimization criterion at the cost of a potentially infeasible search time. ### _Simulated Annealing Search_ In most cases, the exhaustive path would be too time-consuming, and thus the simulated annealing path is taken. Despite its simplicity, simulated annealing [13] and its different variants are widely used and prove to be efficient in combinatorial optimization. Each iteration of the simulated annealing pass will go through the subsequent steps depicted in Figure 2: #### Iv-B1 Sampling New Ordering In order to sample new orderings (Fig.2 step C), we model a neighborhood of nearby states that can serve as the next candidate state [13]. SALSA defines the neighborhood of a loop ordering \(o\) as follows: \[N_{o}:=\{swap(o,i,j)\mid i\in[0,n),\ j\in[0,n),\ i\neq j\} \tag{2}\] with \(swap(o,i,j)\) the action of swapping the LPFs at indices \(i\) and \(j\) of the ordering \(o\) of size \(n\). With this neighborhood, any point in the search space can be reached in \(n-1\) steps. #### Iv-B2 Memory Allocation & Cost Model Evaluation Firstly, we allocate the memory accordingly to the new loop ordering generated by the previous stage (Fig.2 step D). SALSA then uses a cost model to get the performances associated with the candidate state (Fig.2 step E). In this paper, results using the ZigZag as well as the Timeloop cost model will be shown. #### Iv-B3 Transition Probability Computation & Next Node Selection Once the cost \(V^{\prime}\) of the sampled state \(m^{\prime}\) is returned by the cost model, SALSA computes the probability of accepting the candidate state \(m^{\prime}\) using the following formula: \[\mathbb{P}(m,m^{\prime})=exp(\frac{\frac{V}{V^{\prime}}-1}{T}) \tag{3}\] where \(V\) and \(V^{\prime}\) are respectively the optimization objective of the states \(m\) and \(m^{\prime}\). 
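The swap neighborhood of Eq. (2) and the acceptance rule of Eq. (3) translate almost directly into code. The sketch below is illustrative rather than SALSA's implementation: `evaluate` stands in for the memory allocation plus cost model of steps D-E and must be supplied by the user, while the default hyperparameters are those reported in the experimental setup; the geometric cooling it applies is the temperature schedule described next.

```python
# Minimal simulated-annealing loop over loop orderings (illustrative sketch,
# not SALSA's implementation). `evaluate(ordering)` is assumed to run the
# memory allocation and cost model and return the objective V (e.g. energy).
import math
import random

def anneal(ordering, evaluate, iterations=1000, t0=0.05, rho=0.999):
    current = list(ordering)
    v_current = evaluate(current)
    best, v_best = list(current), v_current
    temperature = t0
    for _ in range(iterations):
        # Neighbour of Eq. (2): swap the LPFs at two distinct indices i, j.
        i, j = random.sample(range(len(current)), 2)
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        v_candidate = evaluate(candidate)
        # Acceptance rule of Eq. (3): improvements are always accepted,
        # worse candidates with probability exp((V/V' - 1)/T).
        accept = v_candidate <= v_current or random.random() < math.exp(
            (v_current / v_candidate - 1.0) / temperature)
        if accept:
            current, v_current = candidate, v_candidate
        if v_current < v_best:
            best, v_best = list(current), v_current
        temperature *= rho  # geometric cooling
    return best, v_best
```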
The temperature \(T\) is a hyperparameter handling the balance between _intensification_ and _diversification_ to avoid getting stuck in local optima while focusing the search on promising regions of the search space. The evolution of \(T\) depends on the number of iterations \(I\) and respects the following geometric progression: \(T_{i+1}=\rho T_{i}\), where \(\rho=0.999\). Fig. 4: Mapping energy distribution during a search for layer 2 of AlexNet, using Timeloop and SALSA. Best viewed in color. Fig. 3: Graph illustrating the required search time for different search strategies for varying numbers of LPFs for AlexNet Layer 2. Note the logarithmic y-axis. ## V Experimental Results and Benchmarking ### _Experimental Setup_ SALSA is implemented in Python and benchmarked against other schedulers available in the SotA. In our study, we use the following 5 NNs: AlexNet, ResNet34 [14], ResNet50 [14], DarkNet19 [15], and MobileNetV2 [16]. The accelerator \(a\) is an Eyeriss-like architecture [4], consisting of a 14 by 12 PE array. Besides a MAC unit, each PE includes a scratchpad for weights, inputs, and outputs. Above the PE array resides a global buffer for storing inputs and outputs, followed by a DRAM that holds all three operands. The spatial dataflow \(s\) is fixed in accordance with the architecture. The total energy consumption of executing a layer is used as \(V\). Experiments were run on a quad-core CPU @3.6GHz, and with \(I=1000\), \(\rho=0.999\) and \(T_{0}=0.05\). ### _Experimental results_ To assess the efficiency of the simulated annealing path of SALSA, we show the energy distribution of mappings using both SALSA and Timeloop (Fig. 4). Note that this energy distribution pattern is consistently found across layers of all studied DNNs. Compared to the random-pruned search of Timeloop, SALSA's simulated annealing energy distribution is centered on higher-quality states, providing better schedules in a shorter time. The stochastic nature of SALSA's simulated annealing motivates an exhaustive search on ResNet34 in order to study the capability of SALSA to consistently reach near-optimal schedules. We used LOMA to exhaustively find the best loop ordering for each unique layer of ResNet34, then we ran SALSA's simulated annealing engine 500 times per layer. We find that SALSA reaches the global optimum 99.9% of the time. Even when SALSA does not find the global optimum, it still generates high-quality schedules, on average with \(0.007\)% higher energy than the best mapping. We also compare SALSA against LOMA with various _LPF Limits_ (Fig. 6). The _LPF Limit_ parameter indicates the maximum size of the orderings considered by LOMA; it limits the number of orderings to evaluate at the cost of the schedule's energy. We can clearly notice the trade-off between search time and energy between LOMA 6 and SALSA. Since the search for the optimal schedule is done offline, one would always favor lower energy rather than a reduction of a few seconds in the search time. Finally, we extensively benchmark SALSA against LOMA and Timeloop (Fig. 5). We choose the LPF limitation factor of LOMA to get a similar search time to SALSA (see Fig.6). In order to avoid a cost model bias, the schedule found by Timeloop's engine is evaluated using ZigZag's cost model. We notice that not all layers benefit from SALSA in the same way: all 3 search engines find similar energy schedules for simple layers (i.e., with fewer loops to permute). 
However, SALSA significantly outperforms LOMA and Timeloop for more complex layers with a bigger search space, leading to up to 50% energy reduction. Additionally, SALSA's search time is drastically lower than Timeloop's for every layer. Overall, SALSA improves the execution energy by 7.6\(\%\) and 11.9%, and speeds up the search runtime by 1.7\(\times\) and 24\(\times\), respectively. ## VI Conclusion This paper presented SALSA: a dual-engine, rapid scheduler capable of finding optimal schedules of DNN layers onto an HW accelerator. The simulated annealing-based engine provides an efficient heuristic search guided by any desired performance metric and finds optimal mappings in a short and predictable time. SALSA consistently finds better mappings than current SotA schedulers in a shorter time. It is deployed extensively on 5 DNNs: finding on average 7.6% and 11.9% better energy schedules while speeding up the search by a factor of 1.7\(\times\) and 24\(\times\) compared to LOMA and Timeloop, respectively. By significantly speeding up the process of extracting high-quality temporal mappings, SALSA paves the way for fast spatial unrolling and accelerator architecture search. SALSA is open-sourced and available at [17]. Fig. 5: Comparison of SALSA, LOMA 7, and Timeloop for 5 DNNs. In this figure, LOMA is configured with an LPF limitation factor of \(7\). The left part displays the Energy and Search Time for every unique layer of ResNet50, while the right part shows the average Energy Reduction and Speed-up of each DNN. Energy Reduction and Speed-up in the right plots are normalized with Timeloop’s Energy and Time, respectively. Fig. 6: Performances of SALSA against LOMA X on MobileNetV2 layer 3, X being the LPF limitation factor, shrinking the search space and trading mapping performances for search speed. The configuration of LOMA that does not constrain the search space is noted LOMA Exh (exhaustive).
2301.06142
A note on lower bounds to variational problems with guarantees
Variational methods play an important role in the study of quantum many body problems, both in the flavour of classical variational principles based on tensor networks as well as of quantum variational principles in near-term quantum computing. This brief pedagogical note stresses that for translationally invariant lattice Hamiltonians, one can easily derive efficiently computable lower bounds to ground state energies that can and should be compared with variational principles providing upper bounds. As small technical results, it is shown that (i) the Anderson bound and a (ii) common hierarchy of semi-definite relaxations both provide approximations with performance guarantees that scale like a constant in the energy density for cubic lattices. (iii) Also, the Anderson bound is systematically improved as a hierarchy of semi-definite relaxations inspired by the marginal problem.
J. Eisert
2023-01-15T16:48:57Z
http://arxiv.org/abs/2301.06142v2
# A note on lower bounds to variational problems with guarantees ###### Abstract Variational methods play an important role in the study of quantum many body problems, both in the flavour of classical variational principles based on tensor networks as well as of quantum variational principles in near-term quantum computing. This brief pedagogical note stresses that for translationally invariant lattice Hamiltonians, one can easily derive efficiently computable lower bounds to ground state energies that can and should be compared with variational principles providing upper bounds. As small technical results, it is shown that (i) the Anderson bound and a (ii) common hierarchy of semi-definite relaxations both provide approximations with performance guarantees that scale like a constant in the energy density for cubic lattices. (iii) Also, the Anderson bound is systematically improved as a hierarchy of semi-definite relaxations inspired by the marginal problem. Variational principles are for good reason ubiquitous in the study of quantum many body system. They allow for achieving insights into the physics of quantum many-body systems in ways that are hard to obtain by any other method, in particular in situations when strong correlations are dominant or when an instance of the sign-problem occurs so that other work-horses of the numerical study of quantum many-body systems such as density functional theory [1] or quantum Monte Carlo methods [2] are being challenged in their performance. A ground state of a _local many-body Hamiltonian_\(H_{N}\) defined on a lattice \(\mathcal{L}\) composed of \(N\) sites or vertices each of which is associated with a \(d\)-dimensional quantum system is even defined as a state satisfying \[\rho_{G}:=\text{argmin}_{\rho\in\mathcal{S}((\mathbb{C}^{d})^{\otimes N})} \text{tr}(\rho H_{N}). \tag{1}\] This is the solution of a variational principle over all quantum states \(\rho\) defined on these \(N\) degrees of freedom. A substantial body of literature on the study of strongly correlated quantum systems with many degrees of freedom is hence dedicated to formulating meaningful tractable ansatz classes to tackle this general variational principle. Naturally, if one merely optimizes over certain families of quantum states \(\rho\in\mathcal{T}((\mathbb{C}^{d})^{\otimes N})\), the latter being a suitable subset of all quantum states, one arrives at quantum states providing an upper bound to the ground state energy density \[e_{\text{min}}(H_{N}):=\frac{1}{N}\min\nolimits_{\rho\in\mathcal{S}((\mathbb{ C}^{d})^{\otimes N})}\text{tr}(\rho H_{N})=\frac{1}{N}\lambda_{\text{min}}(H_{N}). \tag{2}\] This idea has presumably become most prominent in the study of _tensor network states_[3, 4, 5] where classically efficient (in memory storage, but also in computational complexity at least in approximation) ansatz classes are being used to generate excellent approximations of true ground states. The basis of their functioning is that common ground states of local Hamiltonians are much less entangled than they could be [6], allowing for efficient approximations. Indeed, ground states of gapped one-dimensional local Hamiltonians can basically be parametrized by the solutions of such variational principles, in fact by instances of matrix product states. 
An alternative ansatz that is increasingly becoming popular is that of using quantum circuits in what is called the _variational quantum eigensolver_[7, 8]: In this context, one thinks of near-term quantum computers that have the ability to prepare states from a parametrized family of quantum states, for which the expectation value of the given Hamiltonian is then estimated and computed from measurement data. A limitation of such ansatzes is the often relatively low expressivity of the parametrized family of states; at the same time, in contrast to classical approaches they do not require an efficient classical contraction. Both ansatzes deliver upper bounds to the energy density of the ground state. Either way, as such, by construction such methods do not provide any certificate of the quality of the approximation of the ground state energy. This is less of an issue for tensor network methods that reach enormous precision, but it applies to quantum variational approaches that are presently within reach. This short pedagogical note stresses that lower bounds of the ground state energy that provide precisely such certificates can be found, often with little programming effort [9, 10, 11, 12, 13]. A similar point has also been made for variational quantum eigensolvers in Ref. [14]. For simplicity, throughout this note, we consider _translationally invariant_ local Hamiltonians with nearest-neighbour interactions, defined on cubic lattices \(\mathcal{L}\) of size \(|\mathcal{L}|:=N=n^{D}\), naturally equipped with periodic boundary conditions. This means that the Hamiltonian takes the form \[H_{N}=\sum_{j\in\mathcal{L}}\tau_{j}(h), \tag{3}\] where \(\tau_{j}\) places the nearest-neighbour Hamiltonian term \(h\) at a root site \(j\in\mathcal{L}\) of a lattice hosting a \(d\)-dimensional quantum degree of freedom. As small technical contribution, it is shown that common such lower bounds can be seen to feature performance guarantees that scale as \(O(1)\) in the system size \(N\) for the ground state energy density with a small constant that can be tightly bounded. Also, the Anderson bound is improved. The arguments presented here are all elementary, but on a conceptual level, it is still worth stressing that any variational eigensolver has to deliver a value that is more accurate than a constant in \(N\) in order to deliver a meaningful estimate for the ground state energy. In this sense, the results stated here can be seen as (immediate instances of) "de-quantization" results, in that they place stringent demands on any quantum algorithm aimed at obtaining a quantum advantage when estimating ground state energies. For what follows, we define as the central quantity of this note \[e_{\text{min}}:=\limsup_{n\to\infty}e_{\text{min}}(H_{n^{D}}) \tag{4}\] as the asymptotic ground state energy density. A performance guarantee for the Anderson bound.The Anderson bound [9] is a remarkably simple lower bound to the ground state energy of quantum many-body Hamiltonians, basically merely exploiting the triangle inequality of the operator norm \(\|.\|\) applied to the Hamiltonian equipped with a negative sign. It is conceptually easy and is implementable with a small programming effort, of less than an hour for a one-dimensional Hamiltonian problem. The performance of the bound is depicted in Fig. 2 for the _Heisenberg Hamiltonian_ in one spatial dimension. 
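Since the text stresses how little code the Anderson bound requires, here is a minimal numerical sketch for the one-dimensional Heisenberg chain of Fig. 2 (illustrative; it assumes numpy, and the patch sizes are kept small so dense diagonalization suffices). It builds the open-boundary patch Hamiltonian \(h_m\) for the convention \(H=\frac{1}{2}\sum_j\tau_j(X\otimes X+Y\otimes Y+Z\otimes Z)\) and evaluates the bound \(\lambda_{\text{min}}(h_m)/(m-1)\) derived in Proposition 1 below, to be compared with the exact \(e_{\text{min}}=1/2-2\log 2\).

```python
# Minimal sketch of the Anderson lower bound for the 1D Heisenberg chain
# (illustrative; assumes numpy). Convention as in Fig. 2 of the text.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
h = 0.5 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))  # two-site term

def patch_hamiltonian(m):
    """h_m: m-site patch with open boundary conditions."""
    H = np.zeros((2**m, 2**m), dtype=complex)
    for j in range(m - 1):
        left = np.identity(2**j)
        right = np.identity(2**(m - j - 2))
        H += np.kron(np.kron(left, h), right)
    return H

exact = 0.5 - 2 * np.log(2)
for m in range(2, 9):
    lam_min = np.linalg.eigvalsh(patch_hamiltonian(m))[0]
    anderson = lam_min / (m - 1)          # A(m, 1) from Proposition 1
    print(f"m = {m}: Anderson bound {anderson:+.4f}  (exact e_min = {exact:+.4f})")
```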
In this section, we see that it actually always delivers an approximation of the ground state energy density up to a small constant in \(N\), in fact, arbitrarily small, with a computational effort that is exponential in \(m\). **Proposition 1** (Performance guarantee of the Anderson bound).: _Consider a family of translationally invariant Hamiltonians of the form (4) on a cubic lattice in some spatial dimension \(D\), indexed by the system size \(N=n^{D}\), and let \(\lambda_{\text{min}}(h_{m})\) be the smallest eigenvalue of a cubic patch \(h_{m}\) of \(H_{N}\) on \(m^{D}\) sites, with open boundary conditions, then_ \[A(m,D) :=\frac{\lambda_{\text{min}}(h_{m})}{(m-1)^{D}}\leq e_{\text{min}}, \tag{5}\] \[|e_{\text{min}}-A(m,D)| \leq\frac{D}{m}\|h\|-\lambda_{\text{min}}(h_{m})\bigg{[}\frac{1}{ (m-1)^{D}}-\frac{1}{m^{D}}\bigg{]}. \tag{6}\] Proof.: The first inequality, first stated in Ref. [9] and here adapted to the asymptotic limit of large cubic lattices, is an immediate consequence of the following basic and still profound insight: One composes a Hamiltonian of \(N=[(m-1)J]^{D}\) sites into overlapping parts (see Fig. 1(a)) \[H_{N}=\sum_{s\in I_{m,J}}\tau_{s}(h_{m}), \tag{7}\] where \(I_{m,J}:=\{((m-1)(j_{1}-1)+1,\ldots,(m-1)(j_{D}-1)+1):j_{1},\ldots,j_{D}=1, \ldots,J\}\subset\mathcal{L}\). Then, clearly, \[\lambda_{\text{min}}(H_{N})\geq\sum_{s\in I_{m,J}}\lambda_{\text{min}}(h_{m})= J^{D}\lambda_{\text{min}}(h_{m}), \tag{8}\] where the first bound follows from the fact that the smallest eigenvalue of \(H_{N}\) is lower bounded by the sum of the smallest eigenvalues of each of the \(J^{D}\) parts consisting of \((m-1)^{D}\) sites each. For \(J\to\infty\), this gives the first statement of Eq. (5). The performance guarantee can be shown by considering a different partition (see Fig. 1(b)), \[H_{N}=\sum_{s\in K_{m,J}}\tau_{s}(h_{m})+V_{N}, \tag{9}\] where \(K_{m,J}:=\{(m(j_{1}-1)+1,\ldots,m(j_{D}-1)+1):j_{1},\ldots,j_{D}=1,\ldots,J\} \subset\mathcal{L}\), and hence \(N=m^{D}J^{D}\), and where \(V_{N}\) is the remainder term that consists of nearest neighbour terms connecting the slightly larger patches. Let us define \(|\phi\rangle:=\text{argmin}_{|\psi\rangle}\langle\psi|h_{m}|\psi\rangle\). Then \[\frac{\lambda_{\text{min}}(H_{N})}{N}-\frac{\lambda_{\text{min}} (h_{m})}{(m-1)^{D}}=\min{}_{|\psi\rangle}\frac{\langle\psi|H_{N}|\psi\rangle}{ N}-\frac{\langle\phi|h_{m}|\phi\rangle}{(m-1)^{D}}\] \[\leq\langle\phi|h_{m}|\phi\rangle\frac{J^{D}}{N}+\frac{1}{N}\|V_ {N}\|-\langle\phi|h_{m}|\phi\rangle\frac{1}{(m-1)^{D}}\] \[\leq\langle\phi|h_{m}|\phi\rangle\frac{J^{D}}{N}+m^{D-1}\|h\| \frac{DJ^{D}}{N}-\langle\phi|h_{m}|\phi\rangle\frac{1}{(m-1)^{D}}\] \[=\lambda_{\text{min}}(h_{m})\bigg{[}\frac{1}{m^{D}}-\frac{1}{(m-1 )^{D}}\bigg{]}+\frac{D}{m}\|h\|, \tag{10}\] where the first inequality follows from the fact that the minimum of \(\min(\rho h_{m})\) over quantum states \(\rho\) takes the smallest value for \(\langle\phi|h_{m}|\phi\rangle\), the second from the triangle inequality of the operator norm. Then, one encounters \(Dm^{D-1}\) boundary terms in a cubic patch in \(D\) dimensions involving \(m^{D}\) many vertices; and again, the triangle inequality of the operator norm is used. As before, the statement follows for \(J\to\infty\). Performance guarantees for semi-definite relaxations.Similar performance guarantees can be shown for common hierarchies of semi-definite relaxations [15] of finding ground states of local Hamiltonians [10; 11; 12; 13]. 
The core idea of these approaches is very simple: The constraint of quantum states being positive semi-definite \(\rho\geq 0\) is relaxed to operators \(\omega\) satisfying \(\text{tr}(\omega O^{\dagger}O)\geq 0\) for suitable operators \(O\). In a next step, the quantum state \(\omega\) is eliminated. Since the constraint of quantum states being positive semi-definite is relaxed, one naturally arrives at lower bounds to ground state energies. Figure 1: (a) The configuration in the original Anderson bound applied to a two-dimensional translationally invariant lattice system. (b) The configuration in one spatial dimension used to show the performance guarantee of the Anderson bound. (c) The configuration employed for the improved Anderson bound based on semi-definite programming and the marginal problem, again applied to one spatial dimension. Concretely, for a distinguished lattice site \(j\), consider a set \(\mathcal{M}\) of cardinality \(|\mathcal{M}|=:C\) of operators \(O_{a}\in\mathcal{M}\) that has the property that the Hamiltonian term on this site can be written as \[\tau_{j}(h)=\sum_{a,b}c_{a,b}O_{a}^{\dagger}O_{b} \tag{11}\] with \(c_{a,b}\in\mathbb{C}\) for \(a,b=1,\ldots,C\). The set of sites on which all operators in \(\mathcal{M}\) are non-trivially supported is denoted as \(\mathcal{S}\), which at the same time contains by construction the support of \(\tau_{j}(h)\) (but which may be substantially larger). This set is being seen as the root set of operators that then acts in a translationally invariant fashion on each lattice site in the same fashion. The methods discussed in Refs. [10; 11; 12; 13] essentially amount to identifying suitable such sets of operators. The operators considered in \(\mathcal{M}\) will feature algebraic relations (such as commutation or anti-commutation relations). We consider again translationally invariant settings, with \[\mathcal{O}_{N}:=\{\tau_{j}(O_{a}):\forall j\in\mathcal{L},a=1,\ldots,C\} \tag{12}\] being defined as translates of the root set of operators \(\mathcal{M}\), giving rise to a set of cardinality \(|\mathcal{O}_{N}|=:D_{N}\). To express the lower bound, we define the matrix \(X\) with \[X_{a,b}:=\text{tr}(O_{a}^{\dagger}O_{b}\omega) \tag{13}\] for \(a,b=1,\ldots,D_{N}\). The matrices \(X\in\mathbb{C}^{D_{N}\times D_{N}}\) will again be taken to be translationally invariant, in that \[X_{a,b}=\text{tr}(\tau_{j}(O_{a}^{\dagger})\tau_{j}(O_{b})\omega) \tag{14}\] for all \(j\in\mathcal{L}\) and all \(a,b=1,\ldots,D_{N}\). The above constraints between the operators will be reflected by expressions of the form \(\text{tr}(XR)=1\) for suitable matrices \(R\in\mathbb{C}^{D_{N}\times D_{N}}\). The constraints immediately also act in a translationally invariant fashion: We denote the set of matrices \(R\in\mathbb{C}^{D_{N}\times D_{N}}\) that reflect both the local algebraic constraints as well as the translational invariance in Eq. (14) as \(\text{tr}(XR)=1\) by \(\mathcal{X}_{N}\). For all \(X\in\mathbb{C}^{D_{N}\times D_{N}}\) that satisfy \(X\geq 0\) and \(\text{tr}(XR)=1\) for all \(R\in\mathcal{X}_{N}\), \[\sum_{a,b}\alpha_{a}^{*}\alpha_{b}\text{tr}(O_{a}^{\dagger}O_{b}\rho)=\sum_{a,b}\alpha_{a}^{*}\alpha_{b}X_{a,b}\geq 0 \tag{15}\] will hold true for all \(\alpha\in\mathbb{C}^{D_{N}}\), allowing to devise a lower bound to the ground state energy density. In fact, these bounds will again be lower bounds with a guaranteed constant error. 
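As a deliberately small numerical illustration of such a semi-definite lower bound, the sketch below relaxes the translationally invariant Heisenberg ground-state problem to a three-site density matrix whose two overlapping two-site marginals are required to coincide, in the marginal-problem spirit taken up in Proposition 3 below. It is a coarser relaxation than the hierarchies of Refs. [10; 11; 12; 13], shown only to make the structure "minimize a linear functional subject to positivity and linear constraints" tangible. It assumes numpy and a recent cvxpy; the partial-trace helpers are ad hoc.

```python
# Illustrative SDP lower bound on the Heisenberg energy density via a
# three-site marginal relaxation (assumes numpy and cvxpy; helpers are ad hoc).
import numpy as np
import cvxpy as cp

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
h = 0.5 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))  # two-site term

def ptrace_first(R, d=2):
    """Trace out the first qubit of a three-qubit matrix expression."""
    return sum(R[4 * i:4 * (i + 1), 4 * i:4 * (i + 1)] for i in range(d))

def ptrace_last(R, d=2):
    """Trace out the last qubit of a three-qubit matrix expression."""
    return sum(R[k::d, k::d] for k in range(d))

rho3 = cp.Variable((8, 8), hermitian=True)   # three-site state
rho12 = ptrace_last(rho3)                    # marginal on sites (1, 2)
rho23 = ptrace_first(rho3)                   # marginal on sites (2, 3)

constraints = [rho3 >> 0, cp.trace(rho3) == 1, rho12 == rho23]
problem = cp.Problem(cp.Minimize(cp.real(cp.trace(h @ rho12))), constraints)
problem.solve()
print("SDP lower bound on e_min:", problem.value)   # exact: 0.5 - 2 ln 2
```

Any translationally invariant state of the chain has a three-site marginal satisfying these constraints, so the optimum is a valid lower bound on \(e_{\text{min}}\).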
**Proposition 2** (Performance guarantee of semi-definite bounds).: _The solution \(x_{N}\) of the semi-definite problem_ \[\text{minimize} \text{tr}(hX), \tag{16}\] \[\text{subject to} X\geq 0,\] (17) \[\text{tr}(XR)=1\,\forall R\in\mathcal{X}_{N}, \tag{18}\] _where \(\mathcal{X}_{N}\) reflects algebraic constraints as well as translational invariance, satisfies \(x_{N}\leq e_{\text{min}}(H_{N})\leq x_{N}+O(1)\)._ Proof.: Since (15) is true for all \(\alpha\in\mathbb{C}^{D_{N}}\) exactly if \(X\geq 0\), and since \[\sum_{a,b}h_{a,b}\text{tr}(O_{a}O_{b}\omega)=\sum_{a,b}h_{a,b}X_{a,b}=\text{tr }(hX), \tag{19}\] we get the above lower bounds as the solution of the convex optimization problem Eqs. (16)-(18), as the energy minimization problem is relaxed to a semi-definite problem. We will continue the argument by showing that these bounds will scale like a constant in the energy density. For this, we consider a set of fixed lattice sites \(\mathcal{T}\) independent of \(N\) that is a superset of the set \(\mathcal{S}\) that hosts \(\tau_{j}(h)\) but which may be substantially larger than its support, \(\mathcal{S}\subset\mathcal{T}\). The strategy will be to construct bounds that are even lower bounding the ones from Proposition 2, but that already give rise to a constant energy approximation from below. These lower bounds are given by the solution of semi-definite problems with (16) and (18), where (17) is relaxed to the principal sub-matrix associated with \(\mathcal{T}\) satisfying \[\left.X\right|_{\mathcal{T}}\geq 0. \tag{20}\] This will give rise to lower bounds of the original semi-definite optimization problem, as \(X\geq 0\) implies that the principal sub-matrix \(\left.X\right|_{\mathcal{T}}\) is also positive semi-definite. Since no constraint in the problem involves the system size any longer, and the only constraints dependent on \(N\) in \(\mathcal{X}_{N}\) enforce translational invariance, the solution of this new semi-definite problem \[\text{minimize} \text{tr}(hX),\] (21) subject to \[\left.X\right|_{\mathcal{T}}\geq 0, \tag{22}\] \[\text{tr}(XR)=1\,\forall R\in\mathcal{X}_{N} \tag{23}\] scales like \(O(1)\) in \(N\). This implies that the solution of the original semi-definite problem in Proposition 2 satisfies \(x_{N}\leq e_{\text{min}}(H_{N})\leq x_{N}+O(1)\). At the same time, it is clear that by enlarging the set \(\mathcal{M}\), the actual ground state energy \(e_{\text{min}}(H_{N})\) can be arbitrarily well approximated from below. Figure 2: The Anderson bound for the one-dimensional Heisenberg model \(H=\frac{1}{2}\sum_{j}\tau_{j}(X\otimes X+Y\otimes Y+Z\otimes Z)\) as a function of the patch size \(m\) until \(m=15\), featuring a noteworthy even-odd effect. The straight line represents the exact ground state energy density, \(e_{\text{min}}=-2\log(2)+1/2\). _A hierarchy of improved Anderson bounds._ The Anderson bound as is provides strikingly good lower bounds of the energy density up to small constant errors at very little effort. For this reason, the question arises whether it can be systematically improved. Resorting to the _quantum marginal problem_, one can in fact easily improve the Anderson bound. For this, consider again the configuration used in Proposition 1, described in Eq. (9). 
Obviously, the full optimization problem minimizing the ground state energy density \(\epsilon_{\text{min}}(H_{N})\) can be written as the solution to the convex optimization problem \[\text{minimize}\sum_{s\in K_{m,\text{ }}}\text{tr}(\omega\tau_{s}(h_{m}))+ \text{tr}(\sigma V_{N}), \tag{24}\] \[\text{subject to}\qquad\qquad\qquad\qquad\qquad\omega=\sigma,\] (25) \[\omega,\sigma\geq 0,\,\text{tr}(\omega)=\text{tr}(\sigma)=1,\] (26) \[\tau_{j}(\omega)=\tau_{k}(\omega)\,\forall j,k\in\mathcal{L}, \tag{27}\] over states defined on the entire \((\mathbb{C}^{d})^{\otimes N}\), an optimization problem that can, needless to say, be only solved with exponential computational effort. This can be relaxed, however, to a family of efficiently solvable semi-definite problems (see also Fig. 1(c)) that strictly generalize the Anderson bound. **Proposition 3** (Improved Anderson bounds).: _Consider for a one-dimensional translationally invariant Hamiltonian \(H_{N}\) and an integer \(m\) the term \(h_{m}\) defined on \(m\) sites as before. For an integer \(s\) with \(2s\leq m\) consider the set of sites \(\mathcal{B}:=\{m+j\mod m:j=-s+1,-s+2,\ldots,s\}\). Then the solution \(z_{m,s}\) of the semi-definite optimization problem_ \[\text{minimize}\quad\text{tr}(\omega h_{m})+\text{tr}(\sigma( \mathbb{I}\otimes h)), \tag{28}\] \[\text{subject to}\qquad\qquad\text{tr}_{\setminus\mathcal{B}}( \omega)=\sigma,\] (29) \[\omega,\sigma\geq 0,\,\text{tr}(\omega)=\text{tr}(\sigma)=1, \tag{30}\] _satisfies \(z_{m,s}/m\leq e_{\text{min}}(H_{N})\)._ Proof.: This problem is obtained as a convex relaxation of Eqs. (24)-(26), by relaxing marginal constraints. As the full problem, it amounts to solving a semi-definite problem [15], but now involving quantum states on \((\mathbb{C}^{d})^{\otimes m}\) only, as can in turn be solved with interior-point methods [15; 16]. Here, the correct marginals are merely enforced, therefore, reminiscent of the quantum marginal problem [17], instead of the entire quantum states \(\sigma\) and \(\omega\) being identical. _Outlook._ This note emphasizes that one can easily equip upper bounds to ground states obtained by resorting to classical or quantum variational principles with concomitant lower bounds: These bounds certify the quality of the variational ansatz. As technical results, performance guarantees of Anderson bounds and of semi-definite relaxations are proven. What is more, an improved Anderson bound is presented. The upshot is that certified bounds to the energy density up to small constant \(O(1)\) in the system size - for the Anderson bound even quantitative ones - are easy to get classically. None of the bounds presented is particularly sophisticated, technically speaking. One may argue, however, that it is their simplicity that renders them useful: Again, they can be interpreted as "de-quantization statements". It is sometimes under-appreciated that one can easily classically approximate ground state energy densities up to a small constant error from below: This places stringent demands on quantum simulations aimed at producing such approximations. Any quantum simulation aimed at approximating ground state energies hence has to deliver approximations that scale more favourable compared to this in order to possibly outperform classical computations. Ref. 
[18] makes a similar point for systems of quantum chemistry, stressing that estimating the ground state energy of a local Hamiltonian when given, as an additional input, a state sufficiently close to the ground state, can be solved efficiently with constant precision on a classical computer. The results stated are clearly not in contradiction with the famous _quantum PCP_ conjecture [19]. After all, this conjecture states that it remains QMA-hard to approximate the ground state energy even up to an error \(\gamma n\) for some absolute constant \(0<\gamma<1\), where \(n\) is the number of local terms in the Hamiltonian. Obviously, the above bounds produce exactly such an energy approximation, which only once again implies that the statement of the quantum PCP conjecture cannot expected to be tight for cubic lattices. It is also worth noting that all the mentioned bounds apply equally well to _fermionic Hamiltonians_[20; 21] which have again moved to the focus of attention recently, not the least as the precise scaling of the ground state energy of the _Sachdev-Ye-Kitaev_ (SYK) model [22; 23; 24] of random degree polynomials has become interesting. It might also be fruitful to compare the discussed bounds with improvements of Temple's lower bound [25]. It is the hope that this note can contribute to the development of benchmarks for variational principles in both the classical and quantum reading. _Acknowledgements._ Discussions with J. Haferkamp as well as comments by D. Miller and J. M. Arrazola are acknowledged. This work has been funded by the DFG (CRC 183) and the BMBF (MuniQCAtoms, FermiQP, Hybrid), the BMWK (EniQmA), the Einstein Foundation, the Munich Quantum Valley (K8), and the QuantERA (HQCC). After completion of this note, the author of this note became aware of an exciting related but different relaxation of the infinite translationally invariant ground state problem [26] compared to what is stated as Proposition 3. It would be interesting to compare the two relaxations.
2303.10159
Conformal Ricci solitons on generalized ($κ, μ$)-space forms
In this paper, we study conformal Ricci solitons and conformal gradient Ricci solitons on generalized ($\kappa,\mu$)-space forms. The conditions for the solitons to be shrinking, steady, and expanding are derived in terms of conformal pressure p. We show under what conditions a Ricci semi-symmetric generalized ($\kappa,\mu$)-space form equipped with a conformal Ricci soliton forms an Einstein manifold.
Mehraj Ahmad Lone, Towseef Ali Wani
2023-01-27T13:24:50Z
http://arxiv.org/abs/2303.10159v1
# Conformal Ricci solitons on generalized \((\kappa,\mu)\)-space forms ###### Abstract In this paper, we study conformal Ricci solitons and conformal gradient Ricci solitons on generalized \((\kappa,\mu)\)-space forms. The conditions for the solitons to be shrinking, steady and expanding are derived in terms of conformal pressure \(p\). We show under what conditions a Ricci semi-symmetric generalized \((\kappa,\mu)\)- space form equiped with a conformal Ricci soliton forms an Einstein manifold. Conformal Ricci soliton, Conformal gradient Ricci soliton, generalized \((\kappa,\mu)\)-space form, ## 1 Introduction The concept of Ricci flow was introduced by R. Hamilton [15] in 1982. The Ricci flow is an evolution equation for metric on a Riemannian manifold given by \[\frac{\partial g}{\partial t}=-2S\] where \(g\) is the Riemannian metric and \(S\) denotes the Ricci tensor. A self-similar solution of the Ricci flow [15, 23], which moves only by one parameter family of diffeomorphism and scaling is called a Ricci soliton [17]. The Ricci soliton is given by \[\mathcal{L}_{V}g+2S=2\lambda g\] where \(\mathcal{L}_{V}\) is the Lie derivative, \(S\) is the Ricci tensor, \(g\) is the Riemannian metric, \(V\) is the vector field and \(\lambda\) is a scalar. The Ricci soliton is denoted by \((g,V,\lambda)\) and is said to be shrinking, steady and expanding according to whether \(\lambda\) is positive, zero and negative respectively. The concept of conformal Ricci flow was introduced by Fischer [11] as a variation of classical Ricci flow equation that modifies the volume constraint to a scalar curvature constraint. The confomal Ricci flow on a smooth, closed, connected, oriented \(n\)-manifold is defined by the equation [11] \[\frac{\partial g}{\partial t}+2(S+\frac{g}{n})=-pg\] \[\text{and}\hskip 28.452756ptr=-1\] where \(p\) is a non-dynamical scalar field which is time dependent, \(r\) is the scalar curvature of the manifold, and \(n\) is the dimension of \(M\). In 2015, Basu et. al. [2] introduced the notion of conformal Ricci soliton equation on Kenmotsu manifold \(M^{2n+1}\) as \[\mathcal{L}_{V}g+2S=[2\lambda-(p+\frac{2}{2n+1})]g \tag{1.1}\] where \(\lambda\) is constant. The equation is a generalization of the Ricci soliton and satisfies the conformal Ricci flow equation.The conformal Ricci flow equations are analogous to the Navier-Stokes equations of fluid mechanics and because of this analogy, the time-dependent scalar field \(p\) is called a conformal pressure and, as for the real physical pressure in fluid mechanics that serves to maintain the incompressibility of the fluid, the conformal pressure serves as a Lagrange multiplier to conformally deform the metric flow so as to maintain the scalar curvature constraint. A conformal Ricci soliton is called a conformal gradient Ricci soliton if the potential vector field \(V\) is gradient of some smooth function \(f\) i.e. \(V=grad(f)=\nabla f\) and satisfies \[\nabla\nabla f+S=[2\lambda-(p+\frac{2}{2n+1})]g \tag{1.2}\] where \(\nabla\) is Riemannian connection on the Riemannian manifold. Conformal Ricci solitons were studied by Ganguly et.al. within the framework of almost co-Kahler manifolds [13] and \((LCS)_{n}\)-manifolds [14]. The authors generalized conformal Ricci solitons and obtained interesting rgesults in [9]. Dey et.al. [8] studied conformal Ricci solitons on almost Kenmotsu manifolds. Siddiqui [22] studied conformal Ricci solitons of Lagrangian submanifolds in Kahler manifolds. Ganguly et. al. 
[12] investigated conformal Ricci solitons and quasi-Yamable soliton on generalized Sasakian space form. Motivated by these studies, we investigate conformal Ricci soliton and conformal gradient Ricci soliton in generalized \((\kappa,\mu)\)- space forms. ## 2 Preliminaries Let \((M,g)\) be a Riemannian manifold of dimension \((2n+1)\). \((M,g)\) is called almost contact manifold [4] if we can define an endomorpism \(\phi\) on its tangent bundle \(TM\), a vector field \(\xi\) and a 1-form \(\eta\) satisfying \[\phi^{2}=-Id+\eta\otimes\xi,\quad\phi(\xi)=0,\quad\eta(\phi)=0, \tag{2.1}\] \[g(X,\xi)=\eta(X), \tag{2.2}\] \[\eta(\xi)=1, \tag{2.3}\] for any vector fields \(X,Y\) on \(M\). It is called a contact metric manifold if \[g(\phi X,\phi Y)=g(X,Y)-\eta(X)\eta(Y).\] Making use of above equations, it is easy to prove that for an almost contact metric manifold \((M,g)\), \[g(\phi E,F)=-g(E,\phi F). \tag{2.4}\] An almost contact metric manifold is said to be a contact manifold if its second fundamental 2-form \(\Phi\), defined by \(\Phi(X,Y)=g(X,\phi Y)\), satisfies \[d\eta=\Phi.\] On a contact metric manifold \(M(\phi,\xi,\eta,g)\) the tensor \(h\) defined by \(\mathcal{L}_{\xi}\phi\) is symmetric and satisfies the following relations [4] \[\nabla_{X}\xi=-\phi X-\phi hX,h\xi=0,h\phi=-\phi h,tr(h)=0,\eta\circ h=0\] A contact manifold is called a \((\kappa,\mu)\)-metric manifold if the characteristic vector field \(\xi\) belongs to the \((\kappa,\mu)\)-distribution i.e.[5] \[R(X,Y)\xi=\kappa\{\eta(Y)X-\eta(X)Y\}-\mu\{\eta(Y)hX-\eta(X)hY\}\] where \(X\) and \(Y\) are vector fields on \(M\) and \(2h=\mathcal{L}_{\xi}\phi\). If \(\kappa,\mu\) are some smooth functions then the manifold is called generalized \((\kappa,\mu)\)-space. A \((\kappa,\mu)\)-space of dimension greater than three with constant \(\phi\)-sectional curvature is called \((\kappa,\mu)\)-space form and its curvature tensor is given by [18] \[R(X,Y)Z= \frac{c+3}{4}R_{1}(X,Y)Z+\frac{c-1}{4}R_{2}(X,Y)Z+(\frac{c+3}{4}- \kappa)R_{3}(X,Y)Z\] \[+R_{4}(X,Y)Z+\frac{1}{2}R_{5}(X,Y)Z+(1-\mu)R_{6}(X,Y)Z\] where \(R_{1},R_{2},R_{3},R_{4},R_{5},R_{6}\) are defined as follows \[R_{1}(X,Y)Z =g(Y,Z)X-g(X,Z)Y,\] \[R_{2}(X,Y)Z =g(X,\phi Z)\phi Y-g(Y,\phi Z)\phi X+2g(X,\phi Y)\phi Z,\] \[R_{3}(X,Y)Z =\eta(X)\eta(Z)Y-\eta(Y)\eta(Z)X+g(X,Z)\eta(Y)\xi-g(Y,Z)\eta(X)\xi,\] \[R_{4}(X,Y)Z =g(Y,Z)hX-g(X,Z)hY+g(hY,Z)X-g(hX,Z)Y,\] \[R_{5}(X,Y)Z =g(hY,Z)hX-g(hX,Z)hY+g(\phi hX,Z)\phi hY-g(\phi hY,Z)\phi hX,\] \[R_{6}(X,Y)Z =\eta(X)\eta(Z)hY-\eta(Y)\eta(Z)hX+g(hX,Z)\eta(Y)\xi-g(hY,Z)\eta X\xi,\] for any vector fields \(X,Y,Z\) on \(M\). As a generalization of \((\kappa,\mu)\)-space form, Carriazo et. al. [7] introduced the notion of generalized \((\kappa,\mu)\)-space form and provided interesting examples of such spaces. An almost contact metric manifold is called a generalized \((\kappa,\mu)\)-space form if there exist smooth functions \(f_{1},f_{2},f_{3},f_{4},f_{5},f_{6}\) such that \[R(X,Y)Z=f_{1}R_{1}(X,Y)Z +f_{2}R_{2}(X,Y)Z+f_{3}R_{3}(X,Y)Z+f_{4}R_{4}(X,Y)Z\] \[+f_{5}R_{5}(X,Y)Z+f_{6}R_{6}(X,Y)Z\] where \(R_{1},R_{2},R_{3},R_{4},R_{5},R_{6}\) are defined as above. _Example 1_.: [19] Consider the 3-dimensional manifold \(M=\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}|x_{3}\neq 0\}\), where \((x_{1},x_{2},x_{3})\) are the standard coordinates in \(\mathbb{R}^{3}\). 
The vector fields, \[e_{1}=\frac{\partial}{\partial x_{1}},\ \ e_{2}=-2x_{2}x_{3}\frac{\partial}{ \partial x_{1}}+2\frac{x_{1}}{{x_{3}}^{3}}\frac{\partial}{\partial x_{2}}- \frac{1}{x_{3}^{2}}\frac{\partial}{\partial x_{3}},e_{3}=\frac{1}{x_{3}}\frac{ \partial}{\partial x_{2}},\] are linearly independent at each point of \(M\). Let \(g\) be the Riemannian metric defined by \(g(e_{i},e_{j})=\delta_{ij},i,j=1,2,3\). Let \(\nabla\) be the Riemannian connection and \(R\) the curvature tensor of \(g\). We easily get \[[e_{1},e_{2}]=\frac{2}{{x_{3}}^{2}}e_{3},\ \ [e_{2},e_{3}]=2e_{1}+\frac{1}{{x_{ 3}}^{3}}e_{3},\ \ [e_{3},e_{1}]=0.\] Let \(\eta\) be the 1-form defined by \(\eta(z)=g(z,e_{1})\) for any \(z\in\mathcal{X}(M)\). Because \(\eta\wedge d\eta\neq 0\), everywhere on \(M\), \(\eta\) is a contact form. Let \(\phi\) be the \((1,1)\)-tensor field, defined by \(\phi e_{1}=0,\ \phi e_{2}=e_{3},\ \phi e_{3}=-e_{2}\). Using the linearity of \(\phi,d\eta\), and \(g\), we have \[\eta(e_{1})=1,\ \phi^{2}z=z-\eta(z)e_{1},\ d\eta(z,w)=g(z,\phi w)\] and \[g(\phi z,\phi w)=g(z,w)-\eta(z)\eta(w)\] for any \(z,w\in\mathcal{X}(M)\). Hence \((\phi,e_{1},\eta,g)\) defines a contact metric structure \(M\). So \(M\) with this structure is a contact metric manifold. Putting \(\xi=e_{1}\), \(x=e_{2}\), \(\phi x=e_{3}\) and using the well known formula \[2g(\nabla_{y}z,w) = yg(z,w)+zg(w,y)-wg(y,z)-g\big{(}y,[z,w]\big{)}\] \[-g\big{(}z,[y,w]\big{)}+g\big{(}w,[y,z]\big{)},\] We calculate \[\nabla_{x}\xi=-(1+\frac{1}{{x_{3}}^{2}})\phi x,\ \ \nabla_{\phi x}\xi=(1-\frac{1}{{x_{3}}^{2}})x,\] \[\nabla_{\xi}x=(-1+\frac{1}{{x_{3}}^{2}})\phi x,\ \ \nabla_{\xi}\phi x=(1-\frac{1}{{x_{3}}^{2}})x,\] \[\nabla_{x}x=0,\ \ \nabla_{x}\phi x=(1+\frac{1}{{x_{3}}^{2}})\xi,\] \[\nabla_{\phi x}x=(-1+\frac{1}{{x_{3}}^{2}})\xi-\frac{1}{{x_{3}}^{3}}\phi x,\ \ \nabla_{\phi x}\phi x=\frac{1}{{x_{3}}^{3}}x.\] Therefore for the tensor field \(h\), we get \(h\xi=0\), \(hx=\lambda x\), and \(\kappa=\frac{{x_{3}}^{4}-1}{{x_{3}}^{4}}\), we finally get \[R(x,\xi)\xi=\kappa\big{(}\eta(\xi)x-\eta(x)\xi\big{)}+\mu\big{(}\eta(\xi)hx- \eta(x)h\xi\big{)}\] \[R(\phi x,\xi)\xi=\kappa\big{(}\eta(\xi)\phi x-\eta(\phi x)\xi\big{)}+\mu\big{(} \eta(\xi)h\phi x-\eta(\phi x)h\xi\big{)}\] \[R(x,\phi x)\xi=\kappa\big{(}\eta(\phi x)x-\eta(x)\phi x\big{)}+\mu\big{(}\eta( \phi x)hx-\eta(x)h\phi x\big{)}.\] These relations yield the following, by a straight forward calculations, \[R(z,w)\xi=\kappa\big{(}\eta(w)z-\eta(z)w\big{)}+\mu\big{(}\eta(w)hz-\eta(z)hw \big{)},\] where \(\kappa\) and \(\mu\) are non-constant smooth functions. Hence \(M\) is a generalized \((\kappa,\mu)\)-contact metric manifold. Submanifolds of generalized \((\kappa,\mu)\)-space forms were studied by Hui et. al. [16]. Lee et. al. [20] studied generalized Wintgen inequality for submanifolds in generalized \((\kappa,\mu)\)-space forms. 
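The purely algebraic conditions (2.1)-(2.4) of Example 1 can be checked mechanically in the orthonormal frame \((e_{1},e_{2},e_{3})=(\xi,x,\phi x)\). The short Python check below (illustrative only; it assumes numpy and verifies the pointwise identities in this frame, not the curvature computations) confirms \(\phi^{2}=-Id+\eta\otimes\xi\) and \(g(\phi z,\phi w)=g(z,w)-\eta(z)\eta(w)\).

```python
# Check the algebraic almost-contact conditions of Example 1 in the
# orthonormal frame (e1, e2, e3) = (xi, x, phi x); assumes numpy only.
import numpy as np

# Matrix of phi in this frame: phi(e1) = 0, phi(e2) = e3, phi(e3) = -e2.
phi = np.array([[0, 0, 0],
                [0, 0, -1],
                [0, 1, 0]], dtype=float)
xi = np.array([1.0, 0.0, 0.0])   # Reeb vector field e1
eta = xi                         # eta(z) = g(z, e1), with g the identity here
g = np.identity(3)

# phi^2 = -Id + eta (x) xi
assert np.allclose(phi @ phi, -np.identity(3) + np.outer(xi, eta))
# g(phi z, phi w) = g(z, w) - eta(z) eta(w)
assert np.allclose(phi.T @ g @ phi, g - np.outer(eta, eta))
# phi(xi) = 0 and eta(phi z) = 0
assert np.allclose(phi @ xi, 0) and np.allclose(eta @ phi, 0)
print("Almost contact metric identities (2.1)-(2.4) hold in this frame.")
```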
For a generalized \((\kappa,\mu)\)-space form, we have the following relations [7] \[\nabla_{X}\xi=(f_{3}-f_{1})\phi X+(f_{6}-f_{4})\phi hX \tag{2.5}\] \[(\nabla_{X}\eta)Y=(f_{3}-f_{1})g(\phi X,Y)+(f_{6}-f_{4})g(\phi hX,Y) \tag{2.6}\] \[(\nabla_{X}\phi)Y= (f_{1}-f_{3})[g(X,Y)\xi-\eta(Y)X]\] \[+(f_{4}-f_{6})[g(hX,Y)\xi-\eta(Y)hX] \tag{2.7}\] \[R(X,Y)\xi= (f_{1}-f_{3})\{\eta(Y)X-\eta(X)Y\}\] \[+(f_{4}-f_{6})\{\eta(Y)hX-\eta(X)hY\} \tag{2.8}\] \[R(\xi,Y)Z= (f_{1}-f_{3})[g(Y,Z)\xi-\eta(Z)Y]\] \[+(f_{4}-f_{6})[g(hY,Z)\xi-\eta(Z)hY] \tag{2.9}\] \[S(X,Y)= (2nf_{1}+3f_{2}-f_{3})g(X,Y)-\big{(}3f_{2}+(2n-1)f_{3}\big{)}\eta (X)\eta(Y)\] \[+\big{(}(2n-1)f_{4}-f_{6}\big{)}g(hX,Y) \tag{2.10}\] \[QX= (2nf_{1}+3f_{2}-f_{3})X-\big{(}3f_{2}+(2n-1)f_{3}\big{)}\eta(X)\xi\] \[+\big{(}(2n-1)f_{4}-f_{6}\big{)}hX \tag{2.11}\] \[S(X,\xi)=2n(f_{1}-f_{3})\eta(X), \tag{2.12}\] \[Q\xi=2n(f_{1}-f_{3})\xi \tag{2.13}\] for all vector fields \(X\) and \(Y\) in \(TM\) and where \(S\) is the Ricci tensor and \(Q\) is the Ricci operator related by \(S(X,Y)=g\big{(}Q(X),Y\big{)}\). ## 3 Main Results **Theorem 3.1**.: _Consider a generalized \((\kappa,\mu)\)-space form \(M\) admitting a conformal Ricci soliton \((g,V,\lambda)\). Then the soliton is_ 1. _shrinking if_ \(p<[4n(f_{3}-f_{1})-\frac{2}{2n+1}]\)__ 2. _steady if_ \(p=[4n(f_{3}-f_{1})-\frac{2}{2n+1}]\)__ 3. _expanding if_ \(p>[4n(f_{3}-f_{1})-\frac{2}{2n+1}]\)__ Proof.: Since \(M\) admits a conformal Ricci soliton\((g,V,\lambda)\), from (1.1) we have, \[(L_{V}g)(X,Y)+2S(X,Y)+[2\lambda-(p+\frac{2}{2n+1})]g(X,Y)=0\] Using the property of Lie derivative, we get \[g(\nabla_{X}V,Y)+g(X,\nabla_{Y}V)+2S(X,Y)+[2\lambda-(p+\frac{2}{2n+1})]g(X,Y)=0\] Substituting \(X=Y=\xi\) and using (2.3), we get \[2g(\nabla_{\xi}V,\xi)+2S(\xi,\xi)+[2\lambda-(p+\frac{2}{2n+1})]=0\] Using the fact that \(\nabla\) is a metric connection, we have \[-2g(V,\nabla_{\xi}\xi)+2S(\xi,\xi)+[2\lambda-(p+\frac{2}{2n+1})]=0\] Since \(\nabla_{\xi}\xi=0\), we get from the above equation \[2S(\xi,\xi)+[2\lambda-(p+\frac{2}{2n+1})]=0\] Now using (2.12) and substituting the value of \(S(\xi,\xi)\), we get \[4n(f_{1}-f_{3})+[2\lambda-(p+\frac{2}{2n+1})]=0\] Rearranging the above equation, we get \[\lambda=2n(f_{1}-f_{3})+(\frac{p}{2}+\frac{1}{2n+1})\] Upon using three different conditions on \(\lambda\) in the above equations, we get the desired expressions. **Theorem 3.2**.: _Consider a \((2n+1)\)-dimensional Ricci semi-symmetric generalized \((\kappa,\mu)\)-space form \(M\) admitting a conformal Ricci soliton \((g,V,\lambda).\) If \(f_{4}=f_{6}\), then the manifold is Einstein and the potential vector field \(V\) is a conformal vector field._ Proof.: Since the manifold \(M\) is Ricci semi-symmetric, for any vector fields \(X\) and \(Y\) on \(M\), we have \[R(X,Y).S=0,\] where \(R\) is the curvature tensor and \(S\) is the Ricci tensor of \(M\). The above equation can be written as \[S\big{(}R(X,Y)Z,U\big{)}+S\big{(}Z,R(X,Y)U\big{)}=0,\] where \(X,Y,Z,U\) are vector fields on \(M\). 
Replacing \(U\) by \(\xi\) in the above equation, we get \[S\big{(}R(X,Y)Z,\xi\big{)}+S\big{(}Z,R(X,Y)\xi\big{)}=0\] Using the fact that \(S(X,Y)=g(QX,Y)\) where \(Q\) is the Ricci operator, we have \[g\big{(}Q\xi,R(X,Y)Z\big{)}+S\big{(}Z,R(X,Y)\xi\big{)}=0\] Now using (2.8) and (2.13) in the avoure equation, we get \[g\big{(}2n(f_{1}-f_{3})\xi,R(X,Y)Z\big{)}+\] \[S\big{(}Z,(f_{1}-f_{3})[\eta(Y)X-\eta(X)Y]+(f_{4}-f_{6})[\eta(Y) hX-\eta(X)hY]\big{)}=0\] Simplifying and make use of (2.3) and the fact that \(f_{4}=f_{6}\), we get \[2n(f_{1}-f_{3})\eta\big{(}R(X,Y)Z\big{)}+(f_{1}-f_{3})[\eta(Y)S(Z,X)-\eta(X)S( Z,Y)]=0\] Putting \(X=\xi\), we get \[2n(f_{1}-f_{3})\eta\big{(}R(\xi,Y)Z\big{)}+(f_{1}-f_{3})[\eta(Y)S(Z,\xi)-S(Z,Y)]=0\] Making use of (2.9), (2.12) and then simplifying, we get \[S(Y,Z)=2n(f_{1}-f_{3})g(Y,Z), \tag{3.1}\] Thus it is clear from the expression (3.1) that \(M\) is an Einstein manifold. Since \(M\) admits a conformal Ricci soliton\((g,V,\lambda)\), from (1.1) we have, \[(L_{V}g)(X,Y)+2S(X,Y)+[2\lambda-(p+\frac{2}{2n+1})]g(X,Y)=0\] Making use of (3.1) in the above expression, we get \[(L_{V}g)(X,Y)=[2\lambda-4n(f_{1}-f_{3})-(p+\frac{2}{2n+1})]g(X,Y)\] Putting \(\delta=[2\lambda-4n(f_{1}-f_{3})-(p+\frac{2}{2n+1})]\), we can write \[\mathcal{L}_{V}g=\delta g. \tag{3.2}\] From (3.2), we conclude that \(V\) is a conformal vector field. **Theorem 3.3**.: _Consider a \((2n+1)\)-dimensional generalized \((\kappa,\mu)\)-space form \(M\) admitting a conformal Ricci soliton \((g,V,\lambda)\) whose potential vector field \(V\) is pointwise collinear with the Reeb vector field \(\xi.\) Then V is constant multiple of \(\xi\) and \(M\) is an Einstein manifold of scalar curvature \(r=2n(2n+1)(f_{1}-f_{3})\)._ Proof.: Let's assume that \(V=b\xi\) for some smooth function \(b.\) Then from (1.1) we can write \[bg(\nabla_{X}\xi,Y)+bg(X,\nabla_{Y}\xi)+X(b)\eta(Y)+Y(b)\eta(X)+2S(X,Y)=\] \[[2\lambda-(p+\frac{2}{2n+1})]g(X,Y)\] Using (2.5) in the above equation, we get, \[X(b)\eta(Y)+Y(b)\eta(X)+2S(X,Y)=[2\lambda-(p+\frac{2}{2n+1})]g(X,Y) \tag{3.3}\] Putting \(Y=\xi\) and using (2.12), we get, \[X(b)=[2\lambda-(p+\frac{2}{2n+1})-4n(f_{1}-f_{3}-\xi(b))]\eta(X) \tag{3.4}\] Putting \(X=\xi\), we get \[\xi(b)=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})-2n(f_{1}-f_{3})] \tag{3.5}\] In view of (3.4) and (3.5), we can write \[db=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})-2n(f_{1}-f_{3})]\eta \tag{3.6}\] Operating by \(d\) on both sides of (3.6) and uing the fact that \(d^{2}=0\), we get \[[\lambda-(\frac{p}{2}+\frac{1}{2n+1})-2n(f_{1}-f_{3})]d\eta \tag{3.7}\] Since \(d\eta\neq 0\), we get from (3.7) \[\lambda=2n(f_{1}-f_{3})+(\frac{p}{2}+\frac{1}{2n+1}). \tag{3.8}\] Substituting (3.7)in (3.5), we get \(db=0\) which implies that \(b\) is constant.Thus \(V\) is constant multiple of \(\xi\) which proves the first part of the theorem. To prove the second part of the theorem, we consider an orthonormal basis \(\{e_{i}:1\leq i\leq 2n+1\}\) at each point of the manifold and put \(X=Y=e_{i}\) in (3.3) and summing over \(1\leq i\leq(2n+1)\), we get \[\xi(b)+r=(2n+1)[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]. \tag{3.9}\] Using the fact that \(b\) is constant in (3.9) we get, \[r=(2n+1)[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]. \tag{3.10}\] Using (3.8) in (3.10), we get \[r=2n(2n+1)(g_{1}-g_{3}),\] which is the desired epression for the scalar curvature of the manifold. Also, using the fact that \(b\) is constant in(3.3), we get \[S(X,Y)=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]g(X,Y).\] Hence \(M\) is an Einstein manifold. 
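The final substitution in the proof of Theorem 3.3 is a one-line algebraic step; the short symbolic check below (illustrative only, assuming sympy is available) confirms that inserting (3.8) into (3.10) indeed gives \(r=2n(2n+1)(f_{1}-f_{3})\).

```python
# Symbolic check of the last step of Theorem 3.3 (assumes sympy).
import sympy as sp

n, p, f1, f3 = sp.symbols('n p f1 f3')
lam = 2*n*(f1 - f3) + p/2 + 1/(2*n + 1)        # eq. (3.8)
r = (2*n + 1)*(lam - (p/2 + 1/(2*n + 1)))      # eq. (3.10)
assert sp.simplify(r - 2*n*(2*n + 1)*(f1 - f3)) == 0
print("r =", sp.factor(r))                      # 2*n*(2*n + 1)*(f1 - f3)
```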
**Lemma 3.4**.: _Consider a \((2n+1)-\)dimensional generalized \((\kappa,\mu)\)-space form \(M\) admitting a conformal gradient Ricci soliton \((g,\nabla f,\lambda)\). Then the curvature tensor \(R\) satisfies_ \[R(X,Y)\nabla f= (2ndf_{1}+3df_{2}-df_{3})(Y)X-(2ndf_{1}+3df_{2}-df_{3})(X)Y\] \[+\big{(}3f_{2}+(2n-1)f_{3}\big{)}(g_{1}-g_{3})[2g(\phi X,Y)\xi+ \eta(X)\phi X-\eta(Y)\phi Y]\] \[+\big{(}3f_{2}+(2n-1)f_{3}\big{)}(g_{4}-g_{6})[2g(\phi hX,Y)\xi+ \phi hX-\phi hY]\] \[-\big{(}3df_{2}+(2n-1)df_{3}\big{)}(Y)\eta(X)\xi+\big{(}3df_{2}+(2 n-1)df_{3}\big{)}(X)\eta(Y)\xi\] \[+\big{(}(2n-1)f_{4}-f_{6}\big{)}[(\nabla_{Y}h)X-(\nabla_{X}h)Y]+ \big{(}(2n-1)df_{4}-df_{6}\big{)}(Y)hX\] \[-\big{(}(2n-1)df_{4}-df_{6}\big{)}(X)hY.\] Proof.: Since \((g,\nabla f,\lambda)\) is a conformal gradient Ricci soliton on \(N\), for any vector field \(X\) on \(N\), we can write \[\nabla_{X}\nabla f=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]X-QX,\] where \(Q\) is the Ricci operator. Differentiating covariantly with respect to arbitrary vector field \(Y\), we get \[\nabla_{Y}\nabla_{X}\nabla f=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]\nabla_{Y} X-\nabla_{Y}QX.\] Interchanging \(X\) and \(Y\) in the above equaton we get, \[\nabla_{X}\nabla_{Y}\nabla f=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]\nabla_{X}Y- \nabla_{X}QY.\] Also, we can write \[\nabla_{[X,Y]}\nabla f=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})](\nabla_{X}Y- \nabla_{Y}X)-Q(\nabla_{X}Y-\nabla_{Y}X).\] Using the above equations in the expression for curvature tensor we get \[R(X,Y)\nabla f=(\nabla_{Y}Q)X-(\nabla_{X}Q)Y. \tag{3.11}\] Now, \[(\nabla_{Y}Q)X=\nabla_{Y}QX-Q\nabla_{Y}X. \tag{3.12}\] Differentiating covariantly (2.11) with respect to \(Y\), we get \[\nabla_{Y}QX= (2nf_{1}+3f_{2}-f_{3})\nabla_{Y}X+(2ndf_{1}+3df_{2}-df_{3})(Y)X\] \[+\big{(}(2n-1)f_{4}-f_{6}\big{)}\nabla_{Y}hX+\big{(}(2n-1)df_{4}- df_{6}\big{)}(Y)hX\] \[-\big{(}3f_{2}+(2n-1)f_{3}\big{)}[\nabla_{Y}\eta(X)\xi+\eta(X) \nabla_{Y}\xi]\] \[-\big{(}3df_{2}+(2n-1)df_{3}\big{)}(Y)\eta(X)\xi. \tag{3.13}\] Also, from (2.11), we can write \[Q\nabla_{Y}X= \big{(}2nf_{1}+3f_{2}-f_{3}\big{)}\nabla_{X}Y+\big{(}(2n-1)(f_{4}- f_{6})h\nabla_{Y}X\big{)}\] \[-\big{(}3f_{2}+(2n-1)f_{3}\big{)}\eta(\nabla_{Y}X)\xi. \tag{3.14}\] Using (3.13) and (3.14) in (3.12), and recalling (2.5) and (2.6) we get, \[(\nabla_{Y}Q)X= \big{(}2ndf_{1}+3df_{2}-df_{3}\big{)}(Y)X+\big{(}(2n-1)f_{4}-f_{6 }\big{)}(\nabla_{Y}h)X\] \[+\big{(}(2n-1)df_{4}-df_{6}\big{)}(Y)hX\] \[-\big{(}3f_{2}+(2n-1)f_{3}\big{)}[(f_{3}-f_{1})g(\phi X,Y)+(f_{6} -f_{4})g(\phi hX,Y)]\xi\] \[-(3f_{2}+(2n-1)f_{3})\eta(X)[(f_{3}-f_{1})\phi X+(f_{6}-f_{4}) \phi hX]\] \[-(3df_{2}+(2n-1)df_{3})(Y)\eta(X)\xi. \tag{3.15}\] Interchanging \(X\) and \(Y\) in (3.15), we get \[(\nabla_{X}Q)Y= \big{(}2ndf_{1}+3df_{2}-df_{3}\big{)}(X)Y+\big{(}(2n-1)f_{4}-f_{6 }\big{)}(\nabla_{X}h)Y\] \[+\big{(}(2n-1)df_{4}-df_{6}\big{)}(X)hY\] \[-\big{(}3f_{2}+(2n-1)f_{3}\big{)}[(f_{3}-f_{1})g(\phi Y,X)+(f_{6} -f_{4})g(\phi hY,X)]\xi\] \[-(3f_{2}+(2n-1)f_{3})\eta(Y)[(f_{3}-f_{1})\phi Y+(f_{6}-f_{4})\phi hY]\] \[-(3df_{2}+(2n-1)df_{3})(X)\eta(Y)\xi. 
\tag{3.16}\] Now using (3.15) and (3.16) in (3.11), we get the desired expression for \(R(X,Y)\nabla f\) **Theorem 3.5**.: _Consider a \((2n+1)\)-dimensional generalized \((\kappa,\mu)\)-space form \(M\) admitting a conformal gradient Ricci soliton \((g,\nabla f,\lambda)\) such that \(g(hY,\nabla f)=0.\) Then the potential function f is constant if both \(f_{1}\) and \(f_{3}\) are constants._ Proof.: Since \(M\) admits a conformal gradient Ricci solition, putting \(X=\xi\) in (3.4), we get \[R(\xi,Y)\nabla f= (2ndf_{1}+3df_{2}-df_{3})(Y)\xi-(2ndf_{1}+3df_{2}-df_{3})(\xi)Y\] \[+\big{(}3f_{2}+(2n-1)f_{3}\big{)}[(f_{3}-f_{1})\eta(Y)\phi Y+(f_{6 }-f_{4})\phi hY]\] \[-\big{(}3df_{2}+(2n-1)df_{3}\big{)}(Y)\xi+\big{(}3df_{2}+(2n-1)df_{ 3}\big{)}(\xi)\eta(Y)\xi\] \[+\big{(}(2n-1)f_{4}-f_{6}\big{)}[(\nabla_{Y}h)\xi-(\nabla_{\xi}h)Y]\] \[-\big{(}(2n-1)df_{4}-df_{6}\big{)}(\xi)hY. \tag{3.17}\] Now taking the inner product of (3.17) with structure vector field \(\xi\), we get \[g\big{(}R(\xi,Y)\nabla f,\xi\big{)}= 2n(df_{1}-df_{3})(Y)-(2ndf_{1}+3df_{2}-df_{3})(\xi)\eta(Y)\] \[+\big{(}3df_{2}+(2n-1)df_{3}\big{)}[(f_{3}-f_{1})\eta(Y)g(\phi Y,\xi)\] \[+(f_{6}-f_{4})g(\phi hY,\xi)]-\big{(}3df_{2}+(2n-1)df_{3}\big{)}(Y)\] \[+\big{(}3df_{2}+(2n-1)df_{3}\big{)}(\xi)\eta(Y)\] \[+\big{(}(2n-1)f_{4}-f_{6}\big{)}g((\nabla_{Y}h)\xi-(\nabla_{\xi}h )Y,\xi)\] \[-\big{(}(2n-1)df_{4}-df_{6}\big{)}(X)g(hY,\xi). \tag{3.18}\] Making use of the fact \(h\xi=0\) and symmetry of \(h\) in (3.18), we get \[g\big{(}R(\xi,Y)\nabla f,\xi\big{)}=2n(df_{1}-df_{3})(Y)-2n(df_{1}-df_{3})(\xi )\eta(Y). \tag{3.19}\] Now, using the property of the curvature tensor, we have \[g\big{(}R(\xi,Y)\nabla f,\xi\big{)}=-g\big{(}R(\xi,Y)\xi,\nabla f\big{)}. \tag{3.20}\] Making use of (2.9) in (3.20), we get \[g\big{(}R(\xi,Y)\nabla f,\xi\big{)}=(f_{3}-f_{1})[\eta(Y)(\xi f)-(Yf)]+(f_{6} -f_{4})[-g(hY,\nabla f)]. \tag{3.21}\] Now using the fact that \(g(hY,\nabla f)=0\), we get from the (3.21) \[g\big{(}R(\xi,Y)\nabla f,\xi\big{)}=(f_{3}-f_{1})[\eta(Y)(\xi f)-(Yf)]. \tag{3.22}\] From (3.19) and (3.22), we get \[2n(df_{1}-df_{3})(Y)-2n(df_{1}-df_{3})(\xi)\eta(Y)=(f_{3}-f_{1})[\eta(Y)(\xi f) -(Yf)]. \tag{3.23}\] If \(f_{1}\) and \(f_{3}\) are constants, then we get from (3.23) \[\eta(Y)(\xi f)-(Yf)=0.\] The above equation can be rewritten as \[g((\xi f)\xi,Y)=g(\nabla f,Y).\] Since \(Y\) is an arbitrary vector field, we can write \[\nabla f=(\xi f)\xi. \tag{3.24}\] Differentiating (3.24) covariantly along the vector field \(X\), we get \[\nabla_{X}\nabla f=\big{(}X(\xi f)\big{)}\xi+(\xi f)[(f_{3}-f_{1})\phi X+(f_{ 6}-f_{4})\phi hX].\] Substituting the value of \(\nabla_{X}\nabla f\), we get \[QX=[\lambda-(\frac{p}{2}+\frac{1}{2n+1})]X-\big{(}X(\xi f)\big{)}[(f_{3}-f_{1}) \phi X+(f_{6}-f_{4})\phi hX]. \tag{3.25}\] Comparing the coefficients of \(\phi X\) from (2.11) and (3.25), we get \((\xi f)=0\). Using this in (3.24), we get \(\nabla f\)=0. Hence \(f\) is constant. **Theorem 3.6**.: _Consider a \((2n+1)\)-dimensional generalized \((\kappa,\mu)\)-space form \(M\) admitting a conformal gradient Ricci soliton \((g,\nabla f,\lambda)\).Then the soliton is_ 1. _shrinking if_ \(p<[2f_{3}-6f_{2}-2nf_{1}-\frac{2}{2n+1}]\)__ 2. _steady if_ \(p=[2f_{3}-6f_{2}-2nf_{1}-\frac{2}{2n+1}]\)__ 3. 
_expanding if_ \(p>[2f_{3}-6f_{2}-2nf_{1}-\frac{2}{2n+1}]\)__ Proof.: Comparing the coefficients of \(X\) in (2.11) and (3.25), we get \[\lambda=\Big(\frac{p}{2}+\frac{1}{2n+1}\Big)+(2nf_{1}-3f_{2}-f_{3}).\] Now we use the three different conditions on \(\lambda\) in the above equation to get the desired expressions.
2302.10516
Deficiency, Kinetic Invertibility, and Catalysis in Stochastic Chemical Reaction Networks
Stochastic chemical processes are described by the chemical master equation satisfying the law of mass-action. We first ask whether the dual master equation, which has the same steady state as the chemical master equation, but with inverted reaction currents, satisfies the law of mass-action, namely, still describes a chemical process. We prove that the answer depends on the topological property of the underlying chemical reaction network known as deficiency. The answer is yes only for deficiency-zero networks. It is no for all other networks, implying that their steady-state currents cannot be inverted by controlling the kinetic constants of the reactions. Hence, the network deficiency imposes a form of non-invertibility to the chemical dynamics. We then ask whether catalytic chemical networks are deficiency-zero. We prove that the answer is no when they are driven out of equilibrium due to the exchange of some species with the environment.
Shesha Gopal Marehalli Srinivas, Matteo Polettini, Massimiliano Esposito, Francesco Avanzini
2023-02-21T08:44:50Z
http://arxiv.org/abs/2302.10516v2
# Deficiency, Kinetic Invertibility, and Catalysis ###### Abstract Stochastic chemical processes are described by the chemical master equation satisfying the law of mass-action. We first ask whether the dual master equation, which has the same steady state as the chemical master equation, but with inverted reaction currents, satisfies the law of mass-action, namely, still describes a chemical process. We prove that the answer depends on the topological property of the underlying chemical reaction network known as deficiency. The answer is yes only for deficiency-zero networks. It is no for all other networks, implying that their steady-state currents cannot be inverted by controlling the kinetic constants of the reactions. Hence, the network deficiency imposes a form of non-invertibility to the chemical dynamics. We then ask whether catalytic chemical networks are deficiency-zero. We prove that the answer is no when they are driven out of equilibrium due to the exchange of some species with the environment. ## I Introduction Open chemical reaction networks (CRNs) driven out of equilibrium constitute the underlying mechanism of many complex processes in biosystems [1] and synthetic chemistry [2]. Information processing [3], oscillations [4], self-replication [5; 6; 7], self-assembly [8; 9] and molecular machines [10; 11] provide some prototypical examples. At steady state, these CRNs operate with nonzero net reaction currents sustained by thermodynamic forces generated via continuous exchanges of free energy with the environment [12]. Their energetics can be characterized on rigorous grounds using nonequilibrium thermodynamics of CRNs undergoing stochastic [13; 14; 15] or deterministic dynamics [16; 17; 18; 19]. For instance, this theory has been used to quantify the energetic cost of maintaining coherent oscillations [20]; the efficiency of dissipative self-assembly [21] and central metabolism [22]; the internal free energy transduction of a model of chemically-driven self-assembly and an experimental light-driven bimolecular motor [23]. It has also been used to determine speed limits for the chemical dynamics [24; 25]. Away from equilibrium, the currents are nonlinear functions of the thermodynamic forces. While they increase with the forces close to equilibrium (i.e., for small forces), they can also decrease far from equilibrium (i.e., for large forces) [26; 27]. They can further show a highly nonlinear response to time-dependent modulations in the forces [28; 29] as well as lead to the emergence of oscillations or chaos [30; 31]. In this paper, we focus on the steady-state dynamics of stochastic chemical processes described by the chemical master equation satisfying the law of mass-action. We start by investigating whether the net currents of all reactions can be inverted by controlling the kinetic constants only. To do so, we use the dual master equation [32] (also called adjoint or reversal [33; 34; 35]) of the chemical master equation which, by definition, has the same steady state but inverted currents. It thus describes a stochastic process, called here dual process, whose dynamics is inverted compared to the chemical process (see App. A). From a thermodynamic perspective, the dual process is not in general the time-reversed process which enters the definition of the entropy production of a stochastic trajectory, but it enters the definition of the adiabatic and nonadiabatic entropy production [36]. 
From a dynamic perspective, the dual process is not a chemical process unless the dual master equation satisfies the law of mass-action. By building on the results derived in Refs. [37; 38] (and summarized in App. B), we prove that this happens if and only if the underlying CRN has zero deficiency. This constitutes our first main result. The deficiency is a topological property of CRNs, which roughly speaking quantifies the number of "hidden" cycles (i.e., cycles that do not have a graphical representation in the graph of complexes) [39]. Physically, our result means that the network deficiency determines the kinetic invertibility (or non-invertibility) of the stochastic chemical dynamics. We further show that the correspondence between network deficiency and invertibility is specific to the stochastic dynamics: in the thermodynamic limit, where the dynamics becomes deterministic, the net steady-state currents can always be inverted independently of the network deficiency. We then investigate which CRNs are not deficiency-zero. We consider catalytic CRNs, which are ubiquitous in nature. From a chemical point of view, a catalyst is a substance that acts as both a reactant and a product of the reaction while increasing its rate [40]. From this definition, necessary stoichiometric conditions for catalysis in CRNs have recently been derived and used as a mathematical basis to identify minimal autocatalytic subnetworks (called motifs or cores) in larger CRNs [41]. By building on these results, we prove that catalytic CRNs are not deficiency-zero when they are driven out of equilibrium due to the exchange of some species with the environment. This constitutes our second main result. Together, our two main results show that the net steady-state currents of stochastic catalytic CRNs driven out of equilibrium via exchanges of species with the environment can
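The deficiency used above has a standard combinatorial definition: the number of complexes, minus the number of linkage classes, minus the rank of the stoichiometric matrix. The following is a minimal sketch of that computation (our own illustration with a made-up two-reaction toy network, not code from the paper):

```python
# Minimal sketch: deficiency of a chemical reaction network (CRN),
# delta = (#complexes) - (#linkage classes) - rank(stoichiometric matrix).
# Toy network (an assumption, not from the paper): A -> B -> C.
import numpy as np

species = ["A", "B", "C"]
# Each reaction is (reactant complex, product complex); a complex maps species -> count.
reactions = [({"A": 1}, {"B": 1}), ({"B": 1}, {"C": 1})]

# Enumerate the distinct complexes appearing in the network.
complexes = []
for lhs, rhs in reactions:
    for c in (lhs, rhs):
        if c not in complexes:
            complexes.append(c)

# Stoichiometric matrix: one column per reaction, one row per species.
S = np.zeros((len(species), len(reactions)))
for j, (lhs, rhs) in enumerate(reactions):
    for i, sp in enumerate(species):
        S[i, j] = rhs.get(sp, 0) - lhs.get(sp, 0)

# Linkage classes: connected components of the graph whose nodes are the
# complexes and whose edges are the reactions (simple union-find).
parent = list(range(len(complexes)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for lhs, rhs in reactions:
    a, b = find(complexes.index(lhs)), find(complexes.index(rhs))
    parent[a] = b
num_linkage = len({find(i) for i in range(len(complexes))})

deficiency = len(complexes) - num_linkage - np.linalg.matrix_rank(S)
print(deficiency)  # 0: three complexes, one linkage class, rank-2 stoichiometric matrix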
2303.08070
Victoria Amazonica Optimization (VAO): An Algorithm Inspired by the Giant Water Lily Plant
The Victoria Amazonica plant, often known as the Giant Water Lily, has the largest floating spherical leaf in the world, with a maximum leaf diameter of 3 meters. It spreads its leaves by the force of its spines and creates a large shadow underneath, killing any plants that require sunlight. These water tyrants use their formidable spines to compel each other to the surface and increase their strength to grab more space from the surface. As they spread throughout the pond or basin, with the earliest-growing leaves having more room to grow, each leaf gains a unique size. Its flowers are transsexual and when they bloom, Cyclocephala beetles are responsible for the pollination process, being attracted to the scent of the female flower. After entering the flower, the beetle becomes covered with pollen and transfers it to another flower for fertilization. After the beetle leaves, the flower turns into a male and changes color from white to pink. The male flower dies and sinks into the water, releasing its seed to help create a new generation. In this paper, the mathematical life cycle of this magnificent plant is introduced, and each leaf and blossom are treated as a single entity. The proposed bio-inspired algorithm is tested with 24 benchmark optimization test functions, such as Ackley, and compared to ten other famous algorithms, including the Genetic Algorithm. The proposed algorithm is tested on 10 optimization problems: Minimum Spanning Tree, Hub Location Allocation, Quadratic Assignment, Clustering, Feature Selection, Regression, Economic Dispatching, Parallel Machine Scheduling, Color Quantization, and Image Segmentation and compared to traditional and bio-inspired algorithms. Overall, the performance of the algorithm in all tasks is satisfactory.
Seyed Muhammad Hossein Mousavi
2023-01-22T10:04:00Z
http://arxiv.org/abs/2303.08070v1
# Victoria Amazonica Optimization (VAO) ###### Abstract The Victoria Amazonica plant, often known as the Giant Water Lily, has the largest floating spherical leaf in the world, with a maximum leaf diameter of 3 meters. It spreads its leaves by the force of its spines and creates a large shadow underneath, killing any plants that require sunlight. These water tyrants use their formidable spines to compel each other to the surface and increase their strength to grab more space from the surface. As they spread throughout the pond or basin, with the earliest-growing leaves having more room to grow, each leaf gains a unique size. Its flowers are transsexual and when they bloom, Cyclocephala beetles are responsible for the pollination process, being attracted to the scent of the female flower. After entering the flower, the beetle becomes covered with pollen and transfers it to another flower for fertilization. After the beetle leaves, the flower turns into a male and changes color from white to pink. The male flower dies and sinks into the water, releasing its seed to help create a new generation. In this paper, the mathematical life cycle of this magnificent plant is introduced, and each leaf and blossom are treated as a single entity. The proposed bio-inspired algorithm is tested with 24 benchmark optimization test functions, such as Ackley, and compared to ten other famous algorithms, including the Genetic Algorithm. The proposed algorithm is tested on 10 optimization problems: Minimum Spanning Tree, Hub Location Allocation, Quadratic Assignment, Clustering, Feature Selection, Regression, Economic Dispatching, Parallel Machine Scheduling, Color Quantization, and Image Segmentation and compared to traditional and bio-inspired algorithms. Overall, the performance of the algorithm in all tasks is satisfactory. victoria Amazonica, Bio-inspired Algorithm, Optimization Test Function, Parallel Machine Scheduling, Economic Dispatching ## 1 Introduction Nature provides everything we require, including solutions to the most intricate problems. Metaheuristics Optimization (Bianchi et al., 2009) refers to problem-solving techniques based on biology in nature or bio-inspired behavior of animals, plants, or natural phenomena, which aim to find the most optimal solution to a complex mathematical problem. Occasionally, these algorithms are referred to as nature-inspired algorithms (Mousavi et al., 2017; Vikhar, 2016), but for the sake of clarity, the term 'bio-inspired' algorithms (Darwish, 2018) will be used throughout the paper.
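For reference, one of the benchmark test functions named above, Ackley, has a standard closed form. The sketch below is a generic illustration in the usual \(d\)-dimensional form with the conventional parameters \(a=20\), \(b=0.2\), \(c=2\pi\); it is not code from the paper.

```python
# Standard d-dimensional Ackley benchmark function (global minimum 0 at x = 0).
# Generic illustration of the kind of test function cited above, not the VAO code.
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    x = np.asarray(x, dtype=float)
    d = x.size
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
    term2 = -np.exp(np.sum(np.cos(c * x)) / d)
    return term1 + term2 + a + np.e

print(ackley(np.zeros(2)))   # ~0.0 at the global optimum
print(ackley([1.0, -1.5]))   # > 0 away from the optimum
```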
2310.10707
Demonstrations Are All You Need: Advancing Offensive Content Paraphrasing using In-Context Learning
Paraphrasing of offensive content is a better alternative to content removal and helps improve civility in a communication environment. Supervised paraphrasers; however, rely heavily on large quantities of labelled data to help preserve meaning and intent. They also often retain a large portion of the offensiveness of the original content, which raises questions on their overall usability. In this paper we aim to assist practitioners in developing usable paraphrasers by exploring In-Context Learning (ICL) with large language models (LLMs), i.e., using a limited number of input-label demonstration pairs to guide the model in generating desired outputs for specific queries. Our study focuses on key factors such as - number and order of demonstrations, exclusion of prompt instruction, and reduction in measured toxicity. We perform principled evaluation on three datasets, including our proposed Context-Aware Polite Paraphrase (CAPP) dataset, comprising of dialogue-style rude utterances, polite paraphrases, and additional dialogue context. We evaluate our approach using four closed source and one open source LLM. Our results reveal that ICL is comparable to supervised methods in generation quality, while being qualitatively better by 25% on human evaluation and attaining lower toxicity by 76%. Also, ICL-based paraphrasers only show a slight reduction in performance even with just 10% training data.
Anirudh Som, Karan Sikka, Helen Gent, Ajay Divakaran, Andreas Kathol, Dimitra Vergyri
2023-10-16T16:18:55Z
http://arxiv.org/abs/2310.10707v2
# Demonstrations Are All You Need: ###### Abstract Paraphrasing of offensive content is a better alternative to content removal and helps improve civility in a communication environment. Supervised paraphrasers; however, rely heavily on large quantities of labelled data to help preserve meaning and intent. They also retain a large portion of the offensiveness of the original content, which raises questions on their overall usability. In this paper we aim to assist practitioners in developing usable paraphrasers by exploring In-Context Learning (ICL) with large language models (LLMs), i.e., using a limited number of input-label demonstration pairs to guide the model in generating desired outputs for specific queries. Our study focuses on key factors such as - number and order of demonstrations, exclusion of prompt instruction, and reduction in measured toxicity. We perform principled evaluation on three datasets, including our proposed Context-Aware Polite Paraphrase dataset, comprising of dialogue-style rude utterances, polite paraphrases, and additional dialogue context. We evaluate our approach using two closed source and one open source LLM. Our results reveal that ICL is comparable to supervised methods in generation quality, while being qualitatively better by 25% on human evaluation and attaining lower toxicity by 76%. Also, ICL-based paraphrasers only show a slight reduction in performance even with just 10% training data. ## 1 Introduction _Disclaimer: Figures and examples in this work may feature offensive language._ Timely moderation helps curb the spread of hateful content on social-media platforms and prevents the harmful effects it has on a user's psychological well-being (Waldron, 2012; Ye et al., 2023). Unfortunately, the sheer volume of content generated on these platforms makes it infeasible to enforce a scalable human moderation process (Hassan et al., 2022; Dosono and Semaan, 2019). AI-based moderation systems can help with this problem. However, current systems often remove or flag offensive content, which can reduce user participation and diversity in online discussions (Xiang et al., 2012; Warner and Hirschberg, 2012; Kwok and Wang, 2013; Wang et al., 2014; Burnap and Williams, 2015; Nobata et al., 2016; Davidson et al., 2017; Founta et al., 2019; Jhaver et al., 2019; Ye et al., 2023). A better alternative is to paraphrase offensive content to make it less offensive. Paraphrasing offensive content; however, is nontrivial since the paraphrased output should not only be inoffensive but also retain the original meaning and intent. Prior works (Atwell et al., 2022; Logacheva et al., 2022) have proposed using supervised generative models (Vaswani et al., 2017) like BART (Lewis et al., 2019), to paraphrase offensive content. However, these methods require sufficient la Figure 1: Influence of number and order of demonstrations, and instruction, on BLEU Score performance and measured Toxicity. Comparison is done between BART and three ICL-based approaches. Numbers on the \(x\)-axis represent number of demonstrations used in the ICL framework. Note, measured Toxicity for BART in ParaDetox is 82, exceeding the set \(y\)-axis limit. belled training data, which makes it harder to adapt them to novel settings. Moreover, these models are optimized to perform well on several automated metrics Papineni et al. (2002); Zhang et al. (2019); Lin (2004); Vedantam et al. 
(2015) at the expense of retaining a significant portion of the original toxicity, thereby making us question its overall usability for the targeted task (see Figure 1). The emergence of few-shot _In-Context Learning_ (ICL) has revolutionized the field by complementing the generalization capabilities of _Large Language Models_ (LLMs) to quickly and accurately adapt to new tasks. It does this by using a small amount of labeled data, known as _demonstrations_ or _demos_Brown et al. (2020). As shown in Figure 1, ICL shows BLEU score performance that is similar to BART, but significantly reduces the measured toxicity Hanu and Unitary team (2020). Through detailed, principled experiments we explore the viability of ICL for paraphrasing offensive content, which to the best of our knowledge has not been done before. Our key contributions and findings are summarized below. 1. Influence of the following factors on generation quality, as summarized in Figure 1. (a) _Number of Demonstrations:_ Performance improves by increasing number of demos but eventually saturates. (b) _Selection and Order of Demonstrations:_ Systematically selecting and ordering demos is better than its random counterpart. It is more effective to select demos that are semantically similar to the query and present them in a decreasing/increasing order of similarity. (c) _Exclusion of Instruction in Prompt:_ When the main instruction is removed and with enough demos, the generation performance is only slightly affected but at the expense of toxicity. This shows that to simultaneously preserve performance and lower toxicity, we need both demos and instructions. (d) _Robustness to Training Data Size:_ Carefully ordering demos shows robustness to available training data size, with only small decrease in generation performance when 10% of training data is only made available. 2. We tested the capabilities of OpenAI's _textdavinci-003, gpt-3.5-turbo_ models and the open-source _Vicuna-13b_ model Chiang et al. (2023). These models were compared to SOTA supervised methods using both automated metrics and a human evaluation process. ICL generated paraphrases are comparable to supervised approaches in performance, but on average show 76% less toxicity and are 25% better using a manual qualitative assessment, and thus have superior overall usability. 3. Current paraphrasers are less effective at mitigating offensiveness like rudeness in conversations. They are trained using datasets that focus on social-media content, and hence aren't directly applicable to dialogue-based environments. To this end we release a new _Context-Aware Polite Paraphrase (CAPP)_ dataset, a dialogue-style corpus of rude utterances and corresponding polite paraphrases, with samples accompanied by additional context in the form of prior two turns from the dialogue. We conduct experiments to show the importance and benefit of incorporating context to improve paraphraser performance. **Paper Outline:** Section 2 describes ICL in our experimental setting; details about selecting and ordering the demos; and finally our proposed CAPP dataset in detail. Section 3 contains detailed experimental results. Section 4 discusses related work. Section 5 concludes the paper. 
## 2 Method ### In-Context Learning Prompts used for paraphrasing offensive content using ICL contain three parts - (1) an instruction \(I\) that defines the task; (2) a set of \(n\) demonstrations from the training corpus, \(D=(x_{i},y_{i})_{i=1}^{n}\), where \((x_{i},y_{i})\) denotes the offensive, inoffensive sentence pairs; and (3) the offensive test query sample \(x_{q}\). Consider the following prompt example with \(n=2\) demonstrations, where the final sentence represents the query for which we want to generate the paraphrase. _Instruction:_ Paraphrase the following sentence to be more polite. _Sentence:_ What's wrong with you? _Paraphrase:_ Are you feeling alright? _Sentence:_ Get out of the way. _Paraphrase:_ Can you please step aside? _Sentence:_ What's the name with you? _Paraphrase:_ Prompts with only the instruction show the lowest BLEU scores, followed by prompts with only demos, while prompts that include both have the best BLEU scores. In terms of Toxicity, prompts with just the instruction show the least Toxicity, followed by prompts that include both demos and instruction, while prompts that only include demos exhibit a higher toxicity. The order of demos is also crucial, and we discuss this next.
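To make the prompt structure above concrete, here is a minimal sketch of assembling such a prompt from an instruction, demonstration pairs, and a query (our own illustration, not the authors' code; the helper name `build_icl_prompt` and the final query string are hypothetical):

```python
# Minimal sketch of assembling an ICL prompt from an instruction, n demonstration
# pairs (offensive -> inoffensive), and an offensive query. The first two demo pairs
# mirror the example in the text; the query string is an arbitrary stand-in.

def build_icl_prompt(instruction, demos, query):
    parts = []
    if instruction:                      # instruction may be omitted ("no instruction" setting)
        parts.append(instruction)
    for offensive, polite in demos:      # demos are assumed to be pre-selected and pre-ordered
        parts.append(f"Sentence: {offensive}")
        parts.append(f"Paraphrase: {polite}")
    parts.append(f"Sentence: {query}")
    parts.append("Paraphrase:")          # the model completes this line
    return "\n".join(parts)

demos = [
    ("What's wrong with you?", "Are you feeling alright?"),
    ("Get out of the way.", "Can you please step aside?"),
]
prompt = build_icl_prompt(
    "Paraphrase the following sentence to be more polite.", demos, "Stop wasting my time."
)
print(prompt)
```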
### Selection and Ordering of Demonstrations Here we describe our approach to select and order the demonstrations. We first compute normalized vector embeddings for each training sample \(x_{i}\) and query \(x_{q}\), denoted as \(e_{i}\) and \(e_{q}\) respectively. Next, the cosine similarity scores between \(e_{q}\) and each \(e_{i}\) are used to select \(n\) demonstrations. We explored following two variations for selecting the demonstrations - (1) _Least Similar_: Select \(n\) demos with the lowest cosine similarity scores; (2) _Most Similar:_ Select \(n\) demos with the highest cosine similarity scores. These are compared to randomly selecting \(n\) demos, that are arranged in no particular order. We further investigated if arranging the \(n\) selected demos in either ascending or descending order based on their measured cosine similarity, had any impact on the overall performance. Using BLEU and toxicity, Figure 1 compares _Random_ selection to the _Most Similar (Descending order)_ approach, with the latter being better on both fronts. Our findings are described in detail in Section 3.2. ### Context-Aware Polite Paraphrase (CAPP) Dataset **Motivation:** Datasets like APPDIA (Atwell et al., 2022) and ParaDetox (Logacheva et al., 2022) contain comments flagged for toxicity and provide nontoxic paraphrases that maintain the core meaning in a neutral manner. However, these datasets are not directly suitable for training models to address rudeness in speech, as speech is often directed at specific participants and social media posts have a broader audience, resulting in different styles and tones. Additionally, most social media posts can be remedied by removing explicit insults, but rude speech requires additional modifications to make it more polite. For instance, we should not just eliminate offensive language and direct insults in a rude utterance, but also transform an accusation of ignorance into an inquiry about knowledge. **Rude Data Selection:** To address the aforementioned differences, we constructed a dialogue-style rude speech dataset by leveraging the OpenSubtitles corpus (Lison and Tiedemann, 2016), known for its conversational nature, extensive size, and availability of multilingual parallel data, enabling future exploration of multilingual politeness paraphrasing. Our approach involved a two-step process to extract target rude utterances. Initially, we fine-tuned a DistilBERT-base model (Sanh et al., 2019) using the Stanford Politeness corpus (Danescu-Niculescu-Mizil et al., 2013) and a subset of manually labeled OpenSubtitles samples to train a three-class model capable of predicting polite, neutral, or rude sentences. We then utilized this fine-tuned model to annotate a larger portion of the OpenSubtitles corpus, bootstrapping additional training data for our final rudeness detection model. A separate portion of the OpenSubtitles dataset was selected and labeled as rude, polite, or neutral, resulting in an intermediate set containing rude samples without polite paraphrases. Detailed information about the training/evaluation of the rudeness detector is provided in Appendix A. **Polite Paraphrase Generation and Evaluation:** We captured context, if available, in the form of prior two turns preceding up to the original utterance. We used the gpt-3.5-turbo model to generate polite paraphrases, and checked the impact of incorporating context on the quality of generated paraphrases. 
We tested this by employing three different prompts for generating three versions of polite paraphrases - (1) _Context-Free:_ No context included in the prompt; (2) _Context-Infused:_ Prompt includes context and significantly influences the generated paraphrase; (3) _Context-Aware:_ Prompt includes context, with the paraphrase being less impacted by it. 500 rude utterances and their corresponding polite paraphrases were selected for evaluation. An in-house annotator assessed the quality of the paraphrases using the scoring guidelines in Appendix B, Table 3 and was not informed about the type of prompt used. Table 1 shows the final evaluation scores. All scores are significantly different, with the Context-Aware prompt achieves a score comparable to the Context-Free prompt while still incorporating context like the Context-Infused \begin{table} \begin{tabular}{|c||c|} \hline **Prompt** & **Manual Evaluation Score\({}^{\dagger}\)** \\ \hline \hline Context-Free & \(4.214\pm 1.047\) \\ \hline Context-Infused & \(3.324\pm 0.839\) \\ \hline Context-Aware & \(4.096\pm 1.093\) \\ \hline \end{tabular} \end{table} Table 1: Human evaluation scores of 500 polite paraphrases generated using different prompts. A higher score indicates a qualitatively better approach. prompt. Context-Aware combines the benefits of both, and was hence used in the CAPP dataset. ## 3 Experiments and Discussion We realized ICL using OpenAI's text-davinci-003, gpt-3.5-turbo models, and the open-source Vicuna-13b model. We performed evaluation on the APPDIA Atwell et al. (2022), ParaDetox Logacheva et al. (2022), CAPP datasets, with the corresponding (#training, #test) being (1584, 199), (11927, 670), (7939, 1120) respectively. APPDIA contains offensive Reddit comments and their inoffensive paraphrases. The ParaDetox corpus consists of toxic and non-toxic sentence pairs, obtained by filtering the larger ParaNMT corpus Wieting and Gimpel (2017). We used the sentence transformer (_all-mpnet-base-v2_)Reimers and Gurevych (2019) to generate the normalized embeddings described in Section 2.2. We evaluated generation quality using automated evaluation metrics such as BLEU Papineni et al. (2002), BERT-F1 Zhang et al. (2019), ROUGE Lin (2004) and CIDEr Vedantam et al. (2015). For toxicity we used the implementation by Hanu and Unitary team (2020). The exact prompt instruction used in all experiments is provided in Appendix C. ### Number of Demonstrations Figure 2 shows the plot between number of demonstrations versus BLEU (refer to Appendix D.1, Figure 8 for other metrics). We set the number of demonstrations to \([0,1,10,20,30,40]\) for text-davinci-003, gpt-3.5 turbo, and \([0,1,2,4,6,8,10]\) for Vicuna-13b. We used the _"Most Similar (Descending Order)"_ approach described in Section 2.2 to select and order the demos. We observe that BLEU improves rapidly until 10 demos for the OpenAI models and 4 demos for the Vicuna-13b model across all datasets. Further increasing the demos only results in slight improvement, as each additional demo is semantically less similar to the query and thereby less important than the demonstrations selected before Liu et al. (2021). We notice in the case for the gpt-3.5-turbo model on CAPP dataset that BLEU without any demos is better than with 40 demos. One possibility is that the main instruction used here was less effective in the ICL paradigm, and that a different instruction could have increased the BLEU score, as seen in Section 3.3. 
However, we believe this happens because the polite paraphrases were generated using gpt-3.5-turbo. This hints at the possibility of ICL not necessarily improving the paraphrasing performance of LLMs that were, in turn, used to generate the dataset. We have similar observations in the following sections as well. Figure 2: BLEU as a function of number of demos. Noticeable improvement in BLEU is observed in the beginning, with performance saturating after a certain number of demos. Figure 3: BLEU score and measured toxicity performance with different instructions but with the same set of demos. Instructions can either complement or work against the selected demos and accordingly affect the BLEU score. The no-instruction setting shows comparable BLEU to instructions with demos but results in paraphrases with higher toxicity. ### Selection and Order of Demonstrations We now discuss the effect of selecting and ordering the demos in the prompt on BLEU. We set the number of demonstrations to 10 and explore the different ordering mechanisms described in Section 2.2. In Figure 4, we observe that the _Random_ strategy sometimes achieves better BLEU than the _Least Similar_ strategy. However, in most cases _Most Similar_ shows better performance than both _Random_ and _Least Similar_. This intuitively makes sense, since _Most Similar_ selects samples from the training corpus that are most semantically similar to the query (Liu et al., 2021). This enables the LLM to generate a paraphrase that is also similar to the Gold-Standard paraphrase of the query. Next, the order in which the demos are arranged also has an impact on the BLEU score. We find that curating the demos in decreasing order of similarity often results in better BLEU than arranging them in increasing order of similarity. We also observe similar trends with other automated evaluation metrics. Note, the above observations do not apply to the gpt-3.5-turbo model on the CAPP dataset. ### Significance of Instruction We now investigate the effect of removing the instruction in the prompt. Figures 3 and 4 show that prompts that only incorporate demonstrations (_no instruction_) achieve BLEU scores on par with prompts that include both instructions and demonstrations. This is interesting since it is common practice with recent instruction-tuned LLMs to always include the instruction even if no demonstrations are provided. Our results suggest that if it is difficult to determine effective instructions for the target paraphrasing task, with ICL one can simply use a few systematically selected demonstrations to get paraphrases that have high generation quality. In Figure 3, for text-davinci-003 on APPDIA, we observe that the _no instruction_ setting retains a significant amount of the original content's toxicity, thereby making its usability questionable. Similar observations were made with other models (refer to Appendix D.2, Figure 9). The order of demos also plays an important role in the no-instruction setting, with _Most Similar_ showing much lower toxicity than both the _Random_ and _Least Similar_ strategies. In cases that include both instruction and demos, the measured toxicity is less affected by the order of demos, indicating that the main instruction serves as a toxicity regularizer. We want to also highlight that creating a good instruction for paraphrasing Figure 4: BLEU as a function of order of demonstrations and type of instruction used in the prompt design.
Demonstrations that are semantically more similar to the query sample show better performance than less semantically similar and randomly selected samples. Also, prompts that only include demonstrations (_i.e.,_ no instruction) show a BLEU score that is comparable to prompts that include instruction and demonstrations. tasks is non-trivial. Despite using good demos, a bad instruction can negatively impact the quality of the generated paraphrase. In Figure 3, using the text-davinci-003 model on the APPDIA dataset, we see that certain instructions can result in lower BLEU than prompts that have no instruction. Similarly, in Figure 4, the Vicuna-13b model shows better BLEU with just the curated demonstrations on the APPDIA and ParaDetox datasets. ### Comparison with Supervised Approaches We compare our ICL-based approach to prior state-of-the-art supervised baselines. For APPDIA we use BART, T5, DialoGPT, and PDTB+RST methods as done in (Atwell et al., 2022); for ParaDetox we use BART as done in (Logacheva et al., 2022); and for CAPP we fine-tuned BART-base and T5-base on the training set. We obtained the generated paraphrases for APPDIA and ParaDetox by their authors. We used the default hyperparameters defined in the Transformers Seq2SeqTrainer for fine-tuning on CAPP. The comparison between our ICL-based approaches and prior baselines is shown in Table 2. The objective of any paraphraser should be to score high on generation quality and have a low Toxicity in the generated paraphrases. For APPDIA and ParaDetox, the BART and T5 models perform better than our ICL-based approach on generation quality. However, the paraphrases generated by these baselines seem to retain a significant amount of the original toxicity. To better understand this issue, we use the Toxicity for the In-offensive Gold-Standard in each dataset as a point of reference. Ideally, a paraphraser should generate paraphrases whose average Toxicity is no greater than this reference. We observe that all baseline methods except DialoGPT show a higher Toxicity, while the ICL-based methods exhibit Toxicity that is lower or on par with that of the Gold-Standard. Our approach offers a better trade-off between generation quality and Toxicity. Figure 5 illustrates the Toxicity measured for the different ICL-based methods by varying the number of demonstrations. 
The _Most Similar_ (_Descending Order_) strategy was used to select and organize \begin{table} \begin{tabular}{|c||c||c|c|c|c||c||c|} \hline **Dataset** & **Method** & **BLEU\(\uparrow\)** & **BERT-F1\(\uparrow\)** & **ROUGE\(\uparrow\)** & **CIDEr\(\uparrow\)** & **Toxicity\(\downarrow\)** & **Quality\(\uparrow\)** \\ \hline \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & _Offensive Test Set_ & - & - & - & - & 75.60 & - \\ \cline{2-8} & _Inoffensive Gold-Standard_ & - & - & - & - & 14.37 & 3.68\(\pm\)0.93 \\ \cline{2-8} & BART (Atwell et al., 2022) & 65.0 & 68.1 & 65.6 & 4.77 & 25.91 & 3.42\(\pm\)1.08 \\ \cline{2-8} & T5 (Atwell et al., 2022) & 65.3 & 69.2 & 66.5 & 4.75 & 20.15 & - \\ \cline{2-8} & DialoGPT (Atwell et al., 2022) & 42.3 & 46.7 & 38.0 & 1.11 & 14.51 & 3.52\(\pm\)0.93 \\ \cline{2-8} & PDTB+RST (Atwell et al., 2022) & 46.2 & 50.7 & 42.5 & 1.54 & 16.39 & - \\ \cline{2-8} & text-davinci-003 (10 Demos) & 56.8 & 63.6 & 57.6 & 3.70 & **11.64** & **3.98\(\pm\)1.05** \\ \cline{2-8} & text-davinci-003 (40 Demos) & 60.9 & 66.7 & 62.9 & 4.29 & **12.67** & **3.77\(\pm\)1.08** \\ \cline{2-8} & gpt-3.5-turbo (10 Demos) & 45.8 & 53.3 & 41.6 & 2.12 & **7.00** & **4.24\(\pm\)0.91** \\ \cline{2-8} & gpt-3.5-turbo (40 Demos) & 50.4 & 58.2 & 47.6 & 2.67 & **10.08** & **4.11\(\pm\)1.00** \\ \cline{2-8} & Vicuna-13b (4 Demos) & 38.2 & 46.8 & 34.9 & 1.41 & **12.07** & **3.87\(\pm\)1.00** \\ \cline{2-8} & Vicuna-13b (10 Demos) & 40.3 & 48.0 & 37.6 & 1.79 & **18.44** & **3.91\(\pm\)1.07** \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & _Offensive Test Set_ & - & - & - & - & 88.64 & - \\ \cline{2-8} & _Inoffensive Gold-Standard_ & - & - & - & - & 6.56 & 3.77\(\pm\)0.97 \\ \cline{2-8} & BART (Logacheva et al., 2022) & 77.3 & 76.2 & 69.8 & 4.94 & 82.00 & 2.82\(\pm\)0.75 \\ \cline{2-8} & text-davinci-003 (10 Demos) & 68.2 & 67.7 & 58.9 & 3.67 & **6.50** & **4.34\(\pm\)0.91** \\ \cline{2-8} & text-davinci-003 (40 Demos) & 70.1 & 69.3 & 60.4 & 3.95 & **6.21** & **4.22\(\pm\)0.96** \\ \cline{2-8} & gpt-3.5-turbo (10 Demos) & 60.3 & 62.0 & 50.5 & 2.72 & **5.71** & **3.90\(\pm\)1.01** \\ \cline{2-8} & gpt-3.5-turbo (40 Demos) & 64.3 & 65.1 & 54.1 & 3.08 & **6.20** & **3.92\(\pm\)1.02** \\ \cline{2-8} & Vicuna-13b (4 Demos) & 49.3 & 54.1 & 41.1 & 1.78 & **7.23** & **4.00\(\pm\)0.99** \\ \cline{2-8} & Vicuna-13b (10 Demos) & 52.8 & 56.7 & 43.7 & 2.05 & **9.98** & **4.53\(\pm\)0.84** \\ \hline \multirow{8}{*}{ \begin{tabular}{} \end{tabular} } & _Offensive Test Set_ & - & - & - & 25.87 & - \\ \cline{2-8} & _Inoffensive Gold-Standard_ & - & - & - & - & 0.94 & 4.38\(\pm\)0.83 \\ \cline{2-8} & BART & 38.5 & 48.3 & 36.3 & 1.86 & 3.54 & 3.78\(\pm\)0.87 \\ \cline{2-8} & T5 & 39.4 & 50.2 & 37.9 & 1.92 & 2.63 & 3.84\(\pm\)0.87 \\ \cline{2-8} & text-davinci-003 (10 Demos) & 40.6 & 49.6 & 36.1 & 1.73 & **1.04** & **4.03\(\pm\)0.96** \\ \cline{2-8} & text-davinci-003 (40 Demos) & 44.5 & 53.2 & 40.7 & 2.10 & **1.09** & **4.10\(\pm\)0.94** \\ \cline{2-8} & gpt-3.5-turbo (10 Demos) & 43.7 & 51.9 & 39.6 & 2.00 & **0.82** & **4.44\(\pm\)0.81** \\ \cline{2-8} & gpt-3.5-turbo (40 Demos) & 47.1 & 55.0 & 43.0 & 2.33 & **0.72** & **4.58\(\pm\)0.76** \\ \cline{2-8} & Vicuna-13b (4 Demos) & 35.8 & 42.4 & 31.3 & 1.34 & **1.04** & **4.36\(\pm\)0.78** \\ \cline{2-8} & Vicuna-13b (10 Demos) & 37.5 & 35.9 & 33.6 & 1.55 & **1.02** & **4.21\(\pm\)0.88** \\ \hline \end{tabular} \end{table} Table 2: Quantitative and qualitative assessment of different LLMs using the ICL paradigm and comparison against different 
baseline supervised approaches. Toxicity of the offensive test set and inoffensive ground-truth paraphrases is also provided. Differences in the reported Mean\(\pm\)Std Quality scores between each ICL-based approach and the different baselines are significantly different (_i.e.,_\(P-\mathrm{value}<0.05\)). the demos. Figure 5 also displays the measured Toxicity of the Offensive Test Set, Gold-Standard and the different baseline approaches. Note that B1, B2, B3, B4 refer to the BART, T5, DialoGPT and PDTB+RST models respectively. For APPDIA and ParaDetox we observe that LLMs without any demonstrations show much lower Toxicity than when any demonstration is used. The reverse is observed for the CAPP dataset. The absence of demos causes LLMs to fall back on their own task definition, which results in paraphrases with Toxicity significantly different from that of the Gold-Standard. However, LLMs without any demos show a dramatically lower Toxicity score, but also exhibit a lower BLEU score, as seen before in Figure 2. A balance between the main instruction and demos can ensure generation of paraphrases that reduce offensiveness and score high on different automated metrics. ### Additional Dialogue Context Helps We show preliminary results of using the prior two utterances as additional context in our ICL-based method. Similar to the example in Section 2.1, we prepend the context for both the demo and the query. Figure 6 shows the BLEU score as we add/remove context and vary the number of demonstrations. We clearly see performance improvement by incorporating dialogue context using the text-davinci-003 and gpt-3.5-turbo models. We were unable to create a prompt for Vicuna-13b that successfully uses additional context in the ICL framework. We will develop such prompts for Vicuna-13b in future work. ### Robustness to Reduced Training Data Here we study the impact of available training data on the performance of our best-performing strategy, _i.e.,_ _"Most Similar (Descending Order)"_. We observe only a minimal fall in BLEU down to 10% of the training data, as shown for text-davinci-003 in Figure 7 (refer to Appendix D.3, Figure 10 for other models). Further reducing the training data results in a noticeable drop in BLEU. We also find that reducing the training data below 10% results in a BLEU score similar to that of the _"Random"_ demo selection and arrangement strategy with access to 100% of the training data. That result shows that our ICL-based method can work with limited training data and thus can be adapted quickly to novel settings. ### Manual Qualitative Assessment We also perform quality assessment of the generated paraphrases using human annotators. We select a subset of 150, 200, and 200 samples from the test sets of APPDIA, ParaDetox and CAPP respectively. For ParaDetox and CAPP, we use all
We use the scoring guidelines described in Appendix B, Tables 3 and 4, and the information about the generation model was not made available to the single annotator. Table 2 shows that the three ICL-based LLM models received a higher average score that is significantly different (_i.e.,_\(P\)-value \(<0.05\)) than the corresponding baseline methods. We also note that Vicuna-13b's qualitative score was better and in some cases comparable to text-davinci-003, despite having scored lower on metrics measuring generation quality. This shows that open-source LLMs are able to generate paraphrases comparable to closed-source LLMs as per human assessment and might be a viable alternative. ## 4 Related Work Our paper focuses on building usable paraphrasing systems by exploring the potential of LLMs with ICL. There has been significant interest in better understanding the capabilities of ICL, but for other applications (Min et al., 2021; Zhao et al., 2021; Razeghi et al., 2022; Xie et al., 2021; Lampinen et al., 2022; Mishra et al., 2021; Chen et al., 2021; Min et al., 2021; Chen et al., 2023). (Lu et al., 2021) showed that order of demonstrations has a significant impact on model performance. (Liu et al., 2021) showed that retrieving demonstrations that are semantically similar to the query can be a more effective approach to control the variability in performance. (Rubin et al., 2021) learned an encoding scheme to retrieve better demonstrations for ICL. Other works also explored the influence of number of demonstrations in different settings (Garg et al., 2022; Min et al., 2022; Wei et al., 2023). (Zhou et al., 2022) evaluated the importance of each part in the prompt has towards the final performance. In this paper we study the impact of various components on the final performance, while ensuring that the toxicity of the outputs is within tolerable levels. This enables us to propose a few-shot solution to offensive content paraphrasing. Most prior works (Atwell et al., 2022; Logacheva et al., 2022) have modeled paraphrasing as a sequence-to-sequence problem and trained models such as T5, BART on human annotated data. Despite good generation results, these models tend towards higher toxicity and are difficult to adapt to new applications without collecting more data. Our solution addresses those challenges successfully, with only a fraction of the original training set. ## 5 Conclusion In this paper, we focus on developing usable offensive content paraphrasing systems by leveraging generalization capabilities of LLMs and quickly adapting them to new tasks using ICL. A paraphraser should generate qualitatively good paraphrases that preserve the original content's meaning, while also minimizing toxicity. Focusing only on one of these aspects compromises overall usability. Compared to supervised approaches that require lot of training data and often produce undesired yet coherent paraphrases, our ICL-based framework is generally comparable on various evaluation metrics like BLEU, but is qualitatively better and helps significantly reduce toxicity in the generated paraphrases. Through systematic experiments we tested the capabilities and limitations of ICL-based offensive paraphrasers. 
Other key highlights of using our ICL framework include: (1) Selection and arrangement of demos significantly impact the quality of paraphrases; (2) Measured toxicity is lowest when only the instruction is used and highest when only demos are used. Combining both instruction and demos helps ensure quality and usability of generated paraphrases; (3) Robust to limited data, _i.e.,_ with just 10% training data we only see a slight decrease in overall performance, thereby enabling us to easily scale and deploy. Figure 6: Comparison of BLEU between including and excluding prior context in the form of the prior two utterances for both OpenAI models. Figure 7: BLEU for the _"Most Similar (Descending Order)"_ approach as a function of percentage of training data available and comparison to Random demo selection with access to 100% of the training data. ## Acknowledgements This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001122C0032. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views or policies of DARPA, the Department of Defense or the U.S. Government. ## Limitations Here, we list the limitations identified in this paper: 1. We found that ICL fails on datasets that were prepared using the same LLMs used in the ICL framework. Since we used the gpt-3.5-turbo model to create polite paraphrases for our CAPP dataset, we were unable to see the same observations in the results that we did using the other models and datasets. This could have been avoided by creating manually annotated polite paraphrases. However, manual annotation is a laborious process and isn't scalable. Hence, a manual qualitative assessment was done on a small subset of the final CAPP dataset to ensure usability of the generated paraphrases. 2. Prompt engineering for the Vicuna-13b model with the ICL framework is nontrivial. We found it difficult to create main instructions in the prompt that result in the Vicuna-13b model behaving in a desired way. Also, unlike the two OpenAI models, the number of demonstrations that can be effectively passed into Vicuna-13b is quite limited. In some cases we were able to concatenate more than 10 demos to the prompt but it often resulted in generating incomprehensible outputs. 3. The _No Instruction_ prompt explored in the paper resulted in paraphrases that are comparable to prompts that include both instruction and demos, on several automated evaluation metrics. However, we notice that the _No Instruction_ setting also retains a significant amount of toxicity from the original content. We propose that in situations where it is difficult to decide on a good main instruction, one could simply use a few carefully curated and ordered demos, as in the "Most Similar (Descending Order)" approach, to generate paraphrases and check whether they fall within the desired toxicity levels. 4. Our experimental results indicate that there is no single prompt that works in all situations. One must carefully balance the main instruction and the set of demos from the training corpus to get the desired paraphrase outputs. 5. We showed preliminary results showcasing the benefit of incorporating additional context in the form of the prior two utterances in the ICL framework. We believe there can be better ways to incorporate this contextual information and further improve the performance of LLMs. 6. The closed-source OpenAI models are more powerful and faster, but also more expensive to use.
Despite open-source models like Vicuna-13b coming close to OpenAI models on other tasks, they still have a long way to go for offensive content paraphrasing. ## Ethics Statement We have to take great care with our collection of offensive content to protect privacy. We have to ensure judicious use of the collected data to protect the vulnerable against such speech. We recognize that our models cannot entirely eliminate offensive content from a given text. Additionally, we acknowledge that utilizing pretrained models may introduce biases in specific situations, as studies have revealed that pretrained models can be influenced by biases present in the data used for their initial training. We have to continue research on making sure that the LLM's do not hallucinate and end up injecting toxicity since we don't know what they have been trained on. There is a danger of this kind of technology being used in reverse, i.e., take harmless content and paraphrase to inject toxicity. We realize that ethics is an ongoing challenge. We are engaged with the Fairness, Accountability and Transparency community and are learning to address key ethics issues on an ongoing basis.
2305.05802
Opportunistic Mutual Exclusion
Mutual exclusion is an important problem in the context of shared resource usage, where only one process can be using the shared resource at any given time. A mutual exclusion protocol that does not use information on the duration for which each process uses the resource can lead to sub-optimal utilization times. We consider a simple two-process mutual exclusion problem with a central server that provides access to the shared resource. We show that even in the absence of a clock, under certain conditions, the server can opportunistically grant early access to a client based on timing information. We call our new protocol opportunistic mutual exclusion. Our approach requires an extra request signal on each channel between client and server to convey extra information, and the server can grant early access based only on the order of events rather than through measuring time. We derive the handshaking specification and production rules for our protocol, and report on the energy and delay of the circuits in a 65nm process.
Karthi Srinivasan, Yoram Moses, Rajit Manohar
2023-05-09T23:22:45Z
http://arxiv.org/abs/2305.05802v1
# Opportunistic Mutual Exclusion ###### Abstract Mutual exclusion is an important problem in the context of shared resource usage, where only one process can be using the shared resource at any given time. A mutual exclusion protocol that does not use information on the duration for which each process uses the resource can lead to sub-optimal utilization times. We consider a simple two-process mutual exclusion problem with a central server that provides access to the shared resource. We show that even in the absence of a clock, under certain conditions, the server can opportunistically grant early access to a client based on timing information. We call our new protocol opportunistic mutual exclusion. Our approach requires an extra request signal on each channel between client and server to convey extra information, and the server can grant early access based only on the order of events rather than through measuring time. We derive the handshaking specification and production rules for our protocol, and report on the energy and delay of the circuits in a 65nm process. Mutual exclusion, arbitration, timing, asynchronous ## I Introduction The mutual exclusion problem - guaranteeing mutually exclusive access to a certain shared process among a number of other competing processes that each request for access - has been known for decades, and several algorithms have been proposed to solve this under various models [1, 2, 3]. Conventionally, the mutual exclusion problem is solved by instantiating a central server that holds a token which grants access to the shared resource. When any of the clients request for use, the server hands out the token to the client, which then uses the resource and returns the token to the server once it is done. This process then repeats. Since only one client may hold the token at any time, mutual exclusion is guaranteed. There also exist distributed solutions to this problem--for example using rings [4, 5], or trees of processes [6]. Generally, no assumptions on the behavior of the requesting processes is assumed in the design of the mutual exclusion server that handles the whole system. However, in the context of asynchronous circuits, timing analysis and timing simulations are used to determine performance during the design flow [7, 8]. These measurements can also inform efforts to further optimize the design. In this article, we look at an opportunity to optimize the mutual exclusion process, based on knowledge of the timing behavior of the requesting processes, and present a novel technique to achieve the same. We look at the simple case with a single token server and two clients making requests to use a shared resource. We show that if certain bounds on the timing behavior of the clients are known by the server beforehand, then it can, without an internal clock and using only the ordering of signal arrivals from the clients, pre-emptively grant access to one client while another is still using, and thus reduce the idle time of the shared resource, potentially by an unbounded amount. In the following sections, we detail the kinds of timing constraints that we use in the design of the opportunistic mutual exclusion circuit, and show how these particular constraints can be incorporated into the circuit itself. We then derive a straightforward implementation of the circuit that uses three arbiters. Next, we show that there is an alternate implementation which seemingly uses two arbiters but can in fact be reduced to a single arbiter. 
We conclude with SPICE simulation results and a discussion of potential uses of this circuit. ## II Timing Forks and Zigzags In circuit design, there are ways to infer ordering of pairs of events using a common event that caused them both. This is particularly interesting in situations where it is possible to infer the time ordering of two events on two different processes despite there existing no actual communication between the two processes.These are known as point of divergence constraints. A common example of this is the setup time constraint in synchronous logic design, where the data must be valid at least a certain time before the clock edge arrives. A simple model of such a point of divergence constraint, known as a timing fork, is shown in Fig. 1. Suppose A, B and C are three processes such that an event in B, \(e_{1}\) causes, directly or indirectly, events \(e_{3}\) (called the head) and \(e_{2}\) (called the tail) on A and C respectively. Let the time of occurrence of event \(e_{i}\) be \(t_{i}\). Assume that the delay between \(t_{3}\) and \(t_{1}\) is \(d_{A}\), and that between \(t_{2}\) and \(t_{1}\) is \(d_{C}\), both of which can fall anywhere within a certain range. Now, if \(d_{A}\) is always larger than \(d_{C}\), we can conclude that: \[W =\inf(d_{A}-d_{C})\] \[=\inf((t_{3}-t_{1})-(t_{2}-t_{1}))\] \[=\inf(t_{3})-\sup(t_{2})>0\] This kind of constraint regarding the relative time difference of two event occurrence times is called a timing fork, and the quantity \(W\), which measures the minimum time separation between the two events, is referred to as the weight of the fork. Note that the'minimum separation' intuition only holds for positive-weight forks. If the weight of the fork is negative, then the definition does not change, but the intuition about the quantity it captures switches to being a'maximum separation'. In addition, the three processes do not have to be distinct. There can be a degenerate case where A and B, or C and B, are the same process. The constraint still holds as long the occurrence times still behave the same way. Note that the actual times can occur over a range, but the weight of the fork is calculated based on the upper and lower bounds of these occurrence times. Now, we can extend this concept by using multiple timing forks as follows. Consider the case shown in Fig. 2, where A, B and C are three different processes. We want to make an absolute decision on the ordering of events \(e_{5}\) and \(e_{6}\), without information about the actual ordering of the events that caused them, \(e_{1}\) and \(e_{2}\). Suppose \(e_{1}\) in process A causes two events \(e_{3}\) and \(e_{5}\), which can each occur over a range of times such that \[t_{3}-t_{5}\geq-W_{1}\quad(W_{1}>0)\] thereby forming a timing fork. In other words, the minimum separation between the occurrence times of the two daughter events is lower bounded by a number: \[\inf(t_{3})-\sup(t_{5})=-W_{1}\] Similarly, for process C: \[t_{6}-t_{4}\geq W_{2}>0,\text{where}\] \[\inf(t_{6})-\sup(t_{4})=W_{2}\] Note that nothing about this formulation requires events \(e_{1}\) and \(e_{5}\) to be on the same process, only that \(e_{5}\) is caused by \(e_{1}\), and hence occurs at a later time. Fig. 2 is depicted this way to tie in better with later sections. The same holds for \(e_{2}\) and \(e_{6}\). 
Now, in the particular case where \(e_{4}\) occurs after \(e_{3}\) (\(t_{4}-t_{3}\geq 0\)) and \(W_{2}-W_{1}>0\), we can determine an ordering of the events \(e_{5}\) and \(e_{6}\) as follows:

\[(t_{6}-t_{5})-(t_{4}-t_{3})\geq W_{2}-W_{1}\]
\[\implies(t_{6}-t_{5})\geq W_{2}-W_{1}\]
\[\implies t_{6}-t_{5}>0\]

This type of timing constraint, where combining multiple timing forks appropriately allows an ordering to be determined on events that are not connected by a single point of divergence, is called a timing zigzag. The quantity \(W_{2}-W_{1}\) is referred to as the weight of the zigzag. The crucial point is that there is no common event, no single point of divergence, that determines the times of events \(e_{1}\) and \(e_{2}\); they are completely independent of each other. In other words, this case is fundamentally different from a simple timing fork and cannot be reduced to one. In fact, we did not even assume an ordering on the events \(e_{1}\) and \(e_{2}\) in the analysis above. Despite this, timing information that is not a simple "a-before-b" relation can be inferred from other timing information. Further, this method of combination can easily be extended to create a zigzag with any number of constituent timing forks. This can lead to significantly more detailed, higher-order information about the timing of sets of events that are seemingly unrelated. In the description that follows, we make, to the best of our knowledge, the first known use of zigzag causality [9, 10] in circuit design. In particular, we apply it to solve the classic mutual exclusion problem.

Fig. 1: Timing Fork. A simple example of a point of divergence constraint. Red boxes represent the windows of time during which an event may occur. Red dots represent a particular realization of the event.

Fig. 2: Timing Zigzag. Combining two timing forks, with ordering information on a pair of events, to infer the relative time of events that do not share the same point of divergence.

## III Opportunistic Mutual Exclusion

In the conventional mutual exclusion setup, there is a shared resource that needs to be used by at most one client at a time, clients which compete for the use of this resource, and a server, which holds a token that determines who is allowed to use the resource. The server hands out the token to one of the clients that made a request, making a decision arbitrarily. The client returns the token once it is done using the resource. When the server receives the token, it can make the next decision on which client the resource should be allocated to. This process repeats, possibly forever.

Consider the scenario shown in Fig. 3, where C1 and C2 are the two processes making requests to a server, S, in order to use a shared resource (not shown). In the most general case, we do not assume any a priori knowledge of when each process is going to request the resource, or stop using and release it. In the scenario we describe here, suppose the server had knowledge of the following:

1. **Early Release Time**: the time (\(t_{3}-t_{1}\)) before the actual cessation of use of the resource at which C1 informs the server.
2. **Pre-emption Time**: the time (\(t_{4}-t_{2}\)) before actually requiring the resource at which C2 sends a request to the server.
3. **Link Delay**: the delays on the wires between C1, C2 and S.
Once again, the definitions above are actually intervals, and when we say the server has this knowledge, we mean that it knows the bounds on these time intervals. With this information, S can calculate the upper bound, \(W_{1}\) on \(t_{3}-t_{re1}\) and the lower bound \(W_{2}\) on \(t_{4}-t_{r2}\). If \(W_{2}\geq W_{1}\), then there is something interesting that the server can do. We call this the asymmetric case, since the complementary bounds on the times when C2 releases early, and C1 requests preemptively are not known. If they are known, then we are in the symmetric case. Now, suppose C1 is using the resource and C2 places a request while the resource is still in use. If the request from C2 arrives before the early release, as shown in Fig. 3(a), then the server cannot ensure mutual exclusion. Since it has no internal clock, there is no measure of _how_ early the preemptive request arrived. So, the server must wait for the actual release from C1 to know that the resource is free, and only then grant approval to C2. The important physical difference in the channels, as shown in Fig. 4, is that C1 must have two request wires coming in to the server, the early (\(r_{e}\)) and actual (\(r_{a}\)), both of which are raised when requesting the resource. When releasing the resource, C1 lowers the early wire first, according to the early release time constraint above to signal that it is 'almost done'. Then, when it is finally done with using the resource, it lowers the actual wire as well. The interesting case is if the request from C2 arrives after the early release from C1. In this case, since we know that the zigzag has positive weight (\(W_{2}-W_{1}\geq 0\)), the server knows that even if it grants the approval immediately, the earliest time at which the resource will be used is later than the latest time at which the resource will be released. Hence, it grants the advance approval, as shown in Fig. 3(b). This results in a reduction of the time for which the resource is idle, as opposed to the usual scenario when the server must wait for explicit information about the end of use of the resource to reach it. Note that, for checking if this advance approval is legal, the server does not need to know the relative times between \(t_{r2}\) and \(t_{re1}\), only the order in which they occurred, voiding the necessity for an internal clock. We call this _opportunistic_ mutual exclusion, since the necessary ordering of events that needs to occur can only be known at runtime. If the interesting case does occur, the server can 'opportunistically' grant access to the other resource before revoking access from the first one, without violating the actual mutual exclusion constraint. The symmetric case is quite similar, with both channels needing the early and actual request wires, as in Fig. 5. For this to happen, we need two distinct zigzags to have positive weight. _The two zigzags are not inter-dependent._ One relates the early release time of C1 with the pre-emption time of C2, and the other relates the early release time of C2 with the pre-emption time of C1. In effect, each of C1 and C2 have two independent time variables that they can independently determine, which may result in zero or more zigzags having positive weight. In this case, the designer has additional freedom to decide what would count as a pre-emptive request. In the asymmetric case, C2 only had one request wire to raise in order to potentially receive the opportunistic grant. 
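To summarize the asymmetric rule in executable form, the following is a hypothetical Python sketch (the function and argument names are ours, not part of the circuit or its CHP). The static zigzag check \(W_{2}-W_{1}\geq 0\) is a design-time constant; at run time the server consults only the order in which signals have been observed, never an elapsed time.

```python
# Illustrative sketch (not the circuit itself) of the asymmetric server's decision rule.
# The static check W2 - W1 >= 0 is assumed to have been established at design time;
# at run time the server only looks at the *order* of events, never at elapsed time.

def may_grant_early(zigzag_weight_ok: bool,
                    c1_early_released: bool,
                    c1_actually_released: bool,
                    c2_requested: bool) -> bool:
    """Return True if the server may acknowledge C2 while C1 still holds the resource."""
    if not zigzag_weight_ok:
        return False          # no usable zigzag: fall back to ordinary mutual exclusion
    if not c2_requested:
        return False          # nothing to grant
    if c1_actually_released:
        return True           # resource already free: the ordinary case
    # The interesting case: C2's request arrived after C1's early release
    # but before C1's actual release.
    return c1_early_released

# Ordinary case: C2 asks before the early release, so the server must wait.
assert may_grant_early(True, c1_early_released=False,
                       c1_actually_released=False, c2_requested=True) is False
# Opportunistic case: the request lands between early and actual release.
assert may_grant_early(True, c1_early_released=True,
                       c1_actually_released=False, c2_requested=True) is True
```

The point of the sketch is that no measurement of time appears anywhere: only the Boolean record of which signals have already been seen.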
In the symmetric case, the server could require that a client raise both of its request wires to be considered for the opportunistic grant, or require only one. In the implementation described later, we assume the former. The circuit for both cases can be augmented with just a few gates so that the opportunistic mode can be turned on or off with a single bit. Once again, the behavior described above cannot be captured by a timing fork, since the behaviors of C1 and C2 are not coupled in any way, and a zigzag is needed in order to obtain any useful timing information.

## IV Three Arbiter Method

The straightforward implementation of the asymmetric case described in the previous section, requiring only one internal state variable, is shown below. Refer to Appendix I for details about the CHP notation.

\(x=0;\)
\(\ast[\,[\;C1.r_{e}\wedge C1.r_{a}\longrightarrow C1.a\!\uparrow\;\;[]\;\;x\lor C2.r\longrightarrow C2.a\!\uparrow\;]\,;\;\ldots\;]\)

The process begins by waiting for a request from either client and granting access through the assertion of an acknowledge signal, based on an arbitration. Since this is the asymmetric case, if C2 was granted access, there is nothing interesting to be done and the server just waits for the handshake to be completed and returns to the beginning. However, if C1 was granted access, then the server enters the second arbitration. Here, there are three cases: two standard and one interesting. The two standard ones are:

* C2 requests before the early release.
* C1 finishes using the resource before C2 even requests.

In the first case, we cannot exploit the zigzag since it is possible that mutual exclusion would be violated if approval were granted to C2. It is pertinent to remember that the server does not have any information about 'how early' C2 requested before the early release and thus cannot exploit the zigzag. In the second case, C1 completes before C2 requests, so there is nothing interesting to be done either. Finally, the interesting case, which the circuit is designed to exploit, occurs when:

* C2 requests between the early release and the actual release of C1.

In this case, the server grants approval to C2 and only then completes the handshake with C1, whenever C1 performs the actual release. The state variable \(x\) is used to bypass the first arbitration whenever approval was granted to C2 in this manner, so that the handshake can be completed.
Once this is done, the system is back in the quiescent state, and the process repeats. In this implementation, three non-deterministic selections are required, resulting in three arbiters. This can be quite expensive during realization. Next, we present an alternative method that reduces the number of arbiters. The handshaking expansion described next initially looks like it would require two arbiters, but we show that it can in fact be reduced to a single arbiter.

## V Single Arbiter Method

In the alternate implementation below, we require two state variables, but only two non-deterministic selections instead of three. The second process performs the initial arbitration. As before, granting approval to C2 first does not result in any interesting cases. If approval is granted to C1 and C2 requests too early, the handshake with C1 is completed before proceeding. The purpose of the first process is to complete the handshake with C1 in parallel with the other operations, whenever the state variable \(g\) is set. If C2 requests between the early and actual release, then the C1 handshake is allowed to complete in parallel with approving C2. In essence, this is a parallel form of the method described earlier with three arbiters.

Fig. 4: Opportunistic Mutual Exclusion Block - Asymmetric Case

Fig. 5: Opportunistic Mutual Exclusion Block - Symmetric Case

Fig. 3: Timing Zigzag. (a) The standard case, where the request from the second process arrives too early to exploit the timing zigzag. (b) The interesting case, where the request from the second process arrives between the early release and the actual release of the first process.

\(g=0;\)
\(\ast[\,[\,g\wedge\neg C1.r_{a}\wedge\neg C1.r_{e}\,]\,;\;C1.a\!\downarrow;\;g\!\downarrow\,]\;\parallel\;\ast[\,[\;\ldots\)

Fig. 6 shows the inputs to the circuits (requests) in black and the outputs generated by the circuit (acknowledges) in red.

Fig. 6: Waveforms from a SPICE simulation of the circuit for the asymmetric case. Requests, generated by the environment, are shown in black. Acknowledges, generated by the circuit, are shown in red. In the case to the left of the dotted line, both requests on C1 are asserted, followed by the corresponding acknowledge. Then, the early request of C1 is deasserted (top), and the C2 request (bottom) is asserted before the actual request of C1 (middle) is deasserted, resulting in the advance approval described above. In the case to the right of the dotted line, both requests on C1 are asserted, followed by the corresponding acknowledge. Then, before the early request of C1 can be deasserted, the C2 request is asserted. Now, the server must wait for C1 to fully complete before acknowledging C2. 133 fJ was consumed over one sequence of handshakes (30 ns - 45 ns).

The periods where the two acknowledges overlap are the latency reductions that are obtained by exploiting the timing zigzag. Though there is an added delay of about 600 ps in the asymmetric case, the expectation when using this circuit is that the interesting case, where the zigzag can be exploited, occurs frequently. The weight of the zigzag, \(W_{2}-W_{1}\), is actually unbounded, since it is determined by the C1 and C2 processes providing the server information about their usage time of the shared resource. This could be orders of magnitude larger than the delay of this circuit, which can result in significant latency benefits. Comparing the symmetric circuit to the baseline 330 ps delay, we see that \(\sim\)700 ps of additional delay is incurred.
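As a rough, back-of-the-envelope reading of these numbers, the following sketch compares the constant circuit overhead against the idle time recovered when the interesting ordering occurs. Only the 600 ps and 700 ps overheads are taken from the measurements above; the advertised zigzag weight and the fraction of handshakes in which the opportunistic case fires are invented placeholders.

```python
# Back-of-envelope sketch: does the extra delay of the opportunistic circuit pay off?
# The 600 ps / 700 ps figures are quoted from the evaluation above; the zigzag weight
# and the hit rate of the opportunistic case are invented for illustration.

EXTRA_DELAY_ASYM_PS = 600.0   # added delay of the asymmetric opportunistic circuit
EXTRA_DELAY_SYM_PS = 700.0    # added delay of the symmetric opportunistic circuit

def net_gain_ps(extra_delay_ps: float, zigzag_weight_ps: float, hit_rate: float) -> float:
    """Average idle time recovered per handshake, minus the constant circuit overhead.

    zigzag_weight_ps is taken here as roughly the idle time recovered whenever the
    opportunistic case fires; hit_rate is the fraction of handshakes in which it fires.
    """
    return hit_rate * zigzag_weight_ps - extra_delay_ps

# Example: the clients advertise a 5 ns zigzag weight and the interesting
# ordering occurs in 40% of handshakes.
print(net_gain_ps(EXTRA_DELAY_ASYM_PS, zigzag_weight_ps=5000.0, hit_rate=0.4))  # +1400 ps
print(net_gain_ps(EXTRA_DELAY_SYM_PS, zigzag_weight_ps=5000.0, hit_rate=0.4))   # +1300 ps
```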
These delays can be used to determine whether the overhead of the opportunistic mutual exclusion mechanism is offset by the gains obtained by exploiting the timing zigzag, using either the symmetric or the asymmetric case.

## VIII Discussion

The circuit described in this article is significantly more complex than a simple arbiter that handles mutual exclusion between two clients. However, in cases where certain timing information is known, the additional hardware cost is worth the reduction in idle time of the critical resource. The final reduction to the form that uses only one arbiter has one disadvantage that is not present in the cases with three/two arbiters. Decomposing the 4-way non-deterministic selection into a single arbiter is not without a cost. When using an arbiter with guards that are modified between one use and the next, there is a possibility of instability during the switching. To see this, consider the following: the switching of modes (change of guards) in the arbiter must be caused by a transition on some control variable which, in this case, is \(f\). Since it appears in more than one process, \(f\) is now a shared variable. In the conventional use of shared variables, there is communication between two processes to ensure that the two do not attempt to modify/read this variable at the same time. This ensures that the variable has the correct value when a process accesses it. However, in our circuit, once \(f\) is asserted, there is no possible transition in the arbiter that can be sent back to the main process to acknowledge the fact that the change of guard has been completed. The arbiter only has three possible output variables, \(u\), \(v\) and \(G\). But none of them have uniquely defined values at the end of the \(f\) transition, since they all depend on the values of the external variables directly. Hence, the main process _cannot_ know whether the change of guard has been successfully completed, without knowing the delay of the gates in the arbiter.
Through the use of additional state variables, this assumption can be reduced to requiring that a single inverter be faster than a few more complex gates, which is easily achievable in practice. Our evaluation in Section VII corresponds to a circuit that includes these additional state variables as part of the overhead reported. Finally, since we are already making use of timing constraints in order to even decide whether to use this circuit, this additional local constraint, which can be guaranteed by design, does not pose significant restriction. ## IX Conclusion In this article, we presented a novel method to exploit certain subtle timing constraints in order to design mutual exclusion servers. We showed that these servers can reduce the idle time of a shared resource by opportunistically granting access to more than one client at a time, based on knowledge of the expected time when the clients will stop and start using the resource. We described handshaking expansions for the processes, implemented the same in the TSMC 65nm node and evaluated the performance. Finally, we discussed places where using this circuit is actually warranted, its advantages and possible drawbacks. All of this demonstrates the value of using the notion of zigzag causality, originally introduced in [9], in circuit design. To our knowledge, this is the first work to do so. ## Acknowledgments This work was supported in part by DARPA IDEA grant FA8650-18-2-7850, and in part by DARPA POSH grant HR001117S0054-FP-042. Yoram Moses is the Israel Pollak academic chair at the Technion, and was supported in part by the BSF grant 2015820 which is coincident with NSF-BSF grant CCF 1617945.
2301.06624
TAAL: Test-time Augmentation for Active Learning in Medical Image Segmentation
Deep learning methods typically depend on the availability of labeled data, which is expensive and time-consuming to obtain. Active learning addresses such effort by prioritizing which samples are best to annotate in order to maximize the performance of the task model. While frameworks for active learning have been widely explored in the context of classification of natural images, they have been only sparsely used in medical image segmentation. The challenge resides in obtaining an uncertainty measure that reveals the best candidate data for annotation. This paper proposes Test-time Augmentation for Active Learning (TAAL), a novel semi-supervised active learning approach for segmentation that exploits the uncertainty information offered by data transformations. Our method applies cross-augmentation consistency during training and inference to both improve model learning in a semi-supervised fashion and identify the most relevant unlabeled samples to annotate next. In addition, our consistency loss uses a modified version of the JSD to further improve model performance. By relying on data transformations rather than on external modules or simple heuristics typically used in uncertainty-based strategies, TAAL emerges as a simple, yet powerful task-agnostic semi-supervised active learning approach applicable to the medical domain. Our results on a publicly-available dataset of cardiac images show that TAAL outperforms existing baseline methods in both fully-supervised and semi-supervised settings. Our implementation is publicly available on https://github.com/melinphd/TAAL.
Mélanie Gaillochet, Christian Desrosiers, Hervé Lombaert
2023-01-16T22:19:41Z
http://arxiv.org/abs/2301.06624v1
# TAAL: Test-time Augmentation for Active Learning in Medical Image Segmentation ###### Abstract Deep learning methods typically depend on the availability of labeled data, which is expensive and time-consuming to obtain. Active learning addresses such effort by prioritizing which samples are best to annotate in order to maximize the performance of the task model. While frameworks for active learning have been widely explored in the context of classification of natural images, they have been only sparsely used in medical image segmentation. The challenge resides in obtaining an uncertainty measure that reveals the best candidate data for annotation. This paper proposes Test-time Augmentation for Active Learning (TAAL), a novel semi-supervised active learning approach for segmentation that exploits the uncertainty information offered by data transformations. Our method applies cross-augmentation consistency during training and inference to both improve model learning in a semi-supervised fashion and identify the most relevant unlabeled samples to annotate next. In addition, our consistency loss uses a modified version of the JSD to further improve model performance. By relying on data transformations rather than on external modules or simple heuristics typically used in uncertainty-based strategies, TAAL emerges as a simple, yet powerful task-agnostic semi-supervised active learning approach applicable to the medical domain. Our results on a publicly-available dataset of cardiac images show that TAAL outperforms existing baseline methods in both fully-supervised and semi-supervised settings. Our implementation is publicly available on [https://github.com/melinphd/TAAL](https://github.com/melinphd/TAAL). ## 1 Introduction The performance of deep learning-based models improves as the number of labeled training samples increases. Yet, the burden of annotation limits the amount of data that can be labeled. One solution to that problem is offered by active learning (AL) [1]. Based on the hypothesis that all data samples have a different impact on training, active learning aims to find the best set of candidate samples to annotate in order to maximize the performance of the task model. In such context, medical image segmentation emerges as a remarkably relevant task for active learning. Indeed, medical images typically require prior expert knowledge for their analysis and annotation, an expensive and time-consuming task. Initial attempts have explored active learning in medical imaging [2], but their methodology either relied on simple uncertainty heuristics [3, 4] or required heavy computations during sampling [5, 6] or training [7]. **Deep active learning** Active learning has been extensively explored for the classification [8, 9, 10, 11, 12, 13] or segmentation [14, 15, 16] of natural images. Recent deep active learning approaches based on entropy [12] or ensembles [9] adapted traditional uncertainty-based AL strategies to deep learning models. Similarly, DBAL [10] combined measures such as entropy or mutual information with Monte-Carlo dropout to suggest which samples to annotate next. Core-set selection [11] aimed to find the best batch sampling strategy for CNNs in classification, but did not scale well to high-dimensional data. The use of auxiliary modules [13, 17, 18] has been similarly explored to improve AL sampling strategies. The loss prediction module of [13] measured model uncertainty with intermediate representations. 
Likewise, a VAE was used in VAAL [17] to learn the latent representation of the unlabeled dataset and distinguish between labeled and unlabeled samples. While these state-of-the-art methods have improved previous approaches, their dependence on auxiliary modules reduces their flexibility and increase the burden of hyperparameter tuning. **Semi-supervised AL** Semi-supervised learning (SSL) exploits the representations of unlabeled data to improve the performance of the task model. Since semi-supervised learning and active learning are closely connected, recent works in AL have attempted to combine both domains [12, 17, 18, 19]. For instance, CEAL [12] used pseudo-labeling of unlabeled samples to enhance the labeled set during training. VAAL [17] and TA-VAAL [18] employed a VAE to learn a latent representation of labeled and unlabeled data. The Mean Teacher framework of [19] combined a supervised loss on labeled data with an unsupervised loss on unlabeled data based on Temporal Output Discrepancy (TOD), evaluating the distance between the model's output at different gradient steps. The model used a variant of TOD at sampling time to identify the most uncertain samples to annotate. However, these semi-supervised AL methods solely focused on classification tasks or the segmentation of natural images in very large quantities, which is a different context than medical imaging. Another recent work comparable to ours combined AL and SSL via consistency regularization [20]. The consistency loss adopted during training employed MixMatch [21] and sample selection measured inconsistency across input perturbations. However, as opposed to our work, [20] kept the consistency loss used during training and the AL inconsistency metric used for sample selection independent of each other, and the latter was quantified through variance. Furthermore, the method was only validated on classification tasks. **Test-time augmentation** Data augmentation is a well-known regularization technique to improve generalization in low-data regimes. These augmentation techniques are particularly essential in medical imaging where datasets tend to be smaller than those of natural images. Yet most recent attempts in active learning do not exploit data augmentation during training [8, 6], or only use random horizontal flipping [17, 18]. Recent learning methods [22, 23] have also investigated the use of augmentation at test-time in order evaluate prediction uncertainty. Randomly augmented test images yield different model outputs. Combining these outputs can improve the overall predictions as well as generate uncertainty maps for these predictions. Uncertainty estimated through test-time augmentation was shown to be more reliable than model uncertainty measures such as test-time dropout or entropy of the output [23]. Motivated by the limitations of current active learning methods for medical image segmentation and the unused potential of active augmentation, this paper proposes a novel semi-supervised active learning strategy called Test-time Augmentation for Active Learning (TAAL). **Our contribution:** Our method leverages the uncertainty information provided by data augmentation during both training and test-time sample selection phases. More specifically, TAAL employs a cross-augmentation consistency loss both to train the model in a semi-supervised fashion _as well as_ to identify the most uncertain samples to annotate at the next cycle. TAAL comprises three key features: 1. 
a semi-supervised framework based on cross-augmentation consistency that exploits unlabeled samples during training and sampling; 2. a flexible task-agnostic sample selection strategy based on test-time augmentation; 3. a novel uncertainty measure based on a modified Jensen-Shannon divergence (JSD), which accounts for both cross-augmentation consistency and prediction entropy, and leads to improved performance.

## 2 Method

**Cross-augmentation consistency training** We consider a semi-supervised setting where we train a multi-class segmentation model \(f_{\theta}(\cdot)\), parameterized by \(\theta\), with \(N\) labeled samples and \(M\) unlabeled samples. We denote the labeled set as \(\mathcal{D}_{L}=\{(\mathbf{x}^{(j)},\mathbf{y}^{(j)})\}_{j=1}^{N}\) and the unlabeled set as \(\mathcal{D}_{U}=\{\mathbf{x}_{u}^{(j)}\}_{j=1}^{M}\), with data \(\mathbf{x},\mathbf{x}_{u}\in\mathbb{R}^{H\times W}\) and segmentation mask \(\mathbf{y}\in\mathbb{R}^{C\times H\times W}\) (\(C\) is the number of classes). The overall loss that we optimize, \(\mathcal{L}=\mathcal{L}_{s}+\lambda\mathcal{L}_{c}\), is a combination of a supervised segmentation loss \(\mathcal{L}_{s}\) and an unsupervised consistency loss \(\mathcal{L}_{c}\) weighted by a factor \(\lambda\). More explicitly, the objective is defined as

\[\mathcal{L}\,=\,\frac{1}{N}\sum_{j=1}^{N}\mathcal{L}_{s}\big(f_{\theta}(\mathbf{x}^{(j)}),\mathbf{y}^{(j)}\big)\,+\,\frac{\lambda}{M}\sum_{j=1}^{M}\mathcal{L}_{c}\big(f_{\theta}(\mathbf{x}_{u}^{(j)}),\Gamma\big), \tag{1}\]

where \(\Gamma\) are the transformations applied to \(\mathbf{x}_{u}^{(j)}\). At each iteration, we apply a series of random transformations \(\{\Gamma_{1},...,\Gamma_{K}\}\) to \(\mathbf{x}_{u}\). \(\mathcal{L}_{c}\) measures the variability of the segmentation predictions for different augmentations of \(\mathbf{x}_{u}\), as measured by a function \(\mathcal{D}iv\):

\[\mathcal{L}_{c}\big(f_{\theta}(\mathbf{x}_{u}^{(j)}),\Gamma\big)\,=\,\mathcal{D}iv\big\{\Gamma_{1}^{-1}[f_{\theta}(\Gamma_{1}(\mathbf{x}_{u}^{(j)}))],\,...\,,\Gamma_{K}^{-1}[f_{\theta}(\Gamma_{K}(\mathbf{x}_{u}^{(j)}))]\big\}. \tag{2}\]

While different measures can be used for \(\mathcal{D}iv\) [24], our consistency loss builds on the Jensen-Shannon divergence (JSD),

\[\mathrm{JSD}(P_{1},...,P_{K})\,=\,H\big(\frac{1}{K}\sum_{i=1}^{K}P_{i}\big)\,-\,\frac{1}{K}\sum_{i=1}^{K}H(P_{i}), \tag{3}\]

where \(H(P_{i})\) is the Shannon entropy [25] of the probability distribution \(P_{i}\). Minimizing the JSD reduces the entropy of the average prediction (making the predictions more similar to each other) while increasing the average of individual prediction entropies (ensuring confident predictions). In AL, we typically want to select samples which have a high output entropy [12]. Selecting samples with the highest JSD would thus have the opposite effect. To avoid this issue, and to control the relative importance of the average prediction entropy versus the entropy of individual predictions, we propose a weighted version of the JSD with parameter \(\alpha\):

\[\mathrm{JSD}_{\alpha}(P_{1},...,P_{K})\,=\,\alpha H\big(\frac{1}{K}\sum_{i=1}^{K}P_{i}\big)\,-\,\frac{(1\!-\!\alpha)}{K}\sum_{i=1}^{K}H(P_{i}). \tag{4}\]

Note that using \(\alpha=0.5\) is equivalent to using the standard JSD.

**Test-time augmentation sampling** In active learning, the goal is to select the best unlabeled samples to annotate after each training cycle to augment the next labeled training set.
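Both the consistency loss above and the sampling score below reduce to the weighted divergence of Eq. (4). In NumPy terms it amounts to the following computation; this is a minimal sketch with our own function names, not an excerpt of the released TAAL code.

```python
# Minimal NumPy sketch of the weighted JSD of Eq. (4); names are ours, not TAAL's code.
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Pixel-wise Shannon entropy of a class-probability map p with shape (C, H, W)."""
    return -np.sum(p * np.log(p + eps), axis=0)          # (H, W)

def weighted_jsd(probs, alpha=0.5):
    """JSD_alpha of Eq. (4) for probs of shape (K, C, H, W), averaged over pixels.

    alpha = 0.5 recovers the standard JSD up to a constant factor.
    """
    mean_p = probs.mean(axis=0)                          # (C, H, W), average prediction
    h_mean = shannon_entropy(mean_p)                     # entropy of the average prediction
    mean_h = np.mean([shannon_entropy(p) for p in probs], axis=0)  # average of entropies
    return float(np.mean(alpha * h_mean - (1.0 - alpha) * mean_h))

# Toy example: K=3 prediction maps over C=4 classes on an 8x8 image.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 8, 8))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax over classes
print(weighted_jsd(probs, alpha=0.75))
```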
Hence, after each cycle, we apply our active learning strategy based on test-time augmentation to select the next samples to annotate. For each sample \(\mathbf{x}_{u}\in\mathcal{D}_{U}\), we apply a series of transformations \(\{\Gamma^{\prime}_{1},\ldots,\Gamma^{\prime}_{K_{s}}\}\), and we compute an uncertainty score \(U_{\Gamma^{\prime}}\) based on the same divergence function as the consistency loss:

\[U_{\Gamma^{\prime}}\,=\,\mathrm{JSD}_{\alpha}\big({\Gamma^{\prime}_{1}}^{-1}[f_{\theta}(\Gamma^{\prime}_{1}(\mathbf{x}_{u}))],\,...\,,{\Gamma^{\prime}_{K_{s}}}^{-1}[f_{\theta}(\Gamma^{\prime}_{K_{s}}(\mathbf{x}_{u}))]\big). \tag{5}\]

The samples with the highest uncertainty are annotated and added to the labeled training set. After sample selection, the model goes through a new training cycle.

## 3 Experiments and results

### 3.1 Implementation details

#### 3.1.1 Dataset

The publicly available ACDC dataset [26] comprises cardiac 3D cine-MRI scans from 100 patients. These are evenly distributed into 5 groups (4 pathological groups and 1 healthy-subjects group). Segmentation masks identify 4 regions of interest: right-ventricle cavity, left-ventricle cavity, myocardium and background. For comparative purposes, our experiments focus on the MRI scans at the end of diastole. Preprocessing of the volumes includes resampling to a fixed \(1.0\,\mathrm{mm}\times 1.0\,\mathrm{mm}\) resolution in the x- and y-directions as well as a \(99^{th}\) percentile normalization. The 3-dimensional dataset of volumes is converted to a 2-dimensional dataset of images by extracting all the z-axis slices for each volume. Each image is downsampled to \(128\times 128\) pixels. Testing is performed on 181 images taken from 20 different patients, ensuring subjects are not split up across training and testing sets. Validation uses 100 randomly selected images; the same validation set is used for all experiments. In total, the available training set, both labeled and unlabeled, thus comprises 660 images.

#### 3.1.2 Implementation and training

We employ a standard 4-layer UNet [27] for our backbone segmentation model with dropout (\(\mathrm{p}=0.5\)), batch normalization and a leaky ReLU activation function. For a fairer comparison in our experiments, we keep the number of training steps fixed during all cycles. We train our models for 75 epochs, each epoch iterating over 250 batches, with batch size \(BS=4\). We use the Adam optimizer [28], with learning rate \(LR=10^{-6}\) and weight decay \(w=10^{-4}\). To improve convergence, we apply a gradual warmup with a cosine annealing scheduler [29, 30], increasing the learning rate by a factor of 200 during the first 10 epochs. During training, we apply data augmentation, using transformations similar to those utilized for the consistency loss. In this work, we model the transformations \(\Gamma\) as a combination of \(f\), \(r\) and \(\epsilon\), where \(f\) is the random variable for flipping the image along the horizontal axis, \(r\) is the number of \(90^{\circ}\) rotations in 2D, and \(\epsilon\) models Gaussian noise. We set \(f\sim\mathcal{U}(0,1)\), \(r\sim\mathcal{U}(0,3)\) and \(\epsilon\sim\mathcal{N}(0,0.01)\), and use \(K=3\) transformations to compute the consistency loss during training. We use the standard Dice loss as our supervised loss. In the semi-supervised case, following [31], we ramp up the unsupervised component weight using a Gaussian ramp-up curve such that \(\lambda=\exp(-5(1-t/t_{R})^{2})\), where \(t\) is the current epoch.
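The sampling step of Eq. (5), with the flip and rotation transforms just described, can be sketched as follows. This is again an illustrative sketch with assumed names, reusing the `weighted_jsd` helper above; `model` stands for any callable returning per-pixel class probabilities, and the additive Gaussian noise component of \(\Gamma^{\prime}\) is omitted here for simplicity.

```python
# Sketch (not the released TAAL code) of the TTA uncertainty score U of Eq. (5),
# reusing the weighted_jsd helper above. `model` maps an image of shape (H, W)
# to class probabilities of shape (C, H, W).
import numpy as np

def tta_uncertainty(model, image, alpha=0.75):
    preds = []
    for flip in (False, True):                  # 2 flips x 4 rotations = 8 transforms
        for k in range(4):
            x = np.flip(image, axis=-1) if flip else image
            x = np.rot90(x, k, axes=(-2, -1))
            p = model(x)                        # (C, H, W) probabilities
            p = np.rot90(p, -k, axes=(-2, -1))  # undo the rotation first ...
            if flip:
                p = np.flip(p, axis=-1)         # ... then undo the flip
            preds.append(p)
    return weighted_jsd(np.stack(preds), alpha=alpha)

def select_most_uncertain(model, unlabeled_images, budget=1, alpha=0.75):
    """Rank unlabeled images by U and return the indices of the `budget` highest."""
    scores = [tta_uncertainty(model, x, alpha) for x in unlabeled_images]
    return list(np.argsort(scores)[::-1][:budget])
```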
We use a ramp-up length \(t_{R}\) of 10 epochs, corresponding to the learning rate gradual warmup length. We repeat each experiment 5 times, each with a different seed determining different initialization of our model weights. For all experiments, the same initial labeled set is used for the first cycle. Experiments were run on NVIDIA PV100 GPU with CUDA 10.2 and Python 3.8.10. We implemented the methods using the PyTorch framework. #### 3.1.3 Evaluation metrics To evaluate the performance of the trained models, we employ the standard Dice similarity score, averaged over all non-background channels. We compute both the mean 3D Dice on test volumes and mean 2D Dice on the individual images from these volumes. We give the results as the mean Dice obtained over the repeated experiments. ### Active learning setup We begin each experiment with 10 labeled samples chosen uniformly at random in the training set and use a sampling budget of 1, meaning that we select one new sample to be labeled after each cycle. Following previous active learning validation settings [11], we retrain the model from scratch after each annotation cycle. We use the same types of augmentations during training and sample selection. For test-time augmentation (TTA) sampling, \(\{\Gamma_{1}^{\prime},\ldots,\Gamma_{K_{s}}^{\prime}\}\) comprises all 8 combinations of flip and rotation augmentations, in order to apply similar transformations to all images, and adopts the same augmentation Gaussian noise parameters as for training. For comparative purposes, with dropout-based sampling, we also run 8 inferences with dropout to obtain different predictions. Both TTA and dropout-based sampling then evaluate uncertainty with \(U_{\Gamma^{\prime}}\) computed on the different generated predictions. We set \(\alpha=0.75\) in TAAL's weighted JSD. ### Comparison of active learning strategies Our aim is to evaluate the effectiveness of our proposed semi-supervised active learning approach on a medical image segmentation task. In our active learning experiments, we compare TAAL and its unweighted version (with standard JSD) with random sampling, entropy sampling, sampling based on dropout and core-set selection. Entropy-based sampling selects the most uncertain samples based on the entropy of the output probabilities. Dropout-based sampling [10] identifies the samples with the highest JSD given multiple inferences with dropout. Finally, core-set selection [11] aims to obtain the most diverse labeled set by solving the maximum cover-set problem. Figure 1: Active learning results on the ACDC dataset, given as the mean 3D Dice scores on the test set and corresponding 95% confidence interval. In a fully-supervised setting: random sampling (RS), core-set selection (Coreset), uncertainty-based sampling based on entropy of output probabilities (Entropy), and uncertainty-based sampling based on JSD given multiple inferences with dropout (Dropout). In a semi-supervised setting: random sampling (\(\text{Semi}+\text{RS}\)), TAAL with standard JSD (unweighted TAAL), and TAAL with weighted JSD (TAAL). Our approach TAAL demonstrates significant improvements for low-data regimes in both fully and semi-supervised segmentation. Figure 1 shows the segmentation performance of our proposed method with its 2 variants along with other existing active learning methods. TAAL consistently outperforms the other baselines by a large margin. 
We observe that our semi-supervised approach based on cross-augmentation consistency (\(\mathrm{Semi}+\mathrm{RS}\)) noticeably improves the fully-supervised vanilla model (RS). We notice that our unweighted version of TAAL (with standard JSD, \(\alpha\!=\!0.5\)) already improves the performance of the semi-supervised model (\(\mathrm{Semi}+\mathrm{RS}\)) by selecting the most uncertain samples based on their cross-augmentation consistency loss. With higher \(\alpha\!=\!0.75\), our proposed TAAL with weighted JSD yields the highest performance gain compared to the fully-supervised vanilla model with random sampling (RS). Figure 2: Examples of images sampled by TAAL at different AL cycles. Are depicted the image sampled (row 1), the ground-truth segmentation (row 2), the segmentation prediction (row 3), and the JSD map given the different predictions from the augmented image (row 4). We observe that TAAL initially selected images with a large amount of hallucinated inaccurate predictions. Figure 2 shows examples of images sampled by TAAL during the first 4 annotation cycles. TAAL initially selects image slices which show the apex of the heart. These samples are more difficult to learn in early stages since the areas to segment are much smaller than in the central slices of the heart and the image qualities are typically of lesser quality due to partial volume effects. Thus, we see that the choice of TAAL is first directed at samples yielding highly inaccurate predictions. The previous model has in fact even hallucinated multiple false segmentations for these samples as seen on the third row of subfigures 1(a) and 1(b). In the next cycles, TAAL selects more central cardiac slices, which have improved predictions when compared to the ground-truth annotations. Hence, TAAL seems to first focus on correcting inaccurate predictions, before sharpening its predictions on a fine-grained level for slices with more prominent areas to segment. Table 1 gathers the model's segmentation performance after 10 cycles in terms of mean 2D Dice and mean 3D Dice scores over whole test volumes. In the fully-supervised setting, test-time augmentation-based sampling (TTA) outperforms random sampling, core-set selection, entropy sampling and sampling based on dropout. Similarly, unweighted TAAL and TAAL outperform random sampling in both semi-supervised and fully-supervised settings. After labeling 10 extra samples, the mean 3D Dice score attains 89.06% with TAAL while only reaching respectively 87.40% and 88.48% with random sampling in fully- and semi-supervised settings. Similar results were observed with 2D Dice on test images. ## 4 Conclusion In this paper, we presented a simple, yet effective semi-supervised deep active learning approach for medical image segmentation. 
Our method, Test-time Augmentation for Active Learning (TAAL), employs a cross-augmentation consistency framework that produces both an improved training due to its unsupervised consistency loss, and a better sampling method through the uncertainty \begin{table} \begin{tabular}{c|c|c|c|c||c|c|c|c} \hline \multirow{2}{*}{Metric} & \multicolumn{4}{c||}{Fully} & \multicolumn{2}{c|}{Semi (\(\alpha=0.5\))} & Semi (\(\alpha=0.75\)) \\ \cline{2-9} & RS & Coreset & Entropy & Dropout & TTA & RS & unweighted TAAL & TAAL \\ \hline 2D Dice & 80.69 & 79.95 & 80.99 & 81.32 & 81.67 & 81.51 & 81.90 & **82.51** \\ 3D Dice & 87.40 & 86.65 & 88.07 & 88.24 & 88.48 & 88.48 & 88.50 & **89.06** \\ \hline \end{tabular} \end{table} Table 1: Active learning performances after doubling the number of initial labeled samples. We show the mean 2D and mean 3D Dice scores. ‘Fully’: Fully-supervised vanilla UNet. ‘Semi’: Proposed semi-supervised training with standard (\(\alpha=0.5\)) or weighted (\(\alpha=0.75\)) JSD. ‘RS’: Random sampling. ‘TTA’: Sampling with Test-time augmentation. ‘unweighted TAAL’: Our proposed method with standard JSD. ‘TAAL’: Our proposed method with weighted JSD, which finds the best candidate image to annotate. measure it provides. TAAL also uses a modified JSD that significantly improves the model's performance. Our results on the ACDC cardiac segmentation dataset show that, with TAAL, the trained model can reach up to 89.06% 3D Dice with 20 labeled samples when it only reaches 87.40% with random sampling. Because our approach exploits standard augmentation techniques already used in medical image segmentation tasks, TAAL emerges as a simple, yet efficient semi-supervised active learning strategy. While our method highly depends on the presence of disagreeing predictions for augmented inputs to identify the most informative samples, our observed improvements on a cardiac MRI dataset highlight promising avenues for future work, notably the investigation of more complex datasets and types of augmentations. **Acknowledgments** - This work is supported by the Canada Research Chair on Shape Analysis in Medical Imaging, and the Research Council of Canada (NSERC). Computational resources were partially provided by Compute Canada. The authors also thank the ACDC Challenge organizers for providing the data.
2308.15435
Hidden Relaxation Term in Approximate Treatments of Responses to Electric and Magnetic Fields
Recently a generalization of the ``\textit{modern theory of orbital magnetization}'' to include non-local Hamiltonians (e.g. hybrid functionals of the generalized Kohn-Sham theory) was provided for magnetic response properties. Results indicated inequivalence between sampling of direct and reciprocal spaces for those calculations far from the complete basis set limit. We show that this can be explained by a hidden ``relaxation'' contribution to the reciprocal-space derivatives. The missing relaxation term is shown to (generally) affect the results of calculations of not only magnetic, but also electric response properties, within the context of the ``\textit{modern theory of polarization}''. Necessary conditions are provided to permit avoiding the calculation of the hidden relaxation term.
Jacques K. Desmarais
2023-08-29T16:59:34Z
http://arxiv.org/abs/2308.15435v2
# Hidden Relaxation Term in Approximate Treatments of Responses to Electric and Magnetic Fields ###### Abstract Recently a generalization of the "_modern theory of orbital magnetization_" to include non-local Hamiltonians (e.g. hybrid functionals of the generalized Kohn-Sham theory) was provided for magnetic response properties. Results indicated inequivalence between sampling of direct and reciprocal spaces for those calculations far from the complete basis set limit. We show that this can be explained by a hidden "relaxation" contribution to the reciprocal-space derivatives. The missing relaxation term is shown to (generally) affect the results of calculations of not only magnetic, but also electric response properties, within the context of the "_modern theory of polarization_". Necessary conditions are provided to permit avoiding the calculation of the hidden relaxation term. Desmarais _et al._[1] provided a generalization of Ceresoli _et al._'s "_modern theory of orbital magnetization_" to include non-local Hamiltonians and applied the theory to the calculation of the optical rotatory power (OR) of periodic systems. The reported calculations on infinite chains of H\({}_{2}\)O\({}_{2}\), show visible differences in the calculated OR for: * the infinite periodic system versus the large finite system * the 3\(\times\) replicated supercell with 3 evenly spaced \(\mathbf{k}\) points versus the 9\(\times\) replicated supercell with 1 \(\mathbf{k}\) point In both cases i) and ii), the differences diminish as the calculation approaches the complete basis set limit. In the article, these differences (suggestive of a "non-periodic" formulation) were attributed to the gauge-origin dependence of the first-order magnetic Hamiltonian:[1] \[H_{\mathrm{mag}}^{\left(1\right)}=\frac{1}{2}\left(\mathbf{r}+i\boldsymbol{ \nabla}_{\boldsymbol{k}}\right)\wedge\mathbf{p}+\mathrm{H.c.}. \tag{1}\] Here we show that the differences are instead the result of an approximate treatment therein of the action of the \(\boldsymbol{\nabla}_{\boldsymbol{k}}\) operator. In fact, we show that the same differences i) and ii) between the infinite periodic vs. large finite system, as well as between uniform sampling of direct and reciprocal spaces (i.e. uniform sampling of \(\mathbf{k}\)-points vs. supercells) are not only obtained for magnetic properties from Eq. (1), but also electric properties, within a similar approximation to the action of \(\boldsymbol{\nabla}_{\boldsymbol{k}}\). That is, the same "non-periodic" behaviour is found for calculation of those properties employing the first-order electric Hamiltonian:[3; 4; 5; 6; 7; 8] \[H_{\mathrm{ele}}^{\left(1\right)}=\frac{1}{2}\left(\mathbf{r}+i\boldsymbol{ \nabla}_{\boldsymbol{k}}\right)+\mathrm{H.c.}. \tag{2}\] which coincides exactly with King-Smith, Vanderbilt and Resta's "_modern theory of polarization_".[9; 10; 11; 12] We begin by a review of the state of the art in application of the \(\boldsymbol{\nabla}_{\boldsymbol{k}}\) operator. The action of \(\boldsymbol{\nabla}_{\boldsymbol{k}}\) on Bloch orbitals built from atom-centered atomic orbitals (AOs) \(\left|\mu^{\boldsymbol{\mathrm{g}}}\right\rangle\): \[\left|\psi_{i}\left(\mathbf{k}\right)\right\rangle=\sum_{\mu}C_{\mu,i}\left( \mathbf{k}\right)\left|\phi_{\mu}\left(\mathbf{k}\right)\right\rangle=\sum_{ \mu}C_{\mu,i}\left(\mathbf{k}\right)\sum_{\mathbf{g}}e^{i\mathbf{k}\cdot \mathbf{g}}|\mu\mathbf{g}\rangle \tag{3}\] is trivial to apply on the \(\left|\phi_{\mu}\left(\mathbf{k}\right)\right\rangle\) part. 
The problem of calculating the derivative of the orbital coefficients \(\boldsymbol{\nabla}_{\boldsymbol{k}}C_{\mu,i}\left(\mathbf{k}\right)\) is more subtle. In general, an expansion

\[\boldsymbol{\nabla}_{\boldsymbol{k}}C_{\mu,i}\left(\mathbf{k}\right)=\sum_{l}^{\mathrm{all}}C_{\mu,l}\left(\mathbf{k}\right)Q_{l,i}\left(\mathbf{k}\right) \tag{4}\]

with (as of yet undetermined) coefficients \(Q_{l,i}\left(\mathbf{k}\right)\) provides a solution. To find the coefficients \(Q_{l,i}\left(\mathbf{k}\right)\), the derivative is typically applied to the Kohn-Sham (KS) single-particle equation \(\tilde{F}|\psi_{i}\left(\mathbf{k}\right)\rangle=\epsilon_{i,\mathbf{k}}|\psi_{i}\left(\mathbf{k}\right)\rangle\), yielding [5; 6]

\[Q_{i,l}\left(\mathbf{k}\right)=\frac{K_{i,l}\left(\mathbf{k}\right)-\epsilon_{l,\mathbf{k}}R_{i,l}\left(\mathbf{k}\right)}{\epsilon_{l,\mathbf{k}}-\epsilon_{i,\mathbf{k}}}\quad l\neq i, \tag{5a}\]

in which \(\mathbf{K}\) and \(\mathbf{R}\) are the derivatives of the KS Hamiltonian \(\mathbf{F}\) and basis-function overlap \(\mathbf{S}\) matrices at fixed orbital coefficients:

\[K_{i,l}\left(\mathbf{k}\right)\to i\sum_{\mathbf{g}}\mathbf{g}e^{i\mathbf{k}\cdot\mathbf{g}}F_{i,l}\left(\mathbf{g}\right) \tag{5b}\]

\[R_{i,l}\left(\mathbf{k}\right)=i\sum_{\mathbf{g}}\mathbf{g}e^{i\mathbf{k}\cdot\mathbf{g}}S_{i,l}\left(\mathbf{g}\right) \tag{5c}\]

Here we show that Eq. (5b) is, in fact, an approximation to the full \(K_{i,l}\left(\mathbf{k}\right)\) (i.e. the \(\rightarrow\) should be replaced by an approximate equal sign \(\approx\)), in that "orbital-relaxation" or "response" contributions to the derivatives are dropped. This approximation explains the discrepancies i) and ii) in the approximate calculation of responses to external electromagnetic fields.

To develop the exact treatment, let us now approach the problem of the analytical calculation of \(\mathbf{k}\)-space derivatives from the perspective of linear-response theory. We consider a small displacement \(\mathbf{h}\) away from \(\mathbf{k}\) and write the displaced KS single-particle equation, as well as the orthonormality condition:

\[\mathbf{F}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{C}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{S}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{C}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{\varepsilon}\left(\mathbf{k}+\mathbf{h}\right) \tag{6a}\]

\[\mathbf{C}^{\dagger}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{S}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{C}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{1} \tag{6b}\]

\[\mathbf{C}\left(\mathbf{k}+\mathbf{h}\right)\equiv\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right)\mathbf{Q}\left(\mathbf{k}+\mathbf{h}\right) \tag{6c}\]

with (generally) non-canonical displaced Lagrange multipliers \(\mathbf{\varepsilon}\left(\mathbf{k}+\mathbf{h}\right)\) and (as of yet) undetermined coefficients \(\mathbf{Q}\left(\mathbf{k}+\mathbf{h}\right)\).
Expanding all quantities in a power series around the point \(\mathbf{k}\):

\[\mathbf{F}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{F}^{\left(0\right)}\left(\mathbf{k}\right)+\mathbf{h}\,\mathbf{F}^{\left(1\right)}\left(\mathbf{k}\right)+\ldots \tag{7a}\]
\[\mathbf{C}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right)+\mathbf{h}\,\mathbf{C}^{\left(1\right)}\left(\mathbf{k}\right)+\ldots \tag{7b}\]
\[\mathbf{S}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{S}^{\left(0\right)}\left(\mathbf{k}\right)+\mathbf{h}\,\mathbf{S}^{\left(1\right)}\left(\mathbf{k}\right)+\ldots \tag{7c}\]
\[\mathbf{\varepsilon}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{\varepsilon}^{\left(0\right)}\left(\mathbf{k}\right)+\mathbf{h}\,\mathbf{\varepsilon}^{\left(1\right)}\left(\mathbf{k}\right)+\ldots \tag{7d}\]
\[\mathbf{Q}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{1}+\mathbf{h}\,\mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right)+\ldots \tag{7e}\]

and taking the derivative of both sides of Eq. (7c) gives:

\[\mathbf{S}^{\left(1\right)}\left(\mathbf{k}\right)=\left.\frac{\partial\mathbf{S}\left(\mathbf{k}+\mathbf{h}\right)}{\partial\mathbf{h}}\right|_{\mathbf{h}=\mathbf{0}} \tag{8}\]

Then, inserting Eqs. (7b) and (7e) into Eq. (6c) yields:

\[\mathbf{C}^{\left(1\right)}\left(\mathbf{k}\right)=\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right)\mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right) \tag{9}\]

At this point, it is convenient to define the matrices:

\[\mathbf{K}^{\left(1\right)}\left(\mathbf{k}\right)=\left[\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right)\right]^{\dagger}\mathbf{F}^{\left(1\right)}\left(\mathbf{k}\right)\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right) \tag{10}\]

and:

\[\mathbf{R}^{\left(1\right)}\left(\mathbf{k}\right)=\left[\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right)\right]^{\dagger}\mathbf{S}^{\left(1\right)}\left(\mathbf{k}\right)\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right) \tag{11}\]

Inserting Eqs. (7) and (9) in Eq. (6) and collecting terms in the first order, then left-multiplying by \(\left[\mathbf{C}^{\left(0\right)}\left(\mathbf{k}\right)\right]^{\dagger}\) and using Eqs. (10) and (11), leads directly to the first-order perturbation equation:

\[\mathbf{K}^{\left(1\right)}\left(\mathbf{k}\right)+\mathbf{\varepsilon}^{\left(0\right)}\left(\mathbf{k}\right)\mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right)=\mathbf{R}^{\left(1\right)}\left(\mathbf{k}\right)\mathbf{\varepsilon}^{\left(0\right)}\left(\mathbf{k}\right)+\mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right)\mathbf{\varepsilon}^{\left(0\right)}\left(\mathbf{k}\right)+\mathbf{\varepsilon}^{\left(1\right)}\left(\mathbf{k}\right) \tag{12}\]

Eq. (12) must be solved self-consistently for \(\mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right)\), under the condition of orthonormality. The requisite first-order orthonormality condition may now be written by inserting Eqs. (7b), (7c), (9) and (11) into Eq.
(6b) to get: \[\left[\mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right)\right]^{\dagger}+ \mathbf{Q}^{\left(1\right)}\left(\mathbf{k}\right)=-\mathbf{R}^{\left(1 \right)}\left(\mathbf{k}\right) \tag{13}\] The standard non-canonical solution procedure [13; 14] provides, by taking advantage of the fact that the occ-virt blocks of perturbed Lagrange multipliers are vanishing: \[\varepsilon_{ia}^{\left(1\right)}\left(\mathbf{k}\right)=\varepsilon_{ai}^{ \left(1\right)}\left(\mathbf{k}\right)=0\quad i\in occ,a\in virt \tag{14}\] the following solution for the occ-virt and virt-occ blocks of \(\mathbf{Q}^{\left(1\right)}\): \[Q_{ia}^{\left(1\right)}\left(\mathbf{k}\right)=\frac{K_{ia}^{\left(1\right)} \left(\mathbf{k}\right)-\epsilon_{a}^{\left(0\right)}\left(\mathbf{k}\right)R_{ ia}^{\left(1\right)}\left(\mathbf{k}\right)}{\epsilon_{a}^{\left(0\right)}\left( \mathbf{k}\right)-\epsilon_{i}^{\left(0\right)}\left(\mathbf{k}\right)}\quad i \in occ,a\in virt\] (15a) and for the occ-occ block (by imposing Hermiticity): \[Q_{ij}^{\left(1\right)}\left(\mathbf{k}\right) = -\frac{1}{2}R_{ij}^{\left(1\right)}\left(\mathbf{k}\right)\quad i,j \in occ \tag{15b}\] with (here and elsewhere) exactly analogous expressions for the virt-virt block. Eq. (15b) is consistent with the following non-canonical matrices of Lagrange multipliers: \[\varepsilon_{ij}^{\left(1\right)}\left(\mathbf{k}\right)=K_{ij}^{\left(1\right)} \left(\mathbf{k}\right)-\frac{1}{2}\left(\epsilon_{i}^{\left(0\right)}+ \epsilon_{j}^{\left(0\right)}\right)R_{ij}^{\left(1\right)}\left(\mathbf{k} \right)\quad i,j\in occ \tag{16}\] In Eqs. (15a) and (16) \(K_{ll^{\prime}}^{\left(1\right)}\left(\mathbf{k}\right)\) is the first-order perturbed KS Hamiltonian matrix: \[iK_{l,l^{\prime}}^{\left(1\right)}\left(\mathbf{k}\right) = -\sum_{\mathbf{g}}\mathbf{g}e^{i\mathbf{k}\cdot\mathbf{g}}F_{l,l^{ \prime}}^{\left(0\right)}\left(\mathbf{g}\right)+\sum_{\mathbf{g}}e^{i \mathbf{k}\cdot\mathbf{g}} \tag{17}\] \[\times \sum_{\mu\nu}C_{\mu,l}^{\left(0\right)\ast}\left(\mathbf{k} \right)C_{\nu,l^{\prime}}^{\left(0\right)}\left(\mathbf{k}\right)V_{\mu\nu}^{ \left(1\right)}\left(\mathbf{g}\right)\] with a first term representing the contribution as in Eq. (5b) from standard approaches, and the second term is an additional "orbital-relaxation" or "response" correction. Thus, the relaxation term is proportional to the KS potential \(V^{\left(1\right)}\) depending on the derivative of the reduced density matrix coefficients \(P_{\mu\nu}^{\left(1\right)}\): \[P_{\mu\nu}^{\left(1\right)}\left(\mathbf{g}\right) \approx \frac{\partial}{\partial\mathbf{h}}\ \frac{2}{\Omega}\sum_{\Omega}\Re\ e^{i\left[\mathbf{k}+\mathbf{h}\right] \cdot\mathbf{g}}P_{\mu\nu}\left(\mathbf{k}+\mathbf{h}\right)\Bigg{|}_{\mathbf{ h}=\mathbf{0}} \tag{18}\] \[= \frac{2}{\Omega}\Re\ e^{i\mathbf{k}\cdot\mathbf{g}}\sum_{i}^{ \mathrm{occ}}\sum_{l}^{\mathrm{occ}}C_{\mu,l}^{\left(0\right)}\left( \mathbf{k}\right)iQ_{l,i}^{\left(1\right)}\left(\mathbf{k}\right)\left[C_{ \nu,i}^{\left(0\right)}\left(\mathbf{k}\right)\right]^{\ast}\] \[+ C_{\mu,i}^{\left(0\right)}\left(\mathbf{k}\right)i\left[Q_{l,l}^{ \left(1\right)}\left(\mathbf{k}\right)\right]^{\ast}\left[C_{\nu,l}^{\left(0 \right)}\left(\mathbf{k}\right)\right]^{\ast}\] with summation being over quadrature points in the volume \(\Omega\) of the first Brillouin zone (FBZ). Once \(\mathbf{Q}^{\left(1\right)}\) has been obtained from a non-canonical solution of Eq. 
(12) we need to transform the Bloch orbitals at point \(\mathbf{k}+\mathbf{h}\) to canonical ones in order to use them as field-free orbitals for perturbation by an electric or magnetic field. We can transform the Bloch orbitals to canonical form by finding the unitary matrix \(\mathbf{T}\) that diagonalizes the occ-occ (or virt-virt) block of the matrix of Lagrange multipliers \(\mathbf{\varepsilon}_{OO}\), that is \(\mathbf{T}_{O}^{\dagger}\mathbf{\varepsilon}_{OO}\mathbf{T}_{O}=\mathbf{ \varepsilon}_{OO}\) This means that we need to solve the following eigenvalue equation: \[\mathbf{\varepsilon}_{OO}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{T}_{O}\left( \mathbf{k}+\mathbf{h}\right)=\mathbf{T}_{O}\left(\mathbf{k}+\mathbf{h}\right) \mathbf{\epsilon}_{O}\left(\mathbf{k}+\mathbf{h}\right) \tag{19}\] To obtain the orbital energy \(\epsilon_{i}\) and corresponding eigenvector \(\mathbf{T}_{i}\) at point \(\mathbf{k}+\mathbf{h}\) in reciprocal space. Once the matrix \(\mathbf{T}_{O}\) has been obtain, returning to (19), we find: \[\mathbf{F}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{C}^{\prime} \left(\mathbf{k}+\mathbf{h}\right) = \mathbf{S}\left(\mathbf{k}+\mathbf{h}\right) \tag{20}\] \[\times \mathbf{C}^{\prime}\left(\mathbf{k}+\mathbf{h}\right)\mathbf{ \epsilon}\left(\mathbf{k}+\mathbf{h}\right)\] where: \[\mathbf{C}^{\prime}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{C}\left( \mathbf{k}+\mathbf{h}\right)\mathbf{T}\left(\mathbf{k}+\mathbf{h}\right) \tag{21}\] Then, defining: \[\mathbf{T}\left(\mathbf{k}+\mathbf{h}\right)=\mathbf{1}+\mathbf{h}\ \mathbf{T}^{(1)}\left(\mathbf{k}\right)+\ldots \tag{22}\] and proceeding as in Eqs. (7)-(9), we obtain: \[\mathbf{C}^{\prime(1)}\left(\mathbf{k}\right)=\mathbf{C}^{(0)}\left(\mathbf{k }\right)\mathbf{Q}^{\prime(1)}\left(\mathbf{k}\right) \tag{23}\] with: \[\mathbf{Q}^{\prime(1)}\left(\mathbf{k}\right)=\mathbf{Q}^{(1)}\left(\mathbf{k }\right)+\mathbf{T}^{(1)}\left(\mathbf{k}\right) \tag{24}\] We can calculate \(\mathbf{T}^{(1)}\left(\mathbf{k}\right)\) by solving the eigenvalue Eq. (19) by non-degenerate Rayleigh-Schrodinger perturbation theory giving: \[\mathbf{T}_{i}^{(1)}\left(\mathbf{k}\right)=\sum_{j\neq i}\mathbf{T}_{j}^{(0) }\left(\mathbf{k}\right)\frac{\left[\mathbf{T}_{j}^{(0)}\left(\mathbf{k} \right)\right]^{\dagger}\mathbf{\varepsilon}^{(1)}\left(\mathbf{k}\right)\mathbf{ T}_{i}^{(0)}\left(\mathbf{k}\right)}{\epsilon_{i}^{(0)}-\epsilon_{j}^{(0)}} \tag{25}\] We note in passing that degenerate or quasi-degenerate states would require appropriate modification of our treatment. Then, inserting Eqs. (14) and (16) in Eq. (25) we obtain: \[T_{ia}^{(1)}\left(\mathbf{k}\right)=0\quad i\in occ,a\in virt \tag{26a}\] \[T_{ji}^{(1)}\left(\mathbf{k}\right)=\frac{\varepsilon_{ji}^{(1)}\left(\mathbf{ k}\right)}{\epsilon_{i}^{(0)}\left(\mathbf{k}\right)-\epsilon_{j}^{(0)}\left( \mathbf{k}\right)}\quad i\in occ,j\in occ \tag{26b}\] Note that substitution of Eq. (26b) for occ-occ and virt-virt blocks, along with Eqs. (14)-(16) into Eq. (24) gives \(\mathbf{Q}^{\prime(1)}\) that exactly coincide with the canonical approach of Eq. (5), apart from the correction due to the relaxation term: \[Q_{ll^{\prime}}^{\prime(1)}\left(\mathbf{k}\right)=\frac{K_{ll^{\prime}}^{(1) }\left(\mathbf{k}\right)-\epsilon_{l^{\prime}}^{(0)}\left(\mathbf{k}\right)R_{ ll^{\prime}}^{(1)}\left(\mathbf{k}\right)}{\epsilon_{l^{\prime}}^{(0)}\left( \mathbf{k}\right)-\epsilon_{l}^{(0)}\left(\mathbf{k}\right)} \tag{27}\] Indeed, Eq. (27) is exactly identical to Eq. 
(5a), except that \(K_{ll^{\prime}}^{(1)}\) now includes also the second term in Eq. (17) (the response correction). At this point, we note three important situations in which the new relaxation term correction vanishes, the first being a sufficiently fine sampling of the FBZ. Details are provided in Ref. [18]. Calculations with one or two \(\mathbf{k}\) points include only "real" points (i.e. points \(\mathbf{k}_{\mathrm{real}}\) for which \(\sin\left(\mathbf{k}_{\mathrm{real}}\cdot\mathbf{g}\right)\) is vanishing) and the otherwise missing relaxation term is also vanishing. Thus, we always obtained perfectly consistent results both for SHG and OR between the 2/1 and 1/2 calculations (i.e. 2/1 meaning double cell and 1 \(\mathbf{k}\) point, 1/2 meaning single cell and 2 \(\mathbf{k}\) points). The 1/3 calculation, on the other hand, includes points away from \(\Gamma\) and away from the edge of the Brillouin zone, and at such points the relaxation term is not vanishing. In this case, we obtain significant differences between the 1/3 and 3/1 calculations, particularly for small basis sets (e.g. 58.1 vs. 47.3 \({}^{\circ}/\mathrm{mm}\) for OR and a double-zeta basis set). The differences are diminished by employing larger basis sets (e.g. 62.0 vs. 62.7 \({}^{\circ}/\mathrm{mm}\) for OR and a quadruple-zeta basis set). Thus, we may avoid explicit calculation of the costly (and complicated) relaxation correction to reciprocal space derivatives for electric and magnetic properties in different ways: namely, i) by employing many \(\mathbf{k}\) points and a small cell, or ii) by employing only \(\mathbf{k}\) points at \(\Gamma\) and the edge of the Brillouin zone and a large supercell, and/or iii) by employing a large basis set. _Acknowledgements_ I am grateful to Profs. Michel Rerat, Bernard Kirtman and Michael Springborg for valuable discussions.
2306.03923
Glitch systematics on the observation of massive black-hole binaries with LISA
Detecting and coherently characterizing thousands of gravitational-wave signals is a core data-analysis challenge for the Laser Interferometer Space Antenna (LISA). Transient artifacts, or "glitches", with disparate morphologies are expected to be present in the data, potentially affecting the scientific return of the mission. We present the first joint reconstruction of short-lived astrophysical signals and noise artifacts. Our analysis is inspired by glitches observed by the LISA Pathfinder mission, including both acceleration and fast displacement transients. We perform full Bayesian inference using LISA time-delay interferometric data and gravitational waveforms describing mergers of massive black holes. We focus on a representative binary with a detector-frame total mass of $6 \times 10^7 M_\odot$ at redshift $5$, yielding a signal lasting $\sim 30~\mathrm{h}$ in the LISA sensitivity band. We explore two glitch models of different flexibility, namely a fixed parametric family and a shapelet decomposition. In the most challenging scenario, we report a complete loss of the gravitational-wave signal if the glitch is ignored; more modest glitches induce biases on the black-hole parameters. On the other hand, a joint inference approach fully sanitizes the reconstruction of both the astrophysical and the glitch signal. We also inject a variety of glitch morphologies in isolation, without a superimposed gravitational signal, and show we can identify the correct transient model. Our analysis is an important stepping stone toward a realistic treatment of LISA data in the context of the highly sought-after "global fit".
Alice Spadaro, Riccardo Buscicchio, Daniele Vetrugno, Antoine Klein, Davide Gerosa, Stefano Vitale, Rita Dolesi, William Joseph Weber, Monica Colpi
2023-06-06T18:00:03Z
http://arxiv.org/abs/2306.03923v2
# Glitch systematics on the observation of massive black-hole binaries with LISA ###### Abstract Detecting and coherently characterizing thousands of gravitational-wave signals is a core data-analysis challenge for the Laser Interferometer Space Antenna (LISA). Transient artifacts, or "glitches", with disparate morphologies are expected to be present in the data, potentially affecting the scientific return of the mission. We present the first joint reconstruction of short-lived astrophysical signals and noise artifacts. Our analysis is inspired by glitches observed by the LISA Pathfinder mission, including both acceleration and fast displacement transients. We perform full Bayesian inference using LISA time-delay interferometric data and gravitational waveforms describing mergers of massive black holes. We focus on a representative binary with a detector-frame total mass of \(6\times 10^{7}M_{\odot}\) at redshift 7, yielding a signal lasting \(\sim 30\) h in the LISA sensitivity band. We explore two glitch models of different flexibility, namely a fixed parametric family and a shapelet decomposition. In the most challenging scenario, we report a complete loss of the gravitational-wave signal if the glitch is ignored; more modest glitches induce biases on the black-hole parameters. On the other hand, a joint inference approach fully sanitizes the reconstruction of both the astrophysical and the glitch signal. We also inject a variety of glitch morphologies in isolation, without a superimposed gravitational signal, and show we can identify the correct transient model. Our analysis is an important stepping stone toward a realistic treatment of LISA data in the context of the highly sought-after "global fit". ## I Introduction The Laser Interferometer Space Antenna (LISA) [1], currently planned to be launched in the early 2030s, will detect gravitational waves (GWs) from space. LISA will extend the exploration of the GW spectrum in the milliHertz band - from about \(10^{-4}\) to 1 Hz - providing observations of astrophysical sources ranging from Galactic white-dwarf binaries to mergers of massive black-holes at high redshift [2; 3]. The detection and characterization of different astrophysical sources is an extremely challenging problem of data-analysis. This is due to the combined effect of the all-sky detector sensitivity and the large number, \(\mathcal{O}(10^{4})\), of long-lived GW signals overlapping both in time and frequency. Maximizing the payoff of the LISA mission requires an accurate, efficient, and global analysis [4; 5], simultaneously fitting data models for an unknown number of detectable GW sources and uncertain detector noise. In addition to the abundance of astrophysical sources, the LISA data stream will be polluted by noise transients. These artifacts, also called "glitches" from a terminology borrowed from ground-based detectors, have been observed at a rate of about one per day and extensively characterized by the LISA Pathfinder (LPF) mission [6; 7]. Efforts are ongoing to understand the origin of the LPF glitches by capitalizing on the collected data and eliminating them by design in the LISA hardware. Previous studies stressed the need to assess their impact on the scientific return of the LISA mission [8; 9]. The physical nature of glitches in LPF still needs to be fully understood, with possible interpretations including outgassing phenomena, electronics events, and eddy current transients [8]. 
Moreover, new types of unexpected noise artifacts can appear in LISA because of the increased complexity of both spacecraft and payload design compared to LPF. Because the occurrence and morphology of glitches in the full LISA setup are uncertain, a conservative approach is to prepare a robust data analysis strategy to mitigate their impact downstream. Tackling the fundamental challenge of including glitches in parameter-estimation pipelines is well recognized by the LISA Consortium as part of the core preparation activities for the imminent mission adoption. To this end, a set of LISA Data Challenges (LDCs) [10] are in progress to develop and demonstrate data-analysis readiness. Among others, the LDC nicknamed _Spritz_ is devoted to investigating glitches and gaps in the reconstruction of signals from massive black-hole binaries (MBHBs). A recent analysis suggests the adoption of heavy-tailed likelihoods to mitigate the effect of noise transients upon the inference of GW sources [11]. In this work, we instead assess for the first time the impact of glitches on short-lived MBHB signals performing direct, joint parameter estimation. We present a complete analytical derivation of the LISA response to two types of instrumental artifacts as detected by LPF, namely force and displacement transients of the test masses. We then report results by including both models in a large, multi-source parameter estimation framework for LISA data analysis. This infrastructure, called Balrog, is currently under active development and has already been tested against different astrophysical sources (see e.g. Refs. [12; 13; 14; 15; 16]). The paper is organized as follows. In Sec. II, we introduce the phenomenology of the expected instrumental artifacts. In Sec. III, we present our glitch models and provide a brief summary of the fiducial GW-source and glitch parameters. In Sec. IV, we derive an alternative set of time-delay interferometric (TDI) variables suitable for the simultaneous treatment of glitches and GW signals. In Sec. V, we provide definitions of relevant statistical quantities and details on our parameter-estimation runs. In Sec. VI, we present our inference results. Finally, in Sec. VII, we summarize our findings and describe future developments. Throughout this paper, we use units where \(c=1\). ## II LPF glitches in LISA data ### Phenomenology of LPF glitches Glitches are observed as additional signals in the data stream. They can be thus modeled and subtracted from the data as such. The strategy here is to (i) get a consistent estimate of the power spectral density (PSD) of the underlying quasi-stationary noise over the entire data stream and thus (ii) improve the astrophysical signal inference by making it robust against glitch-induced biases. The latter constitutes a key element of the LISA data processing pipeline in view of the targeted "global fit" [4; 5]. The properties of glitches, namely amplitude, duration, and time morphology, depend both on the measurement system and the originating physical process. LPF observed two main kinds of glitches: a first class treated as an effective displacement-measurement artifact in the optical metrology chain and another class due to spurious forces acting on the test masses (TMs). Displacement glitches have been rarely observed in nominal conditions, have a typical duration comparable with the LISA sampling cadence, and carry negligible impulse per unit of mass as compared to the typical forces acting on the TMs [8]. 
As a consequence, fast, low-impulse glitches could be expected to affect the geodesic motion of the LISA constellation only mildly. On the contrary, force events result in impulse-carrying glitches lasting from tens of seconds to several hours, have a significant impact on the noise performance, and can potentially contaminate GW detection and parameter estimation. During its ordinary runs, LPF observed 102 impulse-carrying glitches and 81 of these were visible in the data stream as a sharp, positive offset of the residual force-per-unit-mass (henceforth loosely referred to as "acceleration") [8]. These acceleration glitches correspond to the two TMs moving toward each other along the sensitive axis of the pair, i.e. the direction joining their respective centers of mass. The rate of these events has been estimated to be about 1 per day and compatible with a Poisson distribution [8]. Several possible physical origins for glitches have been vetoed by extensive cross-checking and correlation analysis on LPF data, with the most plausible explanation pointing to either gas outbursts or virtual leaks in the vacuum chamber and the material surrounding the TMs. Dedicated experimental studies are underway to corroborate this hypothesis [8]. ### Guiding principles for LISA differential acceleration measurements We now list a few guiding principles behind our modeling choices: * Long-lived glitches related to force phenomena such as those observed by LPF are the most relevant for LISA. For these, we adopt a phenomenological parameterization suitable to describe their temporal evolution in terms of differential test-mass accelerations. * Constructing the corresponding signal model for fractional phase observables in the frequency domain is more complex, although doable. * position and velocity - are eliminated with an acceleration observable. * In a realistic operational setup, systematic errors arising from force disturbances (e.g. stiffness coupling) could be subtracted directly in acceleration. Thus, our fitting model does not require any additional integration or whitening filter. * When the effective glitch "signal" has spectral content mainly near the low-frequency end of the LISA sensitivity range, differentiation is numerically safer than integration. In this regime, data correction from systematics in the displacement variables is still viable. * The corresponding TDI variables written in acceleration allow for a straightforward inclusion of LPF glitches in a Bayesian inference framework. * GW signal models can be easily rewritten as effective accelerations by differentiating those already available in phase or fractional frequency. These broad considerations are mostly inspired by the observational equivalence between GWs and tidal forces accelerating TMs relative to their local inertial frames [17]. We thus opt to implement our joint inference for glitches and GWs with suitable acceleration TDI variables. ## III Transients modeling The fundamental observable in LISA is the phase evolution \(\Delta\phi\) of a one-way propagating laser along each of the six links connecting the satellites. This can be equivalently written as an optical pathlength \[L=\frac{\Delta\phi}{\omega_{l}}\,, \tag{1}\] where \(\omega_{l}\) is the central frequency of the laser signal, which is assumed to be constant. We now focus on three different mechanisms perturbing the phase readout. 
### Acceleration transients The two TMs housed in each of the LISA satellites are expected to independently exchange momentum with their surrounding environment (see Fig. 1 for a schematic representation). We model the resulting transient acceleration profile \(\vec{a}_{i}\) of the \(i\)-th test mass as in Ref. [9]. We use a two-damped exponential model inspired by glitches observed in LPF, namely \[g(t;A,\beta_{1},\beta_{2},\tau) = \frac{A}{\beta_{1}-\beta_{2}}\Big{(}e^{-\frac{t-\tau}{\beta_{1}}}- e^{-\frac{t-\tau}{\beta_{2}}}\Big{)}\Theta(t-\tau), \tag{2}\] which we refer to as Model A1. Equation (2) integrates to the net transferred momentum per unit mass: \[\int_{-\infty}^{+\infty}g(t;A,\beta_{1},\beta_{2},\tau)\;\mathrm{d}t=A\,. \tag{3}\] The parameters \(\beta_{1},\beta_{2}\) describe the typical timescales of the two exponentials while \(\tau\) is the glitch onset time entering the Heaviside step function \(\Theta\). The corresponding Fourier-domain representation is \[g(\omega;A,\beta_{1},\beta_{2},\tau)=-A\frac{e^{-i\tau\omega}}{(\beta_{1} \omega-i)(\beta_{2}\omega-i)}\,. \tag{4}\] Accommodating glitches of unknown shape requires a more flexible model. We construct this using a superposition of \(S\) Gabor-Morlet shapelets \[g(t)=\sum_{i}^{S}\sigma\left(t;A_{i},\tau_{i},\beta_{i},n_{i}\right), \tag{5}\] where \[\sigma\left(t;A,\tau,\beta,n\right) =c_{n}\psi_{n}\left(\frac{t-\tau}{\beta}\right), \tag{6}\] \[\psi_{n}\left(t\right) =\frac{2t}{n}e^{-t/n}L_{n-1}^{(1)}\left(\frac{2t}{n}\right)\Theta \left(t\right),\] (7) \[c_{n} =(-1)^{n-1}\frac{A}{2\beta n^{2}}, \tag{8}\] Figure 1: Schematics of single laser links and glitch reference system conventions. The constellation is made of three satellites (white circles), each housing two TMs (right inset, yellow and gray boxes). Each satellite is connected to the other two by four links, two for each TM. Signals denoted by \(y_{ijk}\) or \(y_{ij^{\prime}k}\) are emitted by the \(i\)-th satellite, received by the \(k\)-th satellite, therefore traveling along either \(L_{j}\) or \(L_{j^{\prime}}\). The indexes \(j\) and \(j^{\prime}\) are used to denote cyclic and anti-cyclic permutations of 123, respectively. Unit vectors \(\hat{a}_{j}\) parametrize the glitch component along the incoming (outgoing) link \(L_{j^{\prime}}\) (\(L_{j}\)) associated with the test mass \(M_{j^{\prime}}\). On satellite 1 a generic acceleration glitches acting on test mass \(M_{2^{\prime}}\) and \(M_{3}\) are described by the components \(a_{2^{\prime}}\) and \(a_{3}\), respectively. The former [latter] affects link \(y_{32^{\prime}1}(t)\) [\(y_{231}(t)\)] at reception and link \(y_{123}(t-L)\) [\(y_{13^{\prime}2}(t-L)\)] at emission. and \(L_{n}^{(\alpha)}(t)\) is the \(n-\)th generalized Laguerre polynomial [18]. We refer to these expressions as Model A2. Comparing to Ref. [9], we use a different normalization \(c_{n}\) for the individual shapelets such that \[\int_{-\infty}^{+\infty}\sigma(t;A,\tau,\beta,n)\,\mathrm{d}t=A\,,\qquad\forall n \in\mathbb{N}\,. \tag{9}\] In the frequency domain Eq. (6) reads \[\tilde{\sigma}(\omega;A,\tau,\beta,n)=(-1)^{n}e^{-i\omega\tau}A\frac{(n\beta \omega+i)^{n-1}}{(n\beta\omega-i)^{n+1}}. \tag{10}\] Shapelets in this parametric family are quasi-orthogonal, i.e. 
\[\int_{-\infty}^{+\infty}\tilde{\sigma}(\omega;A,\tau,\beta,n)\tilde{\sigma}^{ *}(\omega;A^{\prime},\tau,\beta,m)\,\mathrm{d}\omega=\delta_{nm}\frac{\pi AA^ {\prime}}{2n\beta}\,, \tag{11}\] \[\int_{-\infty}^{+\infty}\tilde{\sigma}(\omega;A,\tau,\beta,n)\tilde{\sigma}^{ *}(\omega;A^{\prime},\tau^{\prime},\beta,n)\mathrm{d}\omega\] \[=\frac{\pi AA^{\prime}}{2n^{2}\beta^{2}}e^{-\frac{|\tau-\tau^{\prime}|}{n \beta}}(n\beta+|\tau-\tau^{\prime}|). \tag{12}\] From Eqs. (4) and (10) it is immediate to show that Model A1 tends to Model A2 with \(n=1\) in the limit where \(\beta_{1}\to\beta_{2}\). ### Displacement transients The interferometer readout system is also expected to generate transient phase fluctuations. From Eq. (1), we model these as effective displacement transients with the same agnostic shapelet parameterization used in Eq. (5). We use a superposition of \(S\) shapelets \[\Delta L(t)=\sum_{i}^{S}\sigma\left(t;D_{i},\tau_{i},\beta_{i},n_{i}\right), \tag{13}\] where \[\int_{-\infty}^{+\infty}\mathrm{d}t\Delta L(t)=\sum_{i}^{S}D_{i} \tag{14}\] is the net integrated displacement experienced by the test mass before returning asymptotically to its free-fall condition. We refer to this parametric family of glitches as Model D. The frequency domain representation follows from Eq. (10) and reads \[\tilde{\sigma}(\omega;D,\tau,\beta,n)=(-1)^{n}e^{-i\omega\tau}D\frac{(n\beta \omega+i)^{n-1}}{(n\beta\omega-i)^{n+1}}. \tag{15}\] ### GW transients Among the large variety of typical sources populating the LISA sensitivity band, the most massive binary systems detectable produce hours to years-long transient signals. To leading-order, the binary time to merger \(t_{m}\) from a reference frequency \(f_{\mathrm{ref}}\)[19; 20] \[t_{m}\sim\left(\frac{3}{4\eta}\right)\left(\frac{f_{\mathrm{ref}}}{0.1\; \mathrm{mHz}}\right)^{-\frac{8}{3}}\left(\frac{M_{\mathrm{z}}}{10^{7}M_{\odot }}\right)^{-\frac{5}{3}}\mathrm{days}\,, \tag{16}\] where \(\eta\equiv m_{1}m_{2}/(m_{1}+m_{2})^{2}\) is the symmetric mass ratio and \(M_{z}=(1+z)(m_{1}+m_{2})\) is the solar-system barycenter frame total mass for a source of component masses \(m_{1}\) and \(m_{2}\). By contrast, glitches observed by LPF have typical durations of seconds to hours and are positively correlated with the transferred momentum per unit mass ranging from \(10^{-2}\) to \(10^{3}\,\mathrm{pm/s}\)[8]. Their broadband, short-lived morphology makes them the most likely to impact parameter estimation for GW transient sources of comparable duration. We select three fiducial noise transients and superimpose them on a short-lived (\(t_{m}=30\) hours) high-mass (\(M_{z}=6\times 10^{7}\,M_{\odot},\eta=3/16\)) MBHB at redshift \(z=5\). We assume zero sensitivity below \(0.1\,\mathrm{mHz}\)[16]. We consider a short-duration Model D glitch (\(\beta=5\,\mathrm{s}\)), a moderate-duration Model A2 (\(\beta=40\,\mathrm{s}\)), and a long-duration Model A1 glitch with \(\beta_{1}+\beta_{2}=3300\,\mathrm{s}\). All three glitches have peak amplitudes close to the merger time of the GW source, as shown in Fig. 2. For a conservative approach, we fine-tune the glitch onset times to maximally impact the reconstruction of GW source parameters. This is done by maximizing the match between the glitch and GW waveforms as shown in Fig. 3 (see Sec. V for more details). We model the GW signal with the IMRPhenomXHM [21; 22] waveform approximant which captures the full coalescence of a quasi-circular, non-precessing black-hole binary. 
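For concreteness, the following minimal numerical sketch (ours, not the Balrog implementation) evaluates the Model A1 profile of Eqs. (2) and (4) and a single shapelet of Eqs. (6)-(8); the function names and the onset time used in the check are illustrative, while the amplitude and damping times match the fiducial Model A1 glitch of Table 3.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import genlaguerre

def glitch_a1_time(t, A, beta1, beta2, tau):
    """Model A1 two-damped-exponential acceleration glitch, Eq. (2)."""
    dt = t - tau
    profile = (A / (beta1 - beta2)) * (np.exp(-dt / beta1) - np.exp(-dt / beta2))
    return np.where(dt >= 0.0, profile, 0.0)

def glitch_a1_freq(omega, A, beta1, beta2, tau):
    """Fourier-domain representation of Model A1, Eq. (4)."""
    return -A * np.exp(-1j * tau * omega) / ((beta1 * omega - 1j) * (beta2 * omega - 1j))

def shapelet_time(t, A, tau, beta, n):
    """Single Gabor-Morlet shapelet of order n, Eqs. (6)-(8)."""
    x = (t - tau) / beta
    c_n = (-1) ** (n - 1) * A / (2.0 * beta * n**2)
    psi = (2.0 * x / n) * np.exp(-x / n) * genlaguerre(n - 1, 1)(2.0 * x / n)
    return np.where(x >= 0.0, c_n * psi, 0.0)

# Consistency check of Eq. (3): the time integral of the Model A1 profile equals the
# transferred momentum per unit mass A. Amplitude and damping times follow the fiducial
# Model A1 glitch of Table 3 (A = 3 pm/s, beta1 = 1500 s, beta2 = 1800 s); the onset
# time here is an arbitrary illustrative value.
t = np.linspace(0.0, 5.0e4, 200_000)  # seconds
g = glitch_a1_time(t, A=3.0, beta1=1500.0, beta2=1800.0, tau=1.0e3)
print(trapezoid(g, t))  # ~3.0 pm/s
```

Model D displacement glitches of Eq. (13) reuse the same shapelet, with the amplitude \(D\) in place of \(A\).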
The implementation of the LISA response to this GW signal in the Balrog code has been presented in Ref. [16]. We choose to parametrize the injected GW signal as follows: \(m_{1z,2z}\) and \(\chi_{1,2}\) denote the binary component redshifted masses and aligned dimensionless spins, respectively; \(t_{m},\phi_{0},\psi\) denote the time to merger introduced in Eq. (16), initial phase and polarization, respectively; \(\sin\beta,\lambda\) denote the (sine-)ecliptic latitude and longitude; \(d_{L}\) and \(\iota\) denote the source luminosity distance and inclination. Tables 3, 4 and 5 list the parameter values of our fiducial GW source, which has an SNR of 187 and is common across all of our runs. ## IV Acceleration TDIs We use Eqs. (4), (10), and (15) to model the TDI variables [23] \(\tilde{s}_{k}(f;\mathbf{\theta})\) entering the likelihood, cf. Sec. V. We work in the constant equal-armlength approximation and label the three TDI variables \(M_{X},M_{Y}\), and \(M_{Z}\), respectively. In this approximation, one needs a single time-delay operator \(\mathcal{D}\) \[\mathcal{D}\left[f(t)\right]=f(t-L). \tag{17}\] This is applied to the single-link phase measurements \(y_{ijk}\). Signals denoted by \(y_{ijk}\) or \(y_{ij^{\prime}k}\) are emitted by the \(i\)-th satellite, received by the \(k\)-th satellite, therefore traveling along either \(L_{j}\) or \(L_{j^{\prime}}\) (see Fig. 1 for a schematic representation). The indexes \(j\) and \(j^{\prime}\) are used to denote cyclic and anti-cyclic permutations of 123, respectively. We thus obtain the TDI variables \[M_{X} =y_{231}+\mathcal{D}y_{13^{\prime}2}-y_{32^{\prime}1}-\mathcal{D}y_{123}, \tag{18}\] \[M_{Y} =y_{312}+\mathcal{D}y_{21^{\prime}3}-y_{13^{\prime}2}-\mathcal{D}y_{231}, \tag{19}\] \[M_{Z} =y_{123}+\mathcal{D}y_{32^{\prime}1}-y_{21^{\prime}3}-\mathcal{D}y_{312}. \tag{20}\] Incorporating Model A1 and Model A2 signals into Eqs. (18), (19), and (20) requires integrating the single-link differential accelerations twice. However, any non-zero total transferred momentum necessitates artificial regularization or ad-hoc approximations to construct a Fourier-domain representation of the signal. We solve this problem by introducing a set of "acceleration TDIs" \(G_{X,Y,Z}\) which are trivially related to Eqs. (18), (19), and (20) by double differentiation. In the frequency domain one has \[\mathcal{F}\left[G_{X}\right] =(2\pi f)^{2}\mathcal{F}\left[M_{X}\right] \tag{21}\] \[G_{X} =g_{231}+\mathcal{D}g_{13^{\prime}2}-g_{32^{\prime}1}-\mathcal{D}g_{123}, \tag{22}\] where \(\mathcal{F}\) denotes the Fourier transform operator and \[g_{ijk}(t)=\frac{d^{2}}{dt^{2}}\left[y_{ijk}(t)\right]. \tag{23}\] Similar definitions hold for \(G_{Y},G_{Z}\) upon cyclic permutation of indices. The key advantage of introducing a new set of TDIs lies in its instrumental robustness. Equation (21) also allows us to conveniently recycle signal models available in fractional displacement by including both Model D glitches and GW signals. Furthermore, Eq. (23) does not require a transfer function to model acceleration glitches. Figure 3: Match between the GW and glitch signals as a function of onset time. The blue (green, red) solid curve corresponds to the Model A1 (A2, D) glitch shown in Fig. 2. The GW signal is fixed to that of our fiducial MBHB. Dashed vertical lines with matching color denote the onset time that maximizes the match. The black dotted line denotes the GW source nominal merger time.
The inset shows a 40-minute interval zoom-in around the merger and glitch onset times. Figure 2: Fiducial waveforms for our parameter-estimation runs. Black solid curves show the MBHB signal we consider (\(M_{z}=6\times 10^{7}\,M_{\odot}\) and \(z=5\)), which is identical across the three panels. Colored curves in the top, middle, and bottom panels describe the Model A1, Model A2, Model D glitch amplitudes, respectively. Signals shown in the three panels correspond to injections in runs 9, 10, and 11 and exemplify glitches lasting hours, minutes, and seconds, respectively (cf. Table 1). The parameters of the injected signals are shown in Tables 3, 4, and 5. Following the conventions shown in Fig. 1, the single-link perturbation \(g_{ijk}(t)\) is obtained from the instantaneous accelerations \(\vec{g}_{i}(t)\) and \(\vec{g}_{k}(t-L)\) which are experienced by sender \(i\) and receiver \(k\) along the link \(j\), and projected along the unit-vectors \(\hat{a}_{j}(t-L)\) and \(\hat{a}_{j^{\prime}}(t)\), respectively. We associate a unit vector \(\hat{a}_{j}\) to each test mass \(M_{j}\) pointing in the direction opposite to \(L_{j}\). For simplicity, we denote the associated vector components \(a_{j}\). Given the choice of the local reference system, a positive value \(a_{i}\) corresponds to a negative displacement \(\Delta L_{i}\). The three TDI observables in terms of the individual test mass accelerations are \[G_{X} =(1+\mathcal{D}^{2})(a_{2^{\prime}}-a_{3})+2\mathcal{D}(a_{2}-a_{ 3^{\prime}}), \tag{24}\] \[G_{Y} =(1+\mathcal{D}^{2})(a_{3^{\prime}}-a_{1})+2\mathcal{D}(a_{3}-a_{ 1^{\prime}}),\] (25) \[G_{Z} =(1+\mathcal{D}^{2})(a_{1^{\prime}}-a_{2})+2\mathcal{D}(a_{1}-a_{ 2^{\prime}}). \tag{26}\] It is importante to note how the acceleration TDI variable \(G_{X}\) (\(G_{Y}\), \(G_{Z}\)) is insensitive to glitches acting on links \(L_{1}\) and \(L_{1}^{\prime}\) (\(L_{2}\) and \(L_{2}^{\prime}\), \(L_{3}\) and \(L_{3}^{\prime}\)). This would no longer be true if a single glitch affects more than one TM (or more optical phase measurements); further modeling on this point will be presented elsewhere. Following the standard procedure [23], we combine \(G_{X}\), \(G_{Y}\), and \(G_{Z}\) into three noise-orthogonal variables \[G_{A} =\frac{G_{Z}-G_{X}}{\sqrt{2}}, \tag{27}\] \[G_{E} =\frac{G_{X}-2G_{Y}+G_{Z}}{\sqrt{6}},\] (28) \[G_{T} =\frac{G_{X}+G_{Y}+G_{Z}}{\sqrt{3}}. \tag{29}\] Equations (27), (28), and (29) define the data pieces entering our inference pipeline. ## V Inference The initial search of a GW in noisy data is achieved through matched-filtering techniques [24] which provide initial guesses on the signal parameters. If glitches are present, their preliminary detection and subtraction might not be sufficient to provide data that are sufficiently cleaned to accurately infer the parameters of the astrophysical source [9]. Previous studies presented a matching-pursuit algorithm for an automated and systematic glitch detection [25] showing that, while the search grid on the damping parameter is too coarse to accurately obtain the best-fit glitch, it provides a reliable initial guess. For practical purposes, here we assume that such guess has been identified from the data and can be used to inform our subsequent analyses. We perform a joint parameter estimation, fitting simultaneously for GW signals and noise artifacts. 
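To illustrate Eqs. (24)-(29) concretely, the sketch below (ours, not the Balrog implementation) assembles the frequency-domain acceleration TDI channels from per-test-mass acceleration spectra, with the delay operator of Eq. (17) acting as a phase factor; the dictionary keys, the placeholder spectrum, and the representative 8.3-light-second arm length are illustrative choices.

```python
import numpy as np

def acceleration_tdi_xyz(a, L, freqs):
    """Frequency-domain G_X, G_Y, G_Z of Eqs. (24)-(26).

    a     : dict of complex acceleration spectra for test masses
            '1', '1p', '2', '2p', '3', '3p' (primed labels as in Fig. 1)
    L     : (constant, equal) arm length expressed as a light travel time in seconds
    freqs : frequency array in Hz
    """
    D = np.exp(-2j * np.pi * freqs * L)  # delay D[f](t) = f(t - L) in the frequency domain
    GX = (1 + D**2) * (a['2p'] - a['3']) + 2 * D * (a['2'] - a['3p'])
    GY = (1 + D**2) * (a['3p'] - a['1']) + 2 * D * (a['3'] - a['1p'])
    GZ = (1 + D**2) * (a['1p'] - a['2']) + 2 * D * (a['1'] - a['2p'])
    return GX, GY, GZ

def tdi_aet(GX, GY, GZ):
    """Noise-orthogonal combinations of Eqs. (27)-(29)."""
    GA = (GZ - GX) / np.sqrt(2.0)
    GE = (GX - 2.0 * GY + GZ) / np.sqrt(6.0)
    GT = (GX + GY + GZ) / np.sqrt(3.0)
    return GA, GE, GT

# Check the insensitivity noted after Eq. (26): a glitch acting only on test
# mass 3 drops out of G_Z exactly.
freqs = np.linspace(1e-4, 3e-2, 1000)
spec = {k: np.zeros_like(freqs, dtype=complex) for k in ('1', '1p', '2', '2p', '3', '3p')}
spec['3'] = np.ones_like(freqs, dtype=complex)  # placeholder glitch spectrum
GX, GY, GZ = acceleration_tdi_xyz(spec, L=8.3, freqs=freqs)
print(np.abs(GZ).max())  # 0.0
```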
We construct posteriors on parameters \(\mathbf{\theta}\) \[p(\mathbf{\theta}|d)\propto\mathcal{L}(d|\mathbf{\theta})\pi(\mathbf{\theta}) \tag{30}\] through stochastic sampling of the likelihood \(\mathcal{L}(d|\mathbf{\theta})\) under a prior \(\pi(\mathbf{\theta})\). We employ a coherent analysis on the three noise-orthogonal TDI channels \(d=\{d_{k};k=M_{A},M_{E},M_{T}\}\) when considering displacement variables and \(d=\{d_{k};k=G_{A},G_{E},G_{T}\}\) when considering acceleration variables. We use a Gaussian likelihood [26] \[\ln\!\mathcal{L}(d|\mathbf{\theta})=-\sum_{k}\frac{(d_{k}-s_{k}(\mathbf{\theta})|d_{k} -s_{k}(\mathbf{\theta}))_{k}}{2}+\text{const.}, \tag{31}\] where \(\tilde{s}_{k}\) is the \(k\)-th TDI output frequency series associated to the injected signal \(\tilde{s}(f;\mathbf{\theta})\). The output \(\tilde{s}_{k}\) represent either acceleration or fractional displacements depending on the chosen TDI variable set, thus containing acceleration glitches, displacement glitches, GW transients, or a combination of these (cf. Sec. III). The noise-weighted inner product is defined as \[(a\mid b)_{k}=4\Re\int_{f_{\min}}^{f_{\max}}\frac{\tilde{a}^{*}(f)\tilde{b}(f) }{S_{k}(f)}df, \tag{32}\] where \(\Re\) denotes the real part, \(\tilde{a}(f)\) is the Fourier transform of the time series \(a(t)\), and \(S_{k}(f)\) is the one-sided noise spectral density of the \(k\)-th TDI channel. We use the match between two signals \[M(a,b)=\frac{(a\mid b)}{(a\mid a)^{1/2}(b\mid b)^{1/2}} \tag{33}\] to optimize the onset time of the injected glitches as discussed in Sec. III.3. Model selection is performed using log-Bayes factors \[\log_{10}\mathcal{B}_{i}^{j}=\log_{10}\mathcal{Z}_{i}-\log_{10}\mathcal{Z}_{j}, \tag{34}\] where \(i\) and \(j\) are labels identifying the competing models, and \[\mathcal{Z}(d)=\int d\mathbf{\theta}\mathcal{L}(d\mid\mathbf{\theta})\pi(\mathbf{\theta}) \tag{35}\] is the evidence of each parameter estimation. We consider a LISA mission lifetime of \(T_{\text{LISA}}=4\) years, roughly equivalent to a calendar observation time of 4.5 years with an effective duty cycle of 82%. Our frequency resolution is therefore \(\Delta f\approx 1/T_{\text{LISA}}=1.7\times 10^{-8}\,\text{Hz}\). We set \(f_{\min}=0.1\,\text{mHz}\) and \(f_{\max}=30\,\text{mHz}\), which is well above the fiducial GW and the maximum frequencies of all glitch signals. We use a semi-analytical noise spectral density model \(S_{k}(f)\)[27] describing the superposition of LISA stationary instrumental noise and astrophysical confusion noise from unresolved Galactic binaries [28]. In order to reduce the computational cost, we evaluate inner products from Eq. (32) using a Clenshaw-Curtis integration algorithm [29], see e.g Ref. [13] for a summary of its application to LISA data. Parameter estimation is performed with the Balrog code, which is designed to work with different stochastic samplers. In particular, in this paper we use the nested sampling algorithm [30] as implemented in Nessai[31]. We choose uniform priors on each parameter over either its entire definition domain or a range that is sufficiently large to enclose the entire posterior. ## VI Results We perform two sets of parameter-estimation runs: 1. Joint inference runs on both GW signal and glitches (Sec. VI.1), listed with IDs 1 to 14 in Table 1. 2. Inference runs where we inject and recover glitches without GW signal (Sec. VI.2), listed with IDs 15 to 32 in Table 2. 
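Before turning to the individual runs, we note a minimal sketch (ours) of the statistical building blocks defined in Eqs. (31)-(33) above: the noise-weighted inner product, the match, and a single-channel Gaussian log-likelihood. Trapezoidal quadrature is used here instead of the Clenshaw-Curtis scheme adopted in Balrog, and the PSD array is assumed to be supplied externally.

```python
import numpy as np
from scipy.integrate import trapezoid

def inner_product(a_f, b_f, psd, freqs):
    """Noise-weighted inner product (a|b) of Eq. (32), evaluated on a frequency grid."""
    integrand = 4.0 * np.real(np.conj(a_f) * b_f) / psd
    return trapezoid(integrand, freqs)

def match(a_f, b_f, psd, freqs):
    """Normalized overlap M(a, b) of Eq. (33)."""
    return inner_product(a_f, b_f, psd, freqs) / np.sqrt(
        inner_product(a_f, a_f, psd, freqs) * inner_product(b_f, b_f, psd, freqs)
    )

def optimal_snr(h_f, psd, freqs):
    """Optimal signal-to-noise ratio, (h|h)^{1/2}."""
    return np.sqrt(inner_product(h_f, h_f, psd, freqs))

def log_likelihood_channel(d_f, s_f, psd, freqs):
    """Single-channel Gaussian log-likelihood of Eq. (31), up to an additive constant."""
    r_f = d_f - s_f
    return -0.5 * inner_product(r_f, r_f, psd, freqs)
```

Summing the single-channel log-likelihood over the \(A\), \(E\), and \(T\) channels reproduces Eq. (31), and maximizing the match over the glitch onset time is how the injection times shown in Fig. 3 are chosen.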
### Joint inference with glitches and GWs If a preliminary search fails to identify and remove a glitch from the data, it is important to assess its impact on the parameters of the overlapping GW source. We thus tackle the following cases for each of the three signals illustrated in Fig. 2: * Parameter estimation in the absence of a glitch in the data ("reference" runs, with IDs 1 and 2).
Joint posterior distributions for both runs are shown in Fig. 6. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} & \multicolumn{11}{c|}{MBHB} & \multicolumn{11}{c|}{Model A1} \\ \hline ID & \(m_{1z}\,[10^{7}M_{\odot}]\) & \(m_{2z}\,[10^{7}M_{\odot}]\) & \(t_{m}\) [h] & \(\chi_{1}\) & \(\chi_{2}\) & \(d_{L}\) [Gpc] & \(\iota\) [rad] & \(\beta\) [rad] & \(\lambda\) [rad] & \(\phi\) [rad] & \(\psi\) [rad] & \(A\) [pm/s] & \(\beta\) [s] & \(\beta\) [s] & \(\tau\) [h] \\ \hline \multirow{3}{*}{9} & 4.5 & 1.5 & 30.0 & 0.4 & 0.3 & 47.6 & 0.6 & 0.30 & 2.0 & 1.0 & 1.7 & 3.0 & 1500.0 & 1800.0 & 30.21 \\ & 4.5 & 1.5 & 30.0 & 0.4 & 0.3 & 47.6 & 0.6 & 0.30 & 2.0 & 1.0 & 1.7 & 3.0 & 1500.0 & 1800.0 & 30.21 \\ \cline{1-1} & 4.5\({}^{+0.2}_{-0.2}\) & 1.5\({}^{+0.3}_{-0.3}\) & 30.01\({}^{+0.09}_{-0.08}\) & 0.4\({}^{+0.1}_{-0.1}\) & 0.3\({}^{+0.6}_{-1.0}\) & 44\({}^{+15}_{-15}\) & 0.8\({}^{+0.3}_{-0.6}\) & 0.3\({}^{+0.6}_{-0.1}\) & 2.0\({}^{+0.3}_{-0.1}\) & 1.6\({}^{+1.2}_{-1.3}\) & 1.6\({}^{+1.2}_{-1.4}\) & 30.9\({}^{+0.25}_{-1.6}\) & 164\({}^{+1.4}_{-1.2}\) & 164\({}^{+1.3}_{-7.3}\) & 30.219\({}^{+0.005}_{-0.09}\) \\ & 4.25\({}^{+0.01}_{-0.01}\) & 0.61\({}^{+0.00}_{-0.02}\) & 29.47\({}^{+0.00}_{-0.005}\) & -0.308\({}^{+0.00}_{-0.005}\) & -0.51\({}^{+0.00}_{-0.005}\) & 10.00\({}^{+0.003}_{-0.005}\) & 0.04\({}^{+0.00}_{-0.005}\) & 0.150\({}^{+0.005}_{-0.005}\) & 1.79\({}^{+0.002}_{-0.005}\) & 1.99\({}^{+0.25}_{-0.005}\) & 1.6\({}^{+1.4}_{-1.4}\) & ✗ & ✗ & ✗ & ✗ \\ \end{tabular} \end{table} Table 3: Parameter estimation results for a GW signal contaminated by a Model A1 glitch. The injected parameters are listed in the white rows. Medians and 90% credible intervals for the recovered posteriors are listed in the two rows highlighted in real. While accounting for the presence of a glitch (ID 9) allows for joint unbiased reconstruction of all parameters, ignoring its potential occurrence (ID 6) yields large systematic biases. Ignoring the presence of a glitch is disfavored with \(\log_{10}\mathcal{B}_{9}^{5}=-14491\). Joint posterior distributions for both these runs are shown in Fig. 4. For comparison, the bottom row shows our reference run where we only inject the GW source (ID 1). The subset of parameters common across runs 1 and 10 does not show appreciable differences. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} & \multicolumn{11}{c|}{MBHB} & \multicolumn{11}{c|}{Model A2} \\ \hline ID & \(m_{1z}\,[10^{7}M_{\odot}]\) & \(m_{2z}\,[10^{7}M_{\odot}]\) & \(t_{m}\) [h] & \(\chi_{1}\) & \(\chi_{2}\) & \(d_{L}\) [Gpc] & \(\iota\) [rad] & \(\beta\) [rad] & \(\lambda\) [rad] & \(\phi\) [rad] & \(\psi\) [rad] & TDIs to model the same GW signal (see runs 1 and 2). On the contrary, glitch-ignorant runs point to a different conclusion. The resulting posterior depends on the chosen duration and amplitude of each transient (see runs 7, 8, and 9). We find a long-duration, small-amplitude Model A1 glitch massively contaminates the reconstruction of the GW parameters, to a point that the signal cannot be recovered at all. This is shown in Fig. 4, where the glitch-ignorant distribution (red) shows evident issues in the underlying stochastic-sampling procedure. This has to be contrasted with the regularity of the glitch-complete posterior distribution (blue), where instead the parameters of both GW signal and noise transient are successfully recovered. 
In particular, when the glitch is ignored we find that the posterior on the luminosity distance rails heavily against the lower bound of its prior, thus making the GW source reconstruction highly biased, even in a parameter space that largely encloses the posterior of the glitch-complete run. As shown in Fig. 5, a Model A2 glitch with moderate duration and amplitude induces milder biases. Although the posterior support is far from the prior boundaries, the injected values lie outside the 99% credible interval for both mass and spin parameters. For the merger time, the true value lies on the 97% confidence interval of the corresponding marginalized posterior distribution. The injected values of polarization, initial phase, inclination, and source position are within their one-dimensional 90% confidence interval. Equivalent runs for a Model D glitch are shown in Fig. 6. This is a noise transient that overlaps with the GW signal only for a small fraction of a cycle. As expected, we find such a glitch does not significantly impact the measurement of the GW parameters. Finally, we note that our glitch-complete runs do not exhibit significant cross-correlations between the glitch and GW parameters, thus effectively decoupling the inference on the two signals. ### Inference with glitches alone, without GWs We consider all three glitch models presented in Sec. III and inject them separately in the LISA data stream. Results are shown in Figs. 7, 8, and 9 as well as Tables 6 and 7. We perform model selection with different (i) number and order of shapelet components, (ii) number of glitches, and (iii) injection point. In particular, in Tab. 2 we report "strong" evidence in favor of the correct noise-transient model for the selection of the number and order of shapelets; these are discrete parameters we can confidently identify using \(\log_{10}\mathcal{B}_{15}^{j}\) with \(j=16,\ldots,20\). We obtain a "substantial" evidence \(\log_{10}\mathcal{B}_{15}^{21}=0.9\) for selecting the correct number of glitches. Injection points are selected with a "decisive" evidence given by \(\mathcal{B}_{22}^{n}\) with \(n=23,\ldots,27\). All runs point to the same, encouraging result: glitch parameters are confidently reconstructed. In particular, we recover amplitudes across all models (i.e. \(A\), \(A_{0,1}\), \(D_{0,1,2,3}\)) with accuracies of \(1\%-30\%\) at 90% credible level. Glitch-onset times are recovered with fractional accuracy \(\lesssim 0.1\%\). The parameters \(\beta_{i}\)'s in Model D glitches are recovered with an accuracy of 20%. On the other hand, Model A1 glitches exhibit correlation and multimodalities for the joint posterior on \(\beta_{1}\) and \(\beta_{2}\). This is expected given the waveform degeneracy upon exchange of these two parameters, cf. Eqs. (2) and Eq. (4). ## VII Conclusions We presented a parameter-estimation strategy to simultaneously extract GWs from MBHBs and glitches from future LISA data. We developed several models for noise transients inspired by those observed by LPF. Crucially, we point out that dealing with glitches in the frequency domain greatly benefits from expressing the LISA response function (i.e. the TDIs) in terms of acceleration instead of displacement as usually done. Accounting for potential noise transients in the data leads to accurate reconstruction of all GW parameters without significant correlations with the glitch properties. 
On the contrary, ignoring glitches when present in the data might introduce significant systematic biases on the reconstructed parameters of the MBHB. Our analysis shows that the most crucial property is the length of the glitch, with results ranging from a complete loss of the GW signal to a negligible impact. When considering glitches in isolation, our procedure allows for confident identification of their number, location, and morphology in each of the models considered. It is important to stress that all glitch models in our suite have a relatively low number of parameters and these are largely uncorrelated to those of the GW source. The computational overhead of including potential glitches in the signal model is therefore negligible, thus making our approach promising for a future "global fit" procedure. This study is restricted to a single, fiducial GW source as well as glitches are conservatively placed at the time location that maximizes their matches with the GW signal. A broader injection-recovery study over the full MBHB and glitch parameter space is needed to forecast the impact of noise transients on GW signals in the future LISA catalog; this is left to future work. Overall, this paper showcases our readiness to model and precisely recover glitches when present in the LISA data stream, even when overlapping with GW sources of similar duration such as a MBHB. ###### Acknowledgements. We thank Chris Moore, Federico Pozzoli, Eleonora Castelli, Natalia Korsakova, Stas Babak, Martina Muratore, and all Balrog developers for useful comments and inputs. A.S. and D.G. are supported by ERC Starting Grant No. 945155-GWmining, Cariplo Foundation Grant No. 2021-0555, and MUR PRIN Grant No. 2022-Z9X4XS. A.S., D.G., and R.B. are supported by the ICSC National Research Center funded by NextGenerationEU. R.D., M.C., S.V., D.V.,W.J.W. acknowledge funding from MUR under the grant PRIN 2017-MB8AEZ. R.B. acknowledges support through the Italian Space Agency grant _Phase A activity for LISA mission, Agreement n. 2017-29-H.0_. D.G. is supported by Leverhulme Trust Grant No. RPG-2019-350. Computational work was performed using University of Birmingham BlueBEAR High Performance Computing facility and CINECA with allocations through INFN, Bicocca, and ISCRA project HP10BEQ9JB. _Software_: We acknowledge usage of Mathematica[33] and of the following Python[34] packages for modeling, analysis, post-processing, and production of results throughout: Nessai[31], matplotlib[35], numpy[36], scipy[37]. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|} & \multicolumn{6}{c|}{Model A1} & \multicolumn{6}{c}{Model A2} \\ \hline ID & \(A\) [pm/s] & \(\beta_{1}\) [s] & \(\beta_{2}\) [s] & \(\tau\) [h] & \(A_{0}\) [pm/s] & \(\beta_{0}\) [s] & \(\tau_{0}\) [h] & \(A_{1}\) [pm/s] & \(\beta_{1}\) [s] & \(\tau_{1}\) [h] \\ \hline & ✗ & ✗ & ✗ & ✗ & 1.48 & 3600.0 & 11.94 & 3.72 & 3600.0 & 36.94 \\ 29 & ✗ & ✗ & ✗ & ✗ & \(1.6^{+0.6}_{-0.4}\) & \(3735^{+770}_{-543}\) & \(11.94^{+0.03}_{-0.03}\) & \(4.2^{+2.4}_{-1.4}\) & \(3848^{+993}_{-719}\) & \(36.93^{+0.04}_{-0.04}\) \\ \hline & \multicolumn{2}{c|}{0.3} & 21.0 & 20.0 & 12.0 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ 30 & \(0.300^{+0.004}_{-0.004}\) & \(20^{+13}_{-17}\) & \(20^{+13}_{-17}\) & \(12.001^{+0.003}_{-0.003}\) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline & \multicolumn{2}{c|}{2.0} & 900.0 & 400.0 & 12.0 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ 31 & \(2.00^{+0.002}_{-0.02}\) & \(439^{+485}_{-55}\) & \(848^{+785}_{-465}\) & \(12.000^{+0.002}_{-0.002}\) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline & \multicolumn{2}{c|}{100.0} & 7500.0 & 7400.0 & 12.0 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ 32 & \(102^{+5}_{4}\) & \(7453^{+2211}_{-142}\) & \(7453^{+2201}_{-1402}\) & \(12.000^{+0.002}_{-0.002}\) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \end{tabular} \end{table} Table 6: Parameter-estimation results on Model A1 (ID 30-32) and Model A2 (ID 29) glitches. In particular, the former corresponds to glitches inspired by LPF observations, with varying duration and amplitudes. White rows show the injected values and real rows show the recovered median and 90% confidence interval. The posterior distribution for these runs is provided in Figs. 7 and 8. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} & \multicolumn{6}{c|}{Glitch 1} & \multicolumn{6}{c}{Glitch 2} \\ \hline & \multicolumn{6}{c|}{Component 1} & \multicolumn{6}{c|}{Component 2} & \multicolumn{6}{c|}{Component 1} & \multicolumn{6}{c}{Component 2} \\ \hline ID & \(D_{0}\) [pm \(\cdot\) s] & \(\beta_{0}\) [s] & \(\tau_{0}\) [h] & \(D_{1}\) [pm \(\cdot\) s] & \(\beta_{1}\) [s] & \(\tau_{1}\) [h] & \(D_{2}\) [pm \(\cdot\) s] & \(\beta_{2}\) [s] & \(\tau_{2}\) [h] & \(D_{3}\) [pm \(\cdot\) s] & \(\beta_{3}\) [s] & \(\tau_{3}\) [h] \\ \hline & \multicolumn{2}{c|}{2480.0} & 20.0 & 12.0 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ 22 & \(2481^{+64}_{-64}\) & \(20.0^{+0.9}_{-0.09}\) & \(12.0000^{+0.0003}_{-0.0004}\) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline & \multicolumn{2}{c|}{542.0} & 40.0 & 12.0 & 1420.0 & 80.0 & 12.0 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ 15 & \(672^{+536}_{-278}\) & \(69^{+60}_{-29}\) & \(11.997^{+0.005}_{-0.007}\) & \(1336^{+690}_{-578}\) & \(77^{+16}_{-15}\) & \(12.015^{+0.010}_{-0.012}\) & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline & \multicolumn{2}{c|}{5000.0} & 100.0 & 11.111 & 1000.00 & 10.0 & 11.111 & 5000.0 & 40.0 & 13.89 & 20000.0 & 120.0 & 13.89 \\ 28 & \(5021^{+3461}_{-1385}\) & \(101^{+25}_{-26}\) & \(11.111^{+0.004}_{-0.003}\) & \(986^{+489}_{-333}\) & \(10.2^{+2.2}_{-1.3}\) & \(11.111^{+0.003}_{-0.004}\) & \(4975^{+820}_{-821}\) & \(40.3^{+3.8}_{-3.5}\) & \(13.889^{+0.00}_{-0.002}\) & \(19822^{+3942}_{-3902}\) & \(12.0^{+8.4}_{-7.6}\) & \(13.89^{+0.02}_{-0.02}\) \\ \end{tabular} \end{table} Table 7: Parameter estimation results assuming Model D glitches of increasing complexity. White rows show the injected values and real rows show the recovered median and 90% confidence interval. 
In particular, we consider a single-component glitch (ID 22), a glitch with two components (ID 15), and two glitches separated by 200 seconds with two components each (ID 28). The posterior distribution for the latter, most complex case is shown in Fig 9. Figure 4: Posterior distribution in blue (red) corresponding to run ID 7 (10) where a Model A1 glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table 3. When the glitch is included in the inference, each model injected parameter is recovered within the 90% one-dimensional credible region. We do not report notable correlations between glitch and GW parameters. If the glitch is excluded, all MBHB parameters except the initial phase \(\phi_{0}\) and the polarization angle \(\psi\) are systematically biased. In particular, the posterior on the luminosity distance \(d_{L}\) rails heavily against the prior lower bound. Figure 5: Posterior distribution in blue (red) corresponding to run ID 6 (9) where a Model A2 glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table 4. When the glitch is ignored, the MBHB parameters are somewhat biased; see in particular the black-hole masses and spins. When the glitch is included in the recovery process, all model parameters are recovered within their 90% one-dimensional credible regions. We do not report notable correlations between glitch and GW parameters. Figure 6: Posterior distribution in blue (red) corresponding to run ID 8 (11) where a Model D glitch is (not) included in the recovery process. Contours indicate the 50% and 90% credible regions; solid black lines indicate the injected values as listed in Table 5. When the glitch is ignored, the MBHB parameters are very mildly biased. In both cases, all model parameters are recovered within their 90% one-dimensional credible regions. Figure 7: Posterior distributions for short (top, ID 30), medium (bottom left, ID 31), and long (bottom right, ID 32) Model A1 glitches. Injected values and some posterior summary statistics are listed in Table 6. Darker (lighter) shaded areas indicate 90% (50%) credible regions and solid black lines indicate the injected values. The correlation between the fall time \(\beta_{1}\) and the rise time \(\beta_{2}\) is caused by the intrinsic degeneracy between these two parameters, see Eqs. (2) and (4). For the medium-duration glitch, the larger separation between the injected value of \(\beta_{1}-\beta_{2}=500\)s partially breaks it into a strong multimodality. Figure 8: Posterior distributions for two Model A2 glitches (run ID 29). Injected values and some posterior summary statistics are listed in Table 6. Darker (lighter) shaded areas indicate 90% (50%) credible regions and solid black lines indicate the injected values. The lower-left panels show the joint distribution between parameters describing the two glitches, which do not present significant correlations. Figure 9: Posterior distributions for two Model D glitches (run ID 28). Injected values and some posterior summary statistics are listed in Table 7. Each glitch is made of two components with injected values \(\tau_{0}=\tau_{1}\) and \(\tau_{2}=\tau_{3}\) for the first and second glitch, respectively. Darker (lighter) shaded areas indicate 90% (50%) credible regions and solid black lines indicate the injected values. 
Glitch parameters are recovered successfully and cross-glitch correlations are negligible.
2308.02149
Artificial intelligence based load balancing in SDN: A comprehensive survey
In the future, it is anticipated that software-defined networking (SDN) will become the preferred platform for deploying diverse networks. Compared to traditional networks, SDN separates the control and data planes for efficient domain-wide traffic routing and management. The controllers in the control plane are responsible for programming data plane forwarding devices, while the top layer, the application plane, enforces policies and programs the network. The different levels of the SDN use interfaces for communication. However, SDN faces challenges with traffic distribution, such as load imbalance, which can negatively affect the network performance. Consequently, developers have developed various SDN load-balancing solutions to enhance SDN effectiveness. In addition, researchers are considering the potential of implementing some artificial intelligence (AI) approaches into SDN to improve network resource usage and overall performance due to the fast growth of the AI field. This survey focuses on the following: Firstly, analyzing the SDN architecture and investigating the problem of load balancing in SDN. Secondly, categorizing AI-based load balancing methods and thoroughly assessing these mechanisms from various perspectives, such as the algorithm/technique employed, the tackled problem, and their strengths and weaknesses. Thirdly, summarizing the metrics utilized to measure the effectiveness of these techniques. Finally, identifying the trends and challenges of AI-based load balancing for future research.
Ahmed Hazim Alhilali, Ahmadreza Montazerolghaem
2023-08-04T06:13:34Z
http://arxiv.org/abs/2308.02149v1
# Artificial Intelligence based Load balancing in SDN: A Comprehensive Survey ###### Abstract In the future, it is anticipated that software-defined networking (SDN) will become the preferred platform for deploying diverse networks. Compared to traditional networks, SDN separates the control and data planes for efficient domain-wide traffic routing and management. The controllers in the control plane are responsible for programming data plane forwarding devices, while the top layer, the application plane, enforces policies and programs the network. The different levels of the SDN use interfaces for communication. However, SDN faces challenges with traffic distribution, such as load imbalance, which can negatively affect the network performance. Consequently, developers have developed various SDN load-balancing solutions to enhance SDN effectiveness. In addition, researchers are considering the potential of implementing some artificial intelligence (AI) approaches into SDN to improve network resource usage and overall performance due to the fast growth of the AI field. This survey focuses on the following: Firstly, analyzing the SDN architecture and investigating the problem of load balancing in SDN. Secondly, categorizing AI-based load balancing methods and thoroughly assessing these mechanisms from various perspectives, such as the algorithm/technique employed, the tackled problem, and their strengths and weaknesses. Thirdly, summarizing the metrics utilized to measure the effectiveness of these techniques. Finally, identifying the trends and challenges of AI-based load balancing for future research. Load balancing (LB); Artificial Intelligence (AI); Software-defined networking (SDN); Network functions virtualization (NFV); Deep Learning Aided load balancing routing. ## 1 Introduction In recent years, network requirements are changing quickly as network traffic and quality conditions are growing, putting more pressure on the network infrastructure. Traditional network topologies still struggle to adapt to the dynamic nature of modern networks because of their inflexibility. Developers have developed the concept of "Software-Defined Networking" (SDN) to address the need for flexible networks. The concept of SDN was initially proposed by researchers at Stanford University [1]. Service providers have confidence in SDN because it can efficiently manage most network components' functions. SDN provides a network design that separates the control plane from the data plane and allows for a more flexible, scalable, and cost-effective network architecture [2]. A centralized SDN controller is part of the control plane and is responsible for routing packets [3]. At the same time, the data plane is the infrastructural layer, which comprises interconnected forwarding units, such as software-defined networking (SDN) switches. In order to properly apply SDN-based technologies, the networking components need to include the software in their physical infrastructure [4]. Critical technologies supported by the SDN are OpenFlow and Path Computation Element [5]. The Open Networking Foundation (ONF) strongly advises using OpenFlow because it is the standard protocol that decouples the control plane from the switch and offers a communication link between the SDN layers (control and data layers) [5]. Internet Engineering Task Force (IETF) [6] supports Path Calculation Element (PCE) for closed settings like data centers where path computation is transferred to the controller. 
The development of the OpenFlow protocol makes network traffic monitoring effective and efficient and provides a flexible topology [5]. It allows the software to operate on various routers and promotes packet path association across the network. Because conventional networks cannot provide a global view of the network structure and resources, load-balancing techniques were previously not discussed in much detail. Since the controller provides information about the network resources that can be used for optimizing the load, SDN is an ideal environment for implementing load balancing. Load balancing (LB) is a strategy whereby numerous resources are used to handle a single task to prevent network overload [7]. LB generally aims to maximize throughput, minimize response time, and optimize network traffic. In conventional networks, load balancing strategies are notoriously inaccurate, while in SDN load balancing is characterized by accuracy and high performance. The comprehensive study of SDN can be challenging because of its multidimensional nature. Although load balancing can improve SDN performance, more studies are needed on it, prompting the authors to investigate load balancing in SDN further. To our knowledge, this is the first comprehensive Load Balancing (LB) survey in software-defined networking that concentrates on the existing Artificial Intelligence (AI) techniques and their effects on SDN performance. Even though there have been several in-depth studies on SDN LB, such as Ahmad and Khan [8] (2018), Gebremariam et al. [9] (2019), Belgaum et al. [10] (2021), Latah and Toker [11] (2019), Hota et al. [12] (2019), and Belgaum et al. [13] (2020), our work relies on different aspects to provide a new classification and analysis of LB methods. In this paper, we examine the load-balancing strategies, policies, and algorithms currently used in SDN, study the variables that affect load distribution and evaluate its effectiveness, and discuss the significant trends and challenges in SDN load balancing that can help researchers improve SDN performance. The study by Ahmad and Khan [8] (2018) offered a systematic review of current techniques and tools for load balancing in cloud computing. In this regard, some important criteria such as throughput, scalability, fault tolerance, and reaction time are taken into account in the evaluation. However, this paper ignored the articles published between 2016 and 2018. Also, Gebremariam et al. [9] (2019) provided a comprehensive overview of the core AI/ML application fields in SDN and Network Functions Virtualization (NFV)-based networks. The survey classified essential advancements in these fields according to their application trend and determined the AI methodologies used. However, none of this research considered the load balancing aspects in software-defined networks. Furthermore, the objective of Belgaum et al. [10] (2021) is to study two artificial intelligence optimization approaches, namely Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO), and their application to load balancing in Software Defined Networking (SDN). It suggested incorporating a reliable link and node selection approach to enhance the network performance and improve the load. In contrast, in Latah and Toker [11] (2019), three distinct sub-disciplines of Artificial Intelligence (AI) have been investigated: machine learning, meta-heuristics, and fuzzy inference systems.
The work highlights the application areas of AI-based techniques and their improvements in the SDN paradigm. However, a drawback of the mentioned studies is that they considered only some of the available AI methods. Hota et al. [12] (2019) suggested a literature review of load balancing algorithms in cloud computing. The algorithms have been categorized into three groups, namely metaheuristic, heuristic, and hybrid, based on their adopted algorithms. The advantages, disadvantages, and optimization techniques of each algorithm have been outlined. Nevertheless, the study has not taken into account any recently published papers. Finally, Belgaum et al. [13] (2020) suggested a methodical investigation of load balancing techniques and algorithms used by different researchers. Depending on the strategy used to address SDN load balancing difficulties, the articles were divided into two groups: artificial intelligence-based techniques and classical load balancing-based approaches. Similarly, this work focused on the problems that have been raised, the strategies employed, and the solutions suggested. The authors observed that several techniques did not fulfill specific crucial requirements necessary to enhance the efficiency of the existing SDN load balancing methods. Table 1 presents a comparison between our study and previous surveys based on several aspects such as review type, publication year, classification, main topic, future work, and years of reviewed papers. The comparison highlights that only two papers provide comprehensive reviews of both dynamic and static load balancing methods. As a result, our research is the first to concentrate on the impact of existing Artificial Intelligence (AI) based load balancing techniques on SDN performance in the domain of software-defined networking. The main contributions of this work are as follows:

* We comprehensively survey the various Artificial Intelligence (AI) approaches to address load balancing problems and their impacts on software-defined network (SDN) performance.
* We present a detailed evaluation and categorization of the existing AI-based LB mechanisms while highlighting their primary features, including the algorithm or technique, the addressed problem, and the strengths and weaknesses of each methodology.
* We introduce the most commonly used parameters to assess the effectiveness of the proposed techniques.
* Finally, this survey highlights the trends and challenges that need to be addressed in the future as prospects for further research. These prospects could provide researchers with assistance and inspiration for future SDN LB endeavors.

The rest of this article is structured as follows. In section 2, the background of SDN architecture, the key benefits of utilizing SDN, and the concept and structure of load balancing are presented. Section 3 reviews the chosen load balancing methods and classifies them into four categories. Section 4 provides a comparison of the results obtained from these techniques. Section 5 outlines the trends and challenges. Finally, in section 6, the research is concluded.

## 2 Background

This section presents a brief overview of the architecture of SDN and the primary advantages of utilizing SDN. Furthermore, the concept and structure of load balancing are discussed.

### SDN Architecture

SDN architecture represents one of the innovative network designs. It provides a central controller that manages the entire network infrastructure.
The OpenFlow protocol is best suited for implementing SDN architecture. Compared to traditional networks, SDN architecture combined with the OpenFlow protocol provides network operators with a superior technique for processing flows via controllers. In a conventional network, the control and data planes are integrated into the equipment. In contrast, SDN is an architecture that divides the network into a control plane and a data plane (forwarding plane). The control plane, which typically comprises one or more controllers, is the network's brain and controls the whole structure. The data plane, in turn, represents the actual network hardware, such as routers, switches, and middleboxes, that is in charge of transmitting data [14]. SDN architecture is organized into three principal planes based on the Open Networking Foundation (ONF). The architecture is depicted in Figure 1.

\begin{table}
\begin{tabular}{l l c c c c c}
\hline
Authors & Review type & Publication year & Classification & Main topic & Future work & Years of reviewed papers \\
\hline
Ahmad and Khan [8] & Systematic Literature Review (SLR) & 2018 & No & Cloud & Not presented & 2010-2015 \\
Gebremariam et al. [9] & Survey & 2019 & Yes & SDN and NFV & Presented & 2016-2018 \\
Belgaum et al. [10] & Survey & 2021 & No & SDN & Presented & 2016-2019 \\
Latah and Toker [11] & Comprehensive overview & 2019 & Yes & SDN & Not presented & 1985-2019 \\
Hota et al. [12] & Comprehensive Review & 2019 & No & Cloud & Not presented & 2008-2016 \\
Belgaum et al. [13] & Systematic Review & 2020 & Yes & SDN & Presented & 2015-2019 \\
Our work & Comprehensive Survey & & Yes & SDN & Presented & 2017-2023 \\
\hline
\end{tabular}
\end{table}
Table 1: Related studies in the field of load balancing

Figure 1: SDN architecture

* **Data plane:** It represents the bottom layer in the SDN topology and is considered the network infrastructure. This layer includes the network forwarding equipment, such as routers, physical/virtual switches, and access points. The main job carried out in this layer is packet forwarding in accordance with predetermined guidelines. These rules are defined and installed in the flow tables of the switches by the SDN controller [14][15].
* **Control plane:** Represents the intermediate layer in an SDN architecture that provides control functionality through software-based SDN controller(s) to manage the network forwarding behavior. The controller manages the network switches, which are responsible for transmitting packets according to specific instructions. Also, it creates an abstract and centralized view of the underlying infrastructure for the higher layer [14][16]. The controller uses the southbound API combined with the OpenFlow protocol to communicate with the network devices. A part of the SDN controller called the load balancer, located at the logical central decision point, is used to apply the load balancing algorithms [16][17].
* **Application plane:** This layer includes one or more end-user applications which use the abstract and centralized view of the underlying infrastructure to demonstrate their internal decision-making process. In the application implementation process, the programmers use the northbound API to communicate with the SDN controller. This API serves as a software bridge between SDN applications operating on the network and controller platform components [14][15].
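To make the plane separation above concrete, the following toy sketch models a logically centralized controller that keeps a global view of per-switch flow tables, exposes a northbound call for applications, and pushes match/action rules southbound. All class and method names (ToySwitch, ToyController, push_flow, apply_policy) are illustrative assumptions, not a real controller or OpenFlow API.

```python
# Minimal sketch of the three-plane idea (assumed names, not a real OpenFlow API):
# the controller holds the global view, applications talk to it "northbound",
# and it pushes match/action rules "southbound" into per-switch flow tables.

class ToySwitch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # list of (match, action) rules

    def install_rule(self, match, action):
        self.flow_table.append((match, action))


class ToyController:
    """Logically centralized control plane with a global network view."""

    def __init__(self, switches):
        self.switches = {sw.name: sw for sw in switches}

    # Southbound: program a forwarding rule on one data-plane device.
    def push_flow(self, switch_name, match, action):
        self.switches[switch_name].install_rule(match, action)

    # Northbound: an application asks for a policy across the network.
    def apply_policy(self, match, path):
        for hop, out_port in path:               # path = [(switch, out_port), ...]
            self.push_flow(hop, match, {"output": out_port})


if __name__ == "__main__":
    s1, s2 = ToySwitch("s1"), ToySwitch("s2")
    ctrl = ToyController([s1, s2])
    ctrl.apply_policy({"dst_ip": "10.0.0.2"}, [("s1", 2), ("s2", 1)])
    print(s1.flow_table)   # [({'dst_ip': '10.0.0.2'}, {'output': 2})]
```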
### SDN Advantages SDN provides several advantages to overcoming the challenges presented by conventional network architectures. One of the essential benefits is network programmability. This feature allowed the organizations to have programmatic control over their networks and to grow such networks without affecting performance, reliability, or the quality of the user experience. The SDN eliminates the infrastructure layer complexity and adds visibility for services and applications, thus simplifying network management operations. Network administrators are not required to use custom policies and protocols for the network devices individually in SDN architecture. Simultaneously, an independent controller that is not part of the actual network hardware carries out the control-plane operations. Using SDN enables the network operators to avoid congestion and reduces the complexity of traffic engineering [2]. The scalability issues are significant for Data centers, especially as the number of virtual machines (VMs) grows and they move from one place to another. Therefore, SDN network virtualization presents a significant chance for large-scale data centers. This functionality allowed the network administrators to run Layer 2 traffic across Layer 3 overlays and isolate the MACs of the infrastructure layer devices, making it easier to transfer and create virtual network machines. Moreover, service providers can use the SDN to combine all the network components, such as servers, facilities and clouds, whether physical or virtual, into a single logical network. Consequently, every consumer will have their own personal perspective of the service provider [2][14]. Network device configuration and trouble-shooting with SDN can be accomplished through a single controller; thus, it became easy to add and configure devices when needed to grow the network. By offering a programmable platform, SND encourages those interested in networks to use new protocols and ideas and test them in this environment [1][2][14]. ### Load Balancing Mechanisms in SDN LB technologies are typically employed to enhance the overall performance of distributed systems by effectively spreading incoming clients, requests, and jobs among the available network resources [15]. This technique can be implemented programmatically or in physical equipment to improve the response time, boost throughput, and keep the network from being overloaded.. Integrating the SDN architecture with virtual resources has the potential to enhance energy efficiency and optimize load distribution in the Internet of Multimedia Things (IoMT) [18]. There are many ways to apply load balancing mechanisms, such as static, dynamic or a combination of both [19]. The static methods depend on the system's preliminary information essentially. Static LB mechanisms might be ineffective for all networks due to unexpected user behavior and the immutable load balancer rules. On the other hand, dynamic methods can distribute loads more efficiently than static methods because they use load balancers' pre-programmed patterns [20]. A proper load-balancing approach could effectively reduce response time and packet loss ratio, improve resource utilization, and overload. In addition to this, it has the potential to boost scalability, reliability, the packet delivery ratio, and the longevity of the network. 
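As a rough illustration of the static/dynamic distinction described above, the sketch below contrasts a static round-robin policy, which ignores network state, with a dynamic least-load policy that consults current server load before dispatching a request. The server names and load figures are hypothetical.

```python
import itertools

servers = ["srv1", "srv2", "srv3"]
current_load = {"srv1": 12, "srv2": 3, "srv3": 7}   # hypothetical live metrics

# Static policy: fixed rotation, ignores the actual state of the servers.
_rotation = itertools.cycle(servers)
def round_robin(_request):
    return next(_rotation)

# Dynamic policy: consults the load reported to the controller at dispatch time.
def least_load(_request):
    return min(servers, key=lambda s: current_load[s])

for req in ["r1", "r2", "r3"]:
    print(req, "static ->", round_robin(req), "| dynamic ->", least_load(req))
```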
The load-balancing methods need to be analyzed and compared to determine the most effective solution to the load-balancing problem and to identify each mechanism's benefits and drawbacks [20]. Different parameters, known as qualitative parameters, such as latency, energy consumption, packet delivery ratio, scalability, etc., should be considered during the comparison process to ensure reliable results [19].

## 3 Review of SDN Load Balancing Based on Artificial Intelligence

These techniques apply meta-heuristic approaches to address real-world challenges. Artificial intelligence (AI) encompasses a variety of topics, including neural networks, natural language processing, deep learning, and AI-based decision-making approaches such as search, planning, and decision theory. In software-defined networking (SDN), load balancing approaches based on artificial intelligence improve learning capabilities and support decision-making. In this section, the different mechanisms that researchers propose are reviewed in terms of implementation and evaluation metrics. Also, the paper lists the characteristics of the employed load-balancing strategies, including the algorithm or technique, the addressed problem, and the strengths and weaknesses of each methodology. Moreover, existing SDN load balancing solutions are classified into four main categories, each with sub-categories based on the technology used. Finally, the section includes an explanation of each technique's application. Figure 2 shows the four main SDN LB categories and their sub-categories.

Figure 2: Classification of AI-based SDN load balancing methods

### SDN LB ALGORITHMS MECHANISM

This section reviews the implementation of the LB algorithms and classifies the proposed techniques into four main categories based on the nature of the algorithm used. Figure 3(a) shows the distribution of the reviewed articles by year of publication from 2017 to 2022, and Figure 3(b) shows the number of works in each category.

Figure 3: Reviewed articles publication years and algorithms

#### 3.1.1. Nature Inspired Based Load Balancing Methods

Nature inspired is a term used to describe classes of meta-heuristic algorithms that resemble or are inspired by natural phenomena explained by the natural sciences [21]. This approach increases the performance of SDN load balancing in terms of reduced overall waiting time, response time, and completion time for resources. The authors in [22] presented a dynamic load balancing solution based on Particle Swarm Optimization (PSO). This study presents an intelligent LB method for controlling resources and running applications on schedule in a cloud environment. In this work, a fitness function was developed to balance loads quickly and efficiently. The authors assert that, because of their technique, the response time decreased, throughput improved, and customer satisfaction reached the maximum anticipated level. However, the proposed method is only effective for applications with relatively limited data. Similarly, in [23], the researchers utilized Type-2 Fuzzy-based Particle Swarm Optimization (TFPSO) to determine the optimal under-loaded local controllers. Also, to predict the future load of local controllers, a Markov Chain Model (MCM) is applied. Moreover, a support vector machine (SVM) categorizes the traffic according to its level of importance. The experiment results showed that without priority-based flow classification, this method would overload the network.
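The PSO-based schemes above all revolve around a fitness function that scores candidate assignments of flows to servers or controllers. The sketch below uses one generic choice of fitness (the standard deviation of per-server load) inside a plain PSO loop; it is a minimal illustration of the idea, not the specific fitness function or encoding used in [22] or [23], and the flow demands are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
flow_sizes = rng.uniform(1, 10, size=20)        # hypothetical flow demands
n_servers, n_particles, iters = 4, 30, 100

def fitness(position):
    """Load imbalance: std. dev. of per-server load for a candidate assignment."""
    assign = np.clip(position.astype(int), 0, n_servers - 1)
    loads = np.bincount(assign, weights=flow_sizes, minlength=n_servers)
    return loads.std()

# Each particle encodes, per flow, a (continuous) server index in [0, n_servers).
pos = rng.uniform(0, n_servers, size=(n_particles, flow_sizes.size))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_servers - 1e-9)
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best load imbalance found:", round(pbest_val.min(), 3))
```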
Research by [24] presented a dynamic approach that uses the Salp Swarm Optimization algorithm (SSOA) and chaotic maps to enhance the optimizer performance. Their technique dynamically establishes the best possible link between switches and controllers and calculates the optimum number of controllers to use. The controller maintains data regarding the global perspective of the whole network; thus, it allows for the dynamic choice of other routes on demand. In addition, it maintains information for calculating link utilization, checks the latency on the link, and stores load data. However, in the testing process, some QoS indicators were not taken into account. Furthermore, the authors in [25] emphasize hybridizing the Bacterium Foraging Algorithm (BFA) and PSO to enhance solutions to the QoS multicast routing problem. PSO's ability to transmit social information can be paired with BFA to boost their exploration and exploitation capabilities simultaneously. The proposed approach produces delay-constrained connections to each multicast destination. The bacterium foraging algorithm (BFA) constructs a multicast tree from the collection of minimum-latency paths. To maintain a fair balance between the algorithm's intensification and diversification, the authors dynamically changed PSO's parameters to satisfy the global search and BFO, reducing delay and providing an ideal solution. Nonetheless, certain supplementary factors, such as the mobility and energy limitations associated with mobile devices/sensors, need to be taken into account in addition to the Quality of Service (QoS) parameters. The researchers in [26], [27] combined two AI load-balancing strategies to overcome the SDN LB problem. In [26], the author examines two strategies, Ant Colony Optimization (ACO) and PSO. In addition, employing a dependable connection and node to design the path to the target node may improve speed and network load balancing. The authors present a conceptual framework for SDN futurology by analyzing node and network resilience to balance the load and improve QoS. Furthermore, the paper presented by [27] used a Genetic Algorithm (GA) with ACO to handle load imbalance and convergence lag. In the second step of the search, GA is utilized to decrease the search area, allowing the ACO algorithm to find the trajectories of the LB streams correctly. With the proposed method, the RTT and the packet-delivery rate are significantly enhanced compared to the Round Robin (RR) and ACO algorithms. Similarly, in [28], researchers propose software-defined wireless bacteria-inspired networks (SDWBIN), created by combining GA and BIN. They claimed their method could determine the best route for traffic engineering networks while also satisfying the quality-of-service requirements. In addition, their solution provides a reliable QoS architecture that decreases network end-to-end communication delays while simultaneously improving network performance. However, the proposed methods require high processing time, overload the network, and, in the case of GA, only a few QoS factors were considered. Furthermore, the ACO-based algorithms need considerable time to update both forward and backward. In addition, a study by [29] proposed a method based on the GA to distribute the controllers' loads in SDNs effectively.
This strategy used a configurable threshold to recognize the overloaded controller and carefully select the right moment to migrate switches based on different criteria to ensure the best result regarding load balancing. The Java algorithm determines the importance of load imbalance and the number of migrations, which are the criteria this work relies on to select the switch-controller pairs. The results showed an improvement in throughput, number of migrations, and response time compared to other techniques, with these parameters improved by 47.25%, 67.98%, and 9.38%, respectively. Nevertheless, the work does not consider energy consumption, and more efficient predictive methods could be used to identify the threshold value. Table 2 shows a comparison of the nature-inspired LB algorithms regarding different aspects, including the algorithm/technique used, the addressed problem, and each method's strengths and weaknesses.

\begin{table}
\begin{tabular}{l l p{4cm} p{4.5cm} p{4.5cm}}
\hline
Authors & Algorithm / Technique & Addressed problem & Strength & Weaknesses \\
\hline
[22] & PSO & Dynamic resources and on-demand user application requirements make cloud application load balancing complicated; scalability and load balancing & Reduces reaction time; throughput increases; utilizes more resources & Only effective for applications with relatively limited data \\
[23] & TFPSO & Scalability and load balancing & Reduced latency; improved load balancing; improved throughput & They used priority-based flow classification \\
[24] & SSOA & Multi-controller distributed & Improved execution time and reliability & Other QoS indicators not taken into account \\
[25] & PSO with BFO & Multicast routing under multiple constraints & Delay has been reduced; reduces cost & QoS multicast over MANET not taken into account \\
\hline
\end{tabular}
\end{table}
Table 2: Nature inspired based load balancing methods and their properties

#### 3.1.2. Machine Learning Based Load Balancing Methods

Several studies have recommended using machine learning (ML) methods in conjunction with the SDN architecture to achieve enhanced routing performance [30]. In the context of Knowledge-Defined Networking (KDN), the article by [31] explains how to provide load balancing by using an Artificial Neural Network (ANN). The KDN uses artificial intelligence to regulate computer networks; its knowledge plane includes comprehensive network analysis and telemetry. The suggested technique, which uses an ANN, forecasts the network performance based on the latency and traffic metrics across jobs to choose the least-loaded path. In the same area, the authors in [32] proposed an SDN-based ANN-LB method. Several parameters have been used by the suggested technique to improve transmission efficiency, such as overhead, delay, hop count, packet loss, trust, and bandwidth ratio. Based on these parameters, the algorithm balances the network load by analyzing network congestion and choosing the least-laden transmission path. The evaluation results showed an improvement in latency, bandwidth utilization, and packet loss rate. However, the proposed methods in both works need more processing time and resources; one is suited mainly to medium-sized networks and the other may converge to a local optimum.
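To illustrate the ANN-based path selection idea behind [31] and [32] at toy scale, the sketch below trains a small multilayer perceptron on synthetic per-path features (delay, hop count, loss, utilization) and forwards on the path with the lowest predicted load. The features, training data, and model size are assumptions made for illustration rather than the cited designs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic training data: per-path features -> observed load (illustrative only).
X = rng.uniform(0, 1, size=(500, 4))          # [delay, hop_count, loss, utilization]
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + 0.9 * X[:, 3] + rng.normal(0, 0.02, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# Candidate paths between a source and destination, described by the same features.
candidate_paths = {
    "path_A": [0.2, 0.3, 0.05, 0.8],
    "path_B": [0.4, 0.2, 0.01, 0.3],
    "path_C": [0.1, 0.6, 0.10, 0.6],
}
predicted = {p: float(model.predict(np.array([f]))[0]) for p, f in candidate_paths.items()}
best = min(predicted, key=predicted.get)
print(predicted, "-> forward on", best)
```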
A Back Propagation Artificial Neural Network (BPANN) has been applied by [33], where it is used to determine the optimal virtual machines (VM) based on factors such as CPU and memory usage and response time. The BPANN is triggered by the controller when a server agent, included in dynamic Agent-Based Load Balancing (DA-LB) architecture, assigns a request to an overloaded VM. The proposed load balancing technique uses SDN's global visibility to transfer VMs in the data center network efficiently. In addition, this technique enhances overall network efficiency and performs well for data transfer, according to the results. The suggested approach optimizes resource usage by increasing processing speed and predicting the loaded VMs in heavy load scenarios. Similarly, a paper by [34] trained a Back Propagation Artificial Neural Networks (BPANN) and K-Mean cluster to predict if a user will access networking equipment seamlessly in the future. The BPNN in use featured three hidden nodes, one output node, and four input nodes. This structure allows the load-balancing method to be implemented under actual service conditions. In this experiment, the authors evaluate the proposed technique's delay times and balancing circumstances with alternative flow forecast algorithms. However, the main disadvantage of this technique, it ignored some services that could impede the process of finding the actual shortest path. On the other hand, the researchers in [35] introduce a novel intelligent SDN-based architecture and a new data transmission optimization technique. The proposed method performs the following tasks; identify the path and required node and predict the traffic flow. The authors applied deep neural networks (DNNs) and Q-learning to identify the optimal route. The experiment results showed that DNNs was the most efficient method for handling the network complex traffic compared to other techniques. Still, essential factors such as scalability, topology changes, loss of packets not take into consideration. Many research studies have introduced Deep Learning (DL) methods to solve the SDN LB problems. The research by [36] suggests a DL approach for load-balancing SDN-based Data Center Networks (DCNs). The authors rely on the connections' varying load levels to train the DL network. The reaction time of the DL approach for load balancing is compared to that of several ML techniques, including ANN, SVM, and logistic regression (LR). The experimental findings show that the ANN and DL algorithms have faster reaction times than the SVM and LR techniques. Furthermore, DL accuracy is superior to ANN accuracy. The study by [37] described a method as a minimal workload routing algorithm that would choose the network path with the fewest users currently using it. When the likelihood of a system transition information is unknown, the Q learning approach is used to learn and explore that knowledge to provide a near-optimal scheduling node strategy. Similarly, Deep Reinforcement Learning (DRL) is used in this article [38] to properly load balance requests sent to services inside a data center network. Consequently, a strategy capable of dynamically adapting to changing request loads, including changes in the capabilities of the underlying infrastructure. Furthermore, an SDN framework based on machine learning has been proposed by [39]; this framework employs a novel DRL technique known as Deep Deterministic Policy Gradient (DDPG) to improve the routing process in software-defined networks. 
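Before returning to the DRL frameworks, the Q-learning idea used by [35] and [37] (learning which path carries the least workload from repeated observations) can be reduced to a single-state, bandit-style update, sketched below with hypothetical per-path loads; the cited works use richer state representations and reward designs.

```python
import random

random.seed(0)
paths = ["p0", "p1", "p2"]
true_mean_load = {"p0": 0.7, "p1": 0.3, "p2": 0.5}   # hypothetical, unknown to the agent

Q = {p: 0.0 for p in paths}      # estimated value (negative load) of each path
alpha, epsilon = 0.1, 0.1

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-looking path, sometimes explore.
    if random.random() < epsilon:
        p = random.choice(paths)
    else:
        p = max(Q, key=Q.get)
    observed_load = true_mean_load[p] + random.uniform(-0.05, 0.05)
    reward = -observed_load
    Q[p] += alpha * (reward - Q[p])     # single-state Q-learning / bandit update

print({k: round(v, 3) for k, v in Q.items()})   # p1 should score highest (least loaded)
```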
DROM, DDPG Routing Optimization Mechanism, was proposed to provide real-time, regional, and individual control and administration of network data. The evaluation results showed that this technique is characterized by durability, stability and high productivity, and it has the potential to improve network performance by providing more stable and advanced routing services than currently available solutions. The work by [40] also employs DDPG to select the better path between nodes in a network. The proposed architecture improves DDPG's empirical-playback mechanism's random extraction technique by sampling the experience pool with the SumTree structure. It can increase the convergence rate by extracting a more relevant experience for network updating with more likelihood. Compared to other RL algorithms, the suggested technique improves the SDN throughput with less training time. Nevertheless, the performance of the suggested methods is decreases in case of node failure and it is not treated as a distinct network topology during the experimentation phase. In addition, [41], [42], and [43] all employ DRL to enhance the quality of service (QoS) metrics of a network. The authors of [41] present a traffic control method close to optimum to optimize the QoS in a hybrid SDN. In addition, an SDN migration sequence is examined to enhance control traffic and improve the optimization results. After that, the DRL method is implemented in the hybrid SDN to solve the problem of split table routing. Finally, the authors test the technique using open-source traffic information. However, both [42], [43] advocated SINET to improve network routing. For optimal network performance, SINET assigns direct control of numerous key routing nodes to a DRL agent that employs dynamic routing strategies. The experiment done on a network of 82 nodes showed that the proposed method lowered network completion time by 32% and was more resistant to topology changes than earlier DRL-based systems. In the same way, the authors in [44] present a DRL-based technique to generate an SDN route based on human self-learning. This proposal employs deep learning, specifically Bio-Inspired RBM for Bio-Inspired Deep Belief Architecture (BDBA), to find the optimal solution. Basic RBM is included in this bio-inspired approach, as is self-learning based on the limbic system's emotional learning. Every Bio-Inspired RBM uses the reward function R to capture environmental dynamics as network regulations. Otherwise, [45] have provided a Deep Learning Aided load balancing routing approach that combines Queue Utilization with machine learning to control the high and unbalanced load on the router. In order to alleviate the effects of network congestion, they have created a hybrid strategy combining queueing and neural networks and have employed the principal component analysis method to minimize the network's dimensions. In comparison, the study by [46] offers Critical Flow Rerouting-Reinforcement Learning (CFR-RL), a technique based on Reinforcement Learning that automatically develops a strategy to choose critical flows for any given traffic matrix. By creating and solving a basic Linear Programming (LP) problem, CFR-RL reroutes these selected vital flows to balance the network's link use. Still, superior efficiency of using DRL could be only achieved by rerouting a tiny portion of total traffic as the evaluation findings demonstrated. 
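The core move in CFR-RL, selecting a handful of critical flows and re-placing only those to balance link utilization, can be caricatured without any learning. The greedy sketch below reroutes flows on the most utilized link whenever an alternative path lowers the maximum utilization; it is a hand-written stand-in for the learned selection policy, with a made-up topology, capacities, and demands, and without the LP rerouting step of the paper.

```python
# Toy network: link capacities, flows with a current and an alternative path.
capacity = {"l1": 10, "l2": 10, "l3": 10}
flows = {
    "f1": {"demand": 6, "paths": [["l1"], ["l2", "l3"]], "use": 0},
    "f2": {"demand": 5, "paths": [["l1"], ["l3"]], "use": 0},
    "f3": {"demand": 4, "paths": [["l2"], ["l2"]], "use": 0},
}

def link_utilization():
    load = {l: 0.0 for l in capacity}
    for f in flows.values():
        for l in f["paths"][f["use"]]:
            load[l] += f["demand"]
    return {l: load[l] / capacity[l] for l in capacity}

def max_util():
    return max(link_utilization().values())

# Greedy stand-in for critical-flow selection: reroute a flow crossing the
# hottest link whenever its alternative path lowers the maximum utilization.
for _ in range(len(flows)):
    util = link_utilization()
    hottest = max(util, key=util.get)
    before = max_util()
    for name, f in sorted(flows.items(), key=lambda kv: -kv[1]["demand"]):
        if hottest in f["paths"][f["use"]]:
            f["use"] = 1 - f["use"]
            if max_util() < before:
                break
            f["use"] = 1 - f["use"]   # revert if no improvement

print("link utilization after rerouting:", link_utilization())
```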
However, authors [47] propose an approach that merges Software Defined Network (SDN) architecture and machine learning technologies. They apply three supervised learning models to categorize data traffic in a software-defined network architecture: Support Vector Machine (SVM), nearest centroid, and Naive Bayes (NB). Then network traffic is studied by capturing traffic traces and creating flow characteristics that pass to the reinforcement learning classifier for prediction. Alternatively, the authors in [48], [49], [50] determine the degree of load congestion using a Bayesian network and a Long Short-Term Memory (LSTM), respectively. In [48], the authors suggest a load-balancing strategy for IoT controllers that mimics the SDN architecture of conventional data centers. The Bayesian network predicts load congestion by integrating reinforcement learning with self-adjusting parameter weight to balance the load and improve network security and stability. Preemptively balancing the SDN control plane load proposed by [49] facilitates low-latency network flows. Firstly, they anticipate SDN controller demand to prevent imbalances and arrange data plane migrations. Then, the authors optimize migration activities to balance load with delay. In the first step, two prediction models were built using ARIMA and LSTM to forecast the SDN controller's load. The two models were compared regarding the accuracy and predicted mistakes. In the second step, the authors formalized the problem as a nonlinear programming model with binary variables, verified its NP-complete, and suggested a DRL as a solution. Also, research by [50] proposed a dynamic architecture that relies on predicting the link state to balance the load in an SDN efficiently and solve controller-switch transmission delay. The architecture works as follows; the link-state values are predicted using the LSTM algorithm, and then Dijkstra weight is used to find the most efficient route between hosts based on those values. The experimental results showed that the proposed method improves load balancing by 23.7% compared to Open Shortest Path First (OSPF) and 11.7% compared to Q-Learning in the GEANT network. Moreover, it solved controller-switch transmission delay. On the other hand, the authors in [51] devised a mechanism that routes TCP/UDP packet traffic based on numerous factors. They performed K-Means and DBSCAN based on twelve selected factors and determined the appropriate number of clusters to send the request to the appropriate servers. A multiple regression-based searching (MRBS) method has been proposed by [52] to select the best server and path in the data center networks. This method works under high-load situations to enhance network performance. The combination of regression analysis and heuristic algorithm, applied to server statistics information such as load, response time, bandwidth, and server usage, allows MRBS to choose the best server to handle the anticipated traffic. MRBS improves server utilization to 83% compared to traditional algorithms while decreasing delay and response time by over 45%. However, the proposed methods in these works suffer from the following; node migration needs to be considered in case of fault, some QoS parameters must be considered in the evaluation stage, and algorithms must be evaluated in large networks. Table \({}^{\nabla}\) shows a comparison of machined learning based LB algorithms regarding different aspects. 
\begin{table} \begin{tabular}{p{42.7pt} p{85.4pt} p{85.4pt} p{85.4pt} p{85.4pt} p{85.4pt}} \hline Authors & Algorithm / Technique & Addressed problem & Strength & Weaknesses \\ \hline [31] & ANN & Load balancing & \(\bullet\) Improved load balancing & \(\bullet\) Used for medium-sized network deployments \\ [32] & ANN & High volumes of traffic which causing unneeded delay & \(\bullet\) Improved load balancing & \(\bullet\) The result is a local balancing & optimum \\ & & & \(\bullet\) Effective communication & \\ [33] & DA-LB & Efficiently use of existing Cloud resources & \(\bullet\) Enhanced overall network efficiency and performance & \(\bullet\) Live migration not supported \\ [34] & Neural networks and k-means & The extra latency is caused by load balancer packages. & \(\bullet\) Improved load balancing & \(\bullet\) This technique ignored energy savings. \\ [35] & Q-learning algorithm with Deep Neural Networks (DNNs) & Efficient path selection for better load balancing & \(\bullet\) Load-balancing improvements & \(\bullet\) Overhead \(\bullet\) Only a few factors were used \\ & & & \(\bullet\) optimum in path selection & \(\bullet\) Only a few factors were used \\ [36] & DL & Balance the load among servers & \(\bullet\) Improved load balancing & \(\bullet\) Different typologies not considered \\ & & & \(\bullet\) Improved response time & \(\bullet\) Other QoS indicators not taken into account \\ [37] & Q learning & Multiple controllers load balancing & \(\bullet\) Reduced latency & \(\bullet\) Overloading \\ & & & \(\bullet\) Improving throughput & \(\bullet\) The result is a local optimum \\ [38] & DRL & Manage various service requests & \(\bullet\) Improved load balancing & \(\bullet\) One point failure \\ & & & & \(\bullet\) Reduce host CPU’s computational power & \\ [39] & DRL & Uniform route optimization optimization & \(\bullet\) Reduces delay & \(\bullet\) Other QoS indicators not taken into account \\ & & & & \(\bullet\) Improving throughput & \(\bullet\) The result is a local optimum \\ [40] & DRL & Traffic engineering throughput issue & \(\bullet\) Improve the convergence rate & \(\bullet\) One point failure \\ & & & & \(\bullet\) Better performance and stability & \(\bullet\) Other QoS indicators not taken into account \\ [41] & Bio-Inspired DRL & Hybrid SDN routing policy & \(\bullet\) Improved communication & \(\bullet\) Handling the scalability problem not taken into account \\ & & & & \(\bullet\) Delay has been reduced & \\ [42] & SINET & Flow routing performance-optimizing & \(\bullet\) Improved the robustness and scalability & \(\bullet\) Topology changes not taken into account \\ [43] & SINET & Routing optimization & \(\bullet\) Reduced flow completion time & \(\bullet\) Hierarchical node not considered \\ & & & & \(\bullet\) Better robustness & \\ [44] & DRL & Distributed controller failure & \(\bullet\) Improved QoS, security and network policy & \(\bullet\) Compared with traditional approach only \\ [45] & Machine learning aided load balance & High and unbalanced load on the router. 
& \(\bullet\) Reduced loss of packets & \(\bullet\) Other QoS indicators not taken into account \\ & & & & \(\bullet\) Delay has been reduced & \(\bullet\) Increase loss of packets \\ [46] & CFR-RL & Network disruption impact & \(\bullet\) Better balance link utilization & \(\bullet\) Delay not taken into account \\ & & & & \(\bullet\) Improved network & \(\bullet\) This technique ignored \\ \hline \end{tabular} \end{table} Table 3: Machine learning based load balancing methods and their properties #### 3.1.3 Mathematical Model Based Load Balancing Methods The SDN can be modeled mathematically using algebra, a formal model of transmitting shared resources, or an analytical model employing network calculus [53, 54]. To perform load balancing in software-defined Wi-Fi networks, the authors of [55] have proposed a multi-controller SDN architecture that includes global and local controllers. The global controller utilizes the Analytical Hierarchical Process (AHP) approach to allocate the flow to each controller, where different limitations have been considered based on the local controllers' current state. The global controller is in charge of handling cluster creation and is also responsible for controlling the local controllers, while the local controller is in charge of the local device load, and clustering is regularly updated. Nonetheless, few parameters were used to implement AHP and evalute its performance. Alternatively, Rounding based Route Joint Deployment (RRJD) algorithm employed by [56, 57, 58, 59, 60]. The authors in [56] focused on hybrid routing as a joint optimization issue and first demonstrated that it was an NP-Hard issue. After that, a RRJD method is applied to fix the issue and boost the network's speed. Likewise, in this research [57], factors such as the control link limitation and other data plane constraints in SDNs were considered to improve QoS. They demonstrate that NP-Hardness exists by explaining the problems of low-latency route deployment and the LB of the control link. In addition, two solutions are presented for each issue with bounded approximation factors and implement the suggested approaches on a tested SDN. Similarly, A load load-balancing routing mechanism that works on both links and controllers was proposed by [58] (LBR-LC) to solve the NP-Hard overload issue in an SDN. The approach based on rounding has been presented as a solution to the problem since it offers greater scalability and reduces the load. The suggested technique lowers the maximum controller response time by 70% compared to the existing solution but with a 3% increase in the link load. Also, the work by [59] presents a revolutionary SDN-MPLS method with minimal complexity. This method advances bandwidth-restricted routing in mobile networks by balancing network load, route length, energy savings, and network complexity. Research by [60] examines the issue of how to manage network traffic unpredictability while doing load balancing on commodity switches without the need for extra hardware or software. The article designs and implements the PrePass method, which combines wildcard entries for fine aggregate flows to satisfy the flow table size constraint with reactive routing for newly incoming flows to achieve load balancing despite uncertainties in traffic. The authors present a practical method based on randomized rounding and demonstrate that, in most situations, it may lead to constant bi-criteria approximation. 
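Several of the rounding-based schemes above ([56]-[58], [60]) share one basic step: turning a fractional, LP-style traffic split into an integral per-flow path choice. The sketch below shows only that randomized-rounding step, with invented fractional shares; the approximation guarantees derived in those papers are not reproduced here.

```python
import random

random.seed(42)

# Hypothetical fractional solution: per flow, the share of traffic an LP-style
# relaxation would place on each candidate path (shares sum to 1).
fractional_split = {
    "flow1": {"pathA": 0.7, "pathB": 0.3},
    "flow2": {"pathA": 0.2, "pathB": 0.5, "pathC": 0.3},
    "flow3": {"pathB": 1.0},
}

def randomized_rounding(split):
    """Choose exactly one path per flow, with probability equal to its fractional share."""
    integral = {}
    for flow, shares in split.items():
        paths, weights = zip(*shares.items())
        integral[flow] = random.choices(paths, weights=weights, k=1)[0]
    return integral

print(randomized_rounding(fractional_split))
```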
Nonetheless, some Quality of Service (QoS) metrics have neglected, and these techniques required further real-time traffic monitoring and classification of network data. The fuzzy-logic theory was introduced by [61, 62] to solve SDN LB problems. According to [61], a fuzzy function initially examines the parameters that impact server load and then evaluates the virtual server's load. Based on this, SDN control is employed to keep track of server data throughout the whole network and to implement virtual server tasks. The load and energy usage are dynamically balanced when servers freeze and restart. In the same vein, the authors of [62] evaluated network performance using different measures to create a similar technique that ensured load balancing and enhanced the performance of an SDN. However, the proposed method takes more time to restore the traffic; therefore, some packets will be lost during that period. A study by [63] introduced a new method for clustering in WSN-based IoT systems, which utilizes Fuzzy C-Means (FCM). The method involves using FCM to create clusters and reducing energy usage in each cluster to determine the optimal Cluster Head (CH). Instead of constantly replacing CHs for dynamic clustering, the study proposes using an energy threshold to determine whether a CH is still functional based on its current energy level, which can extend the lifespan of the sensor network. The suggested FCMDE has the potential to decrease energy consumption and improve durability while keeping expenses low. However, employing metaheuristic optimization methods can improve the CH selection function. In addition, other QoS indicators are not considered during the experiment process. Similarly, the article [64] presented an IoT protocol called EFUCCSS, which is an energy-efficient based on fuzzy logic and unequal clustering with sleep schedules, and uses WSN. This protocol aims to increase the network's longevity and decrease energy consumption by employing clustering, scheduling, and data transmission techniques. The proposed protocol used Fuzzy C-Means to create unequal clusters, which reduce the distance data travels and balance energy usage. The cluster heads selection process used a fuzzy logic system that takes input variables such as gateway distance, remaining energy, and centrality. Cluster heads (CHs) collect data from other cluster members, consolidate it, and transmit it to the gateway (GW) in a single hop. A sleep scheduling strategy is employed between the coupled nodes to reduce the number of transmitted nodes. According to the findings, the EFUCCSS method could lead to a notable increase in the remaining energy of 26.92% to 213.4% and an extension of network lifespan by 39.58% to 408.13%. Furthermore, EFUCCSS is more effective than other comparable algorithms in extending the life of networks. However, the suggested approach for IoT based on WSN does not involve the management of mobile sensor nodes. Also, utilizing mobile sink-based data aggregation and scheduling may enhance the capability of the sensor nodes. Additionally, prediction methods can be utilized to forecast data for nodes that are not currently active. Similarly, A study by [65] proposed an SDN-based architecture to balance traffic across IoT servers and fulfill the QoS requirements for various IoT services. Initially, the authors model the issue as an NP-hard Integer Linear Programming (ILP) instance. 
After that, they offer a heuristic technique for proactive and predictive QoS management using time-series analysis and fuzzy logic. Finally, the Open vSwitch, Floodlight controller, and Kaa servers are used to build and test the framework. The outcomes showed improvement in IoT QoS metrics like throughput and latency while preventing server overload in high-traffic environments. In terms of performance, the suggested framework beats competing approaches. The suggested framework has better performance than existing techniques. However, the framework has yet to be tested on a distributed SDN control plane or multi-domain network. Moreover, additional factors could improve performance, such as employing progressive policies to estimate the load, implementing Network Functions Virtualization to conserve energy, and enhancing QoS management. In software-defined elastic optical networks, work by [66] proposed an optimization method to reduce the cost-of-service delay and minimize the load imbalance. The authors proposed a measurement method based on entropy for analyzing load imbalance and developing joint optimization utility functions. The technique works in three stages; firstly, the optimizer selects the possible solutions and then passes them to the defragmentation algorithm for the examination process. Finally, at the end of each connection per wavelength, a power budget algorithm is used to calculate criteria including received power, noise, and OSNR for all network services and use it for validating a routing solution. Similar work by [67] uses active learning based on entropy to detect intrusion patterns at the packet level efficiently. The suggested approach also can be used to spot assaults on the network. Then, a load-balancing technique applies balancing sensor computing capacity and source requirements to maximize the utility of vehicle sensors. Thus, utilizing a convergence-based technique resulted in maximizing resource consumption. Bandwidth is one of the elements that must be considered for effective load balancing. An increase in the number of terminals connected to the network will increase the demand for bandwidth and data transport. The authors in [68] proposed an LB architecture to solve the need for more bandwidth and enhance the network performance by using service-oriented SDN-SFC. The method categorized the incoming requests based on their type and assigned priority for each service. A heuristic approach was then used to choose a transmission path from the available service function chains. This method expedited data transfer, and it also enhanced the degree of load balancing. However, using KKT alone to minimize the response time could cause irregular load distribution as it depends on the arithmetic configurations of the controllers. On the other hand, to overcome the controller migration issues, work by [69] presented a new method that utilized the Karush-Kuhn-Tucker (KKT) conditions and the Demand and supply curve-based SDN (DSSDN). KKT is used to solve the response time issue; however, the controllers with fewer computational configurations (fewer routers) will take on fewer burdens. Therefore, the authors employed DSSDN to dynamically select the OpenFlow devices that maximize controller burden while minimizing user traffic. In [70], the authors suggested a non-cooperative load-balancing strategy that builds based on the principles of mean-field game theory. 
This approach is intended to achieve load equilibrium based on the response time value of each SDN controller. The algorithm makes the routing decision for each request, known as Wardrop equilibrium, which leads to efficient load-balancing. Work by [71] employed dynamic load balancing to optimize resource and bandwidth usage by determining the shortest path to the destination and improving QoS performance. Implementing Dijkstra's method helps locate many pathways of equal length and narrows the search space in the topology. In addition, Priority traffic flows are assigned a specific sequence. It then directs traffic along the route with the lowest cost and load among those considered. Furthermore, the researchers in [72] used SDN's global network view to implement load balancing and reduce network latency by determining the optimum data transmission path. Each route was surveyed for essential elements. Load balancers evaluate features such as throughput, packet loss, latency, hops, and node utilization. These features are the input for a trained neural network that predicts the overall load state for Dijkstra's shortest pathways. But the proposed methods in those works need more accurate traffic prediction in the case of IoT and satellite contexts. Table 1 shows a comparison of mathematical model-based LB algorithms regarding different aspects. \begin{table} \begin{tabular}{l l c c c} \hline Authors & Algorithm / & Addressed problem & Strength & Weaknesses \\ & Technique & & & \\ \hline \end{tabular} \end{table} Table 4. Mathematical model-based LB algorithms and their properties * [55] (AHP) Load balancing the controller with flow request processing balancing * Improved load balancing * Improvement AHP * Throughput has increased * Delay has been reduced * [56] RRJD Flow optimization and load balancing * Improved load balancing * Simplified flow rules * Faster deployment * Control load is reduced. * [57] RRD Up-links and down-links have a substantial influence on QoS performance because of their limited capacity * The controller can fine-tune each flow * preventing data and control plane congestion * [58] Rounding-based algorithm * Scalability and load management * Better QoS * Effective controller usage * [59] MPLS-SDN LSP configuration * Improved mean * Other load balancing parameters blocking ratio not considered * Improved mean CPU time * [60] PrePass SDN switches limited resources load balancing * Switches resource constraints satisfied * Improved server load balancing * Improved server load balancing * Effective communication * [62] Fuzzy logic & Scalability and load balancing * Improved load balancing * Static load * Effective communication * [63] Fuzzy C-Means The power consumption of sensor nodes * Enhanced network durability. * Lowered energy consumption * Improved throughput * Reduced latency * Reduced deployment cost * Enhanced network * Does not involve the management of mobile sensor nodes. * Lowered energy consumption * Utilizing mobile sink-based data aggregation and scheduling may enhance the capability of the sensor nodes * No prediction methods * [65] Fuzzy logic Load balancing & \(\bullet\) Improving the throughput and latency & \(\bullet\) Not tested on a distributed SDN throughput and \(\bullet\) Other QoS indicators not taken into account * [66] Entropy-based & Optimize the distribution of fiber loads throughout the network. 
& \(\bullet\) Balancing fiber load & \(\bullet\) The result is a local optimum load & \(\bullet\) Reducing service interruption costs & \(\bullet\) Improved load & \(\bullet\) Works better with large size networks & \(\bullet\) Improved response time & \(\bullet\) Improved energy savings. & \(\bullet\) Optimal bandwidth & \(\bullet\) Other load balancing factors & \(\bullet\) Under load balancing factor & \(\bullet\) Under load balancing & \(\bullet\) Under load balancing & \(\bullet\) The load balancing reaction time, job size, and execution time were ignored & \(\bullet\) Reduced data & \(\bullet\) Transmission time & \(\bullet\) Reduce data & \(\bullet\) Other AHP indicators not taken & \(\bullet\) Balancing & \(\bullet\) into account & \(\bullet\) Effective communication & \(\bullet\) Differ & \(\bullet\) Effective communication & \(\bullet\) Differ & \(\bullet\) Other QoS indicators not taken & \(\bullet\) Other QoS indicators not taken & \(\bullet\) Under load & \(\bullet\) Other QoS indicators not taken & \(\bullet\) Under load & \( offer an adaptive flow statistics gathering system based on (switch) port statistics information, which enabling effective routing by utilizing link load similarities. In [75], researchers also optimized load balancing and congestion in four stages to record the correct information into the OpenFlow switches' flow tables. In the first two levels, sub-topology reduces space and increases performance. The third phase implements load balancing, while the last phase creates pathways and injects flows in switches. Furthermore, an SDN-Based Load Balancing (SBLB) has been suggested by [76]; this strategy prioritized the SDN to optimize resource utilization and user response time. The proposed mechanism comprises a server pool that communicates with the SDN controller through OpenFlow switch flow tables and an application module that runs atop the controller. Also, a dynamic LB algorithm for data plane traffic presented by [77] mitigates bottlenecks. This method rerouted ties as network usage increases. The method evaluated the connection cost by picking the optimum link after detecting bottlenecks. Then, it decreased network latency and packet loss. The paper claimed that the suggested technique could balance data plane traffic in any SDN setup. Despite its advantages over other approaches, these techniques have a significant flaw: it relies only on reactive flow entry and does not undertake real-time load monitoring. Through the use of cloud-based SDN, authors in [78] provided a Services Orchestration and Data Aggregation technique (SODA) to address the issues of data redundancy and sluggish service response. SODA can combine data packets to eliminate redundancy and speed up service response. This technique divides the network into data centers, middle routing, and vehicle network layers. Each layer has its task, while the data centers layer is responsible for service response delay reduction through distributing apps with very particular functions to every device on the network. The middle routing layer adjusts the data packets routing path according to the routing distance and the packets' correlation. The last layer, the vehicle network layer, transmits data packets and services between the network equipment. The proposed method did not relied on QoS indicators to prove its efficiency. The article by [79] suggested a load-balancing algorithm that sees the network as a graph, where the vertices are switches and the edges are the network's channels. 
They have considered the channel capacity's supremacy over server load while determining the ideal path and proven the approach using a mathematical example. Work by [80] presented SDN dynamic offloading service for an SDN-based fog computing system to choose the optimum offloading node and assist the offloading path. The proposed system selects the best offloading node based on current computational characteristics and network resource information. The results showed outperforming existing strategies that do not employ SDN technology in terms of throughput, request-response time, and end-to-end bandwidth guarantee. The study by [81] provided a multi-path routing system based on SDN that employs several characteristics, such as latency, bandwidth, and node load. By detecting the network state and switching node load, the method builds a model to compute link transmission cost to adjust the end-to-end transmission path in real-time. This work used Systems Tool Kit (STK) to create the inter-satellite propagation delay model to enhance the estimation rate of the transmission cost. The lack of QoS factors, such as cost optimization and network latency, is a significant flaw in the suggested approach. Load imbalance among statically configured controllers is a critical issue. Researchers in [82] presented the Assessing Profit of Prediction (APOP) method to overcome these issues. This technique balances the network load by evaluating the profit and predicting the overloaded. They offer Taylor's method to anticipate network flow change and analyze the profit of moving switches in advance to save migration time and avoid detrimental consequences. Also, the paper by [83] proposed the online controller load balancing (OCLB) approach, which focuses on balancing the load by minimizing the controller's average reaction time. This method executed switch migration sequences with various real-time applications and a consistent parameter in mind, all with the end goal of minimizing response time. The results showed that the proposed scheme executed online to provide almost optimum load balancing over the control plane. Similarly, to solve the migration problem, the authors in [84] integrated swap movements and shifts into a search system. In contrast to the methods already in use, the suggested algorithm will not quit the search if a switch migration cannot work. Rather than relying on basic techniques, it searches for more advanced operations to enhance velocity, such as switching between two distinct keys. In addition, the authors in [85] used a multi-controller to deal with traffic loads caused by several nodes. Using a method called load optimization and anomaly detection (LOAD), they were able to reduce the cost of migration while simultaneously increasing controller performance and decreasing reaction times. In contrast, two switch migration strategies, a balanced controller (BalCon) and BalConPlus were introduced in [86], which both boost the capability of regulating traffic load variations. The first is utilized if serial processing of switch requests is not required; otherwise, BalConPlus is used under rest situations, according to them. The researchers claimed that their method considerably minimizes load polarity among the controllers in the network. Another view is by [87], where the authors analyzed SDN controller load and handover latency, showing that over-loading can lengthen handover latency and use load balancing to prevent it. 
Their LB management approach used network heterogeneity and context-aware vertical mobility. It has three parts; first, the users are chosen based on context, then the load distribution is minimized amongst controllers, lowering processing and communication overhead. After determining potential users, the algorithm optimizes the selection of diverse candidate networks. These approaches can accomplish the optimal solution, but the runtime and latency in the large-scale network will increase. Moreover, this technique's primary issue is loading fluctuation, which could occur if the target controller and migration switch are incorrectly identified. The article by [88] suggested a technique for wireless sensor networks termed perceptually important points-based data aggregation (PIP-DA). This strategy aims to reduce the quantity of data readings transmitted, resulting in less energy consumption and a longer lifespan for the network. In addition, the PIP-DA maintained the precision of the data readings collected at the base station. A cluster topology is used to develop the aggregating data within the PIP. The assumption is that the topology already exists, so the suggested method is usable for any clusters formed by clustering protocols. The primary goal of this technique is to decrease the amount of sensed data at the sensor node level to extend the lifespan of the WSN. The proposed method outperforms previous techniques regarding data remaining after aggregation, sets transmitted to the CH (Cluster Head), data correctness at CH, and energy usage. However, an additional dynamic segmentation algorithm can be utilized at both the sensor node and gateway levels to predict the missing data at the CH and improve proposed method. Also, the authors of [89] introduced a method called "DAEP" for conserving energy in WSNs by aggregating data performed at the individual sensor node level based on extracting extrema points. The proposed technique aims to reduce energy consumption and prolong the lifespan of the WSN. The suggested approach operates periodically and comprises three stages in each cycle: data collection, aggregation, and transmission. The method's effectiveness was evaluated by running several simulations using actual sensor data gathered at Intel Berkeley labs and comparing the results with previous research. The findings indicate that DAEP can significantly reduce the amount of data transmitted by 69-80% and energy consumed by 73-77% while maintaining reasonable accuracy levels, making it a promising approach for reducing the load on sensor nodes. Unfortunately, AI, ML, and statistical techniques were not utilized, which could extend the WSN lifespan. Also, the method was not tested on a real sensor network. Finally, a centralized network clustering strategy used by [90] in which the base station (BS) splits the network into clusters and identifies the node with the most significant energy level as the cluster head (CH) for each cluster at the start of the protocol. It picks and rotates CHs within clusters depending on node energy levels before transferring data to the BS to decrease energy usage. This study employed a stationary algorithm with fixed clusters. The number of CHs in the network depends on the network's topology. Compared to the MOFCA and IGHND, the proposed ESCA approach effectively addresses the energy consumption problem and significantly increases the network's lifespan. 
However, to evaluate the proposed strategy, it should consider the mobility of nodes and obstacles in the area of interest. Table 5 shows a comparison of the other dynamic LB algorithms regarding different aspects.

\begin{table}
\begin{tabular}{l l p{3cm} p{4cm} p{4cm}}
\hline
Authors & Algorithm / Technique & Addressed problem & Strength & Weaknesses \\
\hline
[73] & LBBSRT & Hardware limits; load balancing based on server response times & Reduces cost & Ignored energy savings in server LB \\
[74] & PFSC & Overhead load balancing via flow rerouting & Load balancing; normalized traffic flow & Limited controllers \\
[75] & Flow statistics & Network congestion, load balancing & Increased throughput; reduced reaction time & Lacks dependability and scalability \\
[76] & Statistical information (SBLB) & Resource utilization and user response time & Optimal server utilization & No graphical user interface; performance evaluation of the suggested load balancing method ignored; limited controllers \\
[77] & Dynamic load balancing algorithm & Distribution of data plane traffic & Decreases network latency and packet loss & Various network topologies in a clustered environment not taken into account \\
[78] & SODA & Data redundancy and service response time & Reduces average reaction time & Other QoS indicators not taken into account \\
[79] & Feature extraction & Low channel capacity causes congestion and decreases reliability & Improved reliability & Other QoS indicators not taken into account \\
[80] & Feature extraction & Selecting the offloading node to handle overload delay & Improved request and response time & Other QoS indicators not taken into account \\
[81] & Feature extraction & Network congestion and delay & Improved throughput & Other QoS indicators not taken into account \\
[82] & APOP & Load imbalance among multiple controllers & Reduces load imbalance & Other QoS indicators not taken into account \\
[83] & OCLB & Scalability and load management & Switch migration minimized; improved load balancing & Other QoS indicators not taken into account \\
[84] & Heuristic approach & Switch migration problem (SMP) & Choosing the best possible controllers & Other QoS indicators not taken into account \\
[85] & Switch migration (LOAD) & Load traffic handling via migration & Reduces run-time; improved execution time; improved accuracy of controllers & LOAD scheme can lead to load imbalance due to failure of cost estimation; lacks real-time traffic collection and categorization of network data; minimum channel bandwidth not guaranteed \\
[86] & Switch migration & Network traffic classification & Improved task classification & \\
[87] & Heterogeneous networks & Controller response time & Improved load balancing & Minimum channel bandwidth not guaranteed \\
[88] & Perceptually Important Points based & The power consumption of sensor nodes & Decreases the amount of extra work on the sensor nodes & An additional dynamic segmentation algorithm could further improve the method \\
\hline
\end{tabular}
\end{table} Table 5. Other dynamic LB algorithms and their properties

According to the reviewed articles, critical challenges regarding SDN load balancing have yet to be examined exhaustively and thoroughly. One of the significant issues was the failure of a centralized controller, which could lead to the collapse of the entire network. The suggested solution was distributing the controllers into several domains, but the controller deployment cost will increase, and the controller efficiency in handling network change will decrease. Most of the research examined has yet to demonstrate the impact of the load balancing mechanism on all Quality of Service (QoS) parameters. Also, most of the techniques studied do not consider the challenge of conserving energy and reducing carbon emissions. Incorporating these factors could enhance the effectiveness and popularity of current load-balancing mechanisms. Finally, more research must be conducted incorporating artificial intelligence techniques into load balancing. A hybrid approach that effectively integrates two or more methods can be used to balance the load and improve network performance efficiently.

### SDN LB ALGORITHMS EVALUATION METRICS

During the process of evaluating LB algorithms, a set of metrics must be taken into account to prove the effectiveness of those algorithms. The researchers use a wide range of metrics to outline the benefits and drawbacks of the available approaches. This section describes the most used parameters mentioned in the selected papers.
* Response Time (RT): The response time is an essential parameter for LB methods. It is the time it takes for a user to get the info they requested after submitting a query. It is affected by different variables, including bandwidth, network users, requests, and processing time. It is calculated using Equation (1), where \(t_{1}\) is the request submission time, and \(t_{2}\) represents the request start processing time. \[RT=\Delta(t_{1}-t_{2})\] (1) Handling a large number of requests in a short amount of time can improve the response time.
* Throughput (T): Refers to the proportion of job requests that were scheduled within a specific time frame (t) and were successfully executed and processed, compared to the total number of completed job requests. High throughput is required for the load balancing mechanism to function correctly. It is calculated using Equation (2). \[T=\frac{\sum_{i=1}^{n}requests_{i}}{time\ t}\] (2)
* Resource Utilization (RU): Represents the network's resource utilization ratio (e.g., memory, CPU, etc.) during request processing. It is essential in the LB evaluation process; a high RU means the LB algorithm performs well. It is calculated using Equation (3), where \(ET\) is the execution time.
\[RU=\frac{\sum_{i=1}^{n}ET_{i}}{\max_{i}ET_{i}}\] (3)
* Latency: The latency measures how long it takes a packet of data to move across a network. It considers both the delay during transmission and propagation resulting from the packet forwarding process. It is calculated using Equation (4), where \(L\) is the latency, \(S_{td}\) and \(D_{td}\) are the source and destination transmission delay, respectively, \(S_{d}\) represents the switch delay, and \(P_{d}\) represents the propagation delay. \[L=S_{td}+D_{td}+S_{d}+P_{d}\] (4)
* Workload degree: This metric is used to assess the load distribution throughout the networking components. It can be calculated by using a variety of indices, including Jain's fairness index and the load balance rate.
* Deployment cost: The cost of network elements deployment, including CAPEX and OPEX. This metric is essential to minimize the SDN implementation cost by calculating the best number of controllers needed to build an efficient network.
* Jitter: It is the term used to describe the variation in packet transmission time between networking elements; when there is network congestion, the jitter will increase.
* Packet loss ratio: It is calculated by subtracting the number of received packets from the number of transmitted packets between the source and destination. It occurs when at least one informational packet fails to accomplish its goal. LB algorithms always aim to have a low packet loss rate to guarantee efficiency.
* Delay: Represents the time a packet takes to get from one node to another; it includes communication, routing, processing and migration delay.
* Round trip time (RTT): The time it takes a packet to travel from its source to its destination and back again is called its round-trip time. It is a key performance metric for evaluating the efficiency of LB methods. If the round trip takes less time than expected, the timeouts will be longer than necessary and thus ineffective. It is calculated using Equation (5), where \(AV_{RTTs}\) is the average round-trip time in the server and \(AV_{RTTc}\) represents the average round-trip time in the client. \[RTT=AV_{RTTs}+AV_{RTTc}\] (5)
* Bandwidth utilization ratio (BU ratio): This metric checks the load placed on the links by assessing the network's transmission capabilities. The link's bandwidth ratio is calculated by the SDN controller based on the total number of bytes sent at the associated switch ports during two consecutive periods.
* Migration delay: It is the amount of time that must elapse from when a packet is moved from one switch to another until it reaches its final destination. The number of migrations should be kept to a minimum for effective communication.
* Link utilization: It represents packet transmission speed throughout the communication between the networking components and includes the uplink/downlink rate. It is calculated using Equation (6), where \(Lu_{ij}\) represents the link utilization value between two nodes \(i\) and \(j\), \(b_{ij}\) indicates the link bandwidth, and \(u_{ij}^{t}\) is the amount of bandwidth utilized during the time frame \(t\) between the \(i\) and \(j\) nodes. \[Lu=\left[(Lu)_{ij}=\frac{u_{ij}^{t}}{b_{ij}}\right]_{N*N}\] (6)
* Flow completion time (FCT): It is used to determine the efficiency of flow transportation in data center networks. It represents the amount of time required to finish transferring a file within a flow. LB methods aim to keep the flow completion time as short as feasible.
* Migration cost: Two preliminary charges are involved: the load cost and the cost of sending messages. During the switch migration process, messages such as migration, role requests, and asynchronous messages need to be transmitted between the controllers.
* Overhead: It represents the total sum of all the extra time, space, data transfer, and processing power that the activity requires. Overhead includes communication, flow stealing, synchronization and flow statistics collection overhead.
* Packet load ratio (PLR): This metric is introduced to measure the route performance and calculate the maximum traffic load on each link.
* Power consumption: Represents the amount of energy each node in the network uses to process a request, whether that request is successful or not. Effective LB reduces power use.
* Consumer Satisfaction (CS): It is the overall customer attitude or behavior addressing the discrepancy between what customers expect and what they receive.
* Cumulative distribution function (CDF): This metric is used to determine whether network links are congested by checking whether the required flow entries exceed the flow table size on all switches, so as to avoid dropping flows.

This part presents the metrics that most authors consider when conducting state-of-the-art research. The LB algorithm design and development process depend primarily on these metrics, and they are used to assess the algorithm's performance in SDN-based applications. Some of these parameters are widely used by researchers, such as response time, throughput, RU, latency, delay, workload degree, deployment cost, packet loss ratio, link utilization, and overhead. However, many QoS metrics have yet to be considered in the algorithm evaluation process; Table 1 presents the metrics used to measure the performance of the proposed LB approaches.

## 4 Results and Discussion

In this section, we summarize and compare the different metrics applied by several LB methods published in the past few years. Moreover, we investigated LB strategies that can be used to balance the load in an SDN efficiently and analyzed the limitations of each technique to support innovation in the SDN research field. We evaluated recent studies from reputable journals and conferences to find the most frequent LB performance-enhancing tactics, with a particular emphasis on AI-based LB. Tables 1, 2, 3, and 4 compare the four categories regarding different aspects such as the algorithm/technique, the addressed problem, strengths and weaknesses. This study focuses on AI-based LB techniques and divides them into nature-inspired, machine learning, mathematical model and other LB methods. These mechanisms use LB algorithms that manage work distribution based on each node's actual load and output and adjust the load at the proper time to ensure that the network operates effectively and smoothly. Nevertheless, these techniques have various drawbacks, including the need to respond to burst traffic and dynamically alter the load of the controllers, and the neglect of some services. Algorithms based on PSO [22, 23, 24, 25] offered a dynamic LB solution that applies to various service types to ensure the high performance of SDN and customer satisfaction. However, several limitations persist, such as resource overloading, priority-based flow classification, effectiveness only with relatively limited data, and many QoS indicators not being taken into account.
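To make the PSO-based idea concrete, the following minimal Python sketch assigns switches to controllers so as to minimize the imbalance (standard deviation) of controller loads; it is a generic, self-contained illustration of the general approach, not the algorithm from [22, 23, 24, 25], and the switch loads, swarm parameters, and variable names are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

switch_load = np.array([10, 40, 25, 5, 30, 20, 15, 35], dtype=float)  # assumed loads
n_switches, n_controllers = len(switch_load), 3

def imbalance(position):
    """Fitness: standard deviation of the total load per controller."""
    assign = np.clip(position.astype(int), 0, n_controllers - 1)
    loads = np.array([switch_load[assign == c].sum() for c in range(n_controllers)])
    return loads.std()

# Standard PSO with inertia w and cognitive/social coefficients c1, c2.
n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_controllers, (n_particles, n_switches))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([imbalance(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_controllers - 1e-9)
    vals = np.array([imbalance(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.clip(gbest.astype(int), 0, n_controllers - 1), imbalance(gbest))
```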
On the other hand, heuristic algorithms [26, 27, 28], such as GA and ACO combined with PSO and BIN, all operate relatively better for huge SDNs. In addition, [29] utilized GA to balance the load in distributed SDNs. Still, they take high processing time, overload the network, and, in the case of GA, only a few QoS factors were considered. Also, the ACO-based algorithms need considerable time to update both forward and backward.

Figure 4: Evaluation metrics of reviewed articles

Figure 5 presents the metrics used to evaluate the nature-inspired-based LB in the surveyed papers. Most of the works used response time, throughput, and latency to judge their method's efficiency. Also, some of the studies conducted from 2017 to 2022 used other parameters in the evaluation process, such as deployment cost, workload degree, jitter, packet loss ratio, RTT, RU, CS, and migration delay. However, many QoS indicators are still not taken into account: the papers included in this survey used only one to four factors to prove the efficiency of the proposed algorithms.

Figure 5: The metrics trends used by nature-inspired-based LB

Using machine learning (ML) in conjunction with the SDN architecture has proven efficient in achieving enhanced routing performance. Based on the graphical results presented in Figure 6, the proposed LB methods that adopt ML have used various metrics in the evaluation process. The majority of the papers published from 2017 to 2022 used throughput, workload degree, and latency to demonstrate efficiency. However, parameters such as packet loss ratio, delay, migration delay, and migration cost have been used in only a few works. Also, publications included in this study employed just one to four parameters to demonstrate the effectiveness of the suggested algorithms. ANN has been proposed as a solution for load balancing in SDN [31, 32]; this technique improved transmission efficiency using packet overhead, delay, hop count, packet loss, trust, and bandwidth ratio. However, it needs more processing time and resources and works better on a medium-sized network or a local optimum. A BPNN algorithm has been applied in [33, 34] to help predict the best path with the most negligible load. The results showed a general improvement in network performance, especially concerning network latency. However, the primary disadvantage of this technique is that it ignored some services that could impede the process of finding the actual shortest path. A DNN is also used by [35] to choose the optimal route, and it is appropriate for handling the network traffic. Based on the DNN results, the abnormal flow is prevented by predicting the flow rules and identifying all the significant nodes. However, essential factors such as scalability, topology changes, and loss of packets are not taken into consideration.

Figure 6: The metrics trends used by ML based LB

A deep learning technique is employed by [36], [37], [38], [39], [40] to develop a mapping link between states and behaviors to expedite the problem-solving process of determining the best approach for all conditions. The experiments show that DL algorithms improve both the LB and response time. As a result, these strategies can dynamically adapt to shifting request loads, including adjustments to the capabilities of the underlying infrastructure. However, they suffer from poor functionality when node failures happen, and different network topologies were not considered in the experiment process.
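The prediction-assisted selection described above for [72] and for the ANN/BPNN/DNN approaches can be sketched very compactly: per-path features are fed to a trained model whose output is a predicted load, and the path with the lowest prediction is installed. In the sketch below a single linear layer stands in for the trained neural network, and the features, weights, and path names are purely illustrative assumptions rather than the models from the cited works.

```python
import numpy as np

# Per-path features the controller could collect:
# [throughput, packet loss, latency, hop count, node utilization], scaled to [0, 1].
paths = {
    "p1": np.array([0.8, 0.02, 0.30, 0.4, 0.55]),
    "p2": np.array([0.6, 0.01, 0.20, 0.3, 0.35]),
    "p3": np.array([0.9, 0.05, 0.45, 0.5, 0.70]),
}

# Stand-in for a trained model: a linear layer mapping features to a predicted
# load score (weights are made up; a real system would learn them from data).
weights = np.array([-0.2, 2.0, 1.5, 0.5, 1.0])
bias = 0.1

def predicted_load(features):
    return float(features @ weights + bias)

best = min(paths, key=lambda name: predicted_load(paths[name]))
print(best)   # path with the lowest predicted load ('p2' for these numbers)
```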
A reinforcement learning scheme has been introduced by [41], [42], [43], [44], [45], [46] to solve the rerouting optimization problem. This scheme uses a heuristic approach to automatically learn essential flow selection strategies without being given any guidelines for specific rules. The evaluation's findings demonstrated that RL might achieve superior efficiency by rerouting a tiny portion of total traffic. Other ML algorithms, such as SVM, Bayesian network, LSTM, K-Means, DBSCAN and multiple regression, are applied by [47], [48], [49], [50], [51], [52] to solve the LB issues. All algorithms showed promising results in reducing response time. However, node migration is not considered in case of fault, some of the QoS parameters are not taken into consideration in the evaluation stage, and the algorithms were not evaluated in large networks. Many studies have used mathematical models to solve SDN load-balancing problems and choose the most efficient decisions. AHP is used by [55] to compare several routes and determine the optimal one regarding different limitations. This technique could enhance the decision by using consistency measures. The method organized the controllers into global and local controllers; they are responsible for handling cluster formation and local device load, respectively. While the approach improved load balancing, increased throughput and reduced delay, using other variables during the implementation process may lead to better results. A rounding-based approach is another mathematical model presented by [56], [57], [58], [59], [60] as a solution to the problem, since it offers greater scalability and reduces the load. The method improved load balancing, reduced response time, and prevented data and control plane congestion. However, other QoS indicators are not considered, and it needs more real-time traffic collection and categorization of network data. Another branch of artificial intelligence known as fuzzy logic is employed by [61], [62], [63], [64], [65] to address the load balancing issue in SDN. In this approach, the flow-handling rules at the controller are used to dynamically calculate and adjust the paths depending on the network's global perspective. The experiments showed that this mechanism efficiently detects faulty links instantly and chooses a backup path. However, it takes more time to restore the traffic; therefore, some packets will be lost during that period. Also, an entropy-based method was proposed by [66], [67] to minimize load imbalance and reduce the cost of the required service. This mechanism identifies network bottlenecks and installs new links where needed. However, it does not consider other QoS parameters, such as route delay and packet loss ratio. A Greedy-based Service Orientation Algorithm (GSOA) was introduced by [68] to solve server overload by selecting the closest and compliant Service Functions (SF). The proposed GSOA shows promising results in reducing data transmission time and balancing the SFs' load. However, this approach cannot provide an efficient load balance due to the variation in the proportion of fixed-packets or non-fixed-packets. Also, the researchers did not consider realistic QoS factors in the evaluation process. While in [69], the Karush-Kuhn-Tucker conditions were employed to pick the OpenFlow-enabled devices that generate the most load on the controller and pass the fewest users through it. The method improved the end users' QoS metrics in terms of jitter, delay, throughput, and packet loss.
However, using KKT alone to minimize the response time could cause irregular load distribution, as it depends on the arithmetic configurations of the controllers. Mean-field game theory and Dijkstra's method were implemented by [70], [71], [72] to optimize resource and bandwidth usage by determining the shortest path to the destination and improving QoS performance. Also, these methods could efficiently mitigate the bottleneck, packet loss ratio, overhead and delay. However, they need more accurate traffic prediction in the case of IoT and satellite contexts. Figure 7 highlights the metrics trends that are significantly used from 2017 to 2023 by the LB methods which adopted mathematical models. Most of the papers used the following parameters: response time, throughput, workload degree, and latency. The majority of the papers used from two to four metrics in the evaluation process. However, parameters such as RU, FCT, migration delay, and overload have been used in only a few works; other QoS parameters are not considered in the surveyed works.

Figure 7: The metrics trends used by mathematical models-based LB

Other LB techniques proposed by [73] considered the response time a critical factor in the path selection process. In this method, the controller collects the response times of each server and chooses the one with the shortest or most consistent response time. This strategy is considered more cost-effective than conventional alternatives due to the decreased hardware needs and software customization. Although it solves many LB issues, this strategy needs to consider ways to reduce energy consumption in server LB. Algorithms based on statistical flow introduced by [74], [75], [76], [77] applied dynamic LB to several service types in cloud-based SDN. They enhanced the server's CPU efficiency, memory usage, and response time. Despite their advantages over other approaches, this technique has a significant flaw: it relies only on reactive flow entry and does not undertake real-time load monitoring. The Services Orchestration and Data Aggregation method (SODA) was applied by [78] to overcome the problem of data redundancy and slow service response through cloud-based SDN. However, other QoS indicators are not considered. In [79], [80], [81], a dynamic LB method is proposed based on current computational features and network resource information. This approach depends on different characteristics to select the best path dynamically. The technique enhances the QoS parameters, making the network more stable and effective. The lack of QoS factors, such as cost optimization and network latency, is a significant flaw in this approach. Switch migration mechanisms were proposed by [82], [83], [84], [85], [86] to solve the LB issues through selecting the migrated switch, the target controller, and the migration of the switch. Migration techniques are used by many researchers as an LB method in SDN; they belong to the deterministic category. These approaches can accomplish the optimal solution, but the runtime and latency in the large-scale network will increase. Moreover, this technique's primary issue is load fluctuation, which could occur if the target controller and migration switch are incorrectly identified. Heterogeneous networks have been used by [87]; the method improves the load distribution and reduces the response time. However, the minimum bandwidth for the data transport channel is not guaranteed.
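The response-time-driven selection described above for [73] can be illustrated in a few lines of Python: the controller keeps the recently measured response time of each server and steers new requests to the server with the shortest (and, on ties, most consistent) response time. The server names and timing values below are assumed purely for illustration and are not data from [73].

```python
from statistics import mean, stdev

# Measured response times (seconds) collected by the controller for each
# server over the last few probes; the values are made-up examples.
samples = {
    "srv-a": [0.021, 0.025, 0.019, 0.030],
    "srv-b": [0.030, 0.031, 0.029, 0.032],
    "srv-c": [0.045, 0.047, 0.046, 0.048],
}

def pick_server(history):
    """Prefer the server with the lowest mean response time, breaking ties in
    favour of the most consistent (lowest standard deviation) one."""
    return min(history, key=lambda s: (mean(history[s]), stdev(history[s])))

print(pick_server(samples))   # 'srv-a' for the numbers above
```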
The article by [88] suggested a technique for wireless sensor networks termed perceptually important points-based data aggregation (PIP-DA). This strategy aims to reduce the quantity of data readings transmitted, resulting in less energy consumption and a longer lifespan for the network. However, an additional dynamic segmentation algorithm can be utilized at both the sensor node and gateway levels to predict the missing data at the CH and improve the proposed method. Also, the authors of [89] introduced a method called "DAEP" for conserving energy in WSNs by aggregating data at the individual sensor node level based on extracting extrema points. The proposed technique aims to reduce energy consumption and prolong the lifespan of the WSN. Unfortunately, AI, ML, and statistical techniques were not utilized, which could extend the WSN lifespan. Finally, a centralized network clustering strategy was used by [90], in which the base station (BS) splits the network into clusters and identifies the node with the most significant energy level as the cluster head (CH) for each cluster at the start of the protocol. Figure 8 graphically presents the metrics used by the fourth category to check the effectiveness of the proposed techniques. The majority of articles published between 2017 and 2022 employed response time, throughput, workload degree, and packet loss ratio to demonstrate efficiency. Although many different indicators of quality of service (QoS) should be considered, the articles included in this review only employed one to four QoS metrics to demonstrate the effectiveness of the suggested algorithms.

Figure 8: The metrics trends used by other LB methods

In the proposed LB solutions, different SDN architectures have been suggested, such as a centralized controller, a distributed but logically centralized controller, and a distributed controller. There are advantages and disadvantages associated with each architecture. Scalability issues emerged, for instance, with a centralized single controller. The multiple-controller-based architectures improved the control plane scalability; however, they suffer from uneven load distribution: while some controllers are overloaded, others are unused. In order to devise a successful load management strategy, it is essential to employ the correct load-balancing algorithm for each design. A detailed analysis of the methods used in the surveyed papers to choose the optimal path and the algorithms/techniques applied to detect the network overload is given in Figures 9(a) and (b).

Figure 9: The algorithms/techniques used in the reviewed papers

## 5 Trends and Challenges

Critical trends and challenges regarding SDN load balancing have yet to be examined exhaustively and thoroughly. This section lists and discusses some of these trends and challenges obtained from the reviewed papers.
1. A single controller failure can be overcome by migration to another controller. However, there is still an overhead associated with moving the load of a failing controller to an operating one. Distributing the controllers into several domains could be a good solution, but the controller deployment cost will increase, and the controller efficiency in handling network change will decrease. In the currently proposed methods, the reduction of load migration overhead and cost has yet to be considered. This issue can be explored in the future, as the failure of a centralized controller could lead to the collapse of the entire network.
2. Initially, SDN was implemented on small enterprise networks.
Recently, researchers have proposed several load-balancing methods for medium and small-sized networks. However, applying load balancing to dynamic traffic load failure scenarios in large-scale networks is still a research topic. Therefore, researchers must investigate different load-balancing techniques to build a reliable and effective network.
3. Researchers have conducted relatively limited studies in the field of load balancing utilizing methods informed by artificial intelligence. Instead, they could efficiently employ a hybrid strategy, which combines two or more approaches, as a future path for balancing the load.
4. Most of the surveyed research has yet to show the effect of the used LB mechanism on all QoS parameters. For instance, many methods prioritize factors like throughput, scalability, and response time while overlooking latency, packet loss, and stability parameters. Therefore, load-balancing decision-making must incorporate additional QoS criteria. Furthermore, future research may find it interesting to study global QoS compliance.
5. Some of the chosen papers did not consider factors such as traffic patterns and packet priority. Therefore, using these considerations in load-balancing decisions might be a potential avenue for future study.
6. The energy saving and carbon emission challenges are not considered in most of the studied techniques. These factors could increase the popularity and efficacy of existing load-balancing mechanisms. Consequently, load-balancing methods that consider carbon emissions and energy usage are a promising research direction.
7. Furthermore, some of the reviewed methods do not incorporate the algorithm used to accomplish load detection. Hence, introducing a novel approach for load detection is another route for future work.
8. Security challenges in SDN are a significant issue that all researchers should consider. The majority of SDN security risks target the availability of the control plane. However, using distributed controllers requires a higher cost, and it can cause controller cascade failures. Therefore, SDN security must be associated with implementing a secure design that assures high availability of the control plane.

## Conclusion

Using multiple load-balancing techniques in SDN networks can boost network performance, since the SDN controller has the capability to provide a comprehensive overview of the available resources. This article provides a detailed and comprehensive survey of different load balancing techniques that adopt artificial intelligence to improve the load distribution in Software Defined Networks. We discuss the SDN architecture, its advantages, and its LB mechanisms. This is followed by a review of the artificial intelligence-based load-balancing methods that researchers have proposed, with regard to their implementation and evaluation metrics. Additionally, the paper categorized the current load-balancing solutions for SDN into four primary classifications and explained their applications, each with sub-classifications based on the utilized technology. These classifications include: nature-inspired LB methods that resemble or are inspired by natural events; machine learning methods that work in conjunction with the SDN architecture to achieve enhanced routing performance; mathematical model-based techniques that are used to perform load balancing in software-defined networks; and other methods that apply different ways to predict the network overload and the optimal path.
We provided detailed information about various metrics associated with the performance evaluation of load balancing in SDN. These metrics include response time, throughput, resource utilization, latency, workload degree, deployment cost, jitter, packet loss ratio, delay, round trip time, bandwidth utilization ratio, migration delay, link utilization, flow completion time, migration cost, overhead, packet load ratio, power consumption, consumer satisfaction, and cumulative distribution function. Furthermore, we summarized and compared the techniques applied by the surveyed articles and analyzed the limitations of each one. In conclusion, we list and discuss some trends and challenges in potential areas for future investigation that can enhance the widespread adoption of load balancing in SDN. In our future work, we will consider more databases, journals, and conferences. We will also employ more keywords and search strings to search the literature. This review did not incorporate articles published prior to 2017. Additionally, we will cover other related issues, as this work focuses only on using AI in SDN load balancing problems.
2310.19045
Constraints on Tsallis Cosmology from Big Bang Nucleosynthesis and the Relic Abundance of Cold Dark Matter Particles
By employing Tsallis' extensive but non-additive $\delta$-entropy, we formulate the first two laws of thermodynamics for gravitating systems. By invoking Carath\'{e}odory's principle, we pay particular attention to the integrating factor for the heat one-form. We show that the latter factorizes into the product of thermal and entropic parts, where the entropic part cannot be reduced to a constant, as is the case in conventional thermodynamics, due to the non-additive nature of $S_{\delta}$. The ensuing two laws of thermodynamics imply a Tsallis cosmology, which is then applied to a radiation-dominated universe to address the Big Bang nucleosynthesis and the relic abundance of cold dark matter particles. It is demonstrated that the Tsallis cosmology with the scaling exponent $\delta$$\sim$$1.499$ (or equivalently, the anomalous dimension $\Delta\sim0.0013$) consistently describes both the abundance of cold dark matter particles and the formation of primordial light elements, such as deuterium ${}^{2}\!H$ and helium ${}^{4}\!He$. Salient issues, including the zeroth law of thermodynamics for the $\delta$-entropy and the lithium ${}^{7}\!Li$ problem, are also briefly discussed.
Petr Jizba, Gaetano Lambiase
2023-10-29T15:38:35Z
http://arxiv.org/abs/2310.19045v1
Constraints on Tsallis Cosmology from Big Bang Nucleosynthesis and the Relic Abundance of Cold Dark Matter Particles ###### Abstract By employing Tsallis' extensive but non-additive \(\delta\)-entropy, we formulate the first two laws of thermodynamics for gravitating systems. By invoking Caratheodory's principle, we pay particular attention to the integrating factor for the heat one-form. We show that the latter factorizes into the product of thermal and entropic parts, where the entropic part cannot be reduced to a constant, as is the case in conventional thermodynamics, due to the non-additive nature of \(S_{\delta}\). The ensuing two laws of thermodynamics imply a Tsallis cosmology, which is then applied to a radiation-dominated universe to address the Big Bang nucleosynthesis and the relic abundance of cold dark matter particles. It is demonstrated that the Tsallis cosmology with the scaling exponent \(\delta\)\(\sim\)1.499 (or equivalently, the anomalous dimension \(\Delta\sim 0.0013\)) consistently describes both the abundance of cold dark matter particles and the formation of primordial light elements, such as deuterium \({}^{4}\!H\) and helium \({}^{4}\!He\). Salient issues, including the zeroth law of thermodynamics for the \(\delta\)-entropy and the lithium \({}^{7}\!Li\) problem, are also briefly discussed. \(\delta\)-entropy; Tsallis cosmology; Big Bang nucleosynthesis; cold dark matter Article ## 1 Introduction Since the discovery of black hole thermodynamics by Bekenstein [1] and Hawking [2], it has become clear that a non-perturbative aspect of Einstein's gravity could potentially be linked to holographic thermodynamics. In particular, Jacobson's pioneering work [3] has been instrumental in pointing out a deep formal connection between holographic thermodynamics and gravitation, culminating in the derivation of the Einstein field equations (see also Refs. [4; 5; 6]) and the cosmological equations (the Friedmann equations) [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] from the first two laws of thermodynamics. These studies have generated considerable interest, paving the way for an analysis that would extend to a broader range of entropies than just the standard Boltzmann-Gibbs-Shannon entropy [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Ensuing models generally account for various modifications of the Bekenstein-Hawking (BH) entropy area law where the conventional holographic scenario is inapplicable. For example, when considering entropic cosmology [30] or loop quantum gravity [31; 32; 33], logarithmic corrections to the area law are observed as a result of entanglement between quantum fields situated both within and beyond the horizon [34; 35; 36; 37; 38]. Similarly, generalized non-additive entropies typically tend to induce a power-law behavior rather than an area law; see, e.g., Refs. [19; 39; 40; 41; 42]. It should be stressed that the opposite behavior is also true; namely, the area-law formula for black hole entropy holds only in Einstein's theory, i.e., when the ensuing action functional includes only a linear term of the scalar curvature \(R\). For instance, the Bekenstein-Hawking entropy no longer holds in generic higher-derivative gravity theories [12], and, in particular, in \(f(R)\) gravity, the entropy of a static black hole acquires the form \(S\propto L^{2}f^{\prime}(R)\), cf., e.g., Ref. [43]. Recently, C. Tsallis proposed a thermodynamic entropy in 3 spatial dimensions for systems with sub-extensive scaling of microstates, such as, e.g., black holes. 
This so-called \(\delta\)-entropy is an entropic functional of the form [19; 42] \[S_{\delta}\ =\ \eta_{\delta}\sum_{i}p_{i}\bigg{(}\log\frac{1}{p_{i}}\bigg{)}^{ \delta}\,,\quad\delta\ >\ 0\,. \tag{1}\] Here, the values \(p_{i}\) represent the probabilities of elementary events (or microstates), and the multiplicative constant \(\eta_{\delta}\) reflects the units used to measure the entropy. An equiprobable distribution (1) acquires the form \[S_{\delta}\ =\ \eta_{\delta}(\log W)^{\delta}\,, \tag{2}\] where \(W\) is a number of available microstates. According to Ref. [19], the entropy (1) can be regarded as a valid thermodynamic entropy in 3 spatial dimensions for systems with the sub-extensive scaling. Such a scaling typically appears in various holographic scenarios. For instance, according to the holographic principle, the entropy of a black hole and more generally the entropy of the universe is a Shannon entropy with a peculiar area-law scaling, namely \[S_{\rm BH}\ =\ -k_{B}\sum_{i}p_{i}\log p_{i}\ \propto\ \ L^{2}\,. \tag{3}\] Here, \(L\) is a characteristic length-scale in the problem, and the Boltzmann constant \(k_{B}\) is typically chosen in the context of holographic thermodynamics. By the asymptotic equipartition property [44], the \(S_{\rm HB}\) entropy behaves as \[S_{\rm BH}\ \propto\ \log W\,, \tag{4}\] implying that the number of microstates \(W\) (more precisely a volume of a typical set) scales exponentially so that \[W\ =\ f(L)\,\eta^{L^{2}}\,,\quad\text{with}\quad\eta>1\quad\quad\text{and} \quad\lim_{L\to\infty}f(L)/L\ =\ 0\,. \tag{5}\] While the scaling (5) prevents the Bekenstein-Hawking entropy from being considered as a full-fledged thermodynamic entropy, the entropy (1) may be considered a proper thermodynamic entropy, provided a suitable scaling exponent \(\delta\) is chosen [19]. This is because with a proper \(\delta\), the entropy \(S_{\delta}\) preserves the structure of thermodynamic Legendre transforms [19; 41; 42]. By combining (2) with (5), we obtain that in the large \(L\) limit, the entropy \(S_{\delta}\) can be written as \[S_{\delta}\ =\ \gamma_{\delta}A^{\delta}\,, \tag{6}\] where \(A\) is the horizon area and \(\gamma_{\delta}\) is a \(\delta\)-dependent constant, which for \(\delta=1\) reduces to Hawking's conventional form \(\gamma=1/(4L_{p}^{2})\). When the number of microstates scales according to (5), the scaling exponent \(\delta\) should be \(3/2\) in three spatial dimensions to ensure the entropy is an extensive thermodynamic quantity [19; 42]. On the other hand, it is expected that at the quantum gravitational level, the black hole surface (and by extension cosmological horizons) will follow a deformed holographic scaling [45; 46; 47]. In particular, one may employ Barrow's idea, which posits that black holes and more generally cosmological horizon surfaces possess a fractal structure with associated generalized Bekenstein-Hawking entropy \[S_{\rm Gen.BH}\ \propto\ L^{2+\Delta}\,. \tag{7}\] Here, \(\Delta\) is nothing but an _anomalous dimension_ because similarly as in conventional Quantum Field Theory (QFT), it simply measures how much the scaling dimension (i.e., \(2+\Delta\)) deviates from its classical value (i.e., \(2\)) due to quantum effects. Since the coupling constant in conventional quantum gravity decreases with increasing distance, the larger the distance scale [48], the smaller the value of \(\Delta\). 
At large scales (low energies), it may be expected that \(\Delta=0\), and one recovers the classical Bekenstein-Hawking entropy. Equation (7) implies the scaling \[W\ =\ f(L)\eta^{L^{2+\Delta}}\,,\ \ \ \text{with}\ \ \ \eta>1\,. \tag{8}\] The Barrow entropy (7) was originally proposed in Ref. [45] as a toy model for understanding the possible effects of quantum gravitational spacetime foam [49]. The Barrow entropy reduces to the standard Bekenstein-Hawking entropy in the limit \(\Delta\to 0\), whereas the case \(\Delta=1\) corresponds to maximal deformation. Barrow provided a simple "sphere-flake" fractal model for \(\Delta\), which allows only for \(\Delta\in[0,1]\). While indeed the Hausdorff dimension of very rough surfaces might be arbitrarily close to the embedding Euclidean dimension (i.e., max 3), the lower value of \(\Delta\) might acquire negative values for "spongy" or "porous" surfaces. For instance, for the Sierpinski carpet, the Hausdorff dimension is \(\sim\)1.89, so that \(\Delta\)\(\sim\)\(-0.11\), while real-world porous surfaces can have Hausdorff dimensions substantially lower than 2, cf., e.g., [50; 51]. The fact that anomalous dimensions can be negative stems also from QFT where the renormalization group reasonings generally allow for negatively-valued \(\Delta\) in various systems [52; 53]. In passing, we might note that for \(\Delta>0\), the scaling (8) indicates that there are more quantum microstates available than there are in the classical situation, while for \(\Delta<0\), the number of available states is lower than what is seen classically. If we now insert Barrow's microstate scaling (8) to (2), we obtain \[S_{\delta}\ =\ \gamma_{\delta}\,A^{(1+\Delta/2)\,\delta}\,. \tag{9}\] The extensivity of the \(\delta\)-entropy then implies the relation between \(\Delta\) and \(\delta\), namely \[(1+\Delta/2)\,\delta\ =\ \frac{3}{2}\ \ \ \Leftrightarrow\ \ \ \delta\ =\ \frac{3}{2+\Delta}\,. \tag{10}\] Entropy \(S_{\delta}\) belongs to the two-parameter class of entropic functionals, referred to as the \(S_{q,\delta}\) entropies that were proposed by Tsallis in Ref. [19]. There, in particular, \(S_{\delta}\equiv S_{1,\delta}\). It is important to stress that \(S_{\delta}\) does not correspond to the widely used Tsallis entropy [54; 55; 56; 57], which is a prominent concept in statistical physics and the theory of complex dynamical systems. Specifically, Tsallis' entropy with the non-extensivity parameter \(q\) is the \(S_{q,1}\) member in the aforementioned two-parameter class of entropies. The so-called _Tsallis cosmology_ is an approach that incorporates the \(S_{\delta}\) entropy into the first law of thermodynamics to produce modified Friedmann cosmological equations. The standard cosmological model is then recovered in the limit \(\delta=1\) and \(\Delta=0\). It is worth noting that the use of \(S_{\delta}\) in formulating the first law of thermodynamics appears to be somewhat arbitrary in the existing literature. For this reason, we adhere to Tsallis' original suggestion [19] for \(S_{\delta}\) in this paper and formulate the first law so that the entropy will be extensive but not additive. Due to the non-additive nature of the entropy, particular attention must be paid to the integration factor of the heat one-form, which in this case is not a simple inverse of thermodynamic temperature, but instead, it factorizes into entropic and thermal parts. With the correct first law of thermodynamics at hand, one can explore the potential consequences of Tsallis cosmology. 
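As a small numerical illustration of Equations (2) and (8)-(10) (a minimal sketch, with the constant \(\eta_{\delta}\) set to one and an assumed base \(\eta=2\)), one can check that choosing \(\delta=3/(2+\Delta)\) indeed makes \(S_{\delta}\) scale with the volume \(L^{3}\):

```python
import numpy as np

eta, Delta = 2.0, 0.0013           # assumed microstate base and anomalous dimension
delta = 3.0 / (2.0 + Delta)        # extensivity condition, Eq. (10)

def S_delta(L):
    # Barrow-type scaling, Eq. (8): W = eta**(L**(2+Delta)); work with log W directly
    logW = (L ** (2.0 + Delta)) * np.log(eta)
    return logW ** delta           # equiprobable delta-entropy, Eq. (2), with eta_delta = 1

L = np.array([10.0, 20.0, 40.0])
S = S_delta(L)
print(delta)                       # ~1.4990, consistent with the value quoted in the abstract
print(S[1] / S[0], S[2] / S[1])    # both ~8.0: doubling L multiplies S_delta by 2**3
```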
It is clear that the consistency of the approach strictly relies on the available observational datasets that should be matched with Tsallis cosmology at various epochs. In particular, the Big Bang nucleosynthesis (BBN) plays a crucial role in this respect, providing an independent and powerful constraint for any cosmological model. Notably, the formation of primordial light elements in the BBN represents an important epoch of the universe's evolution. In fact, during this era, the formation of light elements left an imprint on their abundance that is observed today. With the advancements in high-precision instrumentation and the infusion of new ideas from particle cosmology and astrophysics, the BBN currently represents a powerful probe for testing the early universe, with non-trivial consequences on scenarios beyond the Standard Models of particle physics and the standard cosmological model. Ensuing "new" physics may alter the evolution of events that occurred in the BBN era as compared to the standard theories, and current observations thus provide strong constraints on parameters characterizing such models. One of the key messages of this paper is that Tsallis cosmology is capable of consistently describing the primordial light elements' formation in the BBN era. In addition, we also show that the range of \(\delta\)-parameters obtained is compatible with bounds on the cold dark matter relic abundance. In contrast to other works on cosmology that rely on the \(\delta\)-entropy, we put emphasis on the second law of thermodynamics. By the second law of thermodynamics, we mean the Caratheodory formulation. This formulation states that the heat one-form in any thermodynamically consistent system should be holonomic, which implies the existence of a new state function--entropy. Moreover, this allows for the definition of a unique absolute temperature for the \(\delta\)-entropy-driven thermodynamic systems. We demonstrate that the integration factor of the heat one-form cannot simply be described as the inverse of thermodynamic temperature due to the non-additive nature of entropy. Instead, it factorises into entropic and thermal parts. We further show that the factorization property of the integration factor allows us to identify absolute temperature uniquely (up to a multiplicative factor). This, in turn, permits us to follow the established methodology from conventional thermodynamics to derive the (modified) Friedmann equations. It is worth noting that our use of \(S_{\delta}\) is based on Formula (9), which conceptually differs from a more commonly used version [58; 59; 60]. Therefore, while the first law of thermodynamics reflects energy conservation, and as such, it is crucial in setting up the Friedmann equations (basically along the same lines as in the original Jacobson's paper [3]), it is the second law (more precisely its modified version--new integration factor, new entropy, and new absolute temperature) that brings about the key modifications into the Friedmann equations and allows for novel cosmological implications. The layout of the paper is as follows. In the following section, we examine the role of the \(S_{\delta}\) entropy and discuss how it can be incorporated into the thermodynamic framework. Particular attention is paid to an integrating factor for the heat one-form. It is shown that the latter cannot be simply identified with the inverse thermodynamic temperature, but instead, it factorises into entropic and thermal parts. 
In Section 2.2, we briefly discuss the modified Friedmann equations that result from the application of the first law of thermodynamics to the apparent horizon of a FRW (Friedmann-Robertson-Walker) universe. We use the constraints from the BBN physics in Section 3 and from dark matter in Section 4 to infer limits on the anomalous dimension and on Tsallis' parameter \(\delta\). It is shown that the Tsallis cosmology is consistent with both the formation of primordial light elements (such as deuterium \({}^{2}H\) and helium \({}^{4}He\)) and the relic abundance of dark matter particles given that the scaling exponent \(\delta\)\(\sim\)1.499, or equivalently, the anomalous dimension \(\Delta\)\(\sim\)0.0013. Finally, Section 5 summarizes our results and identifies potential avenues for future research. For the sake of clarity, we relegate some more technical considerations to two appendices.

## 2 Thermodynamics Based on \(S_{\delta}\) and Cosmological Equations

### 2.1 \(S_{\delta}\)-Entropy and the First and Second Laws of Thermodynamics

In this section, we briefly review thermodynamics based on the \(S_{\delta}\)-entropy. In particular, we will focus on introducing the \(S_{\delta}\)-entropy into the first law of thermodynamics by utilizing Caratheodory's formulation of the second law of thermodynamics. It is important to note that while Caratheodory's formulation may not be as common in the literature, it can be derived directly from the conventional Kelvin-Planck statement of the second law [61; 62]. In our exposition, we will loosely follow Ref. [41]. The result obtained will be further instrumental in deducing modified Friedmann-Robertson-Walker equations, which will be discussed in the following subsection. Many cosmological systems, such as black holes, have entropies that exhibit sub-extensive scaling. A paradigmatic example of this phenomenon is the area-law scaling of the BH entropy. Since the laws of black hole mechanics are mathematically analogous to the laws of thermodynamics, one often formally postulates black hole thermodynamics without any reference to arguments coming from statistical mechanics [63]. This strategy was further extended by Gibbons and Hawking [64] and later by 't Hooft and Susskind [65; 66], who have demonstrated that black hole thermodynamics is more general than black holes, namely that cosmological event horizons also have an entropy and temperature and that one may again affiliate formal thermodynamic rules with them. These findings have prompted an ongoing debate on whether the aforementioned systems are merely analogous to thermodynamic systems or whether they should be considered genuine thermodynamic systems. Recently, Tsallis offered an alternative viewpoint [19; 42] in which he advocated that such systems may be viewed as genuine thermodynamic systems, provided the cosmological entropy is replaced with an extensive but not additive entropy, while the holographic scaling of the state-space remains unchanged. Let us now take a closer look at Tsallis' proposal. We start by recalling that the key property in the thermodynamic framework is the Legendre transform, which, for instance, for the Gibbs free energy, takes the form \[G(T,p,N,\ldots)\ =\ U(S,V,N,\ldots)\ +\ pV\ -\ TS\,, \tag{11}\] where \(G\) and \(U\) stand for the Gibbs free energy and internal energy, respectively. Both \(G\) and \(U\) are expressed in terms of their _natural variables_, and dots stand for prospective additional (non-mechanical) state variables.
By following [19; 42], we now define the length-scale independent thermodynamic potentials \(g=\lim_{L\to\infty}G/L^{\varepsilon}\) and \(u=\lim_{L\to\infty}U/L^{\varepsilon}\), where \(L\) is the characteristic linear scale of the system and \(\varepsilon\) is a scaling exponent (not necessarily identical to the spatial dimension \(d\)). Note that \(g\) and \(u\) must satisfy (for large \(L\)) \[G(T,p,N,\ldots)\ =\ L^{\varepsilon}g(T/L^{\nu},p/L^{\nu},N/L^{d},\ldots)\,,\] \[U(S,V,N,\ldots)\ =\ L^{\varepsilon}u(S/L^{d},1,N/L^{d},\ldots)\,. \tag{12}\] Here, we do not assume that the scaling exponent \(\nu\) has a typical laboratory value \(\nu=0\). Therefore, because (for large \(L\)) \(G(T,p,N,\ldots)\propto L^{\varepsilon}\), \(U(S,V,N,\ldots)\propto L^{\varepsilon}\), \(p\propto L^{\nu}\), and \(T\propto L^{\nu}\), then (11) inevitably implies that \(S\propto L^{d}\) (this was implicitly used in (12)) and \(\varepsilon=\nu+d\). In this way, one can (for large \(L\)) rewrite (11) in the form \[g(T/L^{\nu},p/L^{\nu},N/L^{d},\ldots)\ =\ u(S/L^{d},1,N/L^{d},\ldots)\ +\ \frac{p}{L^{\nu}}\cdot 1\ -\ \frac{T}{L^{\nu}}\frac{S}{L^{d}}\,. \tag{13}\] Hence, the structure of the Legendre transform is also satisfied for length-scale-independent thermodynamic potentials. This analysis shows that entropy should be an extensive quantity provided that \(T\) and \(p\) scale in the same manner, regardless of the precise scaling of the thermodynamic potentials (which should be the same for all of them as they all refer to energy). In addition, it is clear that one could also repeat the same reasoning for other thermodynamic potentials. The required extensivity of thermodynamic entropy is a starting point of Tsallis' analysis. In order to satisfy both the holographic state-space scaling (5) (and more generally (7)) and the extensivity condition \(S\propto L^{d}\), Tsallis proposed the \(S_{\delta}\) entropy with a specific value of \(\delta\) that enforces the extensivity. In particular, for \(d=3\), one has that \(\delta\) should be \(3/2\) for conventional holographic scaling (5) and \(3/(2+\Delta)\) for Barrow's type of scaling (8). In the spirit of Tsallis' suggestion, we now set \(\alpha=2+\Delta\) and assume that \(S_{3/\alpha}\) is a thermodynamic entropy. There are two apparent drawbacks associated with this assumption. First, \(S_{3/\alpha}\) is not additive (not even in the \(L\to\infty\) limit) but instead follows the pseudo-additivity rule \[S_{3/\alpha}(A+B)\ =\ \left[S_{3/\alpha}^{\alpha/3}(A)\ +\ S_{3/\alpha}^{\alpha/3}(B)\right]^{3/\alpha}, \tag{14}\] for any two independent subsystems \(A\) and \(B\). Second, it is unclear what _conjugate thermodynamic variable_ is associated with this entropy. The first point, which is an unavoidable result of working with systems that have sub-extensive scaling, such as gravity, is not a major issue, as we shall see. However, the second point is more serious. Caratheodory's formulation of the second law of thermodynamics states that a heat one-form, \(\ d\mathcal{Q}\), must have an integration factor (with the heat one-form being _holonomic_) so that entropy is a state function [67; 68]. However, since the entropy is not additive, one cannot use the conventional Carnot cycle argument [69] in the proof of Clausius equality to simply equate the integration factor with inverse temperature. Let us examine this last point more closely to understand better what is involved.
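Before turning to the integration factor, the extensivity argument above can be illustrated with a minimal numerical sketch. It assumes the equiprobable form of the \(\delta\)-entropy, \(S_{\delta}\propto(\ln W)^{\delta}\), together with the holographic state-space scaling \(\ln W\propto L^{\alpha}\); the function name and parameter values below are purely illustrative.

```python
import numpy as np

# Sketch: check that S_delta ~ (ln W)^delta becomes extensive (S ~ L^3) for
# delta = 3/alpha when the state space scales holographically, ln W(L) ~ L^alpha
# with alpha = 2 + Delta.  Equiprobable-microstate form of the delta-entropy
# is assumed here for illustration only.
def entropy_scaling_exponent(alpha, delta):
    L = np.logspace(1, 4, 50)          # characteristic linear scales
    lnW = L**alpha                     # holographic scaling of ln W (up to a constant)
    S = lnW**delta                     # Tsallis delta-entropy, equiprobable case
    # slope of log S vs log L gives the effective scaling exponent of S
    return np.polyfit(np.log(L), np.log(S), 1)[0]

for Delta in (0.0, 0.013, 0.5):
    alpha = 2.0 + Delta
    print(f"Delta={Delta:5.3f}: S ~ L^{entropy_scaling_exponent(alpha, 3/alpha):.3f} "
          f"(extensive for delta = 3/alpha = {3/alpha:.4f})")
```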
Since the exact differential associated with the heat one-form is entropy, we can write \[dS_{3/\alpha}(\mathbf{a},\theta)\ =\ \mu(\mathbf{a},\theta)\ d\mathcal{Q}( \mathbf{a},\theta)\,, \tag{15}\] where \(\mathbf{a}\) represents a collection of relevant state variables and \(\theta\) is some _empirical_ temperature whose existence is guaranteed by the zeroth law of thermodynamics (see also Appendix A). We now divide the system in question into two subsystems, \(A\) and \(B\), that are described by state variables \(\{\mathbf{a}_{1},\theta\}\) and \(\{\mathbf{a}_{2},\theta\}\), respectively. Then, \[\ d\mathcal{Q}_{A}(\mathbf{a}_{1},\theta) =\ \frac{1}{\mu_{A}(\mathbf{a}_{1},\theta)}dS_{A,3/\alpha}( \mathbf{a}_{1},\theta)\,,\] \[\ d\mathcal{Q}_{B}(\mathbf{a}_{2},\theta) =\ \frac{1}{\mu_{B}(\mathbf{a}_{2},\theta)}dS_{B,3/\alpha}( \mathbf{a}_{2},\theta)\,. \tag{16}\] Therefore, for the whole system \[\ d\mathcal{Q}_{A+B}\ =\ d\mathcal{Q}_{A}\ +\ d\mathcal{Q}_{B}\,, \tag{17}\] with \[d\mathcal{Q}_{A+B}(\mathbf{a}_{1},\mathbf{a}_{2},\theta)\ =\ \frac{1}{\mu_{A+B}( \mathbf{a}_{1},\mathbf{a}_{2},\theta)}\,dS_{(A+B),3/\alpha}(\mathbf{a}_{1}, \mathbf{a}_{2},\theta)\,, \tag{18}\] we can write \[dS_{(A+B),3/\alpha}(\mathbf{a}_{1},\mathbf{a}_{2},\theta) = \frac{\mu_{A+B}(\mathbf{a}_{1},\mathbf{a}_{2},\theta)}{\mu_{A}( \mathbf{a}_{1},\theta)}\,dS_{A,3/\alpha}(\mathbf{a}_{1},\theta) \tag{19}\] \[+ \frac{\mu_{A+B}(\mathbf{a}_{1},\mathbf{a}_{2},\theta)}{\mu_{B}( \mathbf{a}_{2},\theta)}\,dS_{B,3/\alpha}(\mathbf{a}_{2},\theta)\,.\] Let us now assume that there is only one state variable so that \(\mathbf{a}=a\). If there were more state variables, our subsequent argument would still be valid, but we would need to consider more than two subsystems. Under this assumption, we can invert \(S_{A,3/\alpha}(a_{1},\theta)\) and \(S_{B,3/\alpha}(a_{b},\theta)\) in terms of \(a_{1}\) and \(a_{2}\) and write (at least locally) that \[a_{1}\ =\ a_{1}(S_{A,3/\alpha},\theta)\ \ \ \text{and}\ \ \ a_{2}\ =\ a_{2}(S_{B,3/\alpha},\theta)\,. \tag{20}\] With this, Equation (19) can be cast into the form \[dS_{(A+B),3/\alpha}(S_{A,3/\alpha},S_{B,3/\alpha},\vartheta) = \frac{\mu_{A+B}(S_{A,3/\alpha},S_{B,3/\alpha},\vartheta)}{\mu_{A}(S _{A,3/\alpha},\vartheta)}dS_{A,3/\alpha} \tag{21}\] \[+ \frac{\mu_{A+B}(S_{A,3/\alpha},S_{B,3/\alpha},\vartheta)}{\mu_{B} (S_{B,3/\alpha},\vartheta)}dS_{B,3/\alpha}\;+\;0d\vartheta\,.\] Since \(dS_{3/\alpha}\) is a total differential, the following integrability conditions must hold: \[\frac{\partial\log(\mu_{A}(S_{A,3/\alpha},\vartheta)}{\partial \vartheta}\;=\;\frac{\partial\log(\mu_{B}(S_{B,3/\alpha},\vartheta)}{ \partial\vartheta}\;=\;\frac{\partial\log(\mu_{A+B}(S_{A,3/\alpha},S_{B,3/ \alpha},\vartheta)}{\partial\vartheta}\;, \tag{22}\] \[\frac{1}{\mu_{A}(S_{A,3/\alpha},\vartheta)}\frac{\partial\mu_{A+ B}(S_{A,3/\alpha},S_{B,3/\alpha},\vartheta)}{\partial S_{B,3/\alpha}}\;=\; \frac{1}{\mu_{B}(S_{B,3/\alpha},\vartheta)}\frac{\partial\mu_{A+B}(S_{A,3/ \alpha},S_{B,3/\alpha},\vartheta)}{\partial S_{A,3/\alpha}}\,. \tag{23}\] Note that in (22), the derivatives cannot depend on entropy but only on \(\vartheta\). 
We can thus denote the right-hand-side (RHS) of (22) as \(-w(\vartheta)\) and write the solutions in the form \[\mu_{A}(S_{A,3/\alpha},\vartheta)\;=\;\Phi_{A}(S_{A,3/\alpha})\exp\biggl{(}-\int w(\vartheta)d\vartheta\biggr{)}\;=\;\Phi_{A}(S_{A,3/\alpha})\,T^{-1}(\vartheta)\,,\] \[\mu_{B}(S_{B,3/\alpha},\vartheta)\;=\;\Phi_{B}(S_{B,3/\alpha})\exp\biggl{(}-\int w(\vartheta)d\vartheta\biggr{)}\;=\;\Phi_{B}(S_{B,3/\alpha})\,T^{-1}(\vartheta)\,,\] \[\mu_{A+B}(S_{A,3/\alpha},S_{B,3/\alpha},\vartheta)\;=\;\Phi_{A+B}(S_{A,3/\alpha},S_{B,3/\alpha})\exp\biggl{(}-\int w(\vartheta)d\vartheta\biggr{)}\] \[=\;\Phi_{A+B}(S_{A,3/\alpha},S_{B,3/\alpha})\,T^{-1}(\vartheta)\,. \tag{24}\] Here, \(\Phi_{X}\) (where \(X\) stands for \(A\), \(B\), and \(A+B\), respectively) are some arbitrary functions of the entropy, and \(T(\vartheta)\) is a subsystem-independent (but generally \(\alpha\)-dependent) function of the empirical temperature. The negative sign in front of \(w(\vartheta)\) is adopted to ensure that for the monotonically increasing function \(w(\vartheta)\), the temperature function \(T(\vartheta)\) will be a monotonically increasing function of the empirical temperature, \(\vartheta\). By differentiating Equation (14), we can observe that \[dS_{(A+B),3/\alpha}\;=\;\frac{S_{A,3/\alpha}^{\alpha/3-1}}{S_{(A+B),3/\alpha}^{\alpha/3-1}}dS_{A,3/\alpha}\;+\;\frac{S_{B,3/\alpha}^{\alpha/3-1}}{S_{(A+B),3/\alpha}^{\alpha/3-1}}dS_{B,3/\alpha}\,. \tag{25}\] By comparing this with (21) and (24), we can infer that the condition \(\Phi_{X}(S_{X,3/\alpha})=\kappa S_{X,3/\alpha}^{1-\alpha/3}\) must hold (here, \(\kappa\) is an arbitrary multiplicative constant). Using this identification, we can easily verify that the remaining integrability condition (23) is also satisfied. In conventional thermodynamics, \(\Phi_{X}\) is a constant, enabling the integration factor to be identified with an absolute temperature. In the context of non-additive entropy \(S_{3/\alpha}\), this is not the case. Fortunately, \(\mu_{X}\) satisfies a simple factorization rule in which the dependence of \(\mu_{X}\) on \(S_{3/\alpha}\) and \(\vartheta\) is separated. We note that up to a multiplicative constant (that sets the units), \(T\) is a unique temperature quantifier of the system that is described by \(\vartheta\). For this reason, we can identify \(T\) with an _absolute temperature_ (see also Appendix A). Finally, the heat one-form, \(d\mathcal{Q}\), which is part of the first law of thermodynamics, assumes the form \[d\mathcal{Q}\;=\;\frac{1}{\mu}\,dS_{3/\alpha}\;=\;T\,\frac{S_{3/\alpha}^{\alpha/3-1}}{\kappa}dS_{3/\alpha}\;=\;\frac{3T}{\kappa\alpha}\,dS_{3/\alpha}^{\alpha/3}\,. \tag{26}\] We will denote \((3S_{3/\alpha}^{\alpha/3}/\kappa\alpha)\) as \(\mathcal{S}_{(\alpha)}\) and note that \(\mathcal{S}_{(\alpha)}\propto L^{\alpha}\). By analogy with (6), the proportionality factor between \(\mathcal{S}_{(\alpha)}\) and \(L^{\alpha}\) in the limit of a large \(L\) is set to be \((4\pi)^{\alpha/2}\gamma_{\alpha/2}\) where the value of \(\gamma_{\alpha/2}\) is still to be determined (see the next sub-section). Finally, we can express the first law of thermodynamics in a simple form as \[dU\ =\ T\,d\mathcal{S}_{(\alpha)}\ -\ pdV\,. \tag{27}\] This can also be obtained with the help of the zeroth law of thermodynamics (cf. Appendix A). We note that, similar to fluid dynamics, the work density \(W\) plays the role of pressure in the cosmological framework. So \(pdV\to WdV\).
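As a quick consistency check, the differential identity (25)--which underlies the identification \(\Phi_{X}(S_{X,3/\alpha})=\kappa S_{X,3/\alpha}^{1-\alpha/3}\)--can be verified symbolically. The following sketch uses SymPy; the variable names are ours and the check is purely illustrative.

```python
import sympy as sp

# Sketch: symbolic check of the differential identity (25) that follows from
# the pseudo-additivity rule (14).
S_A, S_B, alpha = sp.symbols('S_A S_B alpha', positive=True)
S_AB = (S_A**(alpha/3) + S_B**(alpha/3))**(3/alpha)   # composition rule (14)

lhs = sp.diff(S_AB, S_A)                # coefficient of dS_A in dS_{A+B}
rhs = (S_A/S_AB)**(alpha/3 - 1)         # coefficient claimed in (25)
# expected: 1 (powsimp may be needed on some SymPy versions to combine powers)
print(sp.simplify(sp.powsimp(lhs/rhs, force=True)))
```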
In particular, in cosmology, \(W=-\frac{1}{2}\text{Tr}(T^{\mu\nu})\), where "\(\text{Tr}\)" denotes the two-dimensional normal trace, i.e., \(\text{Tr}(T^{\mu\nu})=T^{\alpha\beta}h_{\alpha\beta}\). In the latter, \(T^{\mu\nu}\) is the energy-momentum tensor, and \(h_{\alpha\beta}\) represents the metric on the horizon [24; 41].

### \(S_{\delta}\)-Entropy and Friedmann Equations

Equation (27) is often employed as a basis for deriving cosmological equations in Tsallis cosmology, even though the justification and rationale for its use are based on a premise that is distinct from that discussed in the preceding sub-section. In the literature, Equation (27) is typically derived from a formal analogy with black hole thermodynamics [70]. In contrast, our subsequent approach directly follows the original Tsallis proposal, which considers cosmological systems with holographic micro-state-space scaling as genuine thermodynamic systems, provided that an extensive, non-additive entropy \(S_{\delta}\) is used. By utilizing Equation (27), one can deduce the first and second modified Friedmann equations for a homogeneous and isotropic universe. In particular, for the flat FRW universe, these read (see, e.g., Ref. [41]) \[\frac{8\pi M_{\text{Pl}}^{-\alpha}}{3}\rho\ =\ \left(H^{2}\right)^{2-\alpha/2}, \tag{28}\] \[\frac{\ddot{a}}{a}\Big{(}H^{2}\Big{)}^{1-\alpha/2}\ =\ \frac{8\pi M_{\text{Pl}}^{-\alpha}}{3(4-\alpha)}[(1-\alpha)\rho\ -\ 3p]\,. \tag{29}\] Here, \(a(t)>0\) is the Robertson-Walker scale factor, \(H=\dot{a}/a\) is the Hubble parameter, and \(M_{\text{Pl}}\) is the Planck mass [in natural units (\(c=1=\hbar\)) \(M_{\text{Pl}}=\sqrt{1/G}\simeq 1.22\times 10^{19}\) GeV \(\simeq 2.18\times 10^{-8}\) kg]. The function \(\rho\) represents the energy density of the universe's matter content. Since the matter content of the FRW universe must have the perfect fluid form, \(\rho\) is the perfect fluid energy density. In addition, the proportionality parameter \(\gamma_{\alpha/2}\) assumes the form [41] \[\gamma_{\alpha/2}\ =\ \frac{3(4-\alpha)(4\pi)^{1-\alpha/2}M_{\text{Pl}}^{\alpha}}{4\alpha}\,. \tag{30}\] In connection with (29), one interesting observation is in order. Cosmological measurements, including type Ia supernovae [71], the cosmic microwave background (CMB) [72], and the large-scale structure [73; 74], suggest that the universe is currently in an accelerated phase, which means that \(\ddot{a}>0\). By using the equation of state for a perfect fluid \(p=w\rho\) (where \(w\) is a dimensionless number), we obtain from (29) that \[(1-\alpha)\rho\ -\ 3w\rho\ >\ 0\ \ \ \Rightarrow\ \ \ w<(1-\alpha)/3\,. \tag{31}\] We have used the fact that \(\alpha\) is less than 4 to obtain Inequality (31). In fact, \(\alpha\) can be maximally 3 (as seen in the Introduction). We note that (31) implies that for \(\alpha\geq 1\), \(w\) is always negative (thus corresponding to dark energy), while for \(\alpha<1\), \(w\) can be either negative or positive. This means that in Tsallis cosmology, the accelerated phase of the late-time universe is possible even with ordinary matter, i.e., without invoking the concept of dark energy. In particular, for \(w=0\) (ordinary dust matter), accelerated expansion can be obtained with the scaling exponent \(\alpha<1\). In this paper, we will, however, not be exploring any further the fascinating topic of the universe's accelerated phase.
This is because our focus here will be on a radiation-dominated universe, which is the early universe era whose dynamics is determined by radiation, such as photons, neutrinos and ultra-relativistic electrons and positrons. To proceed, we rewrite (28) as \[H(T)\ =\ Q(T)\,H_{\text{St.Cosm.}}(T)\,, \tag{32}\] where \(H_{\text{St.Cosm.}}=\sqrt{\dfrac{8\pi}{3M_{\text{Pl}}^{2}}\rho(T)}\) is the Hubble parameter in the standard cosmology and \(Q(T)\) (the so-called amplification factor) is given by \[Q(T) = \left[\sqrt{\dfrac{8\pi}{3}}\,\dfrac{\rho^{1/2}}{M_{\text{Pl}}^{2}}\right]^{\frac{\alpha-2}{4-\alpha}}\ =\ \eta\left(\dfrac{T}{T_{*}}\right)^{\nu}, \tag{33}\] where \[\eta = \left[\dfrac{2\pi}{3}\sqrt{\dfrac{\pi g_{*}(T)}{5}}\right]^{\frac{\alpha-2}{4-\alpha}}, \tag{34}\] \[\nu = \dfrac{2(\alpha-2)}{4-\alpha}\,,\quad T_{*}\ \equiv\ M_{\text{Pl}}\,. \tag{35}\] In deriving (33) we used the fact that for a radiation-dominated universe, the Stefan-Boltzmann law \(\rho=\dfrac{\pi^{2}g_{*}(T)}{30}\,T^{4}\) holds. In the latter, \(g_{*}(T)\) counts the total number of effective degrees of freedom (those species with the rest mass \(m_{i}\ll T\)), cf. [75]. The explicit form of the amplification factor (33) is the key input from Tsallis cosmology, and it will be crucial in the following two sections. Another important consequence of the generalized Friedmann equations is the modified time scaling for \(a(t)\). To see this, we observe that the equation of state for radiation, \(p=\rho/3\), implies the continuity equation \[\dot{\rho}(t)\ +\ 4H\rho(t)\ =\ 0\,. \tag{36}\] This is a consequence of the conservation of the energy-momentum tensor in the FRW background, i.e., \(\nabla_{\mu}T^{\mu\nu}=0\). Equation (36) can be solved with \(\rho(t)=\rho_{0}/a^{4}(t)\), where \(\rho_{0}\) is a constant. By inserting this relation into Equation (28), we obtain that the scale factor \(a(t)=a_{0}\big{(}\frac{t}{4-\alpha}\big{)}^{1-\alpha/4}\), which should be compared to the scaling behavior \(a(t)\propto t^{1/2}\) that results from the standard Friedmann equations. In addition, by employing the Stefan-Boltzmann law, we obtain the relation between the cosmic time \(t\) and the temperature \(T\), namely \(t\propto T^{-\frac{4}{4-\alpha}}\), which implies that \(Ta(t)=\) constant.

## 3 Tsallis Cosmology and Bounds from BBN

### General Analysis

In this section, we will examine the effects of Tsallis cosmology, discussed in the previous section, on Big Bang nucleosynthesis. In Appendix B, we provide supplementary technical details regarding the derivation of the equations employed here. In our exposition, we will chiefly follow the approach discussed in Bernstein et al. (2016), Torres et al. (2017), and Capozziello et al. (2018). We start by equating the expansion rate of the universe (32) with the interaction rates of relevant processes involved during the BBN (cf. Appendix B). This allows us to compute the freeze-out temperature \(T_{f}\) \[T_{f}\ =\ M_{\rm Pl}\left[\eta\frac{2\pi}{3}\sqrt{\frac{\pi g_{*}}{5}}\frac{1}{\mathcal{A}_{0}\,M_{\rm Pl}^{4}}\right]^{\frac{1}{3-\nu}}, \tag{37}\] where \(\mathcal{A}_{0}=9.6\times 10^{-10}\) GeV\({}^{-4}\).
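The amplification factor and the freeze-out temperature above lend themselves to a short numerical sketch. The snippet below (illustrative values; \(g_{*}\simeq 10\) and \(M_{\rm Pl}\simeq 1.22\times 10^{19}\) GeV are assumed, and the helper names are ours) evaluates \(Q(T)\), \(T_{f}(\alpha)\) from Equation (37), and the parameter conversions \(\delta=3/\alpha\) and \(\Delta=\alpha-2\); at \(\alpha=2\) it should recover the standard-cosmology value \(T_{f}\simeq 0.76\) MeV.

```python
import numpy as np

M_PL   = 1.22e19   # Planck mass [GeV]
A0     = 9.6e-10   # A_0 [GeV^-4] in Lambda(T) ~ A0 * T^5
G_STAR = 10.0      # effective degrees of freedom around T ~ 1 MeV

def nu(alpha):
    """Exponent of Eq. (35)."""
    return 2.0 * (alpha - 2.0) / (4.0 - alpha)

def eta(alpha):
    """Prefactor of Eq. (34)."""
    return ((2.0 * np.pi / 3.0) * np.sqrt(np.pi * G_STAR / 5.0)) ** ((alpha - 2.0) / (4.0 - alpha))

def Q(T, alpha):
    """Amplification factor of Eq. (33); T in GeV."""
    return eta(alpha) * (T / M_PL) ** nu(alpha)

def T_freeze(alpha):
    """Freeze-out temperature of Eq. (37), in GeV."""
    base = eta(alpha) * (2.0 * np.pi / 3.0) * np.sqrt(np.pi * G_STAR / 5.0) / (A0 * M_PL**4)
    return M_PL * base ** (1.0 / (3.0 - nu(alpha)))

for alpha in (2.0, 2.0013, 2.0057):
    Tf = T_freeze(alpha)
    print(f"alpha={alpha:.4f}  delta={3/alpha:.4f}  Delta={alpha-2:+.4f}  "
          f"T_f={Tf*1e3:.3f} MeV  Q(T_f)={Q(Tf, alpha):.3f}")
# alpha = 2 gives Q = 1 and T_f close to 0.76 MeV (standard cosmology).
```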
Defining \(\delta T_{f}=T_{f}-T_{0f}\), with \(T_{0f}\sim 0.76\) MeV (which follows from the standard computation, i.e., from equating \(H_{\rm St.Cosm.}\) with \(\Lambda(T)\simeq\mathcal{A}_{0}\,T^{5}\)), one obtains \[\left|\frac{\delta T_{f}}{T_{f}}\right|\ =\ \left|1\ -\ \frac{T_{0f}}{M_{\rm Pl}}\left[\left(\frac{2\pi}{3}\sqrt{\frac{\pi g_{*}}{5}}\right)^{\frac{2}{4-\alpha}}\frac{1}{\mathcal{A}_{0}\,M_{\rm Pl}^{4}}\right]^{-\frac{4-\alpha}{16-5\alpha}}\right|. \tag{38}\] At the same time, the BBN consistent \({}^{4}\!He\) mass fraction [79; 80] \[Y_{p} = 0.2449\pm 0.004\,, \tag{39}\] with an uncertainty \(|\delta Y_{p}|<10^{-3}\) can be used to infer an upper bound on (38). In particular, from Equation (A18), one can directly deduce that \[\left|\frac{\delta T_{f}}{T_{f}}\right|\ <\ 3.7\times 10^{-2}\,. \tag{40}\] The constraint on the parameter \(\alpha\) can be obtained by comparing Equations (38) and (40). For \(T_{f}\simeq 0.76\) MeV, we have to use \(g_{*}\simeq 10\). In fact, in the standard cosmology, the value \(g_{*}\simeq 10\) is constant in the temperature range \(1\) MeV \(\lesssim T\lesssim 100\) MeV, cf. [75]. Figure 1 shows that BBN restricts the values of \(\alpha\) to the interval \(2.0013\lesssim\alpha\lesssim 2.0057\). The latter means that \[1.4957\ \lesssim\ \delta\ \lesssim\ 1.4990\ \ \ \Leftrightarrow\ \ \ 0.0013\ \lesssim\ \Delta\ \lesssim\ 0.0057\,. \tag{41}\]

### Constraints on Tsallis Cosmology from Primordial Abundance of Light Elements

Here, the bounds on the Tsallis parameter \(\delta\) are derived by analyzing the primordial abundances of light elements, namely deuterium \({}^{2}\!H\), helium \({}^{4}\!He\), and lithium \({}^{7}\!Li\). Since the uncertainties in the respective mass abundances are different from those used in the previous section, the bounds on \(\delta\) will also be slightly different. The \(Q\)-term entering the primordial light elements [81] is replaced by the amplification factor (33) (where the value \(Q\neq 1\) corresponds to the modification of GR induced by Tsallis cosmology). Moreover, we shall assume three generations of neutrinos (\(N_{\nu}=3\)). For the following analysis, it is important to recall the main features of the formation of light elements. Following Refs. [82; 83], we have:
* \({}^{7}\!Li\) _abundance_--When considering lithium abundance, the \(\eta_{10}\) parameter successfully fits the abundances of \({}^{2}\!H\) and \({}^{4}\!He\), but it does not align with the observations of \({}^{7}\!Li\). This fact is known as the _lithium problem_ [81]. In standard cosmology, the ratio of the expected value of \({}^{7}\!Li\) abundance and the observed value (Obs.) is \((Li|_{\rm St.Cosm.})/(Li|_{\rm Obs.})\in[2.4,4.3]\), cf., e.g., [81; 90]. The best fit for \({}^{7}\!Li\) abundance is presently given by [86], namely \[Y_{Li}\ =\ 4.82(1\ \pm\ 0.1)\bigg{[}\frac{\eta_{10}\ -\ 3(Q\ -\ 1)}{6}\bigg{]}^{2}\,.\] (48) The phenomenological constraint on lithium abundance \(Y_{Li}=1.6\pm 0.3\), cf. Ref. [89], yields \(Q_{Li}=1.960025\pm 0.076675\), see [82]. Such a value does not overlap with the constraints on \({}^{2}\!H\) and \({}^{4}\!He\) abundances.
In fact, from Figure 2, we see that the range for admissible \(\alpha\)s is \[1.9836\ \lesssim\ \alpha\ \lesssim\ 1.9854\] \[\Rightarrow 1.511\ \lesssim\ \delta\ \lesssim\ 1.5124\ \Leftrightarrow\ -0.0164\ \lesssim\ \Delta\ \lesssim\ -0.0146\,.\] (49) The above results indicate that there is no overlap between \(\alpha\)s from \(\{^{4}\!He,^{2}\!H\}\) and \({}^{7}\!Li\), i.e., \(\alpha\)s from Equations (44), (47), and (49), respectively. Therefore, the lithium problem persists also in Tsallis cosmology. On the other hand, the respective ranges of admissible \(\alpha\)s (and \(\delta\)s) are tantalizingly close. This indicates that within the context of Tsallis cosmology, the lithium problem could potentially be mitigated by considering additional elements of the \(\delta\)-entropy statistics, for example, the effects of the \(\delta\)-entropy-based statistics (i.e., a statistical physics based on the corresponding maximum-entropy probability distribution) on the computation of the relevant (electroweak) reactions occurring during the BBN. Such a study is, however, beyond the scope of the present paper.

Figure 2: \(\{Q_{{}^{4}\!He},Q_{{}^{2}\!H},Q_{{}^{7}\!Li}\}\) vs \(\alpha\). The experimental ranges of \(Q_{{}^{4}\!He,\,^{2}\!H,\,^{7}\!Li}\) are reported. The baryon parameter is fixed to \(\eta_{10}=6\), while the freeze-out temperature is set as \(T_{f}\simeq 1\) MeV.

## 4 Tsallis Cosmology and Bounds from the Relic Abundance of Cold Dark Matter Particles

A further constraint on \(\alpha\) can be deduced by utilizing the dark matter (DM) annihilation cross-section, which is linked to the cold DM relic abundance \(\Omega_{\rm CDM}\). In order to achieve this, we connect Tsallis cosmology with the \(f(R)=M_{*}R^{n}\) cosmology following the approach outlined in Ref. [91]. It should be noted that the scenario where \(n=1\) corresponds to the conventional Einsteinian cosmology. The aforementioned strategy will enable us to utilize the results derived in [92]. In a spatially flat FRW metric, and under the assumption that the scale factor follows a power-law evolution, \(a(t)=a_{0}t^{\xi}\) (in the case of Tsallis cosmology \(\xi=1-\alpha/4\), as discussed in Section 2.2), the \(f(R)\) cosmological equations can be expressed in a form that is formally identical to (32), provided the amplification factor \(Q(T)\) is substituted with \(Q_{n}(T)\), namely (cf. Refs. [93; 94]) \[Q(T)\ \to\ Q_{n}(T)\ =\ \eta_{n}\bigg{(}\frac{T}{M_{Pl}}\bigg{)}^{\nu_{n}}\,, \tag{50}\] where \[\eta_{n} = \sqrt{\frac{\gamma(8\pi/3)^{(1-n)/n}}{6|2\varsigma-1|}}\bigg{(}\frac{\pi^{2}g_{*}}{30}\bigg{)}^{\frac{1-n}{2n}}\frac{1}{(\bar{M}_{*}\Omega)^{1/2n}}\,, \tag{51}\] \[\nu_{n} = \frac{2}{n}-2\,. \tag{52}\] Here, the dimensionless constant \(\bar{M}_{*}\) is linked to the constant \(M_{*}\) by the equation \(\bar{M}_{*}=M_{*}M_{\rm Pl}^{-2(1-n)}\), while \(\Omega\) is related to \(n\) [93], but its explicit expression is not relevant for us. Comparing (35) and (52), one obtains \[n\ =\ 2-\frac{\alpha}{2}\ =\ 1\ -\ \frac{\Delta}{2}\ =\ 2\ -\ \frac{3}{2\delta}\,, \tag{53}\] while by comparing (33) with (51), we obtain the relation between \(M_{*}\) and the parameters \(\{\alpha,\xi\}\). As it is not relevant to our analysis, we do not report this relation here. These results can be used for investigating the DM relic abundance (we assume that the DM is composed of weakly interacting massive particles (WIMPs), conventionally taken to be fermions).
In modified cosmology, the cold DM relic density assumes the form [75; 92] \[\Omega_{\rm CDM}h^{2}\ \simeq\ 10^{9}\frac{(\bar{I}\ +\ 1)x_{f}^{(\bar{I}+1)}}{(h_{*}/ g_{*}^{1/2})M_{\rm Pl}\bar{\sigma}}\,, \tag{54}\] where \(h_{*}\) is the number of relativistic degrees of freedom for entropy density (typically \(h_{*}\sim g_{*}\)) and \(x_{f}\equiv m/T_{f}\) has an explicit form \[x_{f} = \log[0.038(\bar{I}\ +\ 1)(g/g_{*}^{1/2})M_{p}m\bar{\sigma}] \tag{55}\] \[-\ (\bar{I}\ +\ 1)\log\{\log[0.038(\bar{I}\ +\ 1)(g/g_{*}^{1/2})M_{p}m \bar{\sigma}]\}\,.\] Here, \(g=2\) is the spin polarization of the DM particle, \(m\) is the mass of the WIMP particle, \(\bar{\sigma}\) is the WIMP cross-section, \(\bar{I}=l+(1-n)\), and \(h_{*}\) is the number of relativistic degrees of freedom for entropy density [92]. Here, \(\bar{I}=l\) for conventional GR 2 (where, as we know, \(\alpha=2\)), while \(l=0,1\) correspond, respectively, to \(s\)-wave and \(p\)-wave polarizations. Cosmological observations constrain the cold DM density to \(\Omega_{\rm CDM}h^{2}=0.1198\pm 0.0012\) (see Ref. [72]), where \(h\in[0.2,1]\) is the reduced Hubble constant [75]. Following the analysis of Ref. [92], one finds that the annihilation of the cross-section (\(\sigma/(10^{10}{\rm GeV}^{-1}/M_{\rm Pl})\)) vs. allowed WIMP masses (\(m({\rm GeV})\in[10^{2},5\times 10^{2}]\)) for the cold DM abundance \(\Omega_{\rm CDM}\) gives that \(n\in(1\)-\(10^{-4}\), \(1\)-\(10^{-3})\). In obtaining the latter bound, we have used in (54) the relation (cf., e.g., [75]) Footnote 2: Notice that in [92], it has been used the parametrization \(\langle\sigma v\rangle=\sigma_{0}x^{-l}\), where \(l=0\) corresponds to \(s\)-wave annihilation, \(l=1\) to \(p\)-wave annihilation, and so on. The modification of standard cosmology induces the corrections to the parameter \(l\) via \(\bar{I}=l+(1-n)\). In the case of GR, \(n=1\), and one obtains \(\bar{I}=\bar{l}\). \[\bar{\sigma}\ =\ \frac{3.2g_{*}^{1/2+/2n}}{n}\bigg{(}\frac{m}{M_{P}}\bigg{)}^{ 2-2/n}\bigg{(}\frac{4\pi^{3}g_{*}}{15}\bigg{)}^{-2/n}\sigma\,. \tag{56}\] From Equation (53), we thus see that \(\alpha\) has the range of admissibility \[2.0002\ \lesssim\ \alpha\ \lesssim\ 2.01\ \ \Rightarrow\ 1.493\ \lesssim\ \delta\ \lesssim\ 1.499\ \ \Leftrightarrow\ 0.0002\ \lesssim\ \Delta\ \lesssim\ 0.01\,. \tag{57}\] The key takeaway from this section is that even a minor deviation of Tsallis cosmology from the standard cosmological model can have significant impacts on the cross-section of WIMPs. Remarkably, present values of \(\alpha\), \(\delta\), and \(\Delta\) are also compatible with the values (41), (44), and (47) obtained in the framework of the BBN. ## 5 Discussion and Conclusions This paper delves into the thermodynamic structure implied by Tsallis' \(\delta\)-entropy, emphasizing the crucial role played by an integrating factor of the heat one-form. In particular, our emphasis was on the precise formulation of the first law of thermodynamics. Furthermore, the zeroth law of thermodynamics has also been addressed, and the role of empirical temperature in determining absolute temperature has been elucidated. With the first law of thermodynamics at hand, we have addressed the issue of Tsallis cosmology and its prospective role in the Big Bang nucleosynthesis and the relic abundance of cold DM particles. 
From the perspective of BBN, we have determined the bounds on the Tsallis parameter \(\alpha\) through both a general analysis of BBN using the freeze-out temperature formula and an examination of the current best fits for primordial abundances of the light elements \({}^{4}\!He\), \({}^{2}\!H\), and \({}^{7}\!Li\). The admissible range of \(\alpha\)s stemming from the freeze-out temperature variation is given in Equation (41), while from the primordial abundance of light elements, we deduced the bounds (44), (47), and (49). As noted, there is a pairwise overlap of the ranges of \(\alpha\) originating both from the freeze-out temperature formula and from helium and deuterium abundances with values around \(\alpha\simeq 2.0013\), while for lithium, such an overlap does not exist (the lithium problem). On the other hand, the range of admissible \(\alpha\)s from lithium is tantalizingly close to admissible values from other considered BBN sources. This indicates that within the context of Tsallis cosmology, the lithium problem could potentially be mitigated by considering additional elements of \(\delta\)-entropy statistics. At this point, it should be noted that all that was needed to pass from the first law of thermodynamics to this modified cosmology was the entropy scaling law (9). In particular, the derivation of modified Friedmann equations did not require explicit knowledge of the entropic functional (i.e., how it depends on probability). Thus, by going beyond such a paradigm, one could, for instance, utilize the effects of the \(\delta\)-entropy-based statistics on the computation of the relevant (electroweak) reactions occurring during the BBN. Such a statistical physics consideration would, however, go beyond the scope of the present paper. As a next point, we have investigated the role of Tsallis cosmology in the framework of cold DM theory. We have found that the tiny deviation from the standard cosmological scenario induced by \(\alpha\)-corrections may account for the observed relic dark matter abundance, with a value of the Tsallis parameter \(\alpha\) compatible with the constraints obtained from BBN. In passing, we might note that the obtained values \(\alpha\simeq 2\) are not compatible with the bound \(\alpha<1\), i.e., with the regime that would allow one to explain the accelerated phase of the late-time universe without invoking the concept of dark energy. On the other hand, one might assume, in the spirit of renormalization theory, that the anomalous dimension \(\Delta\) runs during the evolution of the universe from the BBN era to today, so that at low energies \(\Delta<-1\) ("porous" horizon surfaces), or, in other words, \(\alpha_{\text{\tiny BBN}}\simeq 2\to\alpha_{0}<1\), where the index \(0\) refers to the current value of \(\alpha\). A scenario along these lines has been recently considered, e.g., in Ref. [58]. The results discussed in this paper can be further employed in various ways. In particular: (1) they might contribute to the ongoing debate on the most pertinent cosmological scenario among models based on Tsallis' \(\delta\)-entropy (and hence the ensuing entropic origin of gravity) and the dark sector; (2) they might be instrumental in inferring bounds on Tsallis cosmology from primordial gravitational waves. The latter may be successfully addressed provided the tensor perturbations, generated during the inflation era and propagated during the Tsallis cosmological era, are strong enough to be measurable by future gravitational-wave detectors [95].
Work along those lines is presently being actively pursued. Conceptualization, P.J. and G.L.; formal analysis, G.L.; methodology, P.J. and G.L.; software design, data structures, computer calculation, and visualization, G.L.; writing--original draft, P.J.; writing--review and editing, P.J. and G.L. All authors have read and agreed to the published version of the manuscript. P.J. was in part supported by the Ministry of education grant MSMT RVO 14000. Not applicable. Not applicable. Not applicable. Not applicable. The authors declare no conflict of interest. The following abbreviations are used in this manuscript: \begin{tabular}{l l} BH & Bekenstein-Hawking \\ GR & General relativity \\ QFT & Quantum Field Theory \\ BBN & Big Bang nucleosynthesis \\ FRW & Friedmann-Robertson-Walker \\ CMB & Cosmic Microwave Background \\ DM & Dark Matter \\ WIMP & Weakly-interacting massive particle \\ CDM & Cold dark matter \\ \end{tabular} ## Appendix A Zeroth Law of Thermodynamics and \(S_{\delta}\) Entropy In this appendix, we discuss the zeroth law of thermodynamics for systems described by \(S_{\delta}\) entropy, which complements Section 2.1. The zeroth law of thermodynamics codifies the concept of _thermal equilibrium_ and posits that if two thermodynamic systems are in thermal equilibrium with each other, and also separately in thermal equilibrium with a third system, then the three systems are in thermal equilibrium with each other. This transitive property of thermal equilibrium allows to divide the Universe into disjoint classes of systems that are in thermal equilibrium with each other, and to quantify each such class with a unique number known as _empirical temperature_, i.e., physical temperature (such as the Celsius scale) which may not not necessarily coincide with the absolute temperature. Aside from thermal equilibrium systems can also be in another type of equilibria, e.g., in mechanical, chemical or diffusive equilibria. For example, _physical pressure_ quantifies the mechanical equilibrium of homogeneous chemical systems in physical contact. To address the issue of the zeroth law of thermodynamics, we start by considering two systems (\(A\) and \(B\)) in contact (both thermal and mechanical) with each other. Suppose that these have volumes \(V(A)\) and \(V(B)\) and internal energies \(U(A)\) and \(U(B)\), and that the total internal energy and total volume are fixed. In thermodynamic equilibrium, the total entropy \(S_{\delta}(A+B)\) must be maximal. By using (14) we thus have \[0 = dS_{\delta}(A+B)\ =\ d\Big{[}S_{\delta}^{1/\delta}(A)\ +\ S_{\delta}^{1/ \delta}(B)\Big{]}^{\delta}\] (A1) \[= [S_{\delta}(A+B)]^{1-1/\delta}\Bigg{\{}[S_{\delta}(A)]^{1/\delta- 1}\bigg{(}\frac{\partial S_{\delta}(A)}{\partial U(A)}\bigg{)}_{V(A)}\] \[\qquad\qquad\qquad-\ [S_{\delta}(B)]^{1/\delta-1}\bigg{(}\frac{ \partial S_{\delta}(B)}{\partial U(B)}\bigg{)}_{V(B)}\Bigg{\}}dU(A)\] \[\qquad\qquad+\ [S_{\delta}(A+B)]^{1-1/\delta}\Bigg{\{}[S_{\delta}(A)]^{1/ \delta-1}\bigg{(}\frac{\partial S_{\delta}(A)}{\partial V(A)}\bigg{)}_{U(A)}\] \[\qquad\qquad\qquad\qquad\qquad-\ [S_{\delta}(B)]^{1/\delta-1}\bigg{(}\frac{ \partial S_{\delta}(B)}{\partial V(B)}\bigg{)}_{U(B)}\Bigg{\}}dV(A)\,\] where we have employed the fact that the total internal energy and volume are fixed \[U(A+B)\ =\ U(A)\ +\ U(B)\ =\ \text{const.}\,,\] (A2) \[V(A+B)\ =\ V(A)\ +\ V(B)\ =\ \text{const.}\,.\] (A3) We have also assumed that the \(S_{\delta}\) entropy is expressed in terms of its natural state variables, namely \(U\) and \(V\). 
From (A1), we obtain two identities that reflect the simultaneous thermal and mechanical equilibrium of systems \(A\) and \(B\). The first identity can be expressed as follows: \[k_{\delta}\beta(A)\,[S_{\delta}(A)]^{1/\delta-1}\ =\ k_{\delta}\beta(B)\,[S_{\delta}(B)]^{1/\delta-1}\ \equiv\ k_{\delta}\beta^{*}\,,\] (A4) where (by analogy with conventional thermodynamics) we have defined \[k_{\delta}\beta\ =\ \bigg{(}\frac{\partial S_{\delta}}{\partial U}\bigg{)}_{V}\,.\] (A5) It is important to emphasise that the physical temperature is not equal to \((k_{\delta}\beta)^{-1}\), but rather: \[\vartheta\ =\ \frac{1}{k_{\delta}\beta^{*}}\ =\ \frac{[S_{\delta}(B)]^{1-1/\delta}}{k_{\delta}\beta}\,.\] (A6) Equation (A4) encapsulates the zeroth law of thermodynamics, which guarantees that the same empirical temperature \(\vartheta\) can be assigned to all subsystems in thermal equilibrium. The second identity can be cast as \[\Big{(}\partial S_{\delta}(A)/\partial V(A)\Big{)}_{U(A)}[S_{\delta}(A)]^{1/\delta-1}\\ =\ \Big{(}\partial S_{\delta}(B)/\partial V(B)\Big{)}_{U(B)}\,[S_{\delta}(B)]^{1/\delta-1}\ \equiv\ \frac{p_{\text{phys}}}{\vartheta}\,.\] (A7) Equation (A7) reflects that when two systems are in mechanical equilibrium, their pressures are equal. This allows us to identify the _physical pressure_ \(p_{\text{phys}}\) as \[p_{\text{phys}}\ =\ \vartheta[S_{\delta}(B)]^{1/\delta-1}\bigg{(}\frac{\partial S_{\delta}}{\partial V}\bigg{)}_{U}\,.\] (A8) When the microstates scale exponentially with volume (as in conventional thermodynamics) then \(\delta=1\), cf. Equations (1) and (2). In fact, in the limit \(\delta\to 1\) Equation (A8) approaches the conventional result. Note that by writing [as in Equations (A6) and (A8)] \[dS_{\delta} = \left(\frac{\partial S_{\delta}}{\partial U}\right)_{V}dU\;+\; \left(\frac{\partial S_{\delta}}{\partial V}\right)_{U}dV \tag{A9}\] \[= [S_{\delta}]^{1-1/\delta}\,\frac{1}{\vartheta}dU\;+\;[S_{\delta}]^{1-1/\delta}\,\frac{p_{\rm phys}}{\vartheta}\,dV\,,\] we obtain \[\frac{\vartheta}{[S_{\delta}]^{1-1/\delta}}dS_{\delta}\;=\;dU\;+\; p_{\rm phys}dV\,. \tag{A10}\] Since \[dS_{\delta}\;=\;\mu\;d{\cal Q}\;=\;\frac{[S_{\delta}]^{1-1/\delta}\,\kappa}{T}\;d{\cal Q}\quad\Rightarrow\quad\frac{\vartheta}{[S_{\delta}]^{1-1/\delta}}\,dS_{\delta}\;=\;\frac{\vartheta\kappa}{T}\;d{\cal Q}\,, \tag{A11}\] we note that the first law of thermodynamics follows from (A10) if we equate the empirical temperature defined in (A6) with the absolute temperature \(T\) discussed in Section 2.1. For \(\vartheta\) so chosen, we obtain that \(w(\vartheta)\) from (24) must be \(1/\vartheta\). Additionally, \(T\) has the same units as the temperature \(\vartheta\), provided \(\kappa=1\). Consequently, the ensuing first law of thermodynamics reads \[d{\cal Q}\;=\;T\,\frac{[S_{\delta}]^{1/\delta-1}}{\kappa}\,dS_{\delta}\;=\;dU\;+\;p_{\rm phys}dV\,, \tag{A12}\] which precisely coincides with the form (27). When we require the Legendre structure, then we must set \(\delta=3/\alpha\) for the microstate scaling (9).

## Appendix B BBN Physics--A Short Review

In this appendix, we provide an overview of some essentials of Big Bang nucleosynthesis that are needed in Section 3. Let us first recall that BBN occurred in the early Universe when the temperature was \(T\sim{\cal O}(1)\) MeV.
During this epoch, the energy density was dominated by electrons, positrons, neutrinos and photons, which were in thermal equilibrium owing to the weak-interaction neutron-proton conversion processes: \(e^{+}+n\;\leftrightarrow\;p+\bar{\nu}_{e}\), \(\nu_{e}+n\;\leftrightarrow\;p+e^{-}\), and \(n\;\leftrightarrow\;p+e^{-}+\bar{\nu}_{e}\) (the other neutrino flavors do not contribute to these reactions). The conversion rate of neutrons into protons \[\Gamma_{np}(T)\;=\;\Gamma_{n+\nu_{e}\to p+e^{-}}\;+\;\Gamma_{n+e^{+}\to p+\bar{\nu}_{e}}\;+\;\Gamma_{n\to p+e^{-}+\bar{\nu}_{e}}\,, \tag{A13}\] and its inverse \(\Gamma_{pn}(T)=e^{-{\cal Q}/T}\Gamma_{np}(T)\) (here \({\cal Q}=m_{n}-m_{p}=1.293\) MeV is the mass difference between neutron and proton), allow one to compute the neutron abundance (see Equation (A14) and details in [75; 76]). For instance, the interaction rate for the process \(n+\nu_{e}\to p+e^{-}\) is [75; 76] \[\Gamma_{n+\nu_{e}\to p+e^{-}} = \int\frac{d^{3}p_{e}}{(2\pi)^{3}2E_{e}}\frac{d^{3}p_{\nu_{e}}}{(2\pi)^{3}2E_{\nu_{e}}}\frac{d^{3}p_{p}}{(2\pi)^{3}2E_{p}}\;|{\cal M}|^{2}(2\pi)^{4} \tag{A14}\] \[\times\;\delta^{(4)}(p_{n}+p_{\nu_{e}}-p_{p}-p_{e})f(E_{\nu_{e}})[1-f(E_{e})]\,.\] Here \(f(E)\) is the Fermi-Dirac distribution and \({\cal M}=\big{(}\frac{g_{\pi}}{8M_{\rm W}}\big{)}^{2}[\bar{u}_{p}\gamma^{\mu}(c_{V}-c_{A}\gamma^{5})u_{n}][\bar{u}_{e}\gamma_{\mu}(1-\gamma^{5})v_{\nu_{e}}]\) is the corresponding scattering amplitude (\(c_{V}\) and \(c_{A}\) stand for coupling constants in front of the vector and axial vector interactions, respectively). Similar relationships can be written for the remaining two interaction rates. The (total) weak interaction rate reads, see, e.g., Refs. [75; 76] \[\Lambda(T)\ =\ \Gamma_{np}(T)\ +\ \Gamma_{pn}(T)\ \simeq\ \mathcal{A}_{0}T^{5}\ +\ \mathcal{O}\bigg{(}\frac{\mathcal{Q}}{T}\bigg{)}\,, \tag{A15}\] with \(\mathcal{A}_{0}=9.6\times 10^{-10}\,\mathrm{GeV}^{-4}\). One can estimate the primordial \({}^{4}He\) abundance \(Y_{p}\) (also referred to as the mass fraction, that is, the quantity of proton number/mass as a fraction of the total proton and neutron numbers/masses) by using the formula [75] \[Y_{p}\ \equiv\ \lambda\,\frac{2y(t_{f})}{1\ +\ y(t_{f})}\,. \tag{A16}\] Here \(\lambda\) represents the fraction of neutrons that survive decay into protons in the interval between the freeze-out time of the weak interactions \(t_{f}\) and the freeze-out time of the nucleosynthesis \(t_{n}\). Explicitly, \(\lambda\) is given by the relation \(\lambda=e^{-(t_{n}-t_{f})/\tau}\), where \(\tau=880.3\pm 1.1\) seconds is the neutron mean lifetime [96]. The function \(y(t_{f})\) denotes the neutron-to-proton equilibrium ratio at \(t_{f}\), namely \[y(t_{f})\ =\ \bigg{(}\frac{n}{p}\bigg{)}_{\text{freeze-out}}\ =\ e^{-\mathcal{Q}/T_{f}}\,. \tag{A17}\] The variation of the freeze-out temperature \(T_{f}\) in (A16) implies a formula for the deviations from the \({}^{4}He\) mass fraction \[\delta Y_{p}\ =\ Y_{p}\bigg{[}\bigg{(}1-\frac{Y_{p}}{2\lambda}\bigg{)}\log\bigg{(}\frac{2\lambda}{Y_{p}}-1\bigg{)}\ -\ \frac{2t_{f}}{\tau}\bigg{]}\frac{\delta T_{f}}{T_{f}}\,. \tag{A18}\] In (A18), we have set \(\delta T(t_{n})=0\) as \(T_{n}\) is fixed by the deuterium binding energy [77; 91]. Recent observational data for the primordial \({}^{4}He\) mass fraction \(Y_{p}\), see e.g., [79; 80], have yielded the numerical value given in (39). By using this together with the fact that \(|\delta Y_{p}|<10^{-3}\), we can compute an upper bound on \(|\delta T_{f}/T_{f}|\) from (A18).
The estimates for both \(\delta\) and \(\Delta\) can then be obtained by employing (39), as outlined in Section 3. There we also needed the freeze-out temperature (37), which is computed through the defining relation \(\Lambda(T_{f})=H(T_{f})\) [their expressions are given in (A15) and (32)].
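A rough numerical illustration of the helium estimate above--Equations (A16) and (A17)--is sketched below. The interval \(t_{n}-t_{f}\) is an illustrative assumption (a couple of hundred seconds) rather than a value quoted in the text, and the function name is ours.

```python
import numpy as np

Q_NP  = 1.293e-3    # neutron-proton mass difference [GeV]
TAU_N = 880.3       # neutron mean lifetime [s]

# Sketch: helium mass fraction from Eqs. (A16)-(A17).  The time interval
# t_n - t_f is an assumed illustrative value, not one quoted in the text.
def helium_mass_fraction(T_f, dt=200.0):
    y = np.exp(-Q_NP / T_f)        # neutron-to-proton ratio at freeze-out
    lam = np.exp(-dt / TAU_N)      # fraction of neutrons surviving beta decay
    return lam * 2 * y / (1 + y)

print(f"Y_p(T_f = 0.76 MeV) ~ {helium_mass_fraction(0.76e-3):.3f}")   # close to 0.2449
```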
2307.02914
Revealing the structure of the lensed quasar Q 0957+561 III. Constraints on the size of the broad-line region
Our aim is to examine the size, kinematics, and geometry of the broad-line region (BLR) in the double-lensed quasar Q 0957+561 by analyzing the impact of microlensing on various rest-frame ultraviolet broad-emission lines (BELs). We explore the influence of intrinsic variability and microlensing on the C IV, C III], and Mg II emission lines through multiple spectroscopic observations taken between April 1999 and January 2017. By utilizing the line cores as a reference for no microlensing and correcting for the long time delay between the images, we estimate the sizes of the regions emitting the broad-line wings using a Bayesian approach. Our study of the microlensing amplitudes between the lensed images of the quasar Q 0957+561 reveals differing sizes of the regions emitting the three prominent BELs C IV, C III], and Mg II. The strength of the differential microlensing indicates that the high-ionization line C IV arises from a compact inner region of the BLR with a half-light radius of $R_{1/2} \gtrsim 16.0$ lt-days, which represents a lower limit on the overall size of the BLR and is comparable to the size of the region emitting the r-band continuum in this system. A somewhat larger size of $R_{1/2}\gtrsim 44$ lt-days is obtained for the semi-forbidden line C III]. Microlensing has a weak impact on the lower-ionization line Mg II, which is emitted from a region with a half-light radius of $R_{1/2} \gtrsim 50$ lt-days. These findings suggest that the BEL regions may have distinct geometries and kinematics, with the more extended ones being spherically symmetric, and the most compact ones being nonspherical, with motions likely confined to a plane.
C. Fian, J. A. Muñoz, E. Mediavilla, J. Jiménez-Vicente, V. Motta, D. Chelouche, A. Wurzer, A. Hanslmeier, K. Rojas
2023-07-06T11:05:06Z
http://arxiv.org/abs/2307.02914v1
# Revealing the structure of the lensed quasar Q 0957+561 ###### Abstract Context: Aims:Our aim is to examine the size, kinematics, and geometry of the broad-line region (BLR) in the double-lensed quasar Q 0957+561 by analyzing the impact of microlensing on various rest-frame ultraviolet broad-emission lines (BELs). Methods:We explore the influence of intrinsic variability and microlensing on the C IV, C III], and Mg II emission lines through multiple spectroscopic observations taken between April 1999 and January 2017. By utilizing the line cores as a reference for no microlensing and correcting for the long time delay between the images, we estimate the sizes of the regions emitting the broad-line wings using a Bayesian approach. Results:Our study of the microlensing amplitudes between the lensed images of the quasar Q 0957+561 reveals differing sizes of the regions emitting the three prominent BELs C IV, C III], and Mg II. The strength of the differential microlensing indicates that the high-ionization line C IV arises from a compact inner region of the BLR with a half-light radius of \(R_{1/2}\gtrsim 16.0\) l-days, which represents a lower limit on the overall size of the BLR and is comparable to the size of the region emitting the r-band continuum in this system. A somewhat larger size of \(R_{1/2}\gtrsim 44\) lt-days is obtained for the semi-forbidden line C III]. Microlensing has a weak impact on the lower-ionization line Mg II, which is emitted from a region with a half-light radius of \(R_{1/2}\gtrsim 50\) lt-days. These findings suggest that the BEL regions may have distinct geometries and kinematics, with the more extended ones being spherically symmetric, and the most compact ones being nonspherical, with motions likely confined to a plane. Conclusions: ## 1 Introduction The twin quasar Q 0957+561 was discovered in 1979 by Dennis Walsh (Walsh et al., 1979) and was the first gravitational lens system to be identified. The quasar is lensed into two bright images (with a separation of \(\sim\)6\({}^{\prime\prime}\)) by a giant elliptical lens galaxy at a redshift of \(z_{l}=0.36\), which is part of a galaxy cluster that also contributes to the lensing (Stockton, 1980; Garrett et al., 1992). The redshift of Q 0957+561 is \(z_{s}=1.41\), causing a significant portion of its ultraviolet (UV) emission to be observed at optical wavelengths. Multi-wavelength observations of the temporal evolution of magnification ratios in Q 0957+561 have generated a wealth of monitoring data, making it an attractive target for studying physical phenomena taking place in the lens galaxy, such as gravitational microlensing by stars and extinction by gas and dust clouds (see, e.g., Zuo et al., 1997; Goicoechea et al., 2005; Motta et al., 2012), as well as physical processes taking place in the background source itself, such as intrinsic variability of the quasar (see, e.g., Lloyd, 1981; Miller et al., 1981; Gondhalekar and Wilson, 1982; Planesas et al., 1999; Hutchings, 2003). Optical monitoring of lensed quasars has revealed a diverse range of intrinsic flux variations, which can be used to determine accurate time delays between quasar images. In the case of Q 0957+561, image A leads image B by 417 days (Shalyapin et al., 2008). 
These time delays can be used to constrain the Hubble constant (Wong et al., 2020; Millon et al., 2020; Napier et al., 2023) and lensing mass distributions (Acebron et al., 2022; Fores-Toribio et al., 2022), and serve as a powerful probe of dark energy (Wang et al., 2022; Liu et al., 2022). Compact objects (i.e., stars) in galaxies can cause extrinsic variations in the photometric and spectroscopic observations of lensed quasars through a phenomenon known as microlensing. This effect can be used to constrain the sizes of the continuum-emitting sources surrounding the central supermassive black holes (Fian et al. 2016, 2018b, 2021a; Cornachione et al. 2020a,b) in order to reveal the structures of the broad-line regions (BLRs; Rojas et al. 2020; Hutsemekers & Sluse 2021; Fian et al. 2018a, 2021b), and to estimate the masses of the black holes (Mediavilla et al. 2018, 2019; Fian et al. 2022). Our understanding of the geometry and kinematics of BLRs in quasars remains limited. To investigate the impact of microlensing on broad emission lines (BELs), it is necessary to compare spectroscopy obtained from multiple observations. Two decades of observations of Q 0957+561 have facilitated a comprehensive examination of its temporal evolution. Despite extensive photometric and spectroscopic monitoring of the quasar components A and B, the history of their magnification ratios remains a mystery. Goicoechea et al. (2005) attempted to explain the observed magnification ratios through either a dust system located between the quasar and the observer (differential extinction), or a population of microlenses in the deflector. They found that the flux ratios are consistent with both alternatives, and even a mixed scenario (extinction + microlensing) is possible. Motta et al. (2012) calculated the dust-extinction curve using BEL ratios and were able to differentiate it from microlensing. Thus, microlensing has become the favored explanation for the anomalous optical continuum ratios, supported by clear evidence of microlensing in the r-band light curves of the system (see, e.g., Fian et al. 2021a; Cornachione et al. 2020a). This concept was initially proposed by Chang & Refsdal (1979) soon after the discovery of the quasar. In this study, we undertake a thorough analysis of the time-variable magnification ratios in the wings of the BELs of C IV, C III, and Mg II. Our examination of gravitational microlensing and intrinsic variability involves estimating the magnitude differences between images (after correcting for the time delay), and the amplitude of variation in a single image over multiple epochs of observation. The paper is structured as follows. In Section 2, we present the spectra obtained from the literature. Section 3 outlines the analysis of extrinsic and intrinsic variability in the BELs. Our method for constraining the size of the broad-line emitting regions is presented in Section 4. Finally, in Section 5, we provide conclusions based on our findings. ## 2 Data and observations In this study, we analyzed rest-frame UV spectra of images A and B in the lensed quasar Q 0957+561. Our data set consists of 15 epochs of observation spanning a period of 18 years (1999-2017). The initial spectra of the lens system were acquired by Goicoechea et al. (2005) using the 2.4m Hubble Space Telescope (HST). Motta et al. (2012) later conducted observations with the 6.5m Multiple Mirror Telescope (MMT) in 2008. 
Most of the spectra were provided by the Gravitational LENses and DArk MAtter (GLENDAMA1) project of the University of Cantabria (see Gil-Merino et al. 2018). The GLENDAMA observations were carried out with the 2.0m Liverpool Telescope (LT) and the 2.6m Nordic Optical Telescope (NOT). Additionally, we obtained spectra in March 2016 using the 4.2m William Herschel Telescope (WHT) located at the Roque de los Muchachos in La Palma, Canary Islands. The data from the literature were already fully reduced. The emission lines for the quasar components A and B are displayed in Figure 1, with observing information and references listed in Table 1. Footnote 1: [https://grupos.unican.es/glendama/](https://grupos.unican.es/glendama/)

\begin{table} \begin{tabular}{c c c c c} \hline \hline Epoch & Date & Observed BEL & Facility & Reference \\ \hline 1a & 15-04-1999 & C IV, C III], Mg II & HST & Goicoechea et al. 2005 \\ 1b & 02-06-2000 & C IV, C III], Mg II & HST & Goicoechea et al. 2005 \\ 2 & 12-01-2008 & C IV, C III], Mg II & MMT & Motta et al. 2012 \\ 3 & 29-01-2009\({}^{(*)}\) & C IV, C III] & NOT & GLENDAMA \\ 4 & 03/2010\({}^{(*)}\) & C IV, C III] & NOT & GLENDAMA \\ 5 & 10/2010\({}^{(*)}\) & Mg II & LT & GLENDAMA \\ 6 & 03/2011\({}^{(*)}\) & Mg II & LT & GLENDAMA \\ 7 & 04/2011\({}^{(*)}\) & Mg II & LT & GLENDAMA \\ 8 & 12/2011\({}^{(*)}\) & Mg II & LT & GLENDAMA \\ 9 & 18-12-2011\({}^{(*)}\) & C IV, C III] & NOT & GLENDAMA \\ 10 & 14-03-2013\({}^{(*)}\) & C IV, C III] & NOT & GLENDAMA \\ 11 & 05-03-2015 & C III], Mg II & LT & GLENDAMA \\ 12 & 19-11-2015\({}^{(*)}\) & C III], Mg II & LT & Gil-Merino et al. 2018 \\ 13 & 12-03-2016\({}^{(*)}\) & C IV, C III], Mg II & WHT & Fian et al. 2021b \\ 14 & 17-01-2017 & C III], Mg II & LT & Gil-Merino et al. 2018 \\ \hline \end{tabular} \end{table} Table 1: Spectroscopic data.

## 3 Methods

The estimation of microlensing signals in lensed quasars through the analysis of BELs can be complicated by the presence of intrinsic variability that is time-delayed between different images. Intrinsic variability can alter the shape of the BELs and mimic microlensing, leading to erroneous source-size estimates. To accurately measure magnitude differences between images, we select spectra that are separated in time approximately by the time delay between the images. Time delays in gravitationally lensed quasars are believed to be unique numbers that can be measured with high precision given good-quality light curves and models for the contaminating effects of gravitational microlensing. In the absence of microlensing, the time delay for both the continuum and broad-line emitting regions should be identical. However, as demonstrated by Liao (2020), gravitational microlensing can lead to variations in the time delays measured at different wavelengths. Recent work by Tie & Kochanek (2018) shows that gravitational microlensing produces changes in the actual time delays on the order of days. In this study, we account for this effect by incorporating an additional lag of \(\pm\) 20 days, which corresponds to the scale of the light-crossing time of the accretion disk in this system, as estimated in Fian et al. (2021a) and Cornachione et al. (2020a). Of the available spectra, five pairs fulfill this criterion, although not all emission lines have been observed in all the selected epochs.
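The pairing criterion just described can be sketched in a few lines of code; the dates below are a subset of Table 1 and the \(\pm\)20-day tolerance is the lag adopted above, so the snippet merely illustrates the selection logic (the full set of pairs is given next).

```python
from datetime import date
from itertools import product

TIME_DELAY = 417   # days, image A leading image B
TOLERANCE  = 20    # days, allowance for microlensing-induced delay changes

# Sketch: select epoch pairs whose separation matches the A-B time delay
# within the adopted tolerance.  Dates are a subset of Table 1.
epochs = {
    "1a": date(1999, 4, 15),
    "1b": date(2000, 6, 2),
    "2":  date(2008, 1, 12),
    "3":  date(2009, 1, 29),
}

pairs = [(a, b) for (a, da), (b, db) in product(epochs.items(), repeat=2)
         if a != b and abs((db - da).days - TIME_DELAY) <= TOLERANCE]
print(pairs)   # e.g. ('1a', '1b'): 414 days, within 417 +/- 20
```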
This results in two epoch pairs (1a-1b, 3-4) covering the C IV line, three epoch pairs (1a-1b, 3-4, 12-14) including the C III] line, and three epoch pairs (1a-1b, 5-8, 12-14) containing the Mg II line. The scale of the microlensing pattern caused by stars and compact objects in a lens galaxy (or cluster) is set by the Einstein radius, \(r_{E}\). Significant microlensing fluctuations occur when the source size, \(r_{s}\), is comparable to or smaller than \(r_{E}\). The amplitude of these fluctuations will then be controlled by \(r_{s}/r_{E}\), with smaller ratios leading to larger amplitudes. If the observational epochs are separated in time by more than the microlensing event timescale, also known as the Einstein radius crossing time, \(t_{E}\), microlensing measurements are likely to be independent. This timescale is determined by the effective transverse velocity of the source, \(v\), and \(r_{E}\), with \(t_{E}=r_{E}/v\) (Paczynski, 1986). Mosquera et al. (2011) report values of \(r_{E}=3.25\times 10^{16}\sqrt{M/0.3M_{\odot}}\) cm and \(t_{E}=12.39\) years for the lens system Q 0957+561. The adoption of transverse peculiar velocity estimates (\(\sigma_{pec}\sim 640\) km s\({}^{-1}\) for \(z_{l}\sim 0.5\)) from Mediavilla et al. (2016) results in an approximately 50% reduction in \(t_{E}\). Therefore, the criterion of independence is presumed to be met for microlensing measurements obtained from the selected pairs of observations. As an initial step, the continuum for each image and each emission line is removed by fitting a straight line to the continuum regions adjacent to the emission line. To account for varying line widths, we use windows of varying widths to estimate the continuum, avoiding regions of known emission features. The line cores are believed to be produced by material spread over a large region (narrow-line region and outer regions of the BLR) and are therefore thought to be less susceptible to microlensing. Adopting the conclusion from Fian et al. (2018) that emission line cores are relatively insensitive to microlensing by solar mass objects, we establish a baseline of no microlensing by using the cores as a reference. To investigate whether the same holds true for the core of the high-ionization line C IV in Q 0957+561, we employed a method that involves fitting multiple straight lines to the continuum in the wavelength range between C IV and C III], subtracting them, and subsequently normalizing the cores of the lower-ionization line C III].
We opted for this approach since C III] is expected to be less prone to microlensing and instrumental/calibration issues than C IV, and also to mitigate the larger extinction effects that may arise from a direct comparison with Mg II. Our analysis revealed that, on average, the C IV core exhibits minor variations of \((B/A)_{core}=-0.02\pm 0.08\) mag (68% confidence interval), indicating that there appears to be little core variability in this system. Therefore, it is reasonable to conclude that microlensing effects are unlikely to have a significant impact on the BEL cores, which lends support to the reliability of the microlensing results presented in this work. To utilize the line cores as a reference for no microlensing, we defined the flux within a narrow interval (\(\pm\) 6A) centered on the peak of the line and normalized the emission line cores of images A and B accordingly. Normalizing the line cores by multiplying the spectrum of image B to match the core flux of image A also effectively removes the effects of macro magnification and differential extinction (see, e.g., Guerras et al., 2013). After subtracting the continuum and matching the line cores, we isolate the line cores from the wings by a buffer of a few Angstroms to accurately assess variability in the wings. We then estimate the differential microlensing in the BELs by determining the average wing emission in velocity intervals of \(\sim\)5000 km s\({}^{-1}\) on either side of the line. In those cases in which absorption lines affect the emission line wing, a narrower integration window was chosen (blue wing of C IV) or the estimation was omitted (blue wing of Mg II) to mitigate their impact. The magnitude differences (caused by intrinsic variability and/or microlensing variability) between different observational epochs for a given image can be estimated in a similar way. The results are listed in Tables 2 and 3. The analysis of Figure 1 and Tables 2-3 (with special attention being paid to the scatter) reveals that the red wing of the C IV emission line is affected by strong intrinsic variability and microlensing (variability), as evidenced by the temporal changes in the wing. The red wing of C III] is also subject to substantial intrinsic variability, while both the blue wings of C IV and C III] display moderate variations. In the blue wings of C IV and C III], intrinsic variability affects both images to the same extent, while in the red wings, variability is slightly more pronounced in image B compared to image A, possibly due to microlensing variability. Mg II exhibits only limited signs of intrinsic variability and the effects of microlensing on this line are weak. \begin{table} \begin{tabular}{c c c c} \hline \hline Emission Line & Wing & \(\Delta\)m (mag) & Scatter (mag) \\ (1) & (2) & (3) & (4) \\ \hline C IV & blue & \(-0.07\) & \(0.00\) \\ & red & \(-0.33\) & \(0.32\) \\ \hline C III] & blue & \(0.00\) & \(0.14\) \\ & red & \(+0.09\) & \(0.33\) \\ \hline Mg II & blue & – & – \\ & red & \(-0.09\) & \(0.07\) \\ \hline \end{tabular} 1 \end{table} Table 2: Differential microlensing (B–A) in the BEL wings. 
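A minimal sketch of the wing-magnitude estimate described above is given below: the line core (\(\pm\)6 Å) of image B is scaled to match image A, and the mean wing flux ratio is converted into a magnitude difference. The array names, the inner velocity cut used to exclude the core buffer, and the exact window edges are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

C_KMS = 2.99792458e5   # speed of light [km/s]

def wing_microlensing(wave, flux_a, flux_b, wave0, core_width=6.0,
                      v_min=1500.0, v_max=5000.0, side="red"):
    """Magnitude difference m_B - m_A in one emission-line wing.

    `wave` [Angstrom], `flux_a`, `flux_b` are continuum-subtracted spectra of
    images A and B on a common wavelength grid; `wave0` is the line centre.
    """
    core = np.abs(wave - wave0) <= core_width                   # +/- 6 A core window
    flux_b = flux_b * flux_a[core].sum() / flux_b[core].sum()   # normalize B to the core of A
    v = C_KMS * (wave - wave0) / wave0                          # velocity offset [km/s]
    if side == "red":
        sel = (v >= v_min) & (v <= v_max)
    else:
        sel = (v <= -v_min) & (v >= -v_max)
    # mean wing emission ratio converted to a magnitude difference (B - A)
    return -2.5 * np.log10(flux_b[sel].mean() / flux_a[sel].mean())
```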
\begin{table} \begin{tabular}{c c c c c} \hline \hline Emission Line & Wing & Image & \(\Delta\)m (mag) & Scatter (mag) \\ (1) & (2) & (3) & (4) & (5) \\ \hline \multirow{4}{*}{C IV} & blue & A & \(+0.06\) & \(0.23\) \\ & B & \(+0.05\) & \(0.30\) \\ \cline{2-5} & red & A & \(-0.05\) & \(0.72\) \\ & B & \(-0.13\) & \(0.66\) \\ \hline \multirow{4}{*}{C III]} & blue & A & \(+0.02\) & \(0.16\) \\ & B & \(+0.02\) & \(0.18\) \\ \cline{2-5} & red & A & \(+0.02\) & \(0.36\) \\ & B & \(+0.04\) & \(0.47\) \\ \hline \multirow{4}{*}{Mg II} & blue & A & – & – \\ & B & – & – \\ \cline{2-5} & red & A & \(0.00\) & \(0.09\) \\ \cline{1-1} & red & B & \(-0.01\) & \(0.11\) \\ \hline \end{tabular} 1 \end{table} Table 3: Variability in the BEL wings across different observational epochs. ## 4 Bayesian size estimates Based on the differential microlensing estimates between images A and B in the BEL wings, we can constrain the size of their emission region and provide insights into the BLR structure of the lensed quasar Q 0957+561. To obtain an average estimate of the size, we treat each microlensing measurement as a single-epoch event and compute the joint microlensing probability, \(P(r_{s})\), from all (time-delay-corrected and presumably independent) epochs of observation. Our procedure closely follows the methodology described in Guerras et al. (2013). Our simulations are based on 2000 \(\times\) 2000 pixel2 magnification maps, with a pixel size of approximately 0.3 light-days, covering a total area of 627\(\times\)627 light-days2 on the source plane. The maps were generated using the Fast Multipole Method - Inverse Polygon Mapping algorithm2 (FMM-IPM) described by Jimenez-Vicente & Mediavilla (2022). The local convergence and shear values, \(\kappa\) and \(\gamma\), are used to determine the magnification at each image position. These dimensionless values were obtained by fitting a singular isothermal sphere with external shear (SIS+\(\gamma_{e}\)) to the image coordinates (Mediavilla et al. 2009). Although there are more sophisticated models available (see Fadely et al. 2010), this simple model was chosen to maintain consistency with prior publications on this source, including Paper I and II by Fian et al. (2021, 2022). A mean stellar mass of \(M=0.3M_{\odot}\) was assumed, and the size scale is proportional to the square root of the microlens mass (\(r_{s}\propto\sqrt{M/M_{\odot}}\)). The microlensing amplitude also depends on the local surface mass density fraction in compact objects (such as stars) compared to that of dark matter at the lensed image location (see, e.g., Schechter & Wambsganss 2002). In the case of Q 0957+561, the two lensed images are located at very different radii from the center of the lens galaxy, with image B appearing close to the center (\(\sim\)1''), and the light of component A passing far away (\(\sim\)5''). As a result, the surface mass density of the lens galaxy is much lower at the position of component A. In this study, we adopt the \(\alpha\) values reported by Jimenez-Vicente et al. (2015), with \(\alpha\sim 0.3\) at the position of image B and \(\alpha\sim 0\) for image A, which is equivalent to not using any magnification map for this image. To investigate how variations in \(\kappa\) and \(\gamma\) affect the simulated magnification distribution of image B, we altered these parameters by \(\pm 0.1\). Our analysis reveals that varying \(\kappa\) within this range does not result in significant changes to the microlensing distribution. 
When varying \(\gamma\), we observe modest displacements of approximately 0.1 mag. Footnote 2: https://gloton.ugr.es/microlensing/ To model the impact of extended sources on our simulations, we employ a Gaussian luminosity profile, \(I(r_{s})\propto\exp(-R^{2}/2r_{s}^{2})\), to represent the emission region of the BEL wings. We note that Mortonson et al. (2005) showed that the microlensing magnification statistics are largely insensitive to the radial profile of the source (see also Munoz et al. 2016). The magnifications experienced by a source of size \(r_{s}\) are then obtained by convolving the magnification maps with 2D Gaussian profiles of sigma \(r_{s}\). The probability of reproducing the observed microlensing is estimated by randomly placing the Gaussian source on our microlensing magnification maps. To cover a wide range of source sizes, we employ a linear grid ranging from \(\sim\)1 to 150 light-days. These sizes can be converted to half-light radii with the relation \(R_{1/2}=1.18\,r_{s}\). Using our sample of spectroscopic measurements, we roughly determine the structure of the BLR in Q 0957+561. Adopting a Bayesian approach, we estimate the probability of \(r_{s}\) given the observed microlensing magnifications. The resulting joint likelihood functions for the C IV, C III], and Mg II emission lines are displayed in Figure 2, and allow us to calculate 68% confidence size estimates. We infer sizes, expressed in \(\sqrt{M/0.3M_{\odot}}\), of \(R_{1/2}=16.0^{+5.3}_{-6.8}\), \(R_{1/2}=44.3^{+15.3}_{-16.0}\), and \(R_{1/2}=49.9^{+15.0}_{-16.4}\) lt-days for the regions emitting the C IV, C III], and Mg II lines, respectively. To roughly examine the implications of covariance on our size estimates, we conducted an additional study for Mg II, excluding the epoch pair 5-8, which had less temporal separation (\(\sim 5\) years) than the other image pairs considered (\(\sim\)7 to \(\sim\)11 years). As a result, the uncertainties in the size estimate increase to \(\pm 18.3\) lt-days, reflecting a 17% increase in the error compared to the initial measurement. As mentioned above, the amplitude of microlensing is affected by the relative density of the local stellar surface mass compared to the dark matter at the image location, as reported by Schechter & Wambsganss (2002). As a consequence, the emission region sizes are also sensitive to the stellar mass fraction present, but the precise value of the latter (\(\alpha\sim 0.3\)) remains uncertain. To address this uncertainty, we studied the impact of \(\alpha\) by varying it between 0.2 and 0.4. We recomputed the magnification maps and repeated the calculations, finding that the estimated sizes of the BEL emitting regions undergo a change of approximately 22% on average. Specifically, we observe that larger values of \(\alpha\) are associated with larger source sizes, whereas lower values of \(\alpha\) lead to smaller size estimates. Figure 2: Probability distributions of the half-light radii \(R_{1/2}\) for the regions emitting the C IV (left), C III] (middle), and Mg II line (right). The vertical dashed lines indicate the expected size of the emission region, while the gray-shaded regions represent the one-sigma intervals. To explore the sensitivity of our size estimates to the width of the core region, we tested different intervals (1000, 2000, and 3000 km s\({}^{-1}\)) to normalize the emission line cores. 
We then re-calculated the sizes and evaluated the uncertainties introduced by this procedure. Our analysis shows that the size estimates vary by less than 10%. We note that the separation of the line emission into two parts can be supported by the idea that the BLR is formed by a flat inner region (likely related to the accretion disk giving rise to the line wings), surrounded by a larger three-dimensional structure that produces the line core (see e.g., Popovic et al., 2004). Therefore, the microlensing-based size estimates for the regions emitting the line wings should be regarded as approximate lower bounds rather than a precise estimate of the size of the BEL region. It is important to mention that the blue wing of the semi-forbidden line C III] is contaminated by several underlying emission lines (Al III, Si III], and Fe III]), which might bias the size estimates toward smaller sizes. Additionally, the high-velocity ends of the Mg II line wings may be affected by the pseudo-continuum formed by thousands of UV Fe II blends, and so caution should be taken when interpreting the results. ## 5 Conclusions Through optical spectroscopy, we conducted a search for microlensing and intrinsic variability in 15 epochs of observation of the gravitationally lensed quasar Q 0957+561. Our analysis of the flux ratios between images A and B in the three most frequently observed BELs provides insight into the inner structure of the quasar. Our main findings are as follows: 1. _Extrinsic and intrinsic variability._ The techniques applied in this study enabled us to separately examine the effects of microlensing between the lensed quasar images A and B, and intrinsic variability within a single image across different observational epochs. Our findings indicate that Q 0957+561 is a hybrid case with both moderate microlensing and strong intrinsic variability present. Consistent with prior studies (Guerras et al., 2013; Fian et al., 2018, 2021b), our results show that high- and low-ionization lines are differently microlensed, with higher magnification seen in the high-ionization lines, indicating a more compact emission region. The measured microlensing in the red wing of C IV, averaging around 0.3 mag, is comparable to the microlensing (approximately 0.35 mag) determined from analyzing the 21-year r-band light curves in this system (as per Fian et al., 2021a). In general, intrinsic variability affects the same spectral features as microlensing (such as the red wing of C IV and to some extent the line wings of C III]), with varying levels of intensity depending on ionization degree. We find the impact of intrinsic variability or microlensing on the Mg II wings to be weak. 2. _BLR size._ Our analysis of the magnitude differences between the lensed images A and B suggests emission region sizes of \(\gtrsim 44\) and \(\gtrsim 50\) lt-days for C III] and Mg II, respectively, in line with previous studies (Guerras et al., 2013; Fian et al., 2018a), as well as with findings from reverberation mapping campaigns (Homayouni et al., 2020). Our inferred size for the region emitting the C IV line wings (\(\gtrsim 16.0\) lt-days) is comparable, within the errors, to that of the r-band continuum in this system (\(\sim 17.6\) lt-days; see Cornachione et al., 2020a; Fian et al., 2021a), and to that of the UV Fe III blend (\(\sim 15\) lt-days; see Fian et al., 2022), supporting the notion that the line partly originates from the accretion disk. 
By varying the stellar mass fraction, the lower limit of the C IV emission region changes to 14.0 lt-days (\(\alpha=0.2\)) and 17.9 lt-days (\(\alpha=0.4\)), respectively. 3. _BLR geometry._ The impact of microlensing on the BLR of lensed quasars depends on the geometry and kinematics of the BLR. Our spectroscopic analysis of the lensed quasar Q 0957+561 reveals microlensing effects on the broad-line components in the system. The extent of microlensing varies depending on the line, appearing in either the blue or red wing, or in both wings with equal intensity. This observation suggests that the BLR is not generally spherical in shape, in agreement with previous studies such as Sluse et al. (2012), and recent findings by Fian et al. (2018, 2021b) and Hutsemekers & Sluse (2021). An anisotropic geometry or velocity field is required to produce asymmetrical deformations in an emission line, as demonstrated by Schneider & Wambsganss (1990), Abajas et al. (2002), and Lewis & Ibata (2004). These latter authors showed that microlensing of a spherical BLR (in both geometry and velocity field) leads to symmetrical variations in the emission lines, while spatially separated approaching and receding parts of the velocity field in Keplerian disks can cause asymmetrical microlensing and possible shifts in the line centroid (as seen in Braibant et al., 2016). The asymmetry observed in the red wing of C IV might support the idea that this component originates from a compact region (a few light-days in size) with a nonspherical geometry, most likely following the motion of the accretion disk. ###### Acknowledgements. We thank the anonymous referee for the valuable comments and suggestions. We gratefully thank Luis J. Goicoechea and Vyacheslav N. Shalyapin for kindly providing most of the spectroscopic data listed in Table 1. This research was supported by the grants PID2020-118687GB-C31, PID2020-118687GB-C32, and PID2020-118687GB-C33, financed by the Spanish Ministerio de Ciencia e Innovacion through MCIN/AEI/10.13039/501100011033. J.A.M. is also supported by the Generalitat Valenciana with the project of excellence Prometeo/2020/002808. J.V. is also supported by projects FQM-108, P20_00334, and A-FQM-510-UGR20/FEDER, financed by Junta de Andalucia. V.M. acknowledges support from the project ANID FONDECYT Regular grant number 1231418, partial support from the Centro de Astrofisica de Valparaiso, and from Fortalecimiento del Sistema de Investigacion e Innovacion de la Universidad de Valparaiso (UVA2099). D.C. is financially supported by the DFG grant HA3555-14/1 to Tel Aviv University and University of Haifa, and by the Israeli Science Foundation grant no. 2398/19.
2310.06165
CAW-coref: Conjunction-Aware Word-level Coreference Resolution
State-of-the-art coreference resolution systems depend on multiple LLM calls per document and are thus prohibitively expensive for many use cases (e.g., information extraction with large corpora). The leading word-level coreference system (WL-coref) attains 96.6% of these SOTA systems' performance while being much more efficient. In this work, we identify a routine yet important failure case of WL-coref: dealing with conjoined mentions such as 'Tom and Mary'. We offer a simple yet effective solution that improves the performance on the OntoNotes test set by 0.9% F1, shrinking the gap between efficient word-level coreference resolution and expensive SOTA approaches by 34.6%. Our Conjunction-Aware Word-level coreference model (CAW-coref) and code are available at https://github.com/KarelDO/wl-coref.
Karel D'Oosterlinck, Semere Kiros Bitew, Brandon Papineau, Christopher Potts, Thomas Demeester, Chris Develder
2023-10-09T21:32:49Z
http://arxiv.org/abs/2310.06165v2
# CAW-coref: Conjunction-Aware Word-level Coreference Resolution ###### Abstract State-of-the-art coreference resolution systems depend on multiple LLM calls per document and are thus prohibitively expensive for many use cases (e.g., information extraction with large corpora). The leading word-level coreference system (WL-coref) attains 96.6% of these SOTA systems' performance while being much more efficient. In this work, we identify a routine yet important failure case of WL-coref: dealing with conjoined mentions such as _Tom and Mary_. We offer a simple yet effective solution that improves the performance on the OntoNotes test set by 0.9% F1, shrinking the gap between efficient word-level coreference resolution and expensive SOTA approaches by 34.6%. Our Conjunction-Aware Word-level coreference model (CAW-coref) and code are available at https://github.com/KarelDO/wl-coref. ## 1 Introduction Coreference resolution (or simply _coref_) is the task of clustering mentions in a text, grouping those that refer to the same entity. Coref acts as a fundamental step in many classical NLP pipelines, such as information extraction. Today, however, state-of-the-art (SOTA) coref systems use multiple forward passes of a Large Language Model (LLM) _per input document_, making them expensive to train and deploy. This results in limited practical use for classical NLP pipelines, which typically require efficient (and sometimes latency-sensitive) methods. The most computationally efficient yet competitive neural coref architecture is word-level coref (WL-coref; Dobrovolskii, 2021). This method operates by (i) first producing embeddings for each word using one forward pass of a (rather small) LM, then (ii) predicting if pairs of words are coreferent using a lightweight scoring architecture and (iii) finally extracting the spans in the input text associated with these coreferent words. Given a text of \(n\) words, this incurs a computational complexity of \(O(n^{2})\), since the method operates on pairs of words. However, SOTA methods typically perform _multiple_ forward passes of a (Large) LM per input document, making them unwieldy for many practical applications. Furthermore, these techniques suffer both from high infrastructure costs and latency issues associated with these large models. While significantly less complex, WL-coref attains 96.6% of the performance of the current best coreference model (80.7% F1 out of 83.3% F1)1, as measured on the English split of the OntoNotes dataset Pradhan et al. (2012). What makes this even more impressive is that WL-coref uses one forward pass of a 355M parameter roberta-large encoder Liu et al. (2019), while the state-of-the-art method Bohnet et al. (2023) uses multiple forward passes of a 13B parameter mT5-XXL model Xue et al. (2021). Thus, WL-coref is the go-to architecture for efficiency-sensitive or long-document coref. Footnote 1: Dobrovolskii (2021) reports a performance of 81.0% F1 for WL-coref as best performance on the OntoNotes test set. To avoid selecting the best model on the test set, we instead report the test score achieved by our first rerun of WL-coref using their code. Figure 1: We identify two types of failure cases for WL-coref when processing conjoined mentions. Our simple solution, CAW-coref, addresses these errors. In this work, we describe a fundamental weakness of the WL-coref model in its original formulation, stemming from how the word-level coref step was trained. 
In particular, starting from a dataset that is annotated at the span-level, a word-level dataset is created by using dependency parsing information to select one head-word per span. This causes ambiguity when mentions are conjoined: two spans representing distinct entities can share the same head-word. For example, the span _Tom and Mary_ is analyzed as containing three entity mentions (_Tom_, _Mary_, and _Tom and Mary_), and both _Tom and Mary_ and _Tom_ share the same head-word. When the model at inference time tries to refer both to entity _Tom_ and entity _Tom and Mary_, two conflicting links to the span _Tom_ are predicted. This causes the model to always drop one of the links, degrading performance (Figure 1). We resolve this by defining the coordinating conjunction (e.g. _and_, _or_, _plus_) as head-word when faced with these types of mentions, which is a common approach in linguistics (Zoerner III, 1995; Progovac, 1998). Now, the model can learn to systematically link to this conjunction when something is coreferent with _Tom and Mary_, without producing conflicting links. We train a new WL-coref model, called Conjunction-Aware Word-level coreference (CAW-coref), and find that this simple fix achieves a significant improvement on the OntoNotes test set: the error difference with the state-of-the-art method shrinks by 34.6% (i.e. CAW-coref improves the absolute performance of WL-coref from 80.7% to 81.6%). Given that this fix incurs no additional model complexity, this gain is an important step forward for efficient coref. ## 2 Related Work The main competitive approaches to end-to-end coref can be classified into three broad categories: span-based, word-level, and autoregressive coref. **Span-based coreference**Lee et al. (2017) introduce e2e-coref, the first end-to-end span-based coref architecture. Starting from word embeddings, the model first predicts which spans are likely a mention. In the second step, coreferent links are predicted between such span-pairs to form coreference clusters. Given a text of \(n\) words, this approach incurs \(O(n^{4})\) computations. Thus, pruning is required to contain the complexity, both for mention prediction and coreference prediction. Many follow-up works improved upon this architecture by introducing contextualized embeddings (Lee et al., 2018; Kantor and Globerson, 2019), LMs for better span representations (Joshi et al., 2020), ensembling different models for coreference link scoring (LingMess; Otmazgin et al., 2023), and distilling the LM backbone for more efficient inference (Otmazgin et al., 2022). Still, the theoretical complexity of these approaches remains \(O(n^{4})\), requiring pruning and leading to poor scaling on long documents. **Word-level coreference**Given an input text, Dobrovolskii (2021) proposes to first predict coreference links between words and subsequently extract the spans surrounding words that are found to be coreferent. This lowers the computational cost of the coref architecture to \(O(n^{2})\). In turn, less aggressive pruning is needed, which resulted in better performance over conventional span-based techniques.2 Dobrovolskii (2021) uses one forward pass of a 355M roberta-large encoder model to form the contextualized word embeddings needed. Footnote 2: LingMess(Otmazgin et al., 2023) is the only span-based method that outperforms WL-coref, using a lightweight ensembling technique. This technique could be directly applied to WL-coref for potentially a similar performance boost. 
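To make the word-level formulation more tangible before turning to autoregressive methods, the sketch below scores every (word, candidate antecedent) pair with a single bilinear form over word embeddings and keeps the top-\(k\) antecedents per word. It is a toy illustration of the coarse scoring idea with random weights and illustrative sizes, not the released WL-coref implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim, top_k = 12, 64, 5                  # illustrative sizes

word_emb = rng.normal(size=(n_words, dim))       # one contextualized embedding per word
bilinear = rng.normal(size=(dim, dim))           # learned in the real model, random here

scores = word_emb @ bilinear @ word_emb.T        # O(n^2) pairwise antecedent scores
scores[np.triu_indices(n_words)] = -np.inf       # an antecedent must precede its word

top_antecedents = np.argsort(-scores, axis=1)[:, :top_k]  # candidates for a finer scorer
print(top_antecedents.shape)                     # (12, 5)
```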
**Autoregressive coreference** Autoregressive methods iteratively build the coreference structure by running multiple forward passes of an LM backbone. Bohnet et al. (2023) introduce a 13B parameter mT5-xxl model called link-append: they run multiple forward passes of the LM over increasingly large chunks of the input text and iteratively predict how to grow the coreference structure. This results in the current state-of-the-art model on OntoNotes (+2.6% F1 over WL-coref). Similarly, Liu et al. (2022) utilize an 11B parameter Flan-T5-xxl model (Chung et al., 2022) and predict a sequence of structure-building actions when regressing over the input text (ASP). Wu et al. (2020) introduce corefqa, formulating coref as a series of question-answering tasks, run multiple forward passes of an LM to build the coreference structure and use extra QA data for augmentation. In general, the autoregressive methods outperform span-based and word-level coreference, but at great computational cost. All these methods require at least \(O(n)\) forward passes of an LM per input document, while span-based or word-level techniques require only one. While some of these computations could be parallelized, running \(O(n)\) LM forward passes _per input document_ is exceedingly expensive. Additionally, the mT5-xxl and T0 models used by SOTA methods contain many more parameters compared to the roberta-large model used by WL-coref (13B and 11B respectively, compared to 355M), making these models less accessible to train and deploy. Liu et al. (2022) show that when using an LM comparable in size to the one used by WL-coref, their performance using autoregressive coreference is actually worse. Thus, word-level coreference is the most efficient method in terms of memory requirements and computational scaling. **Error analysis of coreference models** Porada et al. (2023) investigate types of errors in recent coref models, including WL-coref. Based on the hypothesis that distinct datasets operationalize the task of coreference differently, they perform generalization experiments between multiple datasets and analyze different types of model error. One of their findings suggests that coref for nested mentions is still hard in general. In this work, we highlight a failure case of WL-coref, namely, coreference with conjoined entities (i.e. coordinated noun phrases). We propose and empirically validate a simple yet effective solution. ## 3 The WL-coref model We briefly summarize the architecture used by Dobrovolskii (2021) and refer to the original publication for a full overview. **Step 1 - Word Representations:** First, contextualized word representations are created using one forward pass of an LM backbone and a learned averaging over constituent tokens. **Step 2 - Word-Level Coreference:** To create word-level links, a first _coarse antecedent scoring_ is constructed between all pairs of words using a learned bilinear function. For each word, the top \(k\) coarse antecedents are considered in a _fine antecedent scoring step_, using a trained feed-forward neural network. The final antecedent scores are given by the sum of the coarse and fine scores. These antecedent scores between pairs of words are used to infer the most likely word-level coreference clustering. The words found to be part of a coreference cluster are passed on to Step 3. **Step 3 - Span Extraction:** For each coreferent word, the mention span surrounding it is extracted. 
This is done using a small feed-forward neural network applied to the contextualized word embeddings, followed by a convolutional layer which predicts probabilities for start and end span boundaries. This step is applied individually for each coreferent word and thus is not directly aware of the global clustering produced in Step 2. **Creating word-level data:** To train both steps, Dobrovolskii (2021) uses syntactic information to decompose the span-based OntoNotes dataset into a word-level version and a word-to-span dataset. The crucial step in this decomposition is selecting one head-word per span. Clearly, these head-words need to be as representative as possible of the entity mentioned in the span, so as to allow the word-level linking to perform well. Additionally, the head-words should be systematically picked so that the span extraction step has an easy time learning to extract the correct span surrounding a coreferent head-word. Dobrovolskii (2021) picks head-words using dependency parsing information already present in the OntoNotes dataset. Given a span, the method selects the head-word as the word in the span which depends on a word outside of the span. If none or multiple of such words are found, the right-most word of the span is selected as head-word. ## 4 Failure Modes of WL-coref We describe the two failure cases of WL-coref outlined in Figure 1 and propose a simple solution. **Entity Conjunction:** WL-coref is unable to fully solve routine examples where the conjunction of two or more mentions (e.g. via the use of the coordinating conjunction _and_) forms a new mention in the discourse. Consider the first example from Figure 1: _Tom and Mary are playing. He is 7 years old. They are siblings_. Following how head-words were defined in Dobrovolskii (2021), the head-words for the mention _Tom and Mary_ and the mention _Tom_ coincide. At inference time, the word-level coreference step will thus predict both the coreferent links _Tom_ - _He_ and _Tom_ - _They_. Since the model does not predict a link _He_ - _They_, one of these two predicted links must be dropped in order to arrive at a consistent clustering. Thus, the model is unable to correctly output both coreferent clusters in this trivial example. **Nested Span Extraction:** Given a coreferent head-word, WL-coref sometimes struggles to extract the correct span boundaries surrounding this head-word when multiple valid options are possible. Consider the second example from Figure 1: _Tom and Mary are talking. They are talking._ WL-coref correctly predicts the word-level link between _Tom_ - _They_, but fails to extract the span _Tom and Mary_ in the subsequent step. This is most likely caused by the span extraction step operating independently on every coreferent head-word: no explicit information about the _Tom_ - _They_ link is taken into account when deciding between _Tom_ and _Tom and Mary_, and this decision is thus ambiguous. **Proposed Solution:** Both failure modes are rooted in the same fundamental problem: there is no unique one-to-one relation between head-words and spans. This causes issues both when predicting word-level links and when performing span extraction, specifically when dealing with nesting. We propose to solve this by changing how head-words are defined on conjoined mentions. When creating the word-level training data, we use part-of-speech tags supplied in the OntoNotes dataset to detect if a coordinating conjunction (e.g. _and_, _or_, _plus_) is present in a span. 
Then we check the relative depth of the conjunction in the dependency parse of the span. If it is less than two steps away from the head-word of the span, it is selected as new head-word. This selects _and_ as head-word in the span _Tom and Ann_, but not in the span _David, whose children are called Tom and Ann_. Thus, we have defined a systematic way of picking head-words for conjoined mentions, in a way that they do not conflict with any of the head-words for the nested mentions. ## 5 Experiments and Results We use our new word-level dataset to train CAW-coref, a new instance of the WL-coref architecture. Using our altered notion of head-words, we train and evaluate this model on the English OntoNotes dataset without changing any hyperparameters compared to the default WL-coref run. We immediately find an absolute performance increase of 0.9% F1, setting the performance of CAW-coref at 81.6% F1. This shrinks the relative gap between efficient coref and expensive SOTA approaches by 34.6%, which is certainly not trivial since gains on OntoNotes have been hard to come by in recent years. The full breakdown of the results in function of the official evaluation metrics Vilain et al. (1995); Bagga and Baldwin (1998); Luo (2005); Pradhan et al. (2012) is given in Table 1. CAW-coref even outperforms LingMess, the best span-based method, which uses ensembling to achieve a significant performance boost. Potentially, such an ensembling technique could be applied to further boost CAW-coref performance as well. In total, we found that 1.17% of spans across the English OntoNotes train and development split were such conjoined entities. Supplementary to our empirical analysis, we show the qualitative improvement of CAW-coref on a list of simple examples in Appendix A. ## 6 Conclusion Neural coreference resolution techniques should be efficient in order to maximize real-word impact. In this work, we outlined two failure cases of the efficient word-level coreference resolution architecture and addressed them with one simple fix. Our new model, Conjunction-Aware Word-level coreference (CAW-coref), shrinks the performance gap between efficient and state-of-the-art coreference by 34.6%, and is currently the most performant efficient neural coreference model. \begin{table} \begin{tabular}{l|l l|l|l l l|l l l|l l|l} \hline \hline & \multicolumn{3}{c|}{**LM**} & \multicolumn{3}{c|}{**Link**} & \multicolumn{3}{c}{**MUC**} & \multicolumn{3}{c}{**B\({}^{3}\)**} & \multicolumn{3}{c}{**CEAF\({}_{\phi 4}\)**} & \multicolumn{1}{c}{**Avg.**} \\ & calls & params. & compl. 
& P & R & F1 & P & R & F1 & P & R & F1 & **F1** \\ \hline link-append & \(O(n)\) & 13B & / & **87.4** & 88.3 & **87.8** & 81.8 & **83.4** & **82.6** & 79.1 & **79.9** & **79.5** & **83.3** \\ corefqa & \(O(n^{2})\) & 340M & / & 88.6 & 87.4 & 88.0 & **82.4** & 82.0 & 82.2 & **79.9** & 78.3 & 79.1 & 83.1 \\ ASP & \(O(n)\) & 11B & / & 86.1 & **88.4** & 87.2 & 80.2 & 83.2 & 81.7 & 78.9 & 78.3 & 78.6 & 82.5 \\ \hline LingMess & 1 & 355M & \(O(n^{4})\) & 85.1 & 88.1 & **86.6** & **78.3** & **82.7** & **80.5** & 76.1 & 78.5 & 77.3 & 81.4 \\ s2e & 1 & 355M & \(O(n^{4})\) & **85.2** & 86.6 & 85.9 & 77.9 & 80.3 & 79.1 & 75.4 & 76.8 & 76.1 & 80.3 \\ CAW (ours) & 1 & 355M & \(O(n^{2})\) & 85.1 & **88.2** & **86.6** & 77.0 & 78.0 & 77.5 & **78.0** & **83.2** & **80.6** & **81.6** \\ WL\({}^{\dagger}\) & 1 & 355M & \(O(n^{2})\) & 84.8 & 87.5 & 86.1 & 76.1 & 76.7 & 76.6 & 77.1 & 82.1 & 79.5 & 80.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the OntoNotes 5.0 English test set. Scores calculated with official score Pradhan et al. (2014) or taken from original publication if available. **Avg. F1** is the main metric. We report the amount of LM calls and parameters of the LM used, as well as the coreference linking complexity if applicable. \({\dagger}\) Dobrovolskii (2021) reports an Avg. F1 of 81.0 as the best WL-coref run on the test set, while we report the result of our first run for both WL-coref and CAW-coref. ### Limitations There are always more distinct spans than words in a text, thus it is not always possible to uniquely pick a head-word per span. For example, our proposed solution can't fully handle sequential conjunctions such as _Tom and Mary and David_, since this span contains only 5 words but 6 mentions: _Tom_, _Tom and Mary_, _Mary_, _Mary and David_, _David_, and _Tom and Mary and David_. Luckily, we did not observe any such dense references in the dataset. Our procedure of selecting a new head-word for conjunctions relies on syntactic information in the form of part-of-speech tags and dependency parses. OntoNotes features several instances where conjunctions are formed using commas or hyphens, such as in the span _Tom, Mary_ or _Tom - Mary_. Here, the comma and hyphen should take on the role as head-word of the conjunction, but this is much harder to detect using the syntactic information present. Future work could focus on resolving both these issues to further boost the performance of efficient Conjunction-Aware Word-level coreference resolution. ## Acknowledgements We are grateful to our anonymous reviewers for their meticulous reading and valuable comments. Karel D'Oosterlinck is funded by an FWO Fundamental Research PhD Fellowship (11632223N).
2301.08609
Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach
We introduce AQCtensor, a novel algorithm to produce short-depth quantum circuits from Matrix Product States (MPS). Our approach is specifically tailored to the preparation of quantum states generated from the time evolution of quantum many-body Hamiltonians. This tailored approach has two clear advantages over previous algorithms that were designed to map a generic MPS to a quantum circuit. First, we optimize all parameters of a parametric circuit at once using Approximate Quantum Compiling (AQC) - this is to be contrasted with other approaches based on locally optimizing a subset of circuit parameters and "sweeping" across the system. We introduce an optimization scheme to avoid the so-called ``orthogonality catastrophe" - i.e. the fact that the fidelity of two arbitrary quantum states decays exponentially with the number of qubits - that would otherwise render a global optimization of the circuit impractical. Second, the depth of our parametric circuit is constant in the number of qubits for a fixed simulation time and fixed error tolerance. This is to be contrasted with the linear circuit Ansatz used in generic algorithms whose depth scales linearly in the number of qubits. For simulation problems on 100 qubits, we show that AQCtensor thus achieves at least an order of magnitude reduction in the depth of the resulting optimized circuit, as compared with the best generic MPS to quantum circuit algorithms. We demonstrate our approach on simulation problems on Heisenberg-like Hamiltonians on up to 100 qubits and find optimized quantum circuits that have significantly reduced depth as compared to standard Trotterized circuits.
Niall F. Robertson, Albert Akhriev, Jiri Vala, Sergiy Zhuk
2023-01-20T14:40:29Z
http://arxiv.org/abs/2301.08609v6
# Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach ###### Abstract The simulation of quantum spin chains is a promising candidate for the demonstration of quantum advantage. One of the main obstacles to achieving this is the noise that arises from implementing the deep circuits that appear in standard quantum time evolution algorithms. Compiling these deep circuits into shallower ones is thus a key issue that we address in this work. We use a Tensor Network based approach to Approximate Quantum Compiling to produce short depth quantum circuits that simulate the time evolution of the Heisenberg spin chain on up to 100 qubits. Furthermore, we run these short depth circuits on a _ibmq-mumbai_ - a 27 qubit device - and show that the accuracy of the measured observables is significantly improved after applying our Tensor Network compilation scheme. \({}^{1}\) IBM Quantum, IBM Research Europe - Dublin, IBM Technology Campus, Dublin 15, Ireland \({}^{2}\) Maynooth University, Maynooth, Ireland \({}^{3}\) Tyndall National Institute, Cork, Ireland ## 1 Introduction The simulation of quantum many-body systems is a task of immense scientific interest. The study of quantum dynamics, in particular, allows for the study of thermalisation, many-body localisation, Hubbard model physics and the applicability of field theory to out-of-equilibrium phenomena. In all of these fields there are many open scientific questions whose answers are likely to demand accurate simulation of quantum dynamics. However, the classical computational requirements of a brute-force approach to quantum dynamical simulations scales exponentially in the size of the system. Approximate techniques such as Tensor Networks are thus often called upon. Tensor Networks represent one of the best set of tools available to simulate time evolution and can also be applied to other problems such as ground state calculations [1] and machine learning [2, 3]. Matrix Product States (MPS) are a particular type of Tensor Network that are particularly suited to describe quantum systems in one dimension. They form a key component of modern implementations of the well known Density Matrix Renormalisation Group (DMRG) algorithm used to find the ground state of local Hamiltonians. The DMRG algorithm was designed many years before [4] it was realised that it could be understood as a variational optimisation algorithm where a Matrix Product State is used as an Ansatz for the ground state [5]. This insight shed light on the reasons behind the spectacular success of DMRG; the ground states of local Hamiltonians are only weakly entangled and so too are Matrix Product States. More precisely, the bipartite entanglement entropy \(S\) of the ground state of a local Hamiltonian satisfies an area law, meaning that the entanglement entropy is proportional to the area of the boundary of the two subsystems in the bipartition. In 1D, this means that the entanglement entropy is independent of the system size [6]. This is in contrast to typical states in Hilbert space whose entanglement structures satisfy a volume law. Matrix Product States are also known to satisfy an area law [5] and thus have the same entanglement structure as the ground state by design. Since the weak entanglement of ground states of local Hamiltonians allows for their efficient storage as Matrix Product States, it is natural to ask if this is also possible for states that are generated by time evolution as these states are no longer necessarily weakly entangled. 
It turns out that for many physical systems of interest, entanglement entropy increases linearly until it saturates, at which point an MPS will no longer be an efficient representation of the state. However, if the initial state is weakly entangled then the MPS representation can be used to store the state at early times. A paradigmatic example of this scenario is a quantum quench, whereby a quantum system is initially prepared in the ground state of some local Hamiltonian, the parameters of the Hamiltonian are subsequently changed very rapidly and the system then evolves according to Schrodinger's equation. The TEBD algorithm (Time Evolving Block Decimation) can be used to simulate time evolution after a quantum quench; the state is stored as an MPS and this MPS is updated as a function of time. Despite the success of DMRG, TEBD and other Tensor Network algorithms, these approaches are not without limitations. The memory requirements to store an MPS is characterised by the _bond dimension_, given by the dimension of the largest matrix used in the description of the state. For constant approximation error \(\epsilon\) this bond dimension increases exponentially with the entanglement entropy and thus with time. Therefore, for a fixed maximum bond dimension, the error \(\epsilon\) increases exponentially with time. This limits the applicability of Tensor Network algorithms to short time simulations. A quantum algorithm however, does not in principle suffer from this issue - the key difference between a quantum and a classical device being the ability to store highly entangled states. A quantum computer therefore has the potential to simulate quantum many-body systems for long times. The accurate simulation of the time evolution of 1D quantum systems is thus a promising route for the demonstration of quantum advantage in the short term. One such quantum algorithm is Trotterisation, where a discrete time step \(dt\) is used and the time evolution operator is approximated as a quantum circuit with an error that scales polynomially in \(dt\). The depth of the quantum circuit used in such an approach increases with decreasing \(dt\), leading to a trade-off between the noise arising from using deep circuits and the decreasing accuracy of the approximation when \(dt\) is increased. A number of variational quantum algorithms for the simulation of time evolution have therefore been developed that aim to use shallower circuits [7, 8, 9, 10]. Each of these approaches suffer from a number of issues such as convergence, runtime and limited device connectivity. As a result, it has been argued that such variational approaches are not practical for use on near term quantum hardware [11]. One approach that aims to overcome the issue of deep circuits is Approximate Quantum Compiling [12, 13, 14], where one defines a parametric circuit of fixed depth and uses techniques from optimisation to minimise the distance between the parametric circuit and the target circuit of interest - where distance is defined by some carefully chosen metric. In principle, this approach can lead to short depth circuits that implement the target circuit of interest within some error tolerance. In practice, a classical implementation of such an approach [14] is limited to act on a small number of qubits due to the exponential scaling of the Hilbert space with the number of qubits. 
Here we develop a new approach to quantum simulation that combines Matrix Product States, Approximate Quantum Compiling and Trotterisation to produce short depth quantum circuits that implement the time evolution operator of the Heisenberg spin chain. This approach is scalable thanks to the immense power of Matrix Product States. Figure 1 shows a schematic of our approach: first we apply Trotterisation classically for the maximum length of time for which we can still store the state as an MPS. We then apply a Matrix Product State implementation of Approximate Quantum Compiling to squeeze the circuit (purple box in the figure) to find a much shallower circuit that still reproduces the same state as Trotterisation, up to some small error in the fidelity. We then use the squeezed circuit as the input for the Trotter circuit which can now generate a quantum state beyond what can be stored classically. ## 2 Setup ### The model We will consider the XXX spin-chain - a paradigmatic model for quantum magnetism - defined by the Hamiltonian: \[H_{XXX}=-\sum_{i=0}^{L-1}h_{i,i+1}=-\sum_{i=0}^{L-1}\left(S_{i}^{x}S_{i+1}^{x} +S_{i}^{y}S_{i+1}^{y}+S_{i}^{z}S_{i+1}^{z}\right), \tag{1}\] where \(S^{x}\), \(S^{y}\) and \(S^{z}\) are written in terms of Pauli matrices as \(S^{x}=\frac{\sigma^{x}}{2}\), \(S^{y}=\frac{\sigma^{y}}{2}\) and \(S^{z}=\frac{\sigma^{z}}{2}\). The Hamiltonian in (1) is a prototypical example of an integrable 1D model and its dynamical behaviour has been studied extensively [15], including on a quantum computer [16, 17]. The time evolution of a quantum state \(|\psi(t)\rangle\) is governed by the Schrodinger equation: \[|\psi(t)\rangle=e^{-iH_{XXXt}}\left|\psi(0)\right\rangle \tag{2}\] where \(|\psi(0)\rangle\) is the wavefunction at time \(t=0\). In this work, we will consider the Neel state, written as: \(|\uparrow\downarrow\uparrow\downarrow...\uparrow\downarrow\rangle\) where \(\uparrow\) and \(\downarrow\) represent up and down spins respectively. The Neel state for \(n\) spins is simply implemented on \(n\) qubits as \(|1010...10\rangle\). The time evolution operator \(U(t)\equiv e^{-iHt}\) can be executed as a quantum circuit in a resource efficient way; we first write the Hamiltonian in (1) as \(H_{XXX}=H_{1}+H_{2}\) where \(H_{1}=-\sum\limits_{i\text{ odd}}h_{i,i+1}\) and \(H_{2}=-\sum\limits_{i\text{ even}}h_{i,i+1}\). Note that all operators in a given sum commute with all other operators in their respective sums. We then define the Suzuki-Trotter time evolution operator \(\mathcal{U}_{\text{rot}}(dt)\) in the following way: \[\mathcal{U}_{\text{rot}}^{(1)}(dt)=\prod_{j=0}^{L/2-1}U_{2j,2j+1}(dt)\prod_{j =1}^{L/2-1}U_{2j-1,2j}(dt)=e^{-iH_{XXZ}dt}+O(dt^{2}) \tag{3}\] where \(U_{jk}(dt)=e^{-ih_{jk}dt}\). The exact time evolution operator \(U(t)\) is thus approximated by \(m\) repeated applications of \(\mathcal{U}_{\text{rot}}(dt=\frac{t}{m})\), i.e. \(U(t)\approx\mathcal{U}_{\text{rot}}^{m}(dt=\frac{t}{m})\). As discussed in [17], each \(U_{jk}(dt)\) appearing in (3) can be implemented by the quantum circuit with just three CNOTs as in Figure 2. The full unitary \(\mathcal{U}_{\text{rot}}^{(1)}(dt)\) in equation (3) can then be implemented by the circuit in Figure 3. We can reduce the error in the Trotter formula in equation (3) by using higher order expressions [18]. It turns out that the second order Trotter formula can be implemented on a quantum circuit with only one extra layer in the circuit [17]. 
We have: \[\mathcal{U}_{\text{rot}}^{(2)}(dt)=\prod_{j=0}^{L/2-1}U_{2j,2j+1} \left(\frac{dt}{2}\right)\prod_{j=1}^{L/2-1}U_{2j-1,2j}\left(dt\right)\prod_{j =0}^{L/2-1}U_{2j,2j+1}\left(\frac{dt}{2}\right) \tag{4}\] \[=e^{-iH_{XXZ}dt}+O(dt^{3})\] which can be implemented on a quantum device by the circuit in Figure 4. Figure 1: Schematic of our approach: Trotterisation is applied classically (purple box) and then a Matrix Product State implementation of Approximate Quantum Compiling is applied to compress the first part of the circuit. Standard Trotterisation is then applied on a quantum device afterwards to simulate longer times, i.e. times which are beyond what is possible classically. Figure 4: Second order Trotter circuit acting on six qubits. Figure 3: First order Trotter circuit acting on six qubits. ### Matrix Product States An arbitrary quantum state on \(n\) qubits can be written in terms of complex variables \(c_{j_{1},...,j_{n}}\), the number of which scales as \(2^{n}\): \[|\psi\rangle=\sum_{\{j_{1},...,j_{n}\}}c_{j_{1},...,j_{n}}\,|j_{1},...,j_{n}\rangle \tag{5}\] where the sum is over all configurations of the binary variables \(j_{1},...,j_{n}\). The bipartite entanglement entropy of an arbitrary quantum state picked at random from Hilbert space satisfies a volume law which, as was discussed in the introduction, is distinct from area law entanglement in which case the entanglement entropy of two regions after the bipartition of the system is proportional to the area of the boundary of the system. A small subset of states in Hilbert space satisfies an area law. The coefficients \(c_{j_{1},...,j_{n}}\) of such states have a certain structure that we can take advantage of to study classically. Any state \(|\psi\rangle\) can be written in the following way: \[c_{j_{1},...,j_{n}}=A^{(1)}_{j_{1}}\cdot A^{(2)}_{j_{2}}...\cdot A^{(n)}_{j_{n }} \tag{6}\] where the \(A_{j}\) are \(\chi_{j}\times\chi_{j+1}\) dimensional matrices. Quantum states of the form (6) are known as Matrix Product States (MPS). The maximum value of \(\chi_{j}\) is referred to as the bond dimension of the MPS. We can represent an MPS graphically as in Figure 5. We associate two matrices, \(A^{(k)}_{\uparrow}\) and \(A^{(k)}_{\downarrow}\), to each qubit. We thus have a total of \(2n\) matrices to keep track of. The bond dimension \(\chi_{j}\) can be seen as a measure of the entanglement between the two subsystems when a bipartition is made at qubit \(j\). Therefore, states in Hilbert space that satisfy an area law - and therefore have a low bond dimension in their MPS representation - can be efficiently stored as Matrix Product States. States that satisfy a volume law will have a bond dimension that is exponential in the number of qubits. We will consider in this work the non-trivial dynamics governed by equation (2). As discussed in the introduction, the bipartite entanglement entropy of a ground state of a one-dimensional Hamiltonian that has a gap between its ground state and its excited state is independent of the size of the subsystems. The ground state of such a system - and hence the initial state in our setup - can be efficiently stored as an MPS. One can then use an algorithm such as TEBD (Time Evolving Block Decimation) [19] to update the MPS as a function of time to study the dynamics of the system. However, the entanglement entropy of the state increases linearly with time, hence the bond dimension \(\chi\) that is required to keep the error constant diverges exponentially with time. 
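To make the role of the bond dimension concrete, here is a minimal Python sketch (independent of the implementation used in this work) that builds MPS tensors of the form (6) from a full state vector by successive singular value decompositions, truncating every bond to a maximum dimension \(\chi\). For a weakly entangled state such as the Neel state the bonds remain trivially small.

```python
import numpy as np

def statevector_to_mps(psi, n_qubits, chi_max):
    """Decompose a 2**n state vector into MPS tensors of shape (chi_left, 2, chi_right),
    truncating every bond to at most chi_max."""
    tensors, chi_left = [], 1
    rest = psi.reshape(2, -1)
    for _ in range(n_qubits - 1):
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        chi = min(chi_max, int(np.sum(s > 1e-12)))        # drop negligible singular values
        u, s, vh = u[:, :chi], s[:chi], vh[:chi, :]
        tensors.append(u.reshape(chi_left, 2, chi))
        rest = (np.diag(s) @ vh).reshape(chi * 2, -1)
        chi_left = chi
    tensors.append(rest.reshape(chi_left, 2, 1))
    return tensors

# The 4-qubit Neel state |1010> is a product state, so every bond has dimension 1.
n = 4
neel = np.zeros(2**n)
neel[int("1010", 2)] = 1.0
print([t.shape for t in statevector_to_mps(neel, n, chi_max=16)])
```

For states generated by time evolution, keeping \(\chi\) fixed forces the truncation above to discard weight, which is exactly where the error discussed in the text accumulates.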
To simulate for longer times, a quantum computer would be needed. In section 2.3, we will discuss how Matrix Product States can be leveraged to reduce the resource requirements for this simulation problem when implemented on a quantum device. ### Matrix Product States applied to Approximate Quantum Compiling Figure 5: Graphical representation of an MPS. There are two matrices \(A^{(i)}\) for each qubit at position \(i\). Figure 6: The inner product \(\langle\psi_{1}|\psi_{2}\rangle\) of two Matrix Product States - see equations (11) and (6). Approximate quantum compiling (AQC) involves the design of a parametric quantum circuit with fixed depth - the parameters are then adjusted to bring it as close as possible to the target, where "close" is defined via some carefully chosen metric, see below. As discussed in [12], one can use so-called \(CNOT\) blocks to construct a natural circuit Ansatz. A \(CNOT\) block is a \(CNOT\) gate followed by single qubit rotations (see Figure 7). A block with a \(CNOT\) gate acting on a "control" qubit \(j\) and "target" qubit \(k\) is written as \(\text{CU}_{jk}(\theta_{1},\theta_{2},\theta_{3},\theta_{4})\). For a given hardware connectivity, one can then write down a fully parameterised circuit as: \[\begin{split} V_{\text{ct}}(\mathbf{\theta})=&\text{CU} _{\text{ct}(L)}\left(\theta_{3n+4L-3},\ldots,\theta_{3n+4L}\right)\cdots\text{CU }_{\text{ct}(1)}\left(\theta_{3n+1},\ldots,\theta_{3n+4}\right)\\ &\left[R_{z}\left(\theta_{1}\right)R_{y}\left(\theta_{2}\right) R_{z}\left(\theta_{3}\right)\right]\otimes\cdots\otimes\left[R_{z}\left(\theta_{3n-2} \right)R_{y}\left(\theta_{3n-1}\right)R_{z}\left(\theta_{3n}\right)\right] \end{split} \tag{7}\] The position of the \(CNOT\) blocks in the parameterised circuit can be customised to suit the particular target circuit that one is interested in. Here we are interested in finding a circuit that implements the unitary time evolution operator as in equation (2). We thus consider a structure inspired by the first and second-order Trotter circuits in Figures 3 and 4 respectively. Recall that each block \(U(dt)\) in Figures 3 and 4 represents the 2-qubit sub-circuit with three \(CNOT\)s in Figure 2; it is therefore natural to consider a circuit Ansatz with sub-circuits each with three \(CNOT\) blocks as in Figures 8 and 9, such that the circuit Ansatz mimics the structure of the first and second order Trotter circuits. In the notation of [14], the parameterised circuits in Figures 8 and 9 correspond to \(n=4\) qubits, \(l=2\) layers and \(b=3\)\(CNOT\) blocks in each layer. In both Figure 8 and Figure 9 there are three rotation gates acting on each qubit at the beginning of the circuit. In the examples considered in this work we will take the initial state to be \(\left|0\right\rangle\) - the initial rotation gate \(R_{z}(\theta)\) is redundant for these cases but is necessary for more general initial states. One can define the distance between the target and parameterised circuit via a number of different metrics. Here we use a cost function based on the Hilbert-Schmidt test: \[C_{hs}^{\text{state}}=1-|\bra{0}V^{\dagger}(\theta)\ket{\psi_{target}}|^{2} \tag{8}\] The goal of AQC is to tune the parameters \(\theta\) to minimise the cost function under consideration. Note that here we are considering the application of AQC to state preparation as opposed to full circuit compilation. 
More precisely, this means that our cost function is designed such that it is minimised when the action of \(V(\theta)\) on the initial state \(\left|0\right\rangle\) produces a state that is as close as possible to a target state \(\left|\psi_{target}\right\rangle\) (up to some global phase). In this work, the target state of interest is given by \(\left|\psi(t)\right\rangle\) in equation (2). This is in contrast to the situation where one starts with some target _circuit_\(U\) and the cost function is designed to bring the full matrix \(V(\theta)\) as close as possible to \(U\). Figure 8: Parameterised circuit inspired by the structure of the first order Trotter circuit in Figure 3. Figure 9: Parameterised circuit inspired by the structure of the second order Trotter circuit in Figure 4. As pointed out in [20], the gradient of the cost function in (8) vanishes exponentially. This observation lead to the distinction between global and local cost functions; local cost functions have only polynomially vanishing gradients in some cases of interest - see [20, 21, 14] for details. As was shown in [14], the Hilbert-Schmidt test - which is a global cost function - can be turned into a local one by adding several "bit-flip" terms which increases the magnitude of the gradient: \[\begin{split} C^{\text{state}}_{Ins}&=1-\left| \left\langle 0\right|V^{\dagger}(\theta)\left|\psi_{0}\right\rangle\right|^{2}- \left(\frac{n-1}{n}\right)\sum_{j=1}^{n}\left|\left\langle 0\right|X_{j}V^{ \dagger}(\theta)\left|\psi_{0}\right\rangle\right|^{2}\\ &\quad-\left(\frac{n-2}{n}\right)\sum_{j<k}\left|\left\langle 0 \right|X_{j}X_{k}V^{\dagger}(\theta)\left|\psi_{0}\right\rangle\right|^{2}-... -\frac{1}{n}\sum_{j<k<l<...}\left|\left\langle 0\right|X_{j}X_{k}X_{l}...V^{ \dagger}(\theta)\left|\psi_{0}\right\rangle\right|^{2}\end{split} \tag{9}\] Convergence of the cost function can be significantly improved by adding these terms, however the computational cost of calculating the gradient becomes prohibitive. It was demonstrated in [14] that this can be overcome by truncating the expression in (9) to get: \[C^{(1)}_{L}(\alpha)=1-\left|\left\langle 0\right|V^{\dagger}(\theta)\left| \psi_{0}\right\rangle\right|^{2}-\alpha\sum_{j=1}^{n}\left|\left\langle 0 \right|X_{j}V^{\dagger}(\theta)\left|\psi_{0}\right\rangle\right|^{2} \tag{10}\] where \(\alpha\) is a parameter that can be tuned throughout the optimisation procedure - a scheme to implement this tuning effectively was demonstrated in [14]. In (10), we have only kept 1 "bit-flip" term, i.e. we have dropped all terms with more than one \(NOT\) operator \(X_{i}\). As discussed in [14], one can obtain higher order expressions \(C^{(k)}_{L}\) with more "bit-flip" terms included - doing so induces a larger gradient in the cost function but increases the computational burden. Note that each term in (9) or (10) is an overlap of quantum states, and since the overlap of two MPS can be calculated very efficiently the architecture of Matrix Product States can be leveraged to calculate the cost function and solve the approximate quantum compilation problem for large numbers of qubits. 
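As a small numerical illustration of equation (10) (a statevector toy, not the scalable MPS implementation used here), note that \(\langle 0|X_{j}V^{\dagger}(\theta)|\psi_{0}\rangle\) is simply the amplitude of \(V^{\dagger}(\theta)|\psi_{0}\rangle\) on the basis state with a single 1 at qubit \(j\), so the truncated cost requires only \(n+1\) amplitudes; in the MPS version each of these overlaps is obtained by the tensor contraction described next. The bit-ordering convention below is an assumption made for the example.

```python
import numpy as np

def truncated_local_cost(phi, alpha):
    """C_L^(1)(alpha) of Eq. (10), given phi = V(theta)^dagger |psi_0> as a state vector.

    phi[0] is <0|V^dagger|psi_0>; flipping qubit j of |0...0> gives basis index
    2**(n-1-j) when qubit 0 is taken as the leftmost bit.
    """
    n = int(np.log2(phi.size))
    cost = 1.0 - abs(phi[0]) ** 2
    for j in range(n):
        cost -= alpha * abs(phi[1 << (n - 1 - j)]) ** 2
    return cost

# If V(theta) prepares |psi_0> exactly, then phi = |0...0> and the cost vanishes.
phi_perfect = np.zeros(2**3, dtype=complex)
phi_perfect[0] = 1.0
print(truncated_local_cost(phi_perfect, alpha=0.5))   # 0.0
```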
Consider for example two quantum states \(\left|\psi_{1}\right\rangle\) and \(\left|\psi_{2}\right\rangle\): \[\begin{split}\left|\psi_{1}\right\rangle&=\sum_{\left\{j_{1},...,j_{n}\right\}}c^{(1)}_{j_{1},...,j_{n}}\left|j_{1},...,j_{n}\right\rangle\\ \left|\psi_{2}\right\rangle&=\sum_{\left\{j_{1},...,j_{n}\right\}}c^{(2)}_{j_{1},...,j_{n}}\left|j_{1},...,j_{n}\right\rangle\end{split} \tag{11}\] As discussed in section 2.2, for weakly entangled states the coefficients \(c^{(1)}_{j_{1},...,j_{n}}\) and \(c^{(2)}_{j_{1},...,j_{n}}\) are not all independent and can be represented efficiently as Matrix Product States - see Figure 5: \[\begin{split}c^{(1)}_{j_{1},...,j_{n}}&=A^{(1)}_{j_{1}}\cdot A^{(2)}_{j_{2}}...\cdot A^{(n)}_{j_{n}}\\ c^{(2)}_{j_{1},...,j_{n}}&=B^{(1)}_{j_{1}}\cdot B^{(2)}_{j_{2}}...\cdot B^{(n)}_{j_{n}}\end{split} \tag{12}\] We want to calculate the quantity: \[f(\left|\psi_{1}\right\rangle,\left|\psi_{2}\right\rangle)=\left|\left\langle\psi_{1}|\psi_{2}\right\rangle\right|^{2} \tag{13}\] The overlap of two MPS, and hence the fidelity \(f\) in (13), can be calculated efficiently by "contracting" the Tensor Network shown in Figure 6. ## 3 Results We now present the results of our simulations of the Schrodinger equation in equation (2) using the second order Trotter formula in equation (4). First let's define and clarify some notation: * \(\left|a_{1}\right\rangle\): The state generated by the optimised parametric circuit in Figure 9. * \(l\): Number of layers in the Ansatz: in Figures 8 and 9 there are \(l=2\) layers. * \(\left|t_{1}\right\rangle\): The state generated by the Trotter circuit in Figure 4. * Number of Trotter steps: the analogue of the number of layers in the Ansatz circuits. In Figures 3 and 4 there are 3 Trotter steps. * \(|t1\_gt\rangle\): the "ground truth" generated by classical Tensor Network simulations of deep Trotter circuits, i.e. extremely small time steps. We take \(dt=0.04\) to generate the ground truth state while \(|t_{1}\rangle\) is generated with a time step of \(dt=0.4\). All circuits considered here take the form of the second-order Trotter structure. More precisely, in each graph we use the labels "Trotter" and "Ansatz" and these circuits have the structure in Figures 4 and 9 respectively. In Figures 10, 11 and 12 we plot fidelity vs evolution time for the 50 qubit XXX Hamiltonian and we compare the result of the Trotter circuit vs the AQC circuit. In Figure 10 we can see that the fidelity of the Trotter circuit decays rapidly while the fidelity of the AQC circuit remains above 0.99. Note that the Trotter circuit and the AQC circuit are of equal depth. In Figures 11 and 12 we compare short depth circuits generated by AQC with deep Trotter circuits. We observe that the AQC circuits can achieve comparable or better fidelities with much shorter depth. In particular, in Figure 12 the two fidelities are almost identical but the AQC circuit is half the depth of the Trotter circuit. Figure 10: 50 qubits: fidelities of the parametric circuit and the Trotter circuit with the "ground truth" obtained by Tensor Network simulations. The two circuits are of identical length but the parametric circuit achieves a significantly higher fidelity at late times. Figure 11: 50 qubits: the maximum number of layers in the parametric circuit is 12 while it is 18 for the Trotter circuit. Despite this, the parametric circuit achieves a higher fidelity. We plot the same data for 100 qubits in Figures 13 and 14. 
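The fidelities plotted in these figures are overlaps of the form (13). As a rough illustration (not the Qiskit-based implementation used in this work), two MPSs of the form (6) can be contracted left to right to obtain such an overlap:

```python
import numpy as np

def mps_overlap(mps_a, mps_b):
    """<psi_a|psi_b> for two MPSs given as lists of tensors of shape (chi_l, 2, chi_r)."""
    env = np.ones((1, 1))                        # left boundary environment
    for a, b in zip(mps_a, mps_b):
        # contract the environment with the next pair of site tensors
        env = np.einsum('ij,isk,jsl->kl', env, a.conj(), b)
    return env[0, 0]

# Overlap of two identical 3-qubit product states |000> gives fidelity 1.
up = np.zeros((1, 2, 1)); up[0, 0, 0] = 1.0
mps = [up, up, up]
print(abs(mps_overlap(mps, mps))**2)   # 1.0
```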
Now we would like to consider how these results affect the implementation on a real quantum device. We consider a 20 qubit spin-chain on the 27 qubit device _ibmq-mumbai_. First we plot the fidelity results for 20 qubits in Figure 15. We ran the resulting parametric circuit on _ibmq-mumbai_ and in Figure 16 we plot the expectation values \(\langle\psi(t)|S_{0}^{z}|\psi(t)\rangle\) as obtained from the quantum device using the parametric AQC circuit, the Trotter circuit and from a classical Tensor Network simulation. We observe that this observable is more accurate when obtained with the AQC circuit due to its reduced depth. Note that the difference between the results from the simulation and the results from the quantum device would be greatly reduced after applying error mitigation [22, 23]. We have not attempted to apply error mitigation to either circuit as this would be outside the scope of this work. We expect that, since our Tensor Network compilation scheme greatly reduces the noise of the circuit, any error mitigation scheme would be enhanced by our approach.

Figure 12: 50 qubits: the maximum number of layers in the parametric circuit is 9 while it is 18 for the Trotter circuit. Both circuits achieve very similar fidelities despite the parametric circuit being half the depth of the Trotter circuit.

Figure 13: 100 qubits: The maximum number of layers in the parametric circuit and the Trotter circuit is 18. It can be seen that the fidelity of the Trotter circuit decays rapidly but that of the parametric circuit remains high.

## 4 Discussion

In this paper we applied Tensor Network methods to Quantum Compiling and demonstrated their efficacy on the 27 qubit device _ibmq-mumbai_. Our method is similar in spirit to [24], where Matrix Product States were used to prepare the initial state for VQE to find the ground state of some Hamiltonian - here we use Matrix Product States to prepare a short depth quantum circuit that simulates the time evolution of a 1D Hamiltonian. We note that a compilation scheme for Hamiltonian simulation was also considered in [25] on up to 12 qubits. We chose the XXX Hamiltonian in equation (1) because it has been well studied, but we would be particularly interested to apply the compilation methods developed here to non-integrable systems by e.g. adding a random field to the Hamiltonian in (1) and studying phenomena of scientific interest such as many-body localisation.

We have shown results of our simulations on up to 100 qubits. In principle we can significantly increase the number of qubits and the length of time to which we apply our MPS compilation scheme; the limiting factor at present seems to be the particular implementation used for the SVD and for calculating the gradient. We believe that both of these can be improved significantly, in particular by using an efficient parallel implementation - this is the subject of ongoing work. In our current framework we use the Qiskit MPS package, which is designed for generic situations in which long range connectivity may be required and thus does not take advantage of the short range structure of the circuits in Figures 8 and 9.

Figure 14: 100 qubits: The maximum number of layers in the parametric circuit is 12 while it is 18 for the Trotter circuit.

Figure 15: The maximum depth of the parametric circuit is half that of the Trotter circuit - there are 9 and 18 layers respectively. These 20 qubit circuits were implemented on _ibmq-mumbai_ - see Figure 16.
## 5 Acknowledgements

We would like to thank Merav Aharoni for useful discussions. Furthermore, we are grateful to Michael Lubasch for pointing us to his work in [25]. This work was funded by the Disruptive Technologies Innovation Fund (DTIF), by Enterprise Ireland, under project number DTIF2019-090 (project QCoIR) and also supported by IBM Quantum.

Figure 16: The expectation value of \(S_{0}^{z}\) vs time for a chain of 20 qubits as measured on the 27 qubit quantum device _ibmq-mumbai_. The circuit produced from our MPS implementation of AQC is shallower than the Trotter circuit, and thus produces an expectation value that is much closer to the true value plotted in the blue curve, obtained by classical Tensor Network simulations.
2308.04538
Dispersion of run-and-tumble microswimmers through disordered media
Understanding the transport properties of microorganisms and self-propelled particles in porous media has important implications for human health as well as microbial ecology. In free space, most microswimmers perform diffusive random walks as a result of the interplay of self-propulsion and orientation decorrelation mechanisms such as run-and-tumble dynamics or rotational diffusion. In an unstructured porous medium, collisions with the microstructure result in a decrease in the effective spatial diffusivity of the particles from its free-space value. Here, we analyze this problem for a simple model system consisting of non-interacting point particles performing run-and-tumble dynamics through a two-dimensional disordered medium composed of a random distribution of circular obstacles, in the absence of Brownian diffusion or hydrodynamic interactions. The particles are assumed to collide with the obstacles as hard spheres and subsequently slide on the obstacle surface with no frictional resistance while maintaining their orientation, until they either escape or tumble. We show that the variations in the long-time diffusivity can be described by a universal dimensionless hindrance function $f(\phi,\mathrm{Pe})$ of the obstacle area fraction $\phi$ and P\'eclet number $\mathrm{Pe}$, or ratio of the swimmer run length to the obstacle size. We analytically derive an asymptotic expression for the hindrance function valid for dilute media ($\mathrm{Pe}\,\phi\ll 1$), and its extension to denser media is obtained using stochastic simulations.
David Saintillan
2023-08-08T19:11:15Z
http://arxiv.org/abs/2308.04538v1
# Dispersion of run-and-tumble microswimmers through disordered media ###### Abstract Understanding the transport properties of microorganisms and self-propelled particles in porous media has important implications for human health as well as microbial ecology. In free space, most microswimmers perform diffusive random walks as a result of the interplay of self-propulsion and orientation decorrelation mechanisms such as run-and-tumble dynamics or rotational diffusion. In an unstructured porous medium, collisions with the microstructure result in a decrease in the effective spatial diffusivity of the particles from its free-space value. Here, we analyze this problem for a simple model system consisting of non-interacting point particles performing run-and-tumble dynamics through a two-dimensional disordered medium composed of a random distribution of circular obstacles, in the absence of Brownian diffusion or hydrodynamic interactions. The particles are assumed to collide with the obstacles as hard spheres and subsequently slide on the obstacle surface with no frictional resistance while maintaining their orientation, until they either escape or tumble. We show that the variations in the long-time diffusivity can be described by a universal dimensionless hindrance function \(f(\phi,\mathrm{Pe})\) of the obstacle area fraction \(\phi\) and Peclet number \(\mathrm{Pe}\), or ratio of the swimmer run length to the obstacle size. We analytically derive an asymptotic expression for the hindrance function valid for dilute media (\(\mathrm{Pe}\,\phi\ll 1\)), and its extension to denser media is obtained using stochastic simulations. ## I Introduction Self-propelled particles, from motile microorganisms to synthetic microswimmers, perform random walks in space that allow them to explore their environment, for instance in their quest for oxygen or nutrients. These random dynamics result from the interplay of self-propulsion and orientational fluctuations, which cause stochastic changes in their swimming direction. One classic example is the case of run-and-tumble bacteria, which perform straight runs in a given direction alternating with random reorientation events known as tumbles that are driven by the rapid unbundling and rebundling of their flagella. As first explained by Berg [1], the resulting random walks lead to diffusive spreading at long times, with a mean squared displacement growing linearly with time as \(\left\langle|\Delta\mathbf{r}|^{2}\right\rangle\sim 2dD_{0}t\), where \(d\) is the spatial dimension and \(D_{0}\) is an effective diffusivity. Under the assumptions of instantaneous and uncorrelated tumbles and of exponentially distributed run times, a simple random walk model predicts \(D_{0}=v_{0}^{2}\overline{\tau}/3\), where \(v_{0}\) and \(\overline{\tau}\) are the constant run speed and mean run time, respectively. These stochastic dynamics play a key role in various transport strategies such as chemotaxis, where bacteria can bias their tumbling frequency based on the local concentration of a chemical, resulting in a net drift along the chemical gradient. While synthetic microswimmers do not perform run-and-tumble dynamics, they typically experience rotational Brownian motion, which also leads to correlated random walks and diffusive spreading on long time scales [2]. Motile bacteria and other microorganisms often reside in complex environments such as soils or tissues, where their frequent interactions and collisions with the microstructure strongly affect their motions. 
Understanding active dispersion in such systems is key to a variety of problems in soil ecology, biofouling and bioremediation, as well as in medicine where it affects the spread of bacterial infections. Additionally, the potential of engineered active particles lies in their ability to navigate complex geometries, be it in lab-on-a-chip devices or inside living organisms for drug-delivery applications. Our fundamental understanding of basic transport properties of active particles in heterogeneous random media remains, however, incomplete [3]. Recent microfluidic experiments using either living microorganisms or synthetic self-propelled particles have started to shed light on the physics of active transport in these complex environments [4; 5]. The ability to fabricate model porous media of controlled porosity and microstructure provides a useful tool for probing the role of geometry and crowding in determining long-time dispersion. In both random [6; 7; 8; 9; 10; 11; 12; 13; 14] and periodic [15; 16; 17; 18; 19; 20] media, the leading effect of the porous microstructure is to hinder particle transport as a result of frequent collisions between microswimmers and obstacles, resulting in a decrease in the effective diffusivity with the volume fraction of the medium. While the precise nature of the scattering dynamics occurring at obstacles is found to depend on the type of microswimmer [21; 22; 23; 24; 25; 26] and potential role of hydrodynamic interactions [27; 28], all self-propelled particles in confinement have a tendency to accumulate at boundaries [29; 30; 31; 32; 33; 34], with the effect of reducing their run length thereby impeding transport. In strongly confined environments (low-porosity media), motile bacteria have even been observed to abandon run-and-tumble dynamics in favor of other more efficient transport strategies [11; 12]. The role of obstacle shape has also been considered, with asymmetric obstacles potentially giving rise to rectified motion [35]. Finally, a few experiments have considered the role of an externally applied flow [16; 36], which has a strong effect on mean transport and dispersion by reorienting the swimmers in the fluid shear generated by the microstructure [37; 38; 39]. Modeling efforts aimed at predicting dispersion in complex media have been more limited, due in part to challenges in accounting for details of the scattering dynamics and porous medium geometry. On the computational side, various numerical simulations have been performed based on the active Brownian particle (ABP) model in porous media described as random distributions of obstacles [14; 40; 41] as well as in periodic post arrays [42; 43; 44], including in the presence of hydrodynamic interactions [45]. Analytical predictions, however, have been very scarce with a few exceptions. Theoretical models have been proposed for transport of active particles in cubic lattices in the presence of obstacles [46; 47]: while these models allow for analytical predictions, their underlying assumptions make them difficult to compare with real systems. In periodic geometries, generalized Taylor dispersion theory has been applied to estimate effective transport coefficients such as the mean velocity and long-time swim diffusivity of ABPs [44]. Very recently, the case of random media was also addressed using a continuous random walk approach modeling the effect of interactions with the porous microstructure as random trapping events [48]. 
Yet, a general theoretical framework able to yield closed-form expressions for the diffusivity in a random medium remains lacking, even under the most basic assumptions. Here, we propose a minimal theoretical model for the dispersion of microswimmers through a disordered medium. We consider point-like run-and-tumble microswimmers traveling in two dimensions through the interstices of a random distribution of circular obstacles in the absence of Brownian diffusion or hydrodynamic interactions. Simple interaction rules are adopted whereby a swimmer colliding with an obstacle simply slides on its surface without friction while maintaining its orientation, until it either tumbles or escapes by swimming away tangentially to the surface. A related model was proposed by Jakuszeit _et al._ [43] to analyze transport through periodic arrays; we apply it to the case of random disordered media. As we show below, the effect of collisions with the microstructure on the diffusivity can be captured by a dimensionless hindrance function \(f(\mathrm{Pe},\phi)\), which is a function of the Peclet number \(\mathrm{Pe}=v_{0}\overline{\tau}/a\), or ratio of the mean run length \(v_{0}\overline{\tau}\) to the obstacle radius \(a\), and of the mean area fraction \(\phi\) of the obstacles. The objective of the paper is to determine \(f\), which we calculate analytically in the dilute limit defined as \(\mathrm{Pe}\,\phi\ll 1\), and numerically for arbitrary values of \(\mathrm{Pe}\) and \(\phi\). The paper is organized as follows. Details of the problem formulation and diffusivity calculation are provided in Sec. II and III, respectively. The limit of dilute media is analyzed theoretically in Sec. IV, and results from the theory are discussed and compared to numerical simulations with varying porosities in Sec. V. We conclude in Sec. VI.

## II Problem definition

We analyze the dispersion of non-interacting run-and-tumble microswimmers traveling through the interstices of a random porous medium in two dimensions. The medium is composed of identical non-overlapping circular pillars of radius \(a\), with area fraction \(\phi=N_{t}\pi a^{2}/L^{2}\) where \(L\) is the linear dimension of the square domain and \(N_{t}\) is the total number of pillars. The assumption of identical pillars is convenient for theoretical analysis but will be relaxed in some of the simulations of Sec. V. The system is assumed to be large enough that swimmers remain far away from any domain boundaries at all times; in simulations, we will make use of periodic boundary conditions. In free space (no pillars), the microswimmers perform simple run-and-tumble dynamics as depicted in Fig. 1(a): straight runs with constant velocity \(v_{0}\) and run time \(\tau\) alternate with instantaneous reorientation events. The run time is a random variable governed by a probability density function \(p(\tau)\) with mean value \(\overline{\tau}\).

Figure 1: Typical trajectories of run-and-tumble particles in free space (a) and in a two-dimensional porous medium (b), for a duration of 30 runs. In each case, the run time is exponentially distributed and pre- and post-tumble orientations are uncorrelated. In (b), the porous medium has a pillar area fraction of \(\phi=0.62\) with a random Gaussian distribution of pillar radii with standard deviation \(\sigma_{a}/\overline{a}=0.5\), and the Péclet number based on the mean radius is \(\mathrm{Pe}=\overline{\ell}/\overline{a}=2.0\).

We will consider
two cases:

\[p(\tau)=\begin{cases}\delta(\tau-\overline{\tau})&\text{constant run time},\\ \overline{\tau}^{-1}\exp(-\tau/\overline{\tau})&\text{exponential distribution}.\end{cases} \tag{1}\]

The exponential distribution provides a good approximation to the distribution of run times for _E. coli_ [49] and has been widely used in models of bacterial run-and-tumble. More detailed measurements, however, have shown deviations from the exponential model [50] and have highlighted strong temporal variability in single cells [51, 52]; we neglect these effects here. Given \(v_{0}\) and \(\tau\), we define the run length \(\ell=v_{0}\tau\), or distance traveled by the swimmer between two tumbles in the absence of pillars, with mean value \(\overline{\ell}=v_{0}\overline{\tau}\). In a porous medium [Fig. 1(b)], microswimmers can collide with pillars, and these collisions alter their trajectories leading to scattering. We propose a minimal model for collisions based on the following assumptions:

(i) Swimmers are point particles that interact with pillars via a hard-sphere potential.
(ii) When a swimmer collides with a pillar, its orientation and run time remain unchanged.
(iii) After impact, the swimmer slides along the pillar surface with the tangential component of its swimming velocity, and no resistance to sliding.
(iv) If the swimmer's orientation becomes tangent to the surface, it escapes from the pillar and continues its run in a straight line, possibly encountering additional pillars before the end of the run.
(v) If the run time elapses before the swimmer is able to escape, the run ends on the pillar surface where the next tumble takes place.

The four types of runs (no collision, collision with no escape, collision with escape, and multiple collisions) are depicted graphically in Fig. 2. When a collision occurs, we denote by \(\tau_{c}\) the time to collision from the start of the run, and by \(\tau_{r}\) the remaining time in the run after collision, so that \(\tau_{c}+\tau_{r}=\tau\). Runs with multiple collisions can be recursively modeled as sequences of single-collision runs with reduced run times. Any of the runs depicted in Fig. 2 can either start with the swimmer in the bulk or on the surface of a pillar. Note that \(\tau_{c}=0\) in cases where a run starts on the surface of a pillar with the swimmer pointing into the pillar. While \(\tau\) is assumed to be unaffected by collisions, note that the actual distance traveled by a swimmer colliding with a pillar is in fact shorter than \(v_{0}\tau\). In this case, we will continue to use the variable \(\ell\) to denote the _unimpeded_ run length \(v_{0}\tau\). In a porous medium, system properties are entirely governed by two dimensionless numbers: the area fraction \(\phi\) introduced above, as well as \(\text{Pe}=v_{0}\overline{\tau}/a=\overline{\ell}/a\), which compares the persistence length of swimming trajectories to the pillar size and can be interpreted as a swimming Peclet number. The assumptions made here greatly idealize the dynamics of real microswimmers near walls, which are usually more complex. In particular, assumptions (i)-(iv) are incompatible with hydrodynamic interactions, which can lead to a long-ranged coupling between swimmers and pillars and reorient swimmers during collisions as seen in various experiments [7, 22, 27, 53, 54] and models [28, 55].
In experimental systems, other effects can also impact orientation dynamics, including direct steric contacts especially in the case of flagellated swimmers [21, 22, 55] and rodlike swimmers [54], as well as chemical interactions in the case of phoretic swimmers [6, 57, 24, 56]. This reorientation at boundaries in turn leads to scattering at angles that are non-tangent with the surface. The assumption of frictionless sliding is also an approximation, as either lubrication layers or surface roughness would come into play and affect tangential motion in experiments. Nevertheless, this minimal model provides a simple baseline for understanding the effect of collisions on average transport properties.

Figure 2: Types of possible displacements during a single run of total duration \(\tau\). If the swimmer collides with a pillar (point \(C\)), it can either escape (point \(E\)) or end its run on the pillar. The time to collision is \(\tau_{c}\), whereas \(\tau_{r}=\tau-\tau_{c}\) is the remaining time in the run after collision.

## III Diffusivity

As they travel through the medium, perform tumbles and collide with pillars, the microswimmers execute random walks leading to a diffusive behavior at long times [1]. We denote by \(\mathbf{r}_{0}\) the position of a swimmer at \(t=0\), assumed to coincide with a tumble, and by \(\mathbf{r}_{N}\) the location of its \(N\)th tumble at time \(t_{N}\):

\[\mathbf{r}_{N}=\mathbf{r}_{0}+\sum_{i=1}^{N}\Delta\mathbf{r}_{i},\qquad t_{N}=\sum_{i=1}^{N}\tau_{i}. \tag{2}\]

At the start of run \(i\), the swimmer selects a new run time \(\tau_{i}\) following the distribution of Eq. (1), and assumes a new random orientation \(\mathbf{p}_{i}=[\cos\theta_{i},\sin\theta_{i}]\) where \(\theta_{i}\in[0,2\pi)\) follows a uniform distribution. The displacement \(\Delta\mathbf{r}_{i}=\mathbf{r}_{i}-\mathbf{r}_{i-1}\) during step \(i\) is a random variable expressed as

\[\Delta\mathbf{r}_{i}=v_{0}\tau_{i}\,\mathbf{p}_{i}+\delta\mathbf{r}_{i}, \tag{3}\]
\[=(v_{0}\tau_{i}+\delta r_{i}^{\parallel})\mathbf{p}_{i}+\delta r_{i}^{\perp}\mathbf{p}_{i}^{\perp}. \tag{4}\]

Here, \(v_{0}\tau_{i}\,\mathbf{p}_{i}\) denotes the displacement in the absence of any collision. If one or more collision(s) take place during the run, this displacement is modified by a correction \(\delta\mathbf{r}_{i}\), which is decomposed into longitudinal (along \(\mathbf{p}_{i}\)) and transverse (perpendicular to \(\mathbf{p}_{i}\)) contributions in Eq. (4), where \(\mathbf{p}_{i}^{\perp}=[-\sin\theta_{i},\cos\theta_{i}]\). The displacements \(\delta r_{i}^{\parallel}\) and \(\delta r_{i}^{\perp}\) are random variables that depend on the collision incidence angle \(\alpha\) (to be defined more precisely later) and collision time \(\tau_{c}\), in addition to \(v_{0}\), \(\tau_{i}\) and \(a\). We explain their calculation in detail in Sec. IV. Given Eq. (2), we can estimate the mean squared displacement after \(N\) runs as

\[\langle|\mathbf{r}_{N}-\mathbf{r}_{0}|^{2}\rangle=\sum_{i=1}^{N}\sum_{j=1}^{N}\langle\Delta\mathbf{r}_{i}\cdot\Delta\mathbf{r}_{j}\rangle, \tag{5}\]

where brackets \(\langle\cdot\rangle\) denote an ensemble average over all possible run outcomes (random variables \(\tau_{i}\), \(\mathbf{p}_{i}\), as well as \(\alpha\) and \(\tau_{c}\) for any collisions). Assuming successive runs are uncorrelated and using Eq.
(4), we obtain

\[\langle|\mathbf{r}_{N}-\mathbf{r}_{0}|^{2}\rangle=N\langle(v_{0}\tau)^{2}+2v_{0}\tau\,\delta r_{\parallel}+\delta r_{\parallel}^{2}+\delta r_{\perp}^{2}\rangle. \tag{6}\]

At long times, the mean squared displacement grows linearly, allowing us to define the effective diffusivity \(D\) as

\[D=\lim_{N\to\infty}\frac{1}{4}\frac{\langle|\mathbf{r}_{N}-\mathbf{r}_{0}|^{2}\rangle}{\langle t_{N}\rangle}, \tag{7}\]

i.e., using Eq. (6) and \(\langle t_{N}\rangle=N\overline{\tau}\),

\[D=\frac{v_{0}^{2}\langle\tau^{2}\rangle}{4\overline{\tau}}+\frac{2v_{0}\langle\tau\delta r_{\parallel}\rangle+\langle\delta r_{\parallel}^{2}+\delta r_{\perp}^{2}\rangle}{4\overline{\tau}}. \tag{8}\]

In free space (no collisions, \(\delta\mathbf{r}_{i}=\mathbf{0}\)), this expression reduces to the well known value [1]

\[D_{0}=\frac{v_{0}^{2}\langle\tau^{2}\rangle}{4\overline{\tau}}=\begin{cases}\frac{1}{4}v_{0}^{2}\overline{\tau}&\text{constant run time},\\ \frac{1}{2}v_{0}^{2}\overline{\tau}&\text{exponential distribution},\end{cases} \tag{9}\]

where we have used \(\langle\tau^{2}\rangle=2\overline{\tau}^{2}\) for the exponential distribution. We can then rewrite the diffusivity of Eq. (8) as

\[D=D_{0}[1-f(\text{Pe},\phi)], \tag{10}\]

where the expected decrease in diffusivity due to collisions with pillars is entirely captured by a dimensionless hindrance function

\[f(\text{Pe},\phi)=-\frac{2v_{0}\langle\tau\delta r_{\parallel}\rangle+\langle\delta r_{\parallel}^{2}+\delta r_{\perp}^{2}\rangle}{v_{0}^{2}\langle\tau^{2}\rangle}. \tag{11}\]

The main objective of this paper is to determine the function \(f(\text{Pe},\phi)\) governing the dependence of the diffusivity on Peclet number and area fraction. We first present a theoretical model for \(f(\text{Pe},\phi)\) in dilute media in Sec. IV, and generalize it to the case of arbitrary area fractions using stochastic simulations in Sec. V.

## IV Theory for dilute media

### Collision probabilities and time to collision

We develop an asymptotic theory for the hindrance function \(f(\text{Pe},\phi)\) valid in dilute media where collisions are rare. In this section, we assume that the pillar size \(a\) is uniform and that the run time \(\tau\) is constant; these assumptions will be relaxed in the numerical simulations of Sec. V. For the sake of discussion, we first analyze a single run and seek to estimate the probability that a swimmer will collide with at least one pillar during that run. As mentioned in Sec. II, a run can either start with the swimmer pointing into the bulk, or with the swimmer on a pillar and pointing towards its surface. For reasons that will become clear later, we need to treat these two cases separately as they have distinct collision probabilities and distinct probability density functions for the incidence angle \(\alpha\).

#### iv.1.1 Collision of type A: \(\tau_{c}>0\)

We denote by type A a collision that occurs during a run that started with a swimmer pointing into the bulk. Note that as long as the swimmer points into the bulk, it is irrelevant whether its initial position is actually in the bulk or on the surface of a pillar. Since the initial part of the run will take place in the bulk, any collision of type A will have a strictly positive collision time \(\tau_{c}>0\). The probability for a collision of type A to occur in any given run can be estimated graphically as shown in Fig. 3(a): given that the swimmer points into the bulk, at least one pillar should have its center inside the shaded region with area \(2a\ell\).
In sufficiently dilute media, pillars are distributed randomly inside that region according to Poisson statistics. For a given pillar number density \(n=\phi/\pi a^{2}\), the mean number of pillars inside the shaded region is \[\langle N\rangle=2a\ell n=\frac{2}{\pi}\frac{\ell}{a}\phi=\frac{2}{\pi}\text{ Pe}\,\phi. \tag{12}\] The probability \(P_{\text{A}}^{c}\) for a collision of type A is then estimated as the probability of there being at least one pillar inside the collision region: \[P_{\text{A}}^{c}=P(N\geq 1)=1-\exp\left(-\langle N\rangle\right). \tag{13}\] Expanding for \(\text{Pe}\,\phi\ll 1\), \[P_{\text{A}}^{c}\approx\langle N\rangle=\frac{2}{\pi}\text{Pe}\,\phi. \tag{14}\] In the theoretical analysis presented here, we will assume that no more than one collision can occur during a given run. To quantify the validity of this assumption, we can estimate the probability of there being two or more pillars inside the collision area: \[P(N\geq 2) =1-P(N=0)-P(N=1)\] \[=1-\exp(-\langle N\rangle)-\langle N\rangle\exp(-\langle N\rangle) \tag{15}\] \[\approx\langle N\rangle^{2}.\] The assumption of no more than one collision per run is therefore valid so long as \(\mathrm{Pe}\,\phi=(\ell/a)\phi\ll 1\). Note that this condition involves the current run length \(\ell\) in addition to the pillar area fraction: a swimmer might collide with multiple pillars even in dilute media if its run length is very long. Note that, in the case where \(\tau\) is exponentially distributed, events will inevitably occur for which the run time is long enough that the assumption of no more than one collision breaks down. This effect will be quantified more precisely in the simulations of Sec. V.3. Assuming a collision takes place, whether the swimmer ends its run on the pillar or is able to escape depends on the time \(\tau_{r}\) remaining in the run after impact. We recall that \(\tau_{r}=\tau-\tau_{c}\), where \(\tau\) is the current run time and \(\tau_{c}\) is the time to collision. For a given value of \(\tau\), the location of the pillar is uniformly distributed in the shaded region of Fig. 3(a), which implies a uniform distribution for the collision time: \[p_{\mathrm{A}}(\tau_{c})=\frac{1}{\tau},\qquad\tau_{c}\in(0,\tau]. \tag{16}\] Since \(\tau_{r}=\tau-\tau_{c}\), the remaining time after collision follows the same distribution: \[p_{\mathrm{A}}(\tau_{r})=\frac{1}{\tau},\qquad\tau_{r}\in[0,\tau). \tag{17}\] #### iv.1.2 Collision of type B: \(\tau_{c}=0\) A collision of type B is defined as an event where the swimmer begins its run on the surface of a pillar with a new post-tumble orientation that points into the pillar [Fig. 3(b)]. For a collision of type B to occur, the previous run must have involved a collision (of either type A or B) in which the swimmer did not escape the pillar and thus ended its run on the surface. In that case, the new run starts with a collision with \(\tau_{c}=0\). Estimating the probability \(P_{\mathrm{B}}^{c}\) for a collision of type B is slightly more subtle, as it involves information about the previous run. We can obtain it as \[P_{\mathrm{B}}^{c}=\frac{1}{2}\left[P_{\mathrm{A}}^{c}(1-P_{\mathrm{A}}^{esc })+P_{\mathrm{B}}^{c}(1-P_{\mathrm{B}}^{esc})\right], \tag{18}\] where \(P_{\mathrm{A}}^{esc}\) and \(P_{\mathrm{B}}^{esc}\) denote the probabilities of a swimmer escaping the pillar before the end of its run during a collision of either type A or B; the calculation of these probabilities involves consideration of the dynamics during collision and is deferred to Sec. 
IV.3. The factor of \(1/2\) in Eq. (18) comes from the fact that a swimmer tumbling on the surface of a pillar has equal probabilities of selecting a new orientation pointing into the pillar (leading to a collision of type B) or into the bulk. Solving for \(P_{\mathrm{B}}^{c}\) in Eq. (18) yields \[P_{\mathrm{B}}^{c}=\left(\frac{1-P_{\mathrm{A}}^{esc}}{1+P_{\mathrm{B}}^{esc }}\right)P_{\mathrm{A}}^{c}, \tag{19}\] where \(P_{\mathrm{A}}^{c}\) was obtained in Eq. (13). Since collisions of type B are such that \(\tau_{c}=0\), the corresponding probability density functions for the collision and remaining times are trivial: \[p_{\mathrm{B}}(\tau_{c})=\delta(\tau_{c}),\qquad p_{\mathrm{B}}(\tau_{r})= \delta(\tau_{r}-\tau). \tag{20}\] ### Dynamics during collision We now turn to the dynamics during a collision, and analyze swimmer motion after it first impacts with the pillar and still has time \(\tau_{r}\) remaining before its next tumble. A schematic of a collision is shown in Fig. 4. For the purpose of calculating the displacement \(\delta\mathbf{r}\), we lose no generality by choosing a Cartesian coordinate system with the \(x\) axis aligned with the current swimming direction \(\mathbf{p}\) and the origin at the center of the pillar. We denote by \(C\) the position of the collision point, which forms an angle \(\alpha\in[-\pi/2,\pi/2]\) with the negative \(x\) axis. Due to the symmetry \(\alpha\leftrightarrow-\alpha\), we can restrict our attention to collisions for which \(\alpha\geq 0\). Note that the incidence angle is a random variable, whose probability density function depends on the type of collision. For a collision of type A, the normal coordinate \(y_{c}=a\sin\alpha\) is uniformly distributed over \([-a,a]\) since the pillar location is uniformly distributed in the shaded region of Fig. 3(a), and therefore \[p_{\mathrm{A}}(\alpha)=\cos\alpha,\qquad\alpha\in[0,\pi/2]. \tag{21}\] Figure 3: (a) Collision of type A: for a swimmer initially pointing into the bulk, a collision will occur if the shaded region, of area \(2a\ell\), contains at least one pillar. Dotted circles show the envelope of pillar positions with which a collision can occur. (b) Collision of type B: a swimmer performing a tumble on the surface of a pillar such that its new orientation points into the pillar will start its new run with a collision. However, for collisions of type B, the angle \(\alpha\) itself is uniformly distributed, i.e., \[p_{\rm B}(\alpha)=\frac{2}{\pi},\qquad\alpha\in[0,\pi/2]. \tag{22}\] As the swimmer moves along the pillar surface, its orientation \(\mathbf{p}\) does not change by assumption. Instead, the swimmer slides with tangential velocity \(v_{0}(\mathbf{I}-\hat{\mathbf{n}}\hat{\mathbf{n}})\cdot\mathbf{p}\), where \(\hat{\mathbf{n}}\) is the unit normal on the surface. This translates into the angular velocity \[\frac{\mathrm{d}\theta}{\mathrm{d}t}=\frac{v_{0}}{a}\sin\theta, \tag{23}\] where the angle \(\theta(t)\) defines the angular position of the swimmer on the pillar as shown in Fig. 4. This can be integrated as \[\int_{\alpha}^{\theta}\frac{\mathrm{d}\theta}{\sin\theta}=\int_{0}^{t}\frac{v _{0}}{a}\,\mathrm{d}t, \tag{24}\] i.e., \[\log\left[\frac{\tan(\theta/2)}{\tan(\alpha/2)}\right]=\frac{v_{0}}{a}t, \tag{25}\] where we have chosen the origin of time \(t=0\) as the instant when contact first takes place: \(\theta(0)=\alpha\). There are two possible outcomes to a collision. 
If \(\theta\) reaches \(\pi/2\) before the end of the run, the swimmer escapes the pillar at point \(E\) in Fig. 3(b) and finishes its run in a straight line. Otherwise, the current run will end at some location \(\theta_{f}\in[\alpha,\pi/2)\) where the next tumble will take place. The time for the swimmer to reach \(E\), or escape time \(t_{e}\), is found by setting \(\theta=\pi/2\) in Eq. (25): \[t_{e}(\alpha)=-\frac{a}{v_{0}}\log\tan(\alpha/2). \tag{26}\] The escape time is plotted in Fig. 5(a) and shows a strong dependence on incidence angle \(\alpha\), with \(t_{e}(\alpha)\to\infty\) as \(\alpha\to 0\). Indeed, a swimmer hitting a pillar nearly head-on (\(\alpha\gtrsim 0\)) initially slides very slowly as its tangential velocity goes as \(\sin\alpha\), whereas a swimmer hitting a pillar nearly tangentially (\(\alpha\lesssim\pi/2\)) is able to escape after a short time. For the swimmer to escape before the end of the current run, the remaining time \(\tau_{r}\) after contact should exceed the escape time: \[\tau_{r}\geq t_{e}(\alpha). \tag{27}\] For a given value of \(\tau_{r}\), this gives a condition on the incidence angle: the swimmer will escape if \(\alpha\geq\alpha_{c}\) where \[\alpha_{c}(\tau_{r})=2\tan^{-1}[\exp(-v_{0}\tau_{r}/a)], \tag{28}\] but will finish the current run on the surface of the pillar otherwise; see Fig. 5(b). If the swimmer escapes, it continues its run in the \(x\) direction after leaving the surface of the pillar at point \(E\), for a duration of \(\tau_{r}-t_{e}(\alpha)\). We can now estimate the longitudinal and transverse displacements incurred by the collision with the pillar. We first consider the case where the swimmer escapes the pillar at point \(E\), i.e., \(\alpha\geq\alpha_{c}\) or \(\tau_{r}\geq t_{e}\). In the \(x\) direction, the swimmer undergoes a displacement of \(a\cos\alpha\) over the course of the collision, while it would have travelled a distance of \(v_{0}t_{e}(\alpha)\) during the same amount of time, had there been no collision. Therefore, \[\Delta x=a\left[\cos\alpha+\log\tan(\alpha/2)\right]. \tag{29}\] In the transverse direction, the displacement is easily obtained as \[\Delta y=a(1-\sin\alpha). \tag{30}\] On the other hand, if the run time elapses before the swimmer escapes, i.e., \(\alpha<\alpha_{c}\) or \(\tau_{r}<t_{e}\), the swimmer will finish the current run at angular position \(\theta_{f}\) on the pillar surface, where \[\theta_{f}(\alpha,\tau_{r})=2\tan^{-1}\left[\frac{\tan(\alpha/2)}{\exp(-v_{0} \tau_{r}/a)}\right]. \tag{31}\] Figure 4: Collision dynamics: we choose a Cartesian coordinate system as shown, with the \(x\) direction aligned with \(\mathbf{p}\). The swimmer collides at point \(C\) (incidence angle \(\alpha\)) and slides on the surface of the pillar according to the projection of \(\mathbf{p}\) in the tangent direction (blue arrows), where \(\theta(t)\) denotes the instantaneous angle between the position vector and the negative \(x\) axis. If the run is long enough, the swimmer can escape as it reaches point \(E\) where \(\mathbf{p}\) becomes tangent with the surface. Figure 5: (a) Escape time \(t_{e}(\alpha)\) as a function of incidence angle. The swimmer will escape if \(\tau_{r}\geq t_{e}(\alpha)\). (b) Critical angle for escape: for the swimmer to escape, its incidence angle must fall outside of a wedge of angle \(2\alpha_{c}\). 
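The closed-form escape time in Eq. (26) follows directly from integrating Eq. (23). As a quick sanity check, the short Python sketch below integrates the sliding equation numerically with a simple Euler scheme and compares the time at which \(\theta\) reaches \(\pi/2\) to the analytical expression; the parameter values are illustrative and the swimmer and pillar scales are set to \(v_{0}=a=1\).

```python
import numpy as np

def escape_time(alpha, v0=1.0, a=1.0):
    """Analytical escape time t_e(alpha) of Eq. (26)."""
    return -(a / v0) * np.log(np.tan(alpha / 2))

def escape_time_numerical(alpha, v0=1.0, a=1.0, dt=1e-5):
    """Integrate d(theta)/dt = (v0/a) sin(theta), Eq. (23), until theta = pi/2."""
    theta, t = alpha, 0.0
    while theta < np.pi / 2:
        theta += dt * (v0 / a) * np.sin(theta)   # forward Euler step
        t += dt
    return t

for alpha in (0.2, 0.5, 1.0):
    print(alpha, escape_time(alpha), escape_time_numerical(alpha))
```

The two estimates agree up to the discretisation error of the Euler scheme, and both reproduce the strong growth of \(t_{e}\) at small incidence angles.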
In the longitudinal direction, the displacement over the course of the collision is \(a[\cos\alpha-\cos\theta_{f}]\), whereas it would have been \(v_{0}\tau_{r}\) in the absence of collision. Therefore

\[\Delta x=a(\cos\alpha-\cos\theta_{f})-v_{0}\tau_{r}, \tag{32}\]

while the transverse displacement is simply given by

\[\Delta y=a(\sin\theta_{f}-\sin\alpha). \tag{33}\]

In summary, the longitudinal and transverse displacements incurred by a collision are expressed as

\[\frac{\delta r_{\parallel}}{a}=\begin{cases}\cos\alpha+\log\tan(\alpha/2)&\alpha\geq\alpha_{c},\\ \cos\alpha-\cos\theta_{f}-v_{0}\tau_{r}/a&\alpha<\alpha_{c},\end{cases} \tag{34}\]

and

\[\frac{|\delta r_{\perp}|}{a}=\begin{cases}1-\sin\alpha&\alpha\geq\alpha_{c},\\ \sin\theta_{f}-\sin\alpha&\alpha<\alpha_{c},\end{cases} \tag{35}\]

where \(\theta_{f}\) is given by Eq. (31). Note that \(\delta r_{\parallel}\leq 0\), whereas \(\delta r_{\perp}\) is of either sign by symmetry: collisions hinder longitudinal transport but induce transverse motion of either sign. The displacements \(\delta r_{\parallel}\) and \(|\delta r_{\perp}|\) are plotted vs incidence angle \(\alpha\) in Fig. 6. As expected, collisions have the greatest effect on transport at vanishing incidence angles (\(\alpha\to 0\)), for which \(\delta r_{\parallel}\to-v_{0}\tau_{r}\) and \(|\delta r_{\perp}|\to a\) for large \(v_{0}\tau_{r}/a\).

### Probability of escape

We are now in a position to calculate the escape probabilities \(P_{\rm A}^{esc}\) and \(P_{\rm B}^{esc}\) for each type of collision, which are needed to estimate the collision probability \(P_{\rm B}^{c}\) in Eq. (19). For a given collision, escape will occur if the condition of Eq. (27) is met. Therefore, taking into account all possible incidence angles,

\[P^{esc}=\int_{0}^{\pi/2}\left[1-P(\tau_{r}\leq t_{e}(\alpha))\right]\,p(\alpha)\,\mathrm{d}\alpha, \tag{36}\]
\[=\int_{0}^{\pi/2}\left[1-\int_{0}^{t_{e}}p(\tau_{r})\,\mathrm{d}\tau_{r}\right]p(\alpha)\,\mathrm{d}\alpha. \tag{37}\]

Inserting the probability density functions \(p(\tau_{r})\) and \(p(\alpha)\) for each type of collision, as provided in Eqs. (17), (20), (21) and (22), we obtain after simplifications

\[P_{\rm A}^{esc}=1-\frac{1}{\mathrm{Pe}}\left(\frac{\pi}{2}-\alpha_{0}\right), \tag{38}\]
\[P_{\rm B}^{esc}=1-\frac{2}{\pi}\alpha_{0}, \tag{39}\]

where \(\alpha_{0}=\alpha_{c}(\tau)=2\tan^{-1}[\exp(-\mathrm{Pe})]\) is the critical angle for escape for a collision with \(\tau_{c}\to 0\). The two escape probabilities \(P_{\rm A}^{esc}\) and \(P_{\rm B}^{esc}\) only depend on the Peclet number and are plotted in Fig. 7. For both types of collisions, the escape probability \(P^{esc}\) increases monotonically with \(\mathrm{Pe}\), vanishes in the limit of short runs (\(\mathrm{Pe}\to 0\)) and tends to \(1\) in the limit of long runs (\(\mathrm{Pe}\to\infty\)). Collisions of type B are more likely to lead to an escape than collisions of type A as they have maximum remaining time \(\tau_{r}=\tau\).

### Displacement statistics and hindrance function

In the case of constant run time \(\tau\), the hindrance function introduced in Eq. (11) simplifies to

\[f(\mathrm{Pe},\phi)=-\frac{2\langle\delta r_{\parallel}\rangle}{\mathrm{Pe}}-\frac{\langle\delta r_{\parallel}^{2}\rangle+\langle\delta r_{\perp}^{2}\rangle}{\mathrm{Pe}^{2}}. \tag{40}\]

We obtained analytical expressions for the displacements \(\delta r_{\parallel}\) and \(\delta r_{\perp}\) in Eqs. (34)-(35). The ensemble average in Eq.
(40) is evaluated over all possible outcomes of a run:

\[\begin{split}\langle\chi\rangle&=P_{\rm A}^{c}\int_{0}^{\tau}\int_{0}^{\pi/2}\chi\,p_{\rm A}(\alpha)\,p_{\rm A}(\tau_{r})\,\mathrm{d}\alpha\,\mathrm{d}\tau_{r}\\ &\quad+P_{\rm B}^{c}\int_{0}^{\tau}\int_{0}^{\pi/2}\chi\,p_{\rm B}(\alpha)\,p_{\rm B}(\tau_{r})\,\mathrm{d}\alpha\,\mathrm{d}\tau_{r},\end{split} \tag{41}\]

where the various probability density functions are given in Eqs. (17)-(20) and (21)-(22). Note that \(\tau=\mathrm{Pe}\) in dimensionless variables. The only dependence on area fraction \(\phi\) in Eq. (41) is through the prefactors of \(P_{\rm A}^{c}\) and \(P_{\rm B}^{c}\), which are both proportional to \(1-\exp[-(2/\pi)\mathrm{Pe}\,\phi]\).

Figure 6: (a) Longitudinal displacement \(\delta r_{\parallel}/a\) and (b) transverse displacement \(|\delta r_{\perp}|/a\) as functions of incidence angle \(\alpha\), for different values of \(v_{0}\tau_{r}/a\) where \(\tau_{r}\) is the remaining time in the run after collision.

Figure 7: Escape probability \(P^{esc}\) for a collision of type A or B as a function of Péclet number, as obtained in Eqs. (38)–(39).

In the limit of low volume fraction and small Peclet number, \(\mathrm{Pe},\phi\to 0\), asymptotic expansions of the average displacements can be obtained, with leading-order contributions given by:

\[\langle\delta_{\parallel}\rangle\approx-\frac{5}{3\pi}\mathrm{Pe}^{2}\phi, \tag{42}\]
\[\langle\delta_{\parallel}^{2}\rangle\approx\frac{199}{180\pi}\mathrm{Pe}^{3}\phi, \tag{43}\]
\[\langle\delta_{\perp}^{2}\rangle\approx\frac{61}{180\pi}\mathrm{Pe}^{3}\phi, \tag{44}\]

from which the hindrance function is obtained as

\[f(\mathrm{Pe},\phi)\approx\frac{17}{9\pi}\mathrm{Pe}\,\phi. \tag{45}\]

At arbitrary values of \(\mathrm{Pe}\) and \(\phi\), the integrals in Eq. (41) can be evaluated using numerical quadrature. We discuss results from this calculation in Sec. V.2, where we compare the dilute theory predictions to event-based stochastic simulations valid for a wide range of \(\mathrm{Pe}\) and \(\phi\).

## V Results and Discussion

### Event-based stochastic simulations

We perform event-based stochastic simulations of run-and-tumble microswimmer trajectories through randomly generated porous geometries. \(N_{t}\) non-overlapping pillars are distributed at random inside a square periodic box to achieve the desired area fraction. The pillars can be either of uniform size or polydisperse (see Sec. V.3). The simulations track the positions of non-interacting run-and-tumble swimmers whose kinematics follow the assumptions of Sec. II. At the start of each run, the next run time and a new random orientation are selected, potential collisions are detected, and the swimmer position is advanced until the end of the run, where the location of potential collision and escape points is obtained analytically based on the calculations of Sec. IV.2. Multiple collisions can occur during one run. For each swimmer trajectory, the simulation records the times and locations of all tumbles, collisions and escape points.

Figure 8: Event-based stochastic simulations in random polydisperse media. (a) Single swimmer trajectories consisting of 100 runs of constant run time, for various combinations of \(\mathrm{Pe}\) (columns) and \(\phi\) (rows). Red, yellow and green symbols show the location of tumbles, collisions and escape points. Also see movies in the Supplemental Material [58]. (b) Locations of 5000 random tumbles in simulations with \(\phi=0.65\) for two different values of \(\mathrm{Pe}\), where the locations of tumbles occurring in the bulk or on the surface of a pillar are highlighted in red and blue, respectively. In all simulations shown, pillar radii were drawn from a Gaussian distribution with mean \(\overline{a}=1\) and standard deviation \(\sigma_{a}/\overline{a}=0.5\), and periodic boundary conditions are used at the edges of the square domain marked by a dotted line.
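In the dilute limit, these event-based simulations reduce to sampling the single-collision statistics of Sec. IV, which provides a convenient cross-check of Eqs. (40)-(45). The following Python sketch performs that reduced Monte Carlo, assuming constant run time, at most one collision per run, and dimensionless units \(a=v_{0}=1\); the parameter values and sample size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def hindrance_dilute_mc(Pe, phi, n_runs=200_000):
    """Monte Carlo estimate of f(Pe, phi) from the dilute theory of Sec. IV
    (constant run time, at most one collision per run, units a = v0 = 1)."""
    tau = Pe                                    # run time = run length = Pe
    alpha0 = 2 * np.arctan(np.exp(-Pe))         # critical angle for tau_r = tau
    PA_esc = 1 - (np.pi / 2 - alpha0) / Pe      # Eq. (38)
    PB_esc = 1 - 2 * alpha0 / np.pi             # Eq. (39)
    PA = 1 - np.exp(-2 * Pe * phi / np.pi)      # Eq. (13)
    PB = PA * (1 - PA_esc) / (1 + PB_esc)       # Eq. (19)

    d_par = np.zeros(n_runs)                    # delta r_parallel, in units of a
    d_perp = np.zeros(n_runs)                   # |delta r_perp|, in units of a
    u = rng.random(n_runs)
    for i in range(n_runs):
        if u[i] < PA:                           # type-A collision
            alpha = np.arcsin(rng.random())     # p_A(alpha) = cos(alpha), Eq. (21)
            tau_r = tau * rng.random()          # uniform remaining time, Eq. (17)
        elif u[i] < PA + PB:                    # type-B collision
            alpha = 0.5 * np.pi * rng.random()  # p_B(alpha) = 2/pi, Eq. (22)
            tau_r = tau                         # Eq. (20)
        else:
            continue                            # no collision during this run
        alpha_c = 2 * np.arctan(np.exp(-tau_r))                 # Eq. (28)
        if alpha >= alpha_c:                    # escape: upper cases of Eqs. (34)-(35)
            d_par[i] = np.cos(alpha) + np.log(np.tan(alpha / 2))
            d_perp[i] = 1 - np.sin(alpha)
        else:                                   # no escape: Eqs. (31), (34)-(35)
            theta_f = 2 * np.arctan(np.tan(alpha / 2) * np.exp(tau_r))
            d_par[i] = np.cos(alpha) - np.cos(theta_f) - tau_r
            d_perp[i] = np.sin(theta_f) - np.sin(alpha)
    # hindrance function, Eq. (40)
    return -2 * d_par.mean() / Pe - (d_par**2 + d_perp**2).mean() / Pe**2

Pe, phi = 0.5, 0.05
print(hindrance_dilute_mc(Pe, phi), 17 / (9 * np.pi) * Pe * phi)   # MC vs Eq. (45)
```

The two printed values approach each other as \(\mathrm{Pe}\,\phi\to 0\); at finite \(\mathrm{Pe}\) the Monte Carlo estimate retains the higher-order contributions that the leading-order asymptote of Eq. (45) discards.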
The simulation box is typically chosen to be significantly larger than the mean run length, so that the statistics are unaffected by the periodic boundary conditions. Typical trajectories showing the locations of these points in simulations with constant run time but varying pillar size are plotted in Fig. 8(a) for different combinations of Peclet number and area fraction (also see movies in the Supplemental Material [58]). As expected, the most efficient dispersion occurs in dilute media at large Pe (long runs that are largely unimpeded by the medium), and increasing area fraction strongly hinders dispersion for all Peclet numbers. As Pe increases, the swimmers spend a greater fraction of their time sliding on the surface of pillars. This is illustrated in Fig. 8(b), showing the locations of 5000 tumbles for two values of Pe: as Peclet number increases and swimmer trajectories become more persistent, a larger fraction of tumbles occurs on the surface of pillars. We quantify some of these trends further in the following sections. The calculation of the diffusivity from simulation data is illustrated in Fig. 9, showing the growth of the mean squared displacement for 10 individual trajectories, as well as an average over an ensemble of 1000 trajectories.

### Constant run time and pillar size

We center the following discussion on results in systems with constant run time and uniform pillar size, which are the assumptions of the theoretical model of Sec. IV. The effects of variable run time and pillar size will be briefly considered in numerical simulations in Sec. V.3.

#### v.2.1 Collision probabilities

We first analyze collision probabilities in Fig. 10, where we compare results from stochastic simulations with theoretical predictions. Figure 10(a) shows the probability \(P_{c}=P_{\rm A}^{c}+P_{\rm B}^{c}\) of having at least one collision (of either type A or B) within a given run. The dilute theory of Sec. IV provides the expression

\[P_{c}=\frac{2-P_{\rm A}^{esc}+P_{\rm B}^{esc}}{1+P_{\rm B}^{esc}}\left[1-\exp\left(-\frac{2}{\pi}\mathrm{Pe}\,\phi\right)\right], \tag{46}\]

where the escape probabilities \(P_{\rm A}^{esc}\) and \(P_{\rm B}^{esc}\) are functions of Pe only and were obtained in Eqs. (38)-(39). Remarkably, the dilute theory provides an excellent quantitative estimate of \(P_{c}\) over a wide range of area fractions and Peclet numbers, well beyond its expected range of validity. In very sparse media (\(\phi\ll 1\)), the collision probability \(P_{c}\) increases linearly with both \(\phi\) and Pe, while it is found to saturate with respect to Pe in denser media.

Figure 9: Mean squared displacement as a function of time in a typical simulation with uniform pillar size and constant run time. Gray curves show the square displacement \(|\mathbf{r}_{N}-\mathbf{r}_{0}|^{2}\) for 10 individual stochastic simulations with distinct random seeds. The blue curve shows the mean squared displacement \(\langle|\mathbf{r}_{N}-\mathbf{r}_{0}|^{2}\rangle\) obtained as an average over 1000 trajectories. A linear fit is used to obtain the diffusivity \(D\) as the quarter slope.

Figure 10: (a) Probability \(P_{c}\) of having at least one collision (of either type A or B) within a given run, scaled by \(\phi\) and plotted as a function of Péclet number for various area fractions. Symbols show results from stochastic simulations with uniform pillar size and constant run time, and lines show the theoretical prediction of Eq. (46).
(b)–(c) Mean numbers of collisions of type A (b) or type B (c) in any given run as functions of Péclet number for various area fractions, from stochastic simulations.

In the limit of \(\mathrm{Pe}\to\infty\), every run will incur at least one collision, so that \(P_{c}\to 1\). Note that while the dilute theory assumes that at most one collision can take place during one run, such is not the case in simulations. To quantify this further, we plot in Fig. 10(b,c) the mean numbers \(\langle N_{\mathrm{A}}^{c}\rangle\) and \(\langle N_{\mathrm{B}}^{c}\rangle\) of collisions of type A and B in any given run, from stochastic simulations. Multiple collisions of type A can occur in a run, especially in dense media at high Peclet numbers. Indeed, we find that \(\langle N_{\mathrm{A}}^{c}\rangle\) increases nearly linearly with both \(\phi\) and Pe, and exceeds 1 at sufficiently large values of either \(\phi\) or Pe. We expect the dilute theory of Sec. IV to be inaccurate in those regimes, since it assumes that at most one collision occurs per run. On the other hand, there cannot be more than one collision of type B in a given run: \(N_{\mathrm{B}}^{c}\in\{0,1\}\) and therefore \(\langle N_{\mathrm{B}}^{c}\rangle<1\) as seen in Fig. 10(c). For all area fractions, \(\langle N_{\mathrm{B}}^{c}\rangle\) first increases with Pe to reach a plateau for \(\mathrm{Pe}\gtrsim 2\), with the value of the plateau displaying a linear dependence on \(\phi\).

#### v.2.2 Displacement statistics and hindrance function

Next, we turn to displacement statistics, focusing on the limit of low area fraction and Peclet number. Figure 11 shows the relevant statistics entering the calculation of the hindrance function in Eq. (40) as functions of Peclet number for various area fractions: panel (a) shows the mean longitudinal displacement \(\langle\delta_{\parallel}\rangle\) scaled by \(\mathrm{Pe}\,\phi\), whereas panels (b) and (c) show the variances of the longitudinal and transverse displacements, \(\langle\delta_{\parallel}^{2}\rangle\) and \(\langle\delta_{\perp}^{2}\rangle\), respectively, both scaled by \(\mathrm{Pe}^{2}\phi\). At low Peclet number, all the displacements collapse and are very well captured by the asymptotic results of Eqs. (42)-(44), which predict a linear dependence on \(\phi\), as well as a linear dependence on \(\mathrm{Pe}\) upon rescaling. As the Peclet number is increased, the growth of the displacement statistics with Pe slows down and ultimately saturates, yet the collapse with respect to area fraction persists. The dilute theory of Sec. IV is found to provide excellent quantitative predictions for \(\phi\lesssim 0.01\) over the range of Peclet numbers considered here.
Departures are observed at larger volume fractions when \(\mathrm{Pe}\gtrsim 1\), beyond which the dilute theory underpredicts displacements: this can be attributed to the fact that the dilute theory assumes at most one collision per run, whereas multiple collisions of type A typically occur in that regime in simulations, as previously found in Fig. 10(b). Finally, we note that the magnitude of \(\langle\delta_{\perp}^{2}\rangle/\mathrm{Pe}^{2}\phi\) is notably smaller than \(\langle\delta_{\parallel}\rangle/\mathrm{Pe}\,\phi\) and \(\langle\delta_{\parallel}^{2}\rangle/\mathrm{Pe}^{2}\phi\), indicating that the leading contribution to the hindrance function comes from the reduction in longitudinal displacements.

The hindrance function \(f(\mathrm{Pe},\phi)\) is analyzed in Fig. 12, where we compare results from stochastic simulations (symbols) with the predictions from the dilute theory (lines). The dependence on area fraction is shown in Fig. 12(a), showing \(f\) as a function of \(\phi\) for various Peclet numbers. The hindrance is found to grow nearly linearly with \(\phi\) for all values of Pe considered here, as expected from the collapse of the displacement statistics upon scaling by \(\phi\) in Fig. 11. Good agreement with the theoretical prediction is observed, especially at low \(\phi\) and Pe, consistent with the assumptions of the theory; departures are observed as \(\phi\) increases, where the theory systematically underpredicts the hindrance function. The dependence on Peclet number is illustrated in Fig. 12(b), where we show \(f\) scaled by \(\phi\) as a function of Pe. At low Peclet number, the simulation data matches the theoretical model very well and collapses onto the asymptotic prediction of Eq. (45), which predicts a linear dependence on Pe. Upon increasing the Peclet number, the growth of \(f/\phi\) slows down and ultimately saturates, reaching a plateau whose value depends weakly on \(\phi\), with larger values attained at lower area fractions. Consistent with the observations in Fig. 11, the dilute theory for the hindrance function is found to provide an excellent fit to the data in dilute media (\(\phi\lesssim 0.01\)) even when the Peclet number is large, but it significantly underpredicts \(f\) at larger values of \(\phi\), due to the preponderance of runs with multiple collisions.

Figure 11: Displacement statistics at low area fraction and Péclet number: (a) \(\langle\delta_{\parallel}\rangle/\mathrm{Pe}\,\phi\), (b) \(\langle\delta_{\parallel}^{2}\rangle/\mathrm{Pe}^{2}\phi\), and (c) \(\langle\delta_{\perp}^{2}\rangle/\mathrm{Pe}^{2}\phi\), plotted as functions of Péclet number for various area fractions \(\phi\). In each case, symbols show results from stochastic simulations with uniform pillar size and constant run time, whereas full lines show theoretical predictions based on the dilute theory of Sec. IV. Dotted grey lines show the theoretical asymptotes of Eqs. (42)–(44) in the limit of \(\mathrm{Pe},\phi\to 0\).

### Variable run time and pillar size

The previous results have exclusively considered the case of constant run time and monodisperse media--two assumptions that are convenient for theoretical analysis but unlikely to be met in many experimental systems of interest. Here, we relax these assumptions and analyze the effects of varying run time and pillar size using stochastic simulations. We first consider the effect of obstacle polydispersity on the hindrance function in Fig. 13(a).
Porous media of increasing polydispersity were generated by drawing pillar radii from Gaussian distributions of increasing widths (while rejecting negative values). The generated distributions were then rescaled affinely to have mean \(1\), and their measured standard deviations \(\sigma_{a}\) are reported in the figure. Weak polydispersity (\(\sigma_{a}/a=0.1\)) has only a negligible effect on dispersion. The hindrance function, however, is reduced by up to \(\sim 20\%\) in highly polydisperse media (\(\sigma_{a}/a=0.5\) and \(0.8\)), with the strongest effect occurring for intermediate Peclet numbers (\(\mathrm{Pe}\sim 2-6\)). That dispersion is easier in a polydisperse medium is, perhaps, an intuitive result, for the same reason that it is easier to pack polydisperse particles than monodisperse ones. The decrease in \(f\) can simply be explained by a decrease in the mean number of collisions per run, \(\langle N_{c}\rangle=\langle N_{\mathrm{A}}^{c}+N_{\mathrm{B}}^{c}\rangle\), as polydispersity becomes significant; see inset of Fig. 13(a). The effect of variable \(\tau\) is analyzed in Fig. 13(b), comparing the hindrance function for constant and exponentially distributed run times, in a system with uniform pillars and \(\phi=0.2\). In this case, variations in run time cause an increase in the value of \(f\), especially at low to intermediate Peclet numbers (\(\mathrm{Pe}\sim 1-4\)).

Figure 12: (a) Hindrance function \(f\) as a function of area fraction \(\phi\) for various values of Péclet number Pe. (b) Hindrance function \(f\), scaled by \(\phi\), as a function of Pe for various values of \(\phi\). In both panels, symbols show results from stochastic simulations with uniform pillar size and constant run time, and full lines show theoretical predictions from the dilute theory of Sec. IV. Dotted line in (b) shows the low-Pe asymptote of Eq. (45).

Figure 13: Scaled hindrance function \(f/\phi\) as a function of Péclet number, for: (a) simulations with constant run time in several polydisperse media, where \(\sigma_{a}\) is the standard deviation of the pillar radius distribution; and (b) simulations with uniform pillar size and either constant or exponentially distributed run times. In both panels, \(\phi=0.2\). The insets show the average number of collisions of any type per run, \(\langle N_{c}\rangle=\langle N_{\mathrm{A}}^{c}+N_{\mathrm{B}}^{c}\rangle\), for the same conditions.
Under simple assumptions for the interaction of the microswimmers with the microstructure, we were able to obtain an analytical expression for the hindrance function in the dilute limit of \(\mathrm{Pe}\,\phi\ll 1\), and stochastic simulations were performed to extend this result to the case of denser media. The hindrance function was shown to depend nearly linearly on area fraction over a wide range of parameter values--an intuitive result since the number of collisions incurred during a run increases linearly with \(\phi\). The dependence on Peclet number was also found to be linear at low values of \(\mathrm{Pe}\), but to saturate at larger values of \(\mathrm{Pe}\). While the analytical prediction captured the data very well for \(\mathrm{Pe}\lesssim O(1)\), it was found to underestimate the hindrance function at moderate to high Peclet numbers in relatively dense media, where multiple collisions can occur during a given run. Because of its relative simplicity and ease of analysis, the framework proposed here provides a basis for the interpretation and analysis of experimental data and for the benchmarking of more complex models. We emphasize that the model we developed here relies on strong simplifying assumptions that may not be satisfied in many experimental systems. We only considered two-dimensional systems composed of circular non-overlapping pillars: while such geometries have indeed been analyzed in microfluidic experiments [15; 16; 36; 20], natural disordered media typically involve three-dimensional microstructures that are significantly more complex. Extending our model to three dimensions is tedious but relatively straightforward; allowing for overlapping or non-circular occlusions, however, is significantly more involved and unlikely to be tractable analytically. The role of obstacle shape is expected to be of particular interest: non-convex obstacles may indeed result in trapping of microswimmers with a strong effect on dispersion [59], whereas asymmetric shapes can induce a net drift by a rectification mechanism [44; 35]. Note also that our model assumed point-sized microswimmers, which are able to pass through arbitrarily thin gaps. In reality, finite-sized swimmers may get trapped when attempting to travel through thin gaps, forcing them to reverse direction as has been observed in experiments on bacteria in dense media [11; 12]; accounting for this motility strategy requires distinct modeling choices [60; 61] easily incorporated in a framework such as ours. Another major assumption of our model is that of frictionless sliding during collisions, with no change to the swimmer orientation. In particular, this assumes that interactions are purely steric and that hydrodynamic effects are negligible. Experiments on various systems have shown that hydrodynamic interactions can reorient and trap microswimmers near circular obstacles [7; 27], as can chemical interactions in the case of self-phoretic particles [24; 56]. Other types of active particles, e.g., Quincke rollers, may also undergo more complex scattering dynamics [26]. Accounting for such effects in our model is possible in principle. Understanding the role of external fields, such as applied flows [16; 36] or chemical gradients [62; 63], is also an open problem of great interest, which would require solving for the local velocity or chemical field in the porous matrix, for instance using the boundary element method. 
Finally we note that our model has focused on the transport of dilute non-interacting swimmer suspensions: the case of semi-dilute to dense suspensions, which can undergo spontaneous flow transitions in confinement [64], has been considered in a few experimental [65; 66; 67] and computational [68] studies in periodic porous media, but remains an open area of investigation. Some of these open questions will be addressed in future work. ###### Acknowledgements. The author thanks Can Yang and Antoine Beringer for help with preliminary simulations, and Tanumoy Dhar for useful conversations. This work was funded by National Science Foundation Grant CBET-1934199.
2310.11419
A note on an effective bound for the gonality conjecture
The gonality conjecture, proved by Ein--Lazarsfeld, asserts that the gonality of a nonsingular projective curve of genus $g$ can be detected from its syzygies in the embedding given by a line bundle of sufficiently large degree. An effective result obtained by Rathmann says that any line bundle of degree at least 4g-3 would work in the gonality theorem. In this note, we improve the degree bound to 4g-4 with two exceptional cases.
Alexander Duncan, Wenbo Niu, Jinhyung Park
2023-10-17T17:27:12Z
http://arxiv.org/abs/2310.11419v1
# A note on an effective bound for the gonality conjecture ###### Abstract. The gonality conjecture, proved by Ein-Lazarsfeld, asserts that the gonality of a nonsingular projective curve of genus \(g\) can be detected from its syzygies in the embedding given by a line bundle of sufficiently large degree. An effective result obtained by Rathmann says that any line bundle of degree at least \(4g-3\) would work in the gonality theorem. In this note, we improve the degree bound to \(4g-4\) with two exceptional cases. Key words and phrases:gonality conjecture, syzygies of an algebraic curve, symmetric product of an algebraic curve 2020 Mathematics Subject Classification: 14Q20, 13A10 W. Niu was supported by the Simons Collaboration Grants for Mathematicians J. Park was partially supported by the National Research Foundation (NRF) funded by the Korea government (MSIT) (NRF-2021R1C1C1005479). where \(\operatorname{gon}(C)\) is the gonality of \(C\) which by definition is the minimal degree of pencils on \(C\). As pointed out in [10], although the degree bound above is not expected to be optimal, there is an example of a plane quartic curve showing the degree bound \(4g-4\) does not work. In this short note, we investigate the failure of the gonality conjecture when \(\deg L=4g-4\). The main result is the following. **Theorem 1.1**.: _Let \(C\) be a nonsingular projective curve of genus \(g\geq 2\), and \(L\) be a line bundle on \(C\) with \(\deg L\geq 4g-4\). Then_ \[K_{p,1}(C;L)\neq 0\Longleftrightarrow 1\leq p\leq\deg L-g-\operatorname{gon}(C)\] _unless \(L=\omega_{C}^{2}\) and either \(g=2\) or \(C\) is a plane quartic curve. In the exceptional cases, \(K_{\deg L-g-\operatorname{gon}(C)+1,1}(C;L)\neq 0\) but \(K_{\deg L-g-\operatorname{gon}(C)+2,1}(C;L)=0\)._ An easy application of the theorem gives a uniform picture of syzygies of pluricanonical embedding of curves, especially the second power of the canonical divisor. It has been a long standing interest to understand the syzygies of canonical curves. The shape of the minimal free resolution of \(R(C;\omega_{C})\) was predicted in Green's conjecture [11, Conjecture 5.1]. It was verified by Voisin [12] for general curves, but it is still widely open in general. For pluricanonical embedding \(C\subseteq\mathbf{P}(H^{0}(\omega_{C}^{k}))\), the picture of syzygies turns out to be complete, and we give a summary here. Let \(C\) be a curve of genus \(g\geq 2\) and gonality \(\operatorname{gon}(C)\). Put \(L:=\omega_{C}^{k}\) and write \(r:=h^{0}(L)-1\). For \(k\geq 3\), Greens's \((2g+1+p)\)-theorem and Rathmann's effective gonality theorem give the result that \[K_{p,1}(C;\omega_{C}^{k})\neq 0\Longleftrightarrow 1\leq p\leq r-\operatorname{ gon}(C).\] For \(L=\omega_{C}^{2}\) and \(r=3g-3\), Green's \((2g+1+p)\)-theorem and Theorem 1.2 gives us the following two cases 1. If either \(g=2\) (\(r=\operatorname{gon}(C)=2\)), or \(C\) is a plane quartic curve (\(r=5\) and \(\operatorname{gon}(C)=3\)), then \[K_{p,1}(C;\omega_{C}^{2})\neq 0\Longleftrightarrow 1\leq p\leq r- \operatorname{gon}(C)+1.\] 2. Otherwise, \[K_{p,1}(C;\omega_{C}^{2})\neq 0\Longleftrightarrow 1\leq p\leq r- \operatorname{gon}(C).\] In the setting of Theorem 1.1, Green-Lazarsfeld's nonvanishing theorem [11, Appendix] shows that \(K_{p,1}(C;L)\neq 0\) for \(1\leq p\leq\deg L-g-\operatorname{gon}(C)\). To prove the theorem, it is sufficient to prove that \(K_{\deg L-g-\operatorname{gon}(C)+1,1}(C;L)=0\). 
By the duality theorem [11, Theorem 2.c.6], \[K_{\deg L-g-\operatorname{gon}(C)+1,1}(C;L)=K_{\operatorname{gon}(C)-2,1}(C, \omega_{C};L)^{\vee}.\] Notice that \(\omega_{C}\) is \((\operatorname{gon}(C)-2)\)-very ample. Recall that \(B\) is \(p\)-very ample if the restriction map on global sections \(H^{0}(B)\to H^{0}(B|_{\xi})\) is surjective for every effective divisor \(\xi\) of degree \(p+1\), (in other words, \(\xi\) imposes independent conditions on the global sections of \(B\)). As in [14] and [10], it is natural to study more generally vanishing of \(K_{p,1}(C,B;L)\) when \(B\) is a \(p\)-very ample line bundle and \(\deg L\geq\deg B+2g-2\). The main result of [10] says that if \(H^{1}(C,L\otimes B^{-1})=0\), then \(K_{p,1}(C,B;L)=0\). For our purpose, we only need to consider the case that \(L=B\otimes\omega_{C}\). Theorem 1.1 can be deduced from the following: **Theorem 1.2**.: _Let \(C\) be a nonsingular projective curve of genus \(g\geq 0\), \(B\) be a \(p\)-very ample line bundle on \(C\), and \(L:=B\otimes\omega_{C}\)._ 1. _If_ \(h^{0}(B)\geq p+3\)_, then_ \(K_{p,1}(C,B;L)=0\)_._ 2. _If_ \(h^{0}(B)=p+2\)_, then_ \(K_{p,1}(C,B;L)=S^{p}H^{0}(\omega_{C})\)_._ 3. _If_ \(h^{0}(B)=p+1\)_, then_ \(K_{p,1}(C,B;L)=0\) The idea to prove the theorem is to use the kernel bundles on the symmetric products of the curve. We follow the approach introduced by Voisin [21, 22] and then used by Ein-Lazarsfeld [1], Rathmann [19], and many others to conduct a computation of Koszul cohomology groups on the symmetric products of the curve. To be concrete, in our case, \[K_{p,1}(C,B;L)=H^{1}(C_{p+1},M_{p+1,B}\otimes N_{p+1,L}),\] where \(M_{p+1,B}\) is the kernel bundle of the evaluation map \(H^{0}(C,B)\otimes\mathscr{O}_{C_{p+1}}\to E_{p+1,B}\) of the tautological bundle \(E_{p+1,B}\) and \(N_{p+1,L}\) is a line bundle on \(C_{p+1}\). More generally, we establish the following vanishing: \[H^{i}(C_{p+1},\wedge^{k}M_{p+1,B}\otimes N_{p+1,L})=0\ \ \text{for}\ i>0\] when \(h^{0}(B)\geq p+k+2\). We hope that our results and methods may shed lights on the similar problems for higher dimensional varieties. _Acknowledgments._ The authors would like to thank Lawrence Ein for suggestions and comments. ## 2. Preliminaries Let us start with setting up notations used throughout the paper. Let \(C\) be a nonsingular projective curve of genus \(g\). For any \(p\geq 0\), denote by \(C_{p+1}\) the \((p+1)\)-th symmetric product. Write \(U_{p+1}=C_{p}\times C\) to be the universal family over \(C_{p+1}\). One has a commutative diagram in which \(\pi_{p+1}\) is the projection map, \(j\) is an embedding defined by \(j(\xi,x)=(\xi+x,x)\), and \(\sigma_{p+1}=\pi_{p+1}|_{U_{p+1}}\) so that \(\sigma_{p+1}(\xi,x)=\xi+x\). Write \(pr\colon U_{p+1}\to C\) to be the projection map to \(C\). **Definition 2.1**.: Let \(B\) be a line bundle on \(C\). For \(p\geq 0\), define \[E_{p+1,B}=\sigma_{p+1,*}(pr^{*}B)\ \text{and}\ N_{p+1,B}=\det E_{p+1,B}.\] **Remark 2.2**.: For basic properties of the vector bundles \(E_{p+1,B}\) and the line bundle \(N_{p+1,B}\), we refer the reader to the paper [10]. Here we mention that \(N_{p+1,B}=S_{p+1,B}(-\delta_{p+1})\), where \(S_{p+1,B}\) is the invariant descend of \[B^{\boxtimes p+1}=\underbrace{\underline{B}\boxtimes\cdots\boxtimes\underline{B }}_{p+1\ \text{times}}\] on \(C^{p+1}\) to \(C_{p+1}\) under the action of the permutations group \(\mathfrak{S}_{p+1}\) on \(C^{p+1}\) and \(\mathscr{O}_{C_{p+1}}(-\delta_{p+1})=N_{p+1,\mathscr{O}_{C}}\). 
Let \(B\) be a \(p\)-very ample line bundle on \(C\). As the fiber of \(E_{p+1,B}\) over \(\xi\in C_{p+1}\) is \(H^{0}(B|_{\xi})\), the evaluation map \(H^{0}(B)\otimes\mathscr{O}_{C_{p+1}}\to E_{p+1,B}\) on global sections is surjective. Define \(M_{p+1,B}\) to be the kernel bundle of the evaluation map. We obtain a short exact sequence \[0\longrightarrow M_{p+1,B}\longrightarrow H^{0}(B)\otimes\mathscr{O}_{C_{p+1} }\longrightarrow E_{p+1,B}\longrightarrow 0.\] The following vanishing theorem about the kernel bundle \(M_{p+1,B}\) is an immediate consequence of Ranthmann's vanishing theorem on Cartesian products of the curve. **Proposition 2.3**.: _Let \(B\) be a \(p\)-very ample line bundle on \(C\), and \(L\) be a globally generated line bundle on \(C\) such that \(h^{1}(L)=h^{1}(L\otimes B^{-1})=0\). Then one has_ \[H^{k}(C_{p+1},\wedge^{m}M_{p+1,B}\otimes N_{p+1,L})=0,\ \text{for all}\ k>0,m>0.\] Proof.: By [11, Theorem 3.1], one has the vanishing \[H^{k}(C^{p+1},q^{*}(\wedge^{m}M_{p+1,B}\otimes N_{p+1,L}))=0\text{ for all }k>0,m>0,\] where \(q\colon C^{p+1}\to C_{p+1}\) is the quotient map. Since \(q\) is finite, \(\mathscr{O}_{C_{p+1}}\) is a direct summand of \(q_{*}\mathscr{O}_{C^{p+1}}\). Thus by projection formula, \(\wedge^{m}M_{p+1,B}\otimes N_{p+1,L}\) is a direct summand of \(q_{*}(q^{*}(\wedge^{m}M_{p+1,B}\otimes N_{p+1,L}))\), from which the result follows. Next we prove a crucial property of kernel bundle \(M_{p+1,B}\), which is important for us to use the inductive argument. **Proposition 2.4**.: _Let \(B\) be a \(p\)-very ample line bundle. There is a short exact sequence_ \[0\longrightarrow\sigma_{p+1}^{*}M_{p+1,B}\longrightarrow M_{p,B}\boxtimes \mathscr{O}_{C}\longrightarrow(\mathscr{O}_{C_{p}}\boxtimes B)(-U_{p}) \longrightarrow 0.\] Proof.: Denote by \(\alpha\colon M_{p,B}\boxtimes\mathscr{O}_{C}\rightarrow(\mathscr{O}_{C_{p}} \boxtimes B)(-U_{p})\) the morphism appeared on the right hand side of the sequence. We first show that it is surjective. Indeed, choose any \(\xi\in C_{p}\), and consider the fiber \(C=\{\xi\}\times C\subseteq C_{p}\times C\) over \(\xi\). Restricting \(\alpha\) to this fiber yields the evaluation map \[\alpha_{\xi}:H^{0}(B(-\xi))\otimes\mathscr{O}_{C}\longrightarrow B(-\xi).\] Since \(B\) is \(p\)-very ample and \(\xi\) has degree \(p\), it follows that \(B(-\xi)\) is \(0\)-very ample and thus globally generated. Hence \(\alpha_{\xi}\) is surjective. This means that \(\alpha\) is surjective. Next we consider the following fiber product diagram \[\begin{CD}C_{p}\times C\times C@>{\bar{\sigma}}>{}>C_{p+1}\times C\supseteq U _{p+1}\\ @V{\pi}V{}V@V{}V{\pi_{p+1}}V\\ U_{p+1}=\text{ }C_{p}\times C@>{}>{\sigma_{p+1}}>C_{p+1}.\end{CD}\] On \(C_{p}\times C\times C\), we have two divisors \(D_{0}\) and \(D_{1}\) defined in the way that \(D_{0}\) is the image of \[C_{p}\times C\longrightarrow C_{p}\times C\times C,\quad(\xi,x)\longmapsto( \xi,x,x),\] and \(D_{1}\) is the image of \[C_{p-1}\times C\times C\longrightarrow C_{p}\times C\times C,\quad(\xi,y,x) \longmapsto(\xi+x,y,x).\] Observe that \[\bar{\sigma}^{*}U_{p+1}=D_{0}+D_{1}\text{ and }D_{0}\cap D_{1}=C_{p-1}\times C.\] It is easy to check that \[\sigma_{p+1}^{*}M_{p+1,B}=\bar{\pi}_{*}(pr^{*}B(-D_{0}-D_{1})\text{ and }M_{p,B} \boxtimes\mathscr{O}=\bar{\pi}_{*}pr^{*}B(-D_{1}),\] where \(pr:C_{p}\times C\times C\to C\) is the projection to the right hand side component \(C\). 
Now we can form a short exact sequence on \(C_{p}\times C\times C\), \[0\longrightarrow\mathscr{O}(-D_{0}-D_{1})\longrightarrow\mathscr{O}(-D_{1}) \longrightarrow\mathscr{O}_{D_{0}}(-D_{1})\longrightarrow 0.\] Note that \(\mathscr{O}_{D_{0}}(-D_{1})=\mathscr{O}_{C_{p}\times C}(-U_{p})\). Tensoring the short exact sequence with \(pr^{*}B\) and then pushing it down to \(C_{p}\times C\), we obtain the desired short exact sequence. **Remark 2.5**.: The proof above shows that for any line bundle \(B\) (not necessarily \(p\)-very ample), one has a short exact sequence \[0\longrightarrow pr^{*}B(-U_{p})\longrightarrow\sigma_{p+1}^{*}E_{p+1,B} \longrightarrow E_{p,B}\boxtimes\mathscr{O}_{C}\longrightarrow 0\] on the universal family \(U_{p+1}\). ## 3. Proofs of Main Results In this section, we prove the main results of the paper - Theorems 1.1 and 1.2. We keep using the notations introduced in Section 2. On the universal family \(U_{p+1}\), consider the short exact sequence \[0\longrightarrow\mathscr{O}_{U_{p+1}}\longrightarrow\mathscr{O}_{U_{p+1}}(U_{ p})\longrightarrow\mathscr{O}_{U_{p}}(U_{p})\longrightarrow 0 \tag{3.0.1}\] associated to the divisor \(U_{p}\). The normal sheaf \(\mathscr{O}_{U_{p}}(U_{p})\) of \(U_{p}\) in \(U_{p+1}\) can be expressed as \[\mathscr{O}_{U_{p}}(U_{p})\cong(\mathscr{O}_{C_{p-1}}\boxtimes\omega_{C}^{-1}) (U_{p-1}).\] Let \(L\) be a line bundle on \(C\). Tensoring \(pr^{*}L\) with the short exact sequence (3.0.1), we obtain a short exact sequence \[0\longrightarrow\mathscr{O}_{C_{p}}\boxtimes L\longrightarrow(\mathscr{O}_{C _{p}}\boxtimes L)(U_{p})\longrightarrow(\mathscr{O}_{C_{p-1}}\boxtimes L \otimes\omega_{C}^{-1})(U_{p-1})\longrightarrow 0\] on \(U_{p+1}\). Pushing it down to \(C_{p}\) by the projection map \(\pi_{p}:U_{p+1}\to C_{p}\) yields a connecting map \(\delta\) in the associated long exact sequence \[0\longrightarrow H^{0}(L)\otimes\mathscr{O}_{C_{p}} \longrightarrow\pi_{p,*}((\mathscr{O}_{C_{p}}\boxtimes L)(U_{p})) \longrightarrow\sigma_{p,*}((\mathscr{O}_{C_{p-1}}\boxtimes L\otimes\omega_{C }^{-1})(U_{p-1}))\stackrel{{\delta}}{{\longrightarrow}}\cdots\] \[\cdots\stackrel{{\delta}}{{\longrightarrow}}H^{1}(L) \otimes\mathscr{O}_{C_{p}}\longrightarrow R^{1}\pi_{p,*}((\mathscr{O}_{C_{p} }\boxtimes L)(U_{p}))\longrightarrow 0,\] where \(\sigma_{p}\) is the restriction of \(\pi_{p}\) onto the divisor \(U_{p}\). To understand the connecting map \(\delta\), we consider its dual map \(\delta^{\vee}\) by applying \(\mathscr{H}\mathrm{om}(-,\mathscr{O}_{C_{p+1}})\). It is easy to calculate that \[\big{(}\sigma_{p,*}((\mathscr{O}_{C_{p-1}}\boxtimes L\otimes\omega_{C}^{-1}) (U_{p-1}))\big{)}^{\vee}=\sigma_{p,*}(\mathscr{O}_{C_{p-1}}\boxtimes L^{-1} \otimes\omega_{C})=E_{p,L^{-1}\otimes\omega_{C}}.\] Then the dual map \(\delta^{\vee}\) turns out to be the evaluation map \[H^{0}(L^{-1}\otimes\omega_{C})\otimes\mathscr{O}_{C_{p}}\stackrel{{ \delta^{\vee}}}{{\longrightarrow}}E_{p,L^{-1}\otimes\omega_{C}}.\] We shall only need the special case that \(L=\omega_{C}\). In this case, the map \(\delta^{\vee}\) splits \(E_{p,\mathscr{O}_{C}}\) by the trace map. 
As a consequence of the splitting, we have \[(\sigma_{p,*}\mathscr{O}_{U_{p}})^{\vee}\cong\sigma_{p,*}(\mathscr{O}_{U_{p}} (U_{p-1}))\cong\mathscr{O}_{C_{p}}\oplus\mathscr{K}_{p},\] where the direct summand \(\mathscr{K}_{p}\) is the kernel sheaf of the connecting map \(\delta\) fitting into a short exact sequence \[0\longrightarrow H^{0}(\omega_{C})\otimes\mathscr{O}_{C_{p}} \longrightarrow\pi_{p,*}((\mathscr{O}_{C_{p}}\boxtimes\omega_{C})(U_{p})) \longrightarrow\mathscr{K}_{p}\longrightarrow 0.\] **Theorem 3.1**.: _Let \(B\) be a \(p\)-very ample line bundle on \(C\). Consider a line bundle \(L:=B\otimes\omega_{C}\). Suppose that \(h^{0}(B)\geq p+k+2\) for \(k\geq 1\). Then_ \[H^{i}(U_{p+1},\sigma_{p+1}^{*}(\wedge^{k}M_{p+1,B})\otimes(N_{p,L}\boxtimes L ))=0\quad\text{ for }i>0. \tag{3.1.1}\] _As a consequence, one has_ \[H^{i}(C_{p+1},\wedge^{k}M_{p+1,B}\otimes N_{p+1,L})=0\quad\text{ for }i>0.\] Proof.: First observe that by [1, Lemma 3.5], \(\mathscr{O}_{C_{p+1}}(-\delta_{p+1})\) is a direct summand of the vector bundle \(\sigma_{p+1,*}(\mathscr{O}_{C_{p}}(-\delta_{p})\boxtimes\mathscr{O}_{C})\). Thus the bundle \[\sigma_{p+1,*}(\sigma_{p+1}^{*}(\wedge^{k}M_{p+1,B})\otimes(N_{p,L}\boxtimes L ))\cong\wedge^{k}M_{p+1,B}\otimes S_{p+1,L}\otimes\sigma_{p+1,*}(\mathscr{O}_ {C_{p}}(-\delta_{p})\boxtimes\mathscr{O}_{C})\] contains \(\wedge^{k}M_{p+1,B}\otimes N_{p+1,L}\) as a direct summand. Since \(\sigma_{p+1}\) is a finite map, the second vanishing statement in the theorem would follow from the first one. Thus in the sequel, it suffices to show the first vanishing statement (3.1.1). To this end, we use the short exact sequence in Lemma 2.4 to yield a locally free resolution of \(\sigma_{p+1}^{*}(\wedge^{k}M_{p+1,B})\) as follows: \[\cdots\longrightarrow(\wedge^{k+2}M_{p,B}\boxtimes B^{-2})(2U_{p}) \longrightarrow(\wedge^{k+1}M_{p,B}\boxtimes B^{-1})(U_{p})\longrightarrow \sigma_{p+1}^{*}(\wedge^{k}M_{p+1,B})\longrightarrow 0.\] Tensoring it with \(N_{p,L}\boxtimes L\) gives rise to a resolution \[\cdots\longrightarrow(\wedge^{k+2}M_{p,B}\otimes N_{p,L}) \boxtimes(L\otimes B^{-2})(2U_{p})\longrightarrow(\wedge^{k+1}M_{p,B}\otimes N_{ p,L})\boxtimes(L\otimes B^{-1})(U_{p})\longrightarrow\cdots\] \[\cdots\longrightarrow\sigma_{p+1}^{*}(\wedge^{k}M_{p+1,B}) \otimes(N_{p,L}\boxtimes L)\longrightarrow 0.\] We make the following claim: **Claim 3.1.2.** One has \[R^{t}pr_{*}\Big{(}(\wedge^{k+j}M_{p,B}\otimes N_{p,L})\boxtimes(L\otimes B^{-j })(jU_{p})\Big{)}=0\ \ \text{for}\ t\geq 1,j\geq 2,\] where \(pr\colon U_{p+1}\to C\) is the projection map. _Proof of Claim._ For a point \(x\in C\), the restriction of the sheaf \((\wedge^{k+j}M_{p,B}\otimes N_{p,L})\boxtimes(L\otimes B^{-j})(jU_{p})\) onto the fiber \(pr^{-1}(x)\cong C_{p}\) equals \(\wedge^{k+j}M_{p,B}\otimes N_{p,L(jx)}\), and \(H^{t}(\wedge^{k+j}M_{p,B}\otimes N_{p,L(jx)})=0\) for \(t>0\) by Proposition 2.3. Thus the claimed vanishing holds by base change. 
By the claim above and using Larry spectral sequence \[H^{s}(R^{t}pr_{*}((\wedge^{k+j}M_{p,B}\otimes N_{p,L})\boxtimes(L\otimes B^{- j})(jU_{p})))\Rightarrow H^{s+t}(U_{p+1},(\wedge^{k+j}M_{p,B}\otimes N_{p,L}) \boxtimes(L\otimes B^{-j})(jU_{p})),\] we see that \[H^{i}(U_{p+1},(\wedge^{k+j}M_{p,B}\otimes N_{p,L})\boxtimes(L\otimes B^{-j})( jU_{p}))=0,\ \text{for}\ i\geq 2,j\geq 2.\] Thus chasing through the resolution of \(\sigma_{p+1}^{*}(\wedge^{k}M_{p+1,B})\otimes(N_{p,L}\boxtimes L)\), in order to prove the vanishing (3.1.1), the only left thing is to show the case when \(j=1\), i.e., to show \[H^{i}(U_{p+1},(\wedge^{k+1}M_{p,B}\otimes N_{p,L})\boxtimes\omega_{C}(U_{p})) =0, \tag{3.1.3}\] where we use the fact \(L\otimes B^{-1}\cong\omega_{C}\). To do this, we tensor \((\wedge^{k+1}M_{p,B}\otimes N_{p,L})\boxtimes\omega_{C}\) with the short exact sequence (3.0.1). Pushing down the resulting sequence to \(C_{p}\) by the projection map \(\pi_{p}:U_{p+1}\to C_{p}\), we obtain a long exact sequence \[0\longrightarrow\wedge^{k+1}M_{p,B}\otimes N_{p,L}\otimes H^{0}(\omega_{C}) \longrightarrow\wedge^{k+1}M_{p,B}\otimes N_{p,L}\otimes\pi_{p,*}(\mathscr{O} _{C_{p}}\boxtimes\omega_{C}(U_{p}))\longrightarrow\ldots\] \[\ldots\longrightarrow\wedge^{k+1}M_{p,B}\otimes N_{p,L}\otimes\sigma_{p,*}( \mathscr{O}_{U_{p}}(U_{p-1}))\stackrel{{\delta}}{{\longrightarrow}} \wedge^{k+1}M_{p,B}\otimes N_{p,L}\longrightarrow\cdots\] \[\cdots\longrightarrow R^{1}\pi_{p,*}(\wedge^{k+1}M_{p,B}\otimes N_{p,L} \boxtimes\omega_{C}(U_{p}))\longrightarrow 0.\] As in the discussion located before the theorem, the connecting map \(\delta\) splits. This means that \(R^{1}\pi_{p,*}(\wedge^{k+1}M_{p,B}\otimes N_{p,L}\boxtimes\omega_{C}(U_{p}))=0\) and \(\wedge^{k+1}M_{p,B}\otimes N_{p,L}\) is a direct summand of \(\wedge^{k+1}M_{p,B}\otimes N_{p,L}\otimes\sigma_{p,*}(\mathscr{O}_{U_{p}}(U_{p -1}))\). Thus we reduce the vanishing (3.1.3) to showing the vanishing \[H^{i}(C_{p},\wedge^{k+1}M_{p,B}\otimes N_{p,L}\otimes\sigma_{p,*}(\mathscr{O}_{ U_{p}}(U_{p-1})))=0. \tag{3.1.4}\] Observe that \[N_{p,L}\otimes\sigma_{p,*}(\mathscr{O}_{U_{p}}(U_{p-1}))=\sigma_{p,*}(N_{p-1,L} \boxtimes L).\] By projection formula, the vanishing (3.1.4) would follow from the following vanishing \[H^{i}(U_{p},\sigma_{p}^{*}(\wedge^{k+1}M_{p,B})\otimes(N_{p-1,L}\boxtimes L))=0. \tag{3.1.5}\] Repeating this argument and noticing that \(B\) is \((p-1)\)-very ample with \(h^{0}(B)\geq(p-1)+(k+1)+2\), we finally reduce the problem to showing the vanishing \[H^{i}(C,\wedge^{k+p}M_{B}\otimes L)=0,\] Here we write \(M_{B}=M_{1,B}\) for simplicity. The only nontrivial case is when \(i=1\). Write \(b=\operatorname{rank}M_{B}\) and notice that \(\det M_{B}^{\vee}=B\). By Serre duality, \[H^{1}(C,\wedge^{k+p}M_{B}\otimes L)\cong H^{0}(C,\omega_{C}\otimes\det M_{B}^{ \vee}\otimes\wedge^{b-1-k-p}M_{B}\otimes L^{-1})^{\vee}=H^{0}(C,\wedge^{b-1-k-p} M_{B})^{\vee}.\] Now as \(\wedge^{b-1-k-p}M_{B}\) is a direct summand of \(\otimes^{b-1-k-p}M_{B}\) and the latter has no global sections, we conclude \(H^{1}(C,\wedge^{k+p}M_{B}\otimes L)=0\) as desired. This completes the proof. **Proposition 3.2**.: _Let \(B\) be a \(p\)-very ample line bundle on a curve \(C\). Consider a line bundle \(L=B\otimes\omega_{C}\)._ 1. _If_ \(h^{0}(B)=p+k+1\) _for_ \(k\geq 1\)_. Then_ \[H^{i}(C_{p+1},\wedge^{k}M_{p+1,B}\otimes N_{p+1,L})=H^{i}(C_{p+1},S_{p+1,\omega _{C}})=S^{p+1-i}H^{0}(\omega_{C}).\] 2. 
_If_ \(h^{0}(B)=p+k\) _for_ \(k\geq 1\)_, then_ \(\wedge^{k}M_{p+1,B}=0\)_, and therefore_ \[H^{i}(C_{p+1},\wedge^{k}M_{p+1,B}\otimes N_{p+1,L})=0.\] Proof.: For (1), since \(M_{p+1,B}\) has rank \(k\) and \(\wedge^{k}M_{p+1,B}\cong N_{p+1,B}^{-1}\cong S_{p+1,B^{-1}}(\delta_{p+1})\), we compute \[\wedge^{k}M_{p+1,B}\otimes N_{p+1,L}\cong S_{p+1,L\otimes B^{-1}}\cong S_{p+1, \omega_{C}}.\] The result then follows from [1, Lemma 3.7]. For (2), since \(M_{p+1,B}\) has rank \(k-1\), the result follows immediately. We will only need Theorem 3.1 and Proposition 3.2 for the case \(k=1\). In the following proposition, we classify when a \(p\)-very ample line bundle \(B\) can have \(h^{0}(B)\leq p+2\). **Proposition 3.3**.: _Let \(B\) be a \(p\)-very ample line bundle on \(C\), and \(p\geq 0\)._ 1. \(h^{0}(B)=p+1\) _if and only if either_ \(p=0\) _and_ \(B=\mathscr{O}_{C}\) _or_ \(p\geq 1\)_,_ \(C=\mathbf{P}^{1}\) _and_ \(B=\mathscr{O}_{\mathbf{P}^{1}}(p)\)_._ 2. \(h^{0}(B)=p+2\) _if and only if one of the following cases holds._ 1. \(g=0\)_,_ \(p\geq 0\) _and_ \(B=\mathscr{O}_{\mathbf{P}^{1}}(p+1)\)_._ 2. \(g=1\)_,_ \(p\geq 0\) _and_ \(\deg B=p+2\)_._ 3. \(g\geq 2\)_, either_ \(p=0\) _and_ \(B\) _is a base point free pencil, or_ \(p=1\) _and_ \(C\subseteq\mathbf{P}(H^{0}(B))\) _is a plane curve of degree_ \(\geq 4\)_._ Proof.: (1) If \(p=0\), then \(B\) is a globally generated line bundle with \(H^{0}(C,B)=1\). Then \(B=\mathscr{O}_{C}\) since the only section of \(B\) is nowhere vanishing. Assume \(p\geq 1\), so \(B\) is very ample and gives an embedding of \(C\) into the space \(\mathbf{P}^{p}=\mathbf{P}(H^{0}(B))\). As \(B\) is \(p\)-very ample, any \(p+1\) points of \(C\) will span the whole space \(\mathbf{P}^{p}\), which means that the degree of \(C\) would be smaller than \(p\). But \(C\) is also nondegenerate in \(\mathbf{P}^{p}\) and thus has degree \(\geq p\). Hence \(C\) has degree exactly \(p\), and therefore, it is a rational normal curve. (2) Since (i) and (ii) are obvious, we only need to prove (iii). If \(p=0\), then \(B\) is a base point free pencil. Assume that \(p\geq 2\). Take \(p-1\) points \(x_{1},\ldots,x_{p-1}\) of \(C\), and put \(D:=x_{1}+\cdots+x_{p-1}\). Since \(B\) is a \(p\)-very ample, we see that \(B(-D)\) is very ample with \(h^{0}(B(-D))=3\) and \(h^{1}(B(-D+x_{1}))=h^{1}(B(-D))=h^{1}(B)\). This means \(C\) is a plane curve of some degree \(d\geq 4\) embedded by \(B(-D)\) into \(\mathbf{P}^{2}\), and thus, the canonical line bundle \(\omega_{C}\) has the form \(\omega_{C}=(B(-D))^{d-3}\) by the adjunction formula. By duality, the equality \(h^{1}(B(-D+x_{1}))=h^{1}(B(-D))\) is the same as the equality \(h^{0}((B(-D))^{d-4}(-x_{1}))=h^{0}((B(-D))^{d-4})\), which is impossible because \(B(-D)\) is very ample. Thus we conclude \(p=1\) and \(C\subseteq\mathbf{P}(H^{0}(B))\) is a plane curve of degree \(\geq 4\). Recall that the gonality of \(C\) captures the positivity of the canonical line bundle \(\omega_{C}\). More precisely, \(\operatorname{gon}(C)\geq p+2\) if and only if \(\omega_{C}\) is \(p\)-very ample. In particular, \[\operatorname{gon}(C)=\max\{p+2\mid\omega_{C}\text{ is $p$-very ample}\}.\] We can compare the gonality with the genus. The following proposition may be well-known. **Corollary 3.4**.: _Assume that \(g\geq 2\). Then \(g\geq\operatorname{gon}(C)\), and the equality holds if and only if either \(g=2\) or \(C\) is a plane quartic curve._ Proof.: Since \(g\geq 2\), it follows that \(\operatorname{gon}(C)\geq 2\). 
Write \(\operatorname{gon}(C)=p+2\). Then \(\omega_{C}\) is \(p\)-very ample. Applying Proposition 3.3 to the case \(B=\omega_{C}\), we see that \(g\geq p+2\) and the equality holds if either \(g=2\) (i.e., \(g=\operatorname{gon}(C)=2\)), or \(C\) is a plane curve of \(g=3\) which is a plane quartic curve (i.e., \(g=\operatorname{gon}(C)=3\)). Proof of Theorem 1.2.: In (1) and (2), \(B\) is ample and thus \(h^{1}(L)=0\). This implies \(h^{1}(C_{p+1},N_{p+1,L})=0\) and thus [1, Lemma 1.1] yields \[K_{p,1}(C,B;L)=H^{1}(C_{p+1},M_{p+1,B}\otimes N_{p+1,L}).\] So the assertion (1) follows from Theorem 3.1 by taking \(k=1\), and the assertion (2) follows from Proposition 3.2 by taking \(k=1\). For the assertion (3), if \(p=0\), then \(B=\mathscr{O}_{C}\) and then \(K_{0,1}(C;\omega_{C})=0\) by definition of Koszul cohomology group. If \(p\geq 1\), then by Proposition 3.3, \(C=\mathbf{P}^{1}\) and \(B=\mathscr{O}_{\mathbf{P}^{1}}(p)\). Then \(K_{p,1}(\mathbf{P}^{1},\mathscr{O}_{\mathbf{P}^{1}}(p);\mathscr{O}_{\mathbf{ P}^{1}}(p-2))=0\) by a direct computation. **Corollary 3.5**.: _Assume that \(g\geq 2\). Let \(B\) be a \(p\)-very ample line bundle on \(C\), and \(L\) be a line bundle on \(C\). Suppose that \(\deg(L\otimes B^{-1})\geq 2g-2\). Then one has_ \[K_{p,1}(C,B;L)=0\] _unless \(L=B\otimes\omega_{C}\) and either_ (1)_\(p=0\) and \(B\) is a base point free pencil, or_ (2)_\(p=1\) and \(C\subseteq\mathbf{P}(H^{0}(B))\) is a plane curve. In the exceptional cases, \(K_{p,1}(C,B;L)\neq 0\) but \(K_{p-1,1}(C,B;L)=0\)._ Proof.: If \(L\otimes B^{-1}\neq\omega_{C}\), then \(h^{1}(L\otimes B^{-1})=0\) so that one can use Rathmann's theorem [1, Theorem 1.1] to get the desired result. Assume that \(L\otimes B^{-1}=\omega_{C}\). By Theorem 1.2, \(K_{p,1}(C,B;L)=0\) if \(h^{0}(B)\neq p+2\), and \(K_{p,1}(C,B;L)\neq 0\) if \(h^{0}(B)=p+2\). In the latter case, \(K_{p-1,1}(C,B;L)=0\) by Theorem 1.2 since \(B\) is \((p-1)\)-very ample and \(h^{0}(B)=(p-1)+3\). However, if \(h^{0}(B)=p+2\), then Proposition 3.3 shows that either (1) \(p=0\) and \(B\) is a base point free pencil, or (2) \(p=1\) and \(C\subseteq\mathbf{P}(H^{0}(B))\) is a plane curve. Proof of Theorem 1.1.: By Green-Lazarsfeld's nonvanishing theorem [1, Appendix] and the duality theorem [1, Theorem 2.c.6], we only need to know when \(K_{\operatorname{gon}(C)-2,1}(C,\omega_{C};L)=0\) vanishes. As \(\omega_{C}\) is \((\operatorname{gon}(C)-2)\)-very ample, the theorem follows from Corollary 3.5.
2302.06374
Statistical modeling of diabetic neuropathy: Exploring the dynamics of nerve mortality
Diabetic neuropathy is a disorder characterized by impaired nerve function and reduction of the number of epidermal nerve fibers per epidermal surface. Additionally, as neuropathy related nerve fiber loss and regrowth progresses over time, the two-dimensional spatial arrangement of the nerves becomes more clustered. These observations suggest that with development of neuropathy, the spatial pattern of diminished skin innervation is defined by a thinning process which remains incompletely characterized. We regard samples obtained from healthy controls and subjects suffering from diabetic neuropathy as realisations of planar point processes consisting of nerve entry points and nerve endings, and propose point process models based on spatial thinning to describe the change as neuropathy advances. Initially, the hypothesis that the nerve removal occurs completely at random is tested using independent random thinning of healthy patterns. Then, a dependent parametric thinning model that favors the removal of isolated nerve trees is proposed. Approximate Bayesian computation is used to infer the distribution of the model parameters, and the goodness-of-fit of the models is evaluated using both non-spatial and spatial summary statistics. Our findings suggest that the nerve mortality process changes behaviour as neuropathy advances.
Konstantinos Konstantinou, Farnaz Ghorbanpour, Umberto Picchini, Adam Loavenbruck, Aila Särkkä
2023-02-13T14:08:36Z
http://arxiv.org/abs/2302.06374v2
# Mathematical modeling of nerve mortality caused by diabetic neuropathy ###### Abstract Diabetic neuropathy is a disorder characterized by impaired nerve function and reduction of the number of epidermal nerve fibers per epidermal surface. Additionally, as neuropathy related nerve fiber loss and regrowth progresses over time, the two-dimensional spatial arrangement of the nerves becomes more clustered. These observations suggest that with development of neuropathy, the spatial pattern of diminished skin innervation is defined by a thinning process which remains incompletely characterized. We regard samples obtained from healthy controls and subjects suffering from diabetic neuropathy as realisations of planar point processes consisting of nerve entry points and nerve endings, and propose point process models based on spatial thinning to describe the change as neuropathy advances. Initially, the hypothesis that the nerve removal occurs completely at random is tested using independent random thinning of healthy patterns. Then, a dependent parametric thinning model that favors the removal of isolated nerve trees is proposed. Approximate Bayesian computation is used to infer the distribution of the model parameters, and the goodness-of-fit of the models is evaluated using both non-spatial and spatial summary statistics. Our findings suggest that the nerve mortality process changes behaviour as neuropathy advances. Keywords: Approximate Bayesian computation; Dependent thinning; Epidermal nerve fibers; Random thinning; Reactive territory; Spatial point process ## 1 Introduction Epidermal nerve fibers (ENFs) are dendroidal thin sensory nerve fibers in the outermost layer of the human skin, called the epidermis. They enter, grow and branch in the epidermis until they terminate. Throughout the paper, the entry points will be referred to as base points and the termination points as end points. The nerve fibers transfer signals such as heat and pain recorded by the end points to the central nervous system. Diabetic neuropathy is a disorder in which elevated blood sugar and related processes in the body damage ENFs all along their course, from the dorsal root ganglion near the spinal cord to the skin, negatively affect their functionality, and over time cause attrition of ENFs. Progression of ENF dysfunction and loss is characterized, respectively, by neuropathic pain and loss of sensation [14]. While damaged nerves may heal and regrow with sustained improvement of blood sugar, this regrowth is very slow and often incomplete. It is therefore important to detect the neuropathy at the earliest stage possible, to prevent ENF damage before it occurs. The diagnostic capabilities of the ENFs have been established in several studies. More specifically, neuropathy progression decreases the spatial intensity and the total coverage of the ENFs in the epidermis [14, 1, 11]. In addition, the two-dimensional spatial structure of the base and end points of subjects suffering from diabetic neuropathy tends to be more clustered than the structure of healthy controls [34, 23, 1, 25]. Furthermore, the nerve fibers in subjects with diabetic neuropathy tend to branch fewer times before terminating than the nerve fibers in healthy subjects [2]. A considerable amount of earlier research on ENFs has concentrated on modelling their spatial structure. 
The planar locations of the base and end points are treated as realisations of two-dimensional spatial point processes, and point process models have been developed for the end points conditioned on the empirical base point locations. For instance, the non-orphan cluster (NOC) model [25] and the uniform cluster centre (UCC) model [1] are point process models of this nature. In the UCC model, the direction of the end point clusters with respect to their corresponding base points is uniformly distributed, while in the NOC model the clusters are constructed towards open space. To capture possible interactions between the entire nerve trees, a sequential marked point process model was proposed in Ghorbanbour et al. [11]. Furthermore, a continuous time birth-and-death process that allows interactions between the base points and within the points in each end point cluster was proposed in Garcia et al. [10]. In addition, some models for the three-dimensional spatial structure have recently been suggested in [15, 16] In this paper, we will focus on the underlying process that guides the morphological changes in the spatial structure of the nerve trees as diabetic neuropathy advances. We have skin samples from healthy subjects and subjects suffering from either mild or moderate diabetic neuropathy. The mild point patterns consisting of the base and end points of ENFs are treated as spatial thinnings of the healthy point patterns, and the moderate patterns as thinnings of the mild patterns. For this purpose, different spatial thinning schemes are proposed. Since for such thinning models we do not have a likelihood function readily available, we suggest an approximate Bayesian computation (ABC) approach to estimate the parameters of the model. Finally, the models are evaluated using Ripley's \(K\) function, mark correlation function, and some non-spatial summary statistics. Our findings indicate that nerve mortality does not occur completely at random. To the best of our knowledge, this is the first study investigating nerve loss due to diabetic neuropathy using spatial thinning models. The paper is organised as follows. In Section 2, the ENF data set is described and a brief introduction to point processes and spatial thinning operations is given in Section 3. In Section 4, the proposed thinning schemes are described. Our findings are presented in Section 5 and further discussed in Section 6. ## 2 Data The epidermal nerve fiber data we have available are obtained using suction induced skin biopsies, a medical procedure where a skin sample is taken, mounted on a slide and stained for imaging [35, 26]. Then, confocal microscopy is used to manually trace the base points, which are the entry locations of the ENFs in the epidermis, the branching points,which are the locations where the nerve branches within epidermis, and the end points, which are the locations where the nerve fibers terminate. Two skin blister specimens were taken from different body parts from each subject in the study resulting in three to six images (usually four) per subject and body part. The degree of diabetic neuropathy, i.e. healthy, mild, moderate, or severe, is known for each subject. The original spatial point patterns are three dimensional. However, since we are interested in the coverage of the ENFs on the skin, we concentrate on the two dimensional projections of the patterns. 
Here, we limit our analysis to the data collected from the feet of 32 healthy subjects, 8 subjects with mild diabetic neuropathy, and 5 subjects with moderate diabetic neuropathy. The choice of the body part is motivated by the observation that changes in ENF morphology occur at the earliest stage in distant body parts such as the feet [13]. We have left out the group with severe diabetic neuropathy as those samples contain very few nerves. The data consisting of the base and end point locations are treated as realisations of spatial point processes in \(\mathbb{R}^{2}\) observed in the window \(W\) with dimensions of approximately 330 \(\times\) 432 microns. In total, 112 healthy, 28 mild diabetic and 13 moderate diabetic skin samples are included in the analysis. From now on, we refer to the three groups as healthy, mild, and moderate, respectively. An example of an ENF sample is displayed in Figure 1, where the different types of points are represented with different colours. Examples of ENF samples from subjects with mild and moderate diabetic neuropathy are shown in Figure 15 in the Appendix. The area of the skin that the ENFs cover can be described by reactive territories introduced in Andersson et al. [1]. The reactive territory of a nerve tree is defined as the convex hull determined by the locations of the projected end points and base points belonging to the same nerve tree. An example of a reactive territory for a healthy sample is presented in Figure 2. Note that a nerve tree has to have at least two end points to have a positive reactive territory. Figure 1: An illustration of the structure of the nerve trees in a healthy sample. Red points represent the base and blue points the end points of the nerve fibres. It is well established that the degree of neuropathy and the area of positive reactive territory are negatively correlated, i.e. as the degree of neuropathy increases the area of the skin covered by the ENFs decreases [14, 1]. This is illustrated for our data in Figure 3. The decrease in ENF coverage is translated into neuropathic pain and loss of sensation, the main symptoms of the neuropathy. ## 3 Methods for spatial point processes Point patterns consisting of ENF base and end points projected into the plane are regarded as realisations of two-dimensional spatial (marked) point processes. In this section, we give some definitions and notations (mainly from Illian et al. [12]) for spatial unmarked and marked point processes and introduce some summary functions. Furthermore, we recall some thinning operations for point processes. Figure 3: The total area of the skin covered by the reactive territories in the samples from the healthy subjects and the subjects with mild or moderate diabetic neuropathy. Figure 2: Reactive territory of a skin sample obtained from the same healthy subject as in Figure 1. Red points represent the base and blue points the end points of the nerve fibres. For a more rigorous treatment of the topic, the reader is referred to Illian et al. [12], Diggle [7], Moller and Waagepetersen [21], and Chiu et al. [4]. ### _Spatial point processes_ Spatial point processes are mathematical models for point patterns. A spatial point process \(X\) is defined as the random set of locations in a spatial domain \(D\) where the events of interest occur. Usually, the point process is observed in an observation window \(W\subset D\). We refer to this as a point pattern in the observation window \(W\). 
In this work, the locations, where the nerves enter the epidermis and where they terminate, are treated as realisations of spatial point processes in a rectangular observation window \(W\subset\mathbb{R}^{2}\). The point processes are assumed to be _locally finite_, that is for every bounded subset \(B\) in the Borel set \(\mathcal{B}(\mathbb{R}^{2})\), the number of points of the process that lie in \(B\) is finite. Further, the point processes are assumed to be _simple_, that is at any location there is at most one point of the process. Lastly, the point processes are assumed to be stationary (translation invariant) and isotropic (rotation invariant). Sometimes additional characteristics, marks, are attached to each point in the point pattern. Such marked point patterns often provide a deeper insight into the underlying physiological processes [12]. Let \(M\subset\mathbb{R}\) be a mark space with the mark \(m_{i}\in M\) attached to the point \(x_{i}\in X\). A realisation of the corresponding marked point process \(X_{M}\) is then \[\{(x_{i},m_{i}),i=1,...,n\}\subset W\times M,\] where \(n\) is the observed number of points. We assume that \(X_{M}\) is stationary and isotropic, i.e. invariant under translations and rotations of the point locations, respectively. ### _Ripley's \(K\) function_ Here, we review Ripley's \(K\) function that is used to describe the second-order properties of a point process [29]. For stationary and isotropic point processes in \(\mathbb{R}^{2}\), Ripley's \(K\) function has a straightforward interpretation. In particular, \(\lambda K(r)\), \(\lambda\) being the intensity (the mean number of points per unit area) of the process, gives the expected number of further points of the process in the disc with radius \(r\) centred at an arbitrary point of the process. For the Poisson point process, \[K(r)=\pi r^{2},\qquad r\geq 0.\] The homogeneous Poisson point process corresponds to complete spatial randomness, and therefore it is often treated as a reference model [12, 20]. Observed values that are smaller or larger than their theoretical values under complete spatial randomness indicate regularity or clustering, respectively. Since there are points of the process outside the observation window that may interact with the points inside the window, an edge correction term is needed when estimating the \(K\) function. An approximately unbiased estimator of the \(K\) function is given by \[\hat{K}(r)=\frac{|\,W\,|}{n^{2}}\sum_{i=1}^{n}\sum_{j\neq i}w(x_{i},x_{j}) \mathbf{1}\{||\,\,x_{i}-x_{j}\,||\leq r\},\quad r\geq 0,\] where \(n\) is the total number of the observed points in \(W\), \(||x_{i}-x_{j}||\) denotes the Euclidean distance between the points \(x_{i}\) and \(x_{j}\), \(\mathbf{1}\{A\}\) is the indicator function equal to \(1\) when event \(A\) is true and zero otherwise, and \(w(x_{i},x_{j})\) is an edge correction term. We used the translation correction \(w(x_{i},x_{j})=\frac{1}{|W_{x_{i}}\cap W_{x_{j}}|}\) where \(W_{x_{i}}\) is the translated window \(W_{x_{i}}=\{z+x_{i}:z\in W\}\) and \(|\cdot|\) denotes the two dimensional Lebesgue measure. In this work, we use the variance stabilized and centred variant of the \(K\) function [12], defined by \[L(r)-r=\sqrt{\frac{K(r)}{\pi}}-r,\quad r\geq 0, \tag{1}\] which for the Poisson process equals zero. Therefore, positive values of this centred function indicate clustering and negative values regularity. 
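For readers who wish to reproduce these quantities, the translation-corrected estimator of \(K\) and the centred \(L\) function of Equation (1) can be written in a few lines. The sketch below is our own illustration (not code from the paper) and assumes a rectangular observation window with its lower-left corner at the origin; the function and variable names are ours.

```python
import numpy as np

def ripley_K(points, width, height, r_values):
    """Translation-corrected estimate of Ripley's K on the window [0, width] x [0, height]."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diffs = pts[:, None, :] - pts[None, :, :]                  # pairwise coordinate differences
    dist = np.sqrt((diffs ** 2).sum(axis=-1))                  # pairwise distances
    # translation edge-correction weights 1 / |W_{x_i} intersect W_{x_j}|
    w = 1.0 / ((width - np.abs(diffs[..., 0])) * (height - np.abs(diffs[..., 1])))
    np.fill_diagonal(w, 0.0)                                   # drop the i == j terms
    area = width * height
    return np.array([(w * (dist <= r)).sum() for r in r_values]) * area / n ** 2

def centred_L(K_values, r_values):
    """Variance-stabilised, centred L function: L(r) - r = sqrt(K(r) / pi) - r."""
    return np.sqrt(np.asarray(K_values) / np.pi) - np.asarray(r_values)

# toy usage on a window roughly the size of the ENF images (330 x 432 microns)
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0], [330, 432], size=(100, 2))
r = np.linspace(1, 80, 40)
L_centred = centred_L(ripley_K(pts, 330, 432, r), r)           # approximately 0 for a Poisson pattern
```

Positive or negative values of the resulting centred curve are then read exactly as described above, as indications of clustering or regularity.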
Our data are hierarchically structured into groups (healthy, mild, moderate), subjects within the groups, and samples from the subjects. Since we are interested in the average spatial structure of the ENFs in each group, the overall group-wise \(K\) and \(L\) functions need to be estimated. This can be achieved as follows. For group \(g\), we initially estimate the sample-wise summary functions \(K_{ij}\) for sample \(j\in\{1,...,m_{i}\}\) of subject \(i\), \(i\in\{1,...,N_{g}\}\). Let \(\hat{K}_{ij}\) denote the corresponding estimator. Then, the subject specific mean function \(\widehat{K}_{i}\) can be obtained as a weighted mean of the functions \(K_{ij}\). An unbiased estimator for the subject-wise \(\widehat{K}_{i}\) function for each subject \(i\) is given by \[\widehat{K}_{i}(r)=\sum_{j=1}^{m_{i}}w_{ij}\hat{K}_{ij}(r).\] Similarly, the subject-wise functions are weighted to obtain the group-wise function \(\tilde{K}_{g}\) for the group \(g\). A group-wise estimator for \(\tilde{K}_{g}\) is given by \[\widehat{K}_{g}(r)=\sum_{i=1}^{N_{g}}w_{i}\hat{K}_{i}(r). \tag{2}\] Square point number weights are used to compute the subject-wise and group-wise estimates, since the point patterns from different samples and subjects cannot be assumed to have the same intensity [8, 23, 15]. Let \(n_{ij}\) denote the number of points in sample \(j\) of subject \(i\), and let \(n_{i}=\sum_{j=1}^{m_{i}}n_{ij}\) be the total number of points in the samples from subject \(i\). Then, the square point number weights for the group-wise and subject-wise \(K\) functions are given by \[w_{i}=\frac{n_{i}^{2}}{\sum_{k=1}^{N}n_{k}^{2}},\quad w_{ij}=\frac{n_{ij}^{2} }{\sum_{k=1}^{m_{i}}n_{ik}^{2}}.\] Second-order properties of point processes with marks can also be investigated. The mark correlation function for marked point processes is discussed in the following section. ### Mark correlation function The mark correlation function describes the second-order characteristics for point processes with quantitative marks. It can be defined as the (conditional) expectation \[k(r)=\frac{\mathbb{E}[f(m_{i},m_{j})|\|x_{i}-x_{j}\|=r]}{\mu^{2}},\] where \(\mu\) is the mean of the considered mark distribution and \(f(m_{i},m_{j})\) is a so-called test function [33]. Here, we use \(f(m_{i},m_{j})=m_{i}m_{j}\). Therefore, if the marks are uncorrelated, \(k\) equals 1. Values less than 1 indicate negative correlation and values greater than 1 positive correlation. The mark correlation function can be estimated by kernel estimation, namely \[\hat{k}(r)=\frac{\sum\limits_{i=1}^{n}\sum\limits_{j=1,j\neq i}^{n}m_{i}m_{j} \cdot w_{ij}}{\bar{m}^{2}}, \tag{3}\] where \(\bar{m}\) is the mean of the observed marks, \[w_{ij}=\frac{e_{b}(r-|x_{i}-x_{j}|)}{|W_{i}\cap W_{j}|},\] and \(e_{b}\) is the Epanecnikov kernel function with bandwidth \(b\)[33]. The bandwidth can be chosen e.g. by using the rule-of-thumb given in [30]. Since our data are replicated, we need to estimate the mark correlation function for each sample and then pool all the estimates to obtain the subject-wise and group-wise estimates in a similar fashion as we estimate the \(K\) function above. The mark correlation ( Eq. (3)) and the \(L\) function (Eq. (1)) will be used throughout the paper to assess the goodness of fit of the proposed model. The inference method we chose, on the other hand, requires an informative summary function of the data. 
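As an aside, the pooling used for both the \(K\)/\(L\) and mark correlation estimates reduces to a weighted average of curves evaluated on a common grid of distances, with the square point-number weights given above. The following minimal sketch uses our own notation and is not the code used in the paper.

```python
import numpy as np

def pooled_curve(curves, n_points):
    """Weighted mean of summary-function estimates (rows of `curves`) on a common r-grid,
    using the square point-number weights w_i = n_i^2 / sum_k n_k^2."""
    curves = np.asarray(curves, dtype=float)
    w = np.asarray(n_points, dtype=float) ** 2
    w = w / w.sum()
    return w @ curves

# sample-wise curves are first pooled into subject-wise curves, which are then pooled
# again (with the subjects' total point counts) into the group-wise curve, e.g.:
# K_subject = pooled_curve(K_samples, n_per_sample)
# K_group   = pooled_curve(np.vstack(K_subjects), n_per_subject)
```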
To avoid using the same summary functions for inference and model evaluation, we used a different summary function for inference, which is described in the following section. ### Empty space function The _empty space distribution function_\(F(r):[0,\infty)\rightarrow[0,1]\) is related to the probability that an arbitrary point \(x\in\mathbb{R}^{2}\) has an empty disc of radius \(r\) around it. It is defined as \[F(r)=1-P(X(b(x,r))=0),\] where \(X(b(x,r))\) is the random number of points of the process in the disc centered at \(x\) with radius \(r\), denoted by \(b(x,r)\). Let \(\{y_{i}\}_{i=1}^{t}\) be \(t\) points randomly sampled within \(W_{\ominus r}:=\{x\in W:min(||\ x-x_{b}\ ||\ )\geq r,\quad x_{b}\in\partial W\}\), where \(\partial W\) is the boundary of \(W\). Then, an unbiased estimator for the empty space function \(F\)[12] is given by \[\hat{F}(r)=\frac{1}{t}\sum\limits_{i=1}^{t}\mathbf{1}\{min(||\ y_{i}-x\ ||)\leq r,\ x\in X\cap W\}. \tag{4}\] ## 4 Modelling the ENF thinning process Thinned point processes provide a class of models for point patterns that are caused by random mortality. A thinning operation defines a rule which determines which points in a point process \(X\) should be deleted, to obtain a thinned point process \(X_{thin}\). Thinning operations can be divided into the following three different types [12]: * _Independent \(p\)-thinning:_ In \(p\)-thinning each point in the point process \(X\) is deleted with constant probability \(1-p\), \(p\in[0,1]\), independently of its location and on the other points in \(X\). The parameter \(p\) is called the "retention probability". * _Independent \(\pi(x)\)-thinning:_ A generalisation of the \(p\)-thinning is the \(\pi(x)\)-thinning. In \(\pi(x)\)-thinning the retention probability depends on the location \(x\) of the point, that is for all \(x\in X\), the deterministic function \(\pi(x)\) gives the probability that \(x\in X_{thin}\). As \(p\)-thinning, \(\pi(x)\) thinning is statistically independent, that is the deletion or non-deletion of any particular point does not depend on the operation on the other points. * _Dependent thinning:_ More general thinning strategies can be constructed if we let the retention probability to depend on the other points in the point process \(X\), that is for every point \(x\in X\) the retention probability is given by a function \(\pi(x\mid X)\). As neuropathy advances, the number of nerve trees (base points) and end points decreases. Here, we suggest a thinning scheme to describe the biological process behind these changes. It is believed that whole ENF trees die and, in addition, some individual nerve endings may disappear or appear, the latter being caused by the existing nerve fibers branching and creating new end points in order to compensate for the loss of nerves. In addition, the spatial pattern of base and end points becomes more clustered as the neuropathy advances as illustrated in Figure 5. Both the base and end point patterns are clustered as their corresponding centered \(L\) functions ( Equation (1) ) are positive, except at very small distances. Note that the clustering of end points increases from healthy to mild and from mild to moderate patterns but the clustering of base points increases only from healthy to mild. Below, we first illustrate that an independent random thinning scheme, e.g. an independent p-thinning or \(\pi(x)\)-thinning, applied to healthy samples does not result in patterns similar to the mild patterns. 
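(For reference, the two independent thinning operations listed above are straightforward to implement; the sketch below is only illustrative, uses our own function names and placeholder values, and is not the code used in the paper.)

```python
import numpy as np

rng = np.random.default_rng(1)

def p_thin(points, p):
    """Independent p-thinning: every point is retained with the same probability p."""
    points = np.asarray(points)
    return points[rng.random(len(points)) < p]

def pi_thin(points, pi):
    """Independent pi(x)-thinning: point x is retained with probability pi(x)."""
    points = np.asarray(points)
    keep_prob = np.array([pi(x) for x in points])
    return points[rng.random(len(points)) < keep_prob]

# e.g. the null model of Section 4.1 uses p estimated as the ratio of the mean mild and
# healthy intensities; the value below is a placeholder, not an estimate from the ENF data
healthy = rng.uniform([0, 0], [330, 432], size=(150, 2))
thinned = p_thin(healthy, p=0.6)
```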
Then, we propose a dependent thinning strategy, where the probability for a point to be retained depends on the other points in \(X\), and suggest an approximate Bayesian computation approach for the inference of the model. ### Independent random \(p\)-thinning A natural starting point is to investigate whether mild patterns can be constructed by randomly removing either complete nerve trees or individual nerve end points from healthy patterns, i.e. that there is no underlying mechanism that guides the nerve removal. However, if the process \(X\) is stationary, then Ripley's \(K\) and \(L\) functions are invariant under the independent random thinning operation, and therefore summary functions of \(X\) and of the thinned process \(X_{thin}\) are identical. As mild patterns of base and end points are more clustered than the healthy patterns, we expect the independent random thinning to be unsuitable for capturing the spatial structure of the mild diabetic patterns. To confirm this, we chose as our null models two different independent random \(p\)-thinning models and applied them to the healthy base and end point patterns. For each model this was performed by (i) estimating the probability \(p\) as the ratio between the corresponding mean mild group and mean healthy group intensities \(\bar{\lambda}_{M}\) and \(\bar{\lambda}_{H}\), and (ii) by randomly and independently removing either end points or base points together with all the connected end points with probability \(1-p\). Then, we constructed 95% global envelopes (see Appendix) based on 2500 independent summary curves obtained via simulations from an independent thinning model with the estimated retention probability \(\hat{p}\) (see Figure 4). Since the data curves (in red) fall completely outside the envelopes, we can conclude that, as expected, the thinned normal patterns fail to capture the structure present in the mild patterns. ### Dependent thinning It can be seen in Figure 5 that the mild base and end point patterns are more clustered than the corresponding healthy patterns. Therefore, base/end points should be removed from healthy patterns such that the resulting patterns are more clustered. Below, we suggest a parametric thinning strategy, where isolated nerve trees are more likely to be removed than non-isolated ones. Figure 4: Group-wise centred \(L\) functions based on the data (red, solid) for the mild patterns with 95% global envelopes (grey) based on independent random p-thinning of end points (left) or base points (right) of the healthy patterns. Let \(B^{M}\) and \(E^{M}\) denote the base and end point patterns for the targeted mild diabetic sample observed in \(W\) and \(n_{B}\) and \(n_{E}\) be the numbers of points in \(B^{M}\) and \(E^{M}\), respectively. Now, let \(B^{H}=\{y_{j},m(y_{j})\}\) and \(E^{H}=\{x_{i},m(x_{i})\}\), with \(j=1,\ldots,n_{B}^{\prime}\) and \(i=1,\ldots,n_{E}^{\prime}\), denote the marked base and end point patterns for a healthy sample, with mark \(m(y_{j})\) giving the Euclidean distance to the closest other base point \(y_{k}\), \(k\neq j\), with \(n_{B}^{\prime}\) and \(n_{E}^{\prime}\) being the numbers of points in \(B^{H}\) and \(E^{H}\), respectively. We thin the pattern \(B^{H}\) to exactly \(n_{B}\) base points according to an iterative thinning scheme (see Algorithm 1), with \(\pi(y_{j})=f(m(y_{j});\theta)\), where \(\theta\) is a scale parameter and \(f(\cdot;\theta)\) is given by \[f(m;\theta)\propto e^{-\theta^{2}m^{2}}. 
\tag{5}\] By definition, \(f(\cdot;\theta)\) favors the removal of isolated points, i.e. points with large marks \(m\), since the retention probabilities \(\pi(y_{j})\) decrease with increasing distance \(m\) (see Figure 6). Moreover, the removal probabilities are proportional to \(1-e^{-\theta^{2}m^{2}}\), and therefore, the larger the value of \(\theta\), the closer this thinning strategy is to independent thinning. For each \(y_{j}\) that is removed, we remove all the end points connected to it. The resulting base and end point patterns are denoted by \(y^{*}\) and \(x^{*}\), respectively. Figure 5: Group-wise pooled \(L(r)-r\) functions, for the end points (left) and base points (right) of the healthy (blue), mild (green) and moderate (black) groups with 95% pointwise bootstrap envelopes (dashed lines). The red dashed line corresponds to the complete spatial randomness. For a mild diabetic sample with \(n_{B}\) base points, the aforementioned thinning model is applied to all healthy patterns that have at least \(n_{B}+5\) base points. The number of such healthy patterns for a mild pattern \(a\in\{1,...,28\}\) is denoted by \(N_{a}\). Hence for each mild diabetic sample \(a\), \(N_{a}\) thinned replicates with exactly \(n_{B}\) base points are constructed. Then, these \(\prod_{a=1}^{28}N_{a}\) thinnings are used to construct group-wise \(L\) function estimates. Throughout the remainder of this paper, this model will be denoted as \(\mathcal{M}(\theta)=\mathcal{M}(\theta\mid B^{H},n_{B})\), with parameter \(\theta\), and is detailed in Algorithm 1. ``` Input: Healthy basepoint pattern \(B^{H}=\{y,m(y)\}\) and corresponding endpoint pattern \(E^{H}=\{x,m(x)\}\), parameter \(\theta\), desired number of basepoints \(n_{B}\). Output: Simulated mild diabetic basepoint pattern \(y^{*}\) and corresponding endpoint pattern \(x^{*}\). repeat Let \(I=\{1,\ldots,\mid y\mid\}\) be an index set. For \(y_{j}\in y\), \(j\in I\), calculate \(m_{j}=m(y_{j})>0\) For \(y_{j}\in y\), \(j\in I\), calculate \(f_{j}=\frac{1-e^{-\theta^{2}m_{j}^{2}}}{\sum_{k}1-e^{-\theta^{2}m_{k}^{2}}}\) Sample with replacement an index \(l\in I\) using the weights \(f_{j}\), \(j\in I\). Remove \(y_{l}\) from \(y\). Call \(y^{*}\) the resulting basepoint pattern. From the endpoint pattern \(x\), remove all points that are connected to \(y_{l}\). Call \(x^{*}\) the resulting endpoint pattern. Update the basepoint pattern \(y=y^{*}\) and the endpoint pattern \(x=x^{*}\) until\(|B^{H}|=n_{B}\) ``` **Algorithm 1** Dependent thinning model \(\mathcal{M}(\theta\mid B^{H},n_{B})\) ### Inference using approximate Bayesian computation To infer plausible values of the scale parameter \(\theta\) controlling the retention probabilities, we used approximate Bayesian computation (ABC) [31, 18]. This is a family of algorithms suitable for Bayesian Figure 6: The density function defined in Equation (5) for different values of the parameter \(\theta\). inference when the likelihood function associated to a statistical model \(\mathcal{M}(\theta)\) is unavailable in closed form, or is computationally too expensive to approximate, but given a parameter vector \(\theta\), it is possible to simulate artificial data from \(\mathcal{M}(\theta)\). The simplest ABC method is the acceptance-rejection sampling [28], that, for given data \(y\) and (vector of) summary statistics thereof \(S(y)\), samples from an approximation \(P_{\epsilon}(\theta\mid S(y))\) of the posterior distribution \(P(\theta\mid S(y))\). 
This is performed by (i) proposing a parameter \(\theta^{*}\sim P(\theta)\) sampled from its prior distribution \(P(\theta)\); (ii) conditionally on \(\theta^{*}\), simulating an artificial dataset \(y^{*}\) as \(\mathcal{M}(\theta^{*})\to y^{*}\), to be read as the output of a "run" of model \(\mathcal{M}(\theta^{*})\); (iii) reducing both \(y^{*}\) and \(y\) to a low-dimensional set of summary statistics \(S(y^{*})\) and \(S(y)\), respectively, and evaluating their proximity using some distance (e.g. Euclidean) \(\|S(y^{*})-S(y)\|\); and finally, (iv) retaining \(\theta^{*}\) if \(\|S(y^{*})-S(y)\|<\epsilon\), for some small \(\epsilon>0\), and rejecting it otherwise. The procedure (i)-(iv) is iterated until \(N\) parameter values have been accepted. A pseudocode for the ABC rejection sampler is given in Algorithm 2 below. Each accepted parameter is a draw from the approximate posterior \[P_{\epsilon}(\theta\mid S(y))\propto P(\theta)\int\mathbf{1}_{\|s^{*}-s\|<\epsilon}P(s^{*}\mid\theta)\,ds^{*},\] where in the integrand we have used the shorthand notations \(s^{*}=S(y^{*})\) and \(s=S(y)\). This algorithm is computationally inefficient when the posterior is very dissimilar to the prior, resulting in many rejections. Instead of fixing \(\epsilon\), we can simulate a large number of \(\theta\) values and choose the most appropriate of these as described below. More sophisticated ABC methods that take into account information about previously accepted draws for \(\theta\) have been suggested, both in an MCMC framework [19, 27] and as sequential Monte Carlo algorithms [32, 3, 6]. However, as our model includes only the scaling parameter \(\theta\), the simple ABC rejection method described above was found to be sufficient. ``` Input: prior \(P(\theta)\), model \(\mathcal{M}(\theta)\), summaries \(S(\cdot)\), threshold \(\epsilon>0\), positive integer \(N\). Output: posterior draws \((\theta_{1},...,\theta_{N})\). for \(i\gets 1,...,N\) do repeat Draw from prior \(\theta^{*}\sim P(\theta)\) Simulate \(\mathcal{M}(\theta^{*})\to y^{*}\) Compute \(S(y^{*})\) until \(\|S(y^{*})-S(y)\|<\epsilon\) \(\theta_{i}\leftarrow\theta^{*}\) endfor ``` **Algorithm 2** ABC rejection sampler Notice that \(P_{\epsilon}(\theta\mid S(y))\) coincides with \(P_{\epsilon}(\theta\mid y)\) when \(S(\cdot)\) is a sufficient statistic for \(\theta\). On the other hand, sufficient statistics are generally unavailable and therefore, in practice, ABC returns approximate inference even in the limit \(\epsilon\to 0\). It is therefore crucial to construct appropriate ("informative") summary statistics that are able to retain information about \(\theta\). As a rule of thumb, it is suggested that the length of the vector \(S(y)\) (which is the same as the length of \(S(y^{*})\)) should be the same as the length of \(\theta\) [9]. In our case, as we only have the scaling parameter \(\theta\) in Equation (5), we construct a single summary statistic. The \(K\) or \(L\) function could be chosen as the summary statistic in the ABC algorithm, but since the centred \(L\) function will be used to evaluate the goodness-of-fit of the thinning model, we chose to use the empty space function \(F_{y}\) instead. However, we did not use the entire function; instead, we considered the summary statistic previously used in [17]. In particular, we used \(s=S(y)=\min(\{r:\hat{F}_{y}(r)=0.3\})\), where \(\hat{F}_{y}(r)\) is the estimator given in Equation (4).
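To make Algorithms 1 and 2 concrete, the following is a minimal Python sketch of the dependent thinning model and the \(\epsilon\)-threshold ABC rejection sampler. It is not the authors' implementation (the paper uses the R `abc` package and the edge-corrected estimator of Equation (4)): the empty space function is approximated here on a regular grid without edge correction, and the unit-square window, the end-point bookkeeping via an `end_parent` index array, and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

def nearest_neighbour_marks(base):
    """m(y_j): distance from each base point to the closest other base point."""
    d = np.linalg.norm(base[:, None, :] - base[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def dependent_thinning(base, ends, end_parent, theta, n_target, rng):
    """Algorithm 1: iteratively remove whole nerve trees (a base point plus its end
    points) until n_target base points remain; removal weights are proportional to
    1 - exp(-theta^2 m^2), so isolated trees are removed preferentially."""
    alive = np.arange(len(base))                 # indices of surviving base points
    keep_end = np.ones(len(ends), dtype=bool)    # surviving end points
    while len(alive) > n_target:
        m = nearest_neighbour_marks(base[alive])
        w = 1.0 - np.exp(-(theta ** 2) * m ** 2)
        w = w / w.sum()
        drop = rng.choice(len(alive), p=w)       # sample one tree to remove
        keep_end &= (end_parent != alive[drop])  # remove its end points too
        alive = np.delete(alive, drop)
    return base[alive], ends[keep_end]

def summary_stat(pts, window=(0.0, 1.0, 0.0, 1.0), q=0.3, n_grid=64):
    """S(y) = smallest r with F(r) = q, using a grid-based empty space function
    estimate (no edge correction, unlike Equation (4))."""
    xmin, xmax, ymin, ymax = window
    gx, gy = np.meshgrid(np.linspace(xmin, xmax, n_grid),
                         np.linspace(ymin, ymax, n_grid))
    grid = np.c_[gx.ravel(), gy.ravel()]
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1).min(axis=1)
    return np.quantile(d, q)

def abc_rejection(obs_stat, base, ends, end_parent, n_target, eps, n_accept, rng):
    """Algorithm 2: propose theta* from the Exponential(10) prior, thin, and accept
    when |S(y*) - S(y)| < eps, until n_accept draws have been accepted."""
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.exponential(scale=0.1)       # Exponential(10) prior (mean 0.1)
        b_thin, _ = dependent_thinning(base, ends, end_parent, theta, n_target, rng)
        if abs(summary_stat(b_thin) - obs_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)
```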
The summary statistic for the observed data is \(s=S(y)=\min(\{r:\hat{F}_{y}(r)=0.3\})\), where \(\hat{F}_{y}(r)\) is the estimator given in Equation (4) and \(y\) is the observed mild diabetic basepoint pattern. Similarly, for generic simulated mild base point patterns \(y^{*}\) obtained using \(\mathcal{M}(\theta^{*}\mid B^{H},n_{B})\), as defined in Section 4.2, we computed the summary statistics \(s^{*}=S(y^{*})=\min(\{r:\hat{F}_{y^{*}}(r)=0.3\})\). Notice that, even though \(\mathcal{M}(\theta^{*}\mid B^{H},n_{B})\) also generates a simulated endpoint pattern \(x^{*}\), the inference is based solely on the simulated and empirical mild basepoint patterns \(y^{*}\) and \(y\). ### ABC inference using simulated data A simulation study was conducted to assess the performance of the inference method. For this purpose, healthy data were simulated from a Matern cluster process using parameters estimated from the data with the minimum contrast method [12]. The simulated parent pattern represented the base points, and the simulated daughter pattern represented the end points. Then, the proposed dependent thinning was applied to the simulated pattern for different chosen values of \(\theta\), and the corresponding realisations were used to obtain the empirical summary statistics used in the ABC method. Each pattern was thinned such that \(n_{B}=14\) parent points (and their daughter points) remained. The posterior distributions for \(\theta\) are displayed in Figure 7 together with an exponential prior \(P(\theta)=\text{Exponential}(10)\) for \(\theta\). The true values of the data-generating \(\theta\) are included for comparison. We observe that the true parameter value \(\theta^{o}\) is well identified when \(\theta^{o}<0.1\); for larger values the posterior mode remains close to the true value, but the posterior uncertainty increases, i.e. as \(\theta^{o}\) grows it becomes progressively more challenging to identify. This is because, as \(\theta^{o}\) increases, the dependent thinning scheme approaches independent random thinning (see Figure 6), and as a result the influence of the parameter on the realisations diminishes. Inference results were obtained using the "reference table" version of the ABC rejection algorithm, which does not require the threshold \(\epsilon\) to be fixed in advance, and proceeds as follows: (i) we simulated \(1,330,000\) parameters independently from the prior Exponential(10), and conditionally on these draws we simulated the corresponding \(1,330,000\) data sets \(y^{*}=\mathcal{M}(\theta^{*})\); (ii) we calculated \(S(y^{*})\) on each of these data sets and \(S(y)\) for the observed data; and (iii) we accepted those \(\theta^{*}\)'s for which the corresponding distances \(\|S(y)-S(y^{*})\|\) were smaller than the \(0.1\)-th percentile of all \(1,330,000\) ABC distances. We used the abc function in the R abc package [5] to carry out the computations. In the next section we consider inference on real ENF data. ## 5 Results In this section, the dependent thinning model is fitted to the ENF data and the goodness-of-fit of the model is investigated by comparing the thinned and target patterns with respect to several spatial and non-spatial summary statistics. We first compare the structure in the thinned healthy and mild patterns, and then in the thinned mild and moderate patterns.
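Before turning to the group comparisons, the simulation-study setup and the "reference table" acceptance rule described above can be sketched in the same style. This is again only an illustration that reuses the helper functions from the previous listing; the cluster-process parameters, the number of simulations, and the acceptance fraction below are placeholders, not the values estimated from the ENF data.

```python
def simulate_matern_cluster(kappa, mu, radius, window=(0.0, 1.0, 0.0, 1.0), rng=None):
    """Matern cluster process on the window: parents ~ Poisson(kappa * area), each with
    Poisson(mu) daughters uniform in a disc of the given radius. Parents stand in for
    base points and daughters for end points, as in the simulation study."""
    xmin, xmax, ymin, ymax = window
    n_par = rng.poisson(kappa * (xmax - xmin) * (ymax - ymin))
    parents = np.c_[rng.uniform(xmin, xmax, n_par), rng.uniform(ymin, ymax, n_par)]
    kids, parent_id = [np.empty((0, 2))], [np.empty(0, dtype=int)]
    for j, p in enumerate(parents):
        n_kid = rng.poisson(mu)
        ang = rng.uniform(0.0, 2.0 * np.pi, n_kid)
        rad = radius * np.sqrt(rng.uniform(0.0, 1.0, n_kid))
        kids.append(p + np.c_[rad * np.cos(ang), rad * np.sin(ang)])
        parent_id.append(np.full(n_kid, j))
    return parents, np.concatenate(kids), np.concatenate(parent_id)

def abc_reference_table(obs_stat, base, ends, end_parent, n_target, n_sims, keep_frac, rng):
    """Reference-table ABC: simulate n_sims prior draws and keep those whose distances
    |S(y*) - S(y)| fall below the keep_frac quantile (the paper keeps the smallest
    0.1th percentile of 1,330,000 distances)."""
    thetas = rng.exponential(scale=0.1, size=n_sims)
    dists = np.empty(n_sims)
    for i, th in enumerate(thetas):
        b_thin, _ = dependent_thinning(base, ends, end_parent, th, n_target, rng)
        dists[i] = abs(summary_stat(b_thin) - obs_stat)
    return thetas[dists <= np.quantile(dists, keep_frac)]

# Illustrative run: simulate a 'healthy' pattern, thin it with a known theta to create
# a synthetic 'mild' target, then try to recover theta from its summary statistic.
rng = np.random.default_rng(1)
base, ends, end_parent = simulate_matern_cluster(kappa=40, mu=5, radius=0.05, rng=rng)
target, _ = dependent_thinning(base, ends, end_parent, theta=0.05, n_target=14, rng=rng)
posterior = abc_reference_table(summary_stat(target), base, ends, end_parent,
                                n_target=14, n_sims=5000, keep_frac=0.01, rng=rng)
```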
### _Healthy vs Mild_ We applied the thinning strategy introduced in Section 4.2 and removed whole nerve trees (base points and end points connected to it) from the healthy patterns. The posterior distributions of \(\theta\) parameter based on the thinned healthy patterns are displayed for each mild diabetic pattern in Figure 8. As the model favors the removal of isolated trees if the \(\theta\) is small, and corresponds to independent thinning when \(\theta\) is large, an Exponential(10) prior was chosen for \(\theta\). Initially, we used a uniform \begin{table} \begin{tabular}{c|c|c} True value of \(\theta\) & Median & \(CI_{\theta}\) \\ \hline 0.02 & 0.028 & [ 0.011, 0.195 ] \\ 0.05 & 0.038 & [ 0.011, 0.211 ] \\ 0.10 & 0.119 & [ 0.037, 0.384 ] \\ 0.15 & 0.150 & [ 0.046, 0.404 ] \\ \end{tabular} \end{table} Table 1: Posterior median and 95% credibility intervals \(CI_{\theta}\) for the parameter \(\theta\) in the simulation study, where the target patterns have 14 base points and the true value of \(\theta\) varies between 0.02 and 0.15. Figure 7: Histograms of ABC posterior draws for \(\theta\) from a simulation study. The prior density (solid black line) is an Exponential(10) truncated to values larger than 0.01 and the true values of \(\theta\) (solid red lines) are also reported on the panel headings. prior which gave posterior distributions with large variance. We believe that this choice made the ABC method more efficient, which resulted in better inference. For several mild diabetic samples, the bulk of the posteriors \(P_{\epsilon}(\theta\mid S(y))\) is located around very small \(\theta\) values indicating that, on a typical healthy pattern, isolated nerve trees are favored to be removed from the healthy patterns in order to obtain patterns similar to these mild patterns. On the other hand, for some mild diabetic samples, the posterior \(P_{\epsilon}(\theta\mid S(y))\) is centred at "large" values, indicating that randomly thinning the healthy patterns is sufficient to capture the structure in the targeted mild diabetic sample. The latter patterns contained rather large number of nerve trees indicating that the neuropathy is in an early stage, and hence cannot be detected from the nerve patterns yet. Also for a few mild diabetic samples, the posterior \(P_{\epsilon}(\theta\mid S(y))\) coincides with the prior. In particular, the inference quality is low for the patterns with small number of nerve trees or for the patterns where most of the nerve trees are located close to the edge of the observation window. The results regarding the spatial structure of the end and base points of the thinned healthy patterns are presented in Figure 9. The proposed thinning scheme creates patterns that capture both the end point and base point spatial structure very well as the 95% global envelopes completely cover the empirical centered \(L\) curves. The envelopes are based on 2,500 simulations from the posterior predictive distribution of the thinning model. In other words, for each mild diabetic neuropathy sample, we simulate data \(y^{*}\) using \(\theta^{*}\), with \(\theta^{*}\) sampled from \(P_{\epsilon}(\theta\mid S(y))\), which are then used to calculate the mean \(L(r)-r\) for the simulated mild diabetic neuropathy group. Notice that each of the 2500 simulations is generated by selecting one of the approximately 1300 posterior draws available for Figure 8: ENF data: histograms of ABC posterior draws for \(\theta\) for each mild diabetic sample obtained by thinning healthy patterns. 
The prior densities are plotted with solid black lines. Each column corresponds to one subject (total 8 subjects), and each row corresponds to different samples from one subject. The number of samples for each subject are varying from 2 to 4. each mild diabetic sample and one of the 112 healthy diabetic samples at random. A pseudocode for this procedure is given in Algorithm 3. ``` Input: Basepoint and endpoint healthy patterns \(\{B^{H},E^{H}\}_{j}\) with \(j=1,\ldots,112\), Number of basepoints \(n_{B}^{i}\) in the mild diabetic samples \(i=1,\ldots,28\), Posterior sample \(\theta^{i}=(\theta_{1}^{i},\ldots,\theta_{N}^{i})\) for the mild diabetic samples \(i=1,\ldots,28\), Summary function \(T(\cdot)\). Output: 95% global envelopes constructed from 2500 simulations from the posterior predictive distribution. for\(s\gets 1,...,2500\)do for\(i\gets 1,...,28\)do Sample with replacement a value \(\theta\) from \(\theta^{i}\) Sample with replacement an index \(j\in\{1,\ldots,112\}\) Simulate \(\mathcal{M}(\theta\mid B_{j}^{H},n_{B}^{i})\rightarrow(y^{*},x^{*})\) Compute \(T_{i}=T(y^{*},x^{*})\) endfor Calculate \(\bar{T}_{g}^{s}\) using \((T_{1},\ldots,T_{28})\) as described in Equation (2) endfor Construct 95% global envelopes (Appendix B) using \((\bar{T}_{g}^{1},\ldots,\bar{T}_{g}^{2500})\) ``` **Algorithm 3** Posterior predictive bands Moreover, for each nerve tree we calculated the area of its reactive territory - convex hull determined by the base and its end points - which was then attached to each base point as a mark. To deal with zero areas, i.e. nerve trees with only one end point, the length between the end and the base point was used instead. This length is much smaller than a typical area of a reactive territory. Figure 10 illustrates the mark correlation function with 95% global envelopes constructed using simulations from the posterior predictive distribution as explained above. The thinning model captures even the mark correlation structure between the sizes of the reactive territories well. We also computed some non-spatial summary statistics, namely the cluster size distribution, i.e. the cumulative distribution of the number of end points per nerve tree, and of the total area of the reactive territories to evaluate the dependent thinning model, illustrated in Figure 11. The model seems to capture even these characteristics well. Figure 10: Group-wise mark correlation function for the base points marked by the areas of their reactive territories with 95% global envelopes constructed from 2500 simulations from the posterior predictive distribution of the thinning model. The solid curve is the mark correlation function estimated from the mild data. Figure 9: Group-wise centered \(L\) functions with 95% global envelopes for the end points (left) and base points (right) constructed from 2500 simulations from the posterior predictive distribution of the thinning model. The solid curves are the centered \(L\) functions estimated from the mild data. ### _Mild vs Moderate_ When applied to the healthy patterns, the suggested dependent thinning approach seems to be able to produce patterns similar to the empirical mild patterns. A natural question is whether we can obtain patterns similar to the observed moderate patterns by thinning mild patterns in a similar manner. 
It can be seen in Figure 5 that the end points in the moderate patterns are more clustered than in the mild patterns but since there is not such a big difference in clustering of the base points in the two groups, independent random thinning could be appropriate for the base points. Therefore, we randomly thinned the mild patterns by removing nerve trees to the number of nerve trees, i.e. base points, in the observed moderate patterns. Note that only the base points, not the end points, are randomly thinned. No parameters need to be estimated in this case as nerve trees are randomly thinned with equal probabilities. The group-wise centered \(L\) functions with 95% global envelopes for the base and end point patterns after independent thinning of nerve trees are given in Figure 12. We observe that the empirical centered \(L\) functions lie within the envelopes for the base point patterns indicating a good fit of the model. Even the overall structure of the end point patterns is captured quite well by the model. However, the mark correlation of the sizes of the reactive territories is not completely caught by the independent thinning model, see Figure 13. On the other hand, as seen in Figure 14, the end point cluster size distribution and the distribution of the area of the reactive territories are quite well described by the model even though in the former case the moderate data based distribution is very close to the upper boundary of the interval. Figure 11: Cumulative distribution functions of the cluster size (left) and the total area of reactive territories (right) with 95% global envelopes constructed from 2500 simulations from the posterior predictive distribution of the model. The solid curves are the corresponding cumulative distribution functions estimated from the mild data. Figure 12: Group-wise centered \(L\) functions with 95% global envelopes for the end points (left) and base points (right) constructed from 2500 simulations from the independent thinning model. The solid curves are the centered \(L\) functions estimated from the moderate data. Figure 13: Group-wise mark correlation function for the base points marked by the areas of their reactive territories with 95% global envelopes constructed from 2500 simulations from the independent random thinning model. The solid curve is the mark correlation function estimated from the moderate data. ## 6 Discussion The biological process that guides the physiological changes in the epidermal nerve fiber structure of the neuropathy was investigated. The effects of varying severity of the underlying neuropathic condition, diabetes, were also analyzed. For this purpose, we treated the ENF samples from the feet of healthy patients, and from patients with mild and moderate diabetic neuropathy, as realisations of spatial point processes occurring in response to progressive pathologic severity. As the spatial intensity of the ENFs decreases with progression of neuropathy, the mild diabetic neuropathy patterns can be considered as spatial thinnings of the healthy patterns, and the moderate diabetic neuropathy patterns as spatial thinnings of the mild patterns. Therefore, we proposed spatial thinning models for the changes that occur in the ENF structure in neuropathy as the as diabetes becomes more severe. To the best of our knowledge, this is the first study that considers such spatial thinning models to investigate the nerve removal as a result of varying severity of diabetes, and corresponding severity of neuropathy. 
Two spatial thinning models were investigated, an independent random \(p\)-thinning and a dependent thinning scheme. The scale parameter controlling the retention probability in the latter model was estimated by using approximate Bayesian computation (ABC), which is a very flexible methodology that is applicable whenever the likelihood function is unavailable or is computationally expensive to evaluate but it is feasible to simulate from a computer model. We focused first on the nerve removal from the healthy patterns to obtain patterns similar to the observed mild patterns. An independent random \(p\)-thinning seems insufficient in modeling the nerve removal process at this earliest stage of diabetic neuropathy. Therefore, a more complex thinning model that favored the removal of isolated nerves in order to increase the overall clustering in the patterns, was proposed. Measured by the \(L\) function, mark correlation function, and some non-spatial summary statistics, the model was able to describe the change from healthy to mild diabetic neuropathy very well. On the other hand, the independent removal of nerve trees was enough to model the nerve mortality in mild diabetic neuropathy patients to obtain patterns similar to the observed moderate patterns. Our original hypothesis was that first, whole ENF trees die due to the neuropathy and then, some individual nerve endings may disappear or appear. In our study, it was enough to remove entire nerve Figure 14: Cumulative distribution functions of the cluster size (left) and the total area of reactive territories (right) with 95% global envelopes constructed from 2500 simulations from the independent random thinning model. The solid curves are the corresponding cumulative distribution functions estimated from the moderate data. trees and no additional removal or addition of individual nerve endings was needed. However, we can see in Figure 14 that the independent thinning of mild patterns only barely covered the corresponding curve estimated from the moderate patterns and in Figure 13 that the fit of the mark correlation function was not perfect in this case. Therefore, even though the suggested models describe the data sufficiently well, they could still be improved. ## Acknowledgements The authors thank William R. Kennedy's group (University of Minnesota) for blister immunostaining, quantification and morphometry of the ENF data. The authors also thank the Swedish Research Council for financially supporting the project. UP acknowledges funding from the Swedish National Research Council (Vetenskapsradet 2019-03924) and the Chalmers AI Research Centre. ## Data availability statement Unfortunately, we are not able to share the data publicly.
2310.09256
Political claim identification and categorization in a multilingual setting: First experiments
The identification and classification of political claims is an important step in the analysis of political newspaper reports; however, resources for this task are few and far between. This paper explores different strategies for the cross-lingual projection of political claims analysis. We conduct experiments on a German dataset, DebateNet2.0, covering the policy debate sparked by the 2015 refugee crisis. Our evaluation involves two tasks (claim identification and categorization), three languages (German, English, and French) and two methods (machine translation -- the best method in our experiments -- and multilingual embeddings).
Urs Zaberer, Sebastian Padó, Gabriella Lapesa
2023-10-13T17:13:00Z
http://arxiv.org/abs/2310.09256v1
# Political claim identification and categorization in a multilingual setting: ###### Abstract The identification and classification of political claims is an important step in the analysis of political newspaper reports; however, resources for this task are few and far between. This paper explores different strategies for the cross-lingual projection of political claims analysis. We conduct experiments on a German dataset, DebateNet2.0, covering the policy debate sparked by the 2015 refugee crisis. Our evaluation involves two tasks (claim identification and categorization), three languages (German, English, and French) and two methods (machine translation - the best method in our experiments - and multilingual embeddings). ## 1 Introduction The identification of political claims in news is a core step in the analysis of policy debates. _Discourse networks_, whose nodes correspond to claims and the actors who advance them, provide a rich source of information on phenomena such as formation of coalitions (who agrees with whom), shift in salience due to external events (e.g., migration waves making the issues of refugee accommodation more central in a debate), emergence of leadership, and polarization of a discourse Leifeld and Haunss (2012); Koopmans and Statham (1999); Hajer (1993). Political claims are defined as demands, proposals or criticism that are _supported_ or _opposed_ by an _actor_ (a person or a group of persons). Political claims generally form a call to action: they refer to something that should (or should not) be done in a policy domain (e.g., assigning empty flats to refugees). Thus, political claims are related to, but add a new perspective on, the Argument Mining question of what claims are, and what are the best strategies for modeling them across domains Daxenberger et al. (2017); Schaefer et al. (2022). The potential and challenges of the NLP support to political claim analysis have been thoroughly explored in the recent years in a monolingual setting Chen et al. (2020); Dayanik et al. (2022); however, there are very few resources available in multilingual or crosslingual settings. Thus, there is little work on the comparison of policy debates in different countries, either completely automatic, or semi-automatic (supporting the inductive development of annotation guidelines in a new language). This paper reports on cross-lingual pilot experiments on two tasks (claim identification and categorization), comparing two well known approaches to cross-lingual transfer in NLP in general, and argument mining in particular: machine translation and multilingual embeddings Eger et al. (2018); Toledo-Ronen et al. (2020). We first work with a reference dataset for the German migration policy debate Blokker et al. (2023), and on its projection to English and French, before moving on to a newly annotated English test set on the same topic. Machine Translation turns out to be the best cross-lingual projection strategy. ## 2 Experimental Setting ### Tasks This work focusses on two constituent tasks of political claim analysis Pado et al. (2019). Our first task is **claim identification**, performed as a binary classification task at the sentence level. Our second task is **claim categorization**, phrased as a multi-label classification task at the sentence level.1 Footnote 1: For our evaluation in the claim categorization task, we consider all claims in the manually annotated gold standard. ### Data We carry out two experiments. 
In the first one, we use a German corpus, DebateNet, which we automatically translate into English and French: this represents a cross-lingual transfer within the same media outlet. In the second experiment, we transfer our DebateNet models to an original English dataset based on the _Guardian_ newspaper. DebateNet 2.0.Blokker et al. (2023) is a dataset2 targeting the German public debate on migration policies in the context of the 2015 so-called'refugee crisis'. It is based on 700 articles from the German quality newspaper _die Tageszeitung (taz)_ with a total of 16402 sentences. Footnote 2: [http://hdl.handle.net/11022/](http://hdl.handle.net/11022/) 1007-0000-0007-DB07-B Political claims are annotated as textual spans, and each claim span is associated with at least one of 110 categories drawn from a theory-based codebook (annotation guidelines). Around 15% of sentences are annotated to contain a claim span. In total, the dataset contains 3442 claim spans corresponding to 4417 claim labels (i.e., each claim span is associated with an average of 1.3 claim categories). Annotations are first proposed by pairs of students of political science, with an inter-coder reliability is \(\kappa=0.59\)(Pado et al., 2019), and then accepted, rejected or merged by domain experts. We randomly split DebateNet into a training, development, and test set with a ratio of 80:10:10. Crucially for our experiments, the 110 fine-grained categories are organized into 8 top-level categories which encode general domains of the migration policy field. In the claim categorization experiments in this paper we focus on the 8 top-level categories. Table 5 in the Appendix shows them with the percentage of claims annotated for each category and illustrative examples. Guardian test setTo compare German news translated into English to actual UK news, we collected an English-language test set of 36 articles from the British quality newspaper Guardian, extracted from the World News section and published in 2015. To make our test set as compatible as possible with _DebateNet2.0_, we look at the five months most represented in _DebateNet2.0_ and within each month sample from articles written in the seven-day spans with the highest frequency of articles in _DebateNet2.0_. Articles were further filtered by keywords (_migrant, refugee, asylum, Germany, Syria, Afghanistan_ and their morphological and syntactic variants) and by the mention of the most salient political actors (politician and parties). The Guardian test set was manually annotated by a native speaker, a MSc-level student in Computational Linguistics, based on the _DebateNet2.0_ guidelines. Claims were identified and assigned to one of the 8 top-level categories described in the previous section. Across the 36 articles with 1347 sentences, the test set contains 82 claim spans which correspond to 101 claim categories (mean of 1.2 categories per span).3 Refer to Table 5 in the Appendix for the distribution of claim categories. Footnote 3: 30 claims, albeit identified by our annotator, could not be classified in any categories of the codebook. 
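To make the two task formulations concrete, the sketch below shows one possible way of deriving the sentence-level targets from span-level claim annotations: a binary label for claim identification and a multi-hot vector over the eight top-level categories for claim categorization. The data structures and most of the category names are illustrative assumptions, not the actual DebateNet 2.0 release format.

```python
from dataclasses import dataclass

# Eight top-level claim categories (names are placeholders; only a few, such as
# migration control, residency, integration and society, are mentioned in the text).
TOP_CATEGORIES = ["migration control", "residency", "integration", "domestic security",
                  "foreign policy", "economy", "society", "procedures"]

@dataclass
class ClaimSpan:
    start: int       # character offsets within the article text
    end: int
    categories: set  # top-level categories assigned to this span (at least one)

def sentence_targets(sentences, spans):
    """sentences: list of (start, end, text) triples; spans: list of ClaimSpan.
    Returns one (text, is_claim, multi_hot) triple per sentence."""
    targets = []
    for s_start, s_end, text in sentences:
        overlapping = [sp for sp in spans if sp.start < s_end and sp.end > s_start]
        is_claim = int(bool(overlapping))                     # claim identification
        cats = set().union(*[sp.categories for sp in overlapping]) if overlapping else set()
        multi_hot = [int(c in cats) for c in TOP_CATEGORIES]  # claim categorization
        targets.append((text, is_claim, multi_hot))
    return targets
```

A fine-tuned Transformer sentence classifier of the kind described in the next section can then be trained on `is_claim` for the identification task and on `multi_hot` (with a per-label sigmoid output) for the categorization task.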
### Methods #### 2.3.1 Projection methods With the German DebateNet2.0 as our starting point, and the goal of testing the feasibility of cross-lingual projection to English and French (as target languages), we compare the two most established projection methods (Eger et al., 2018; Toledo-Ronen et al., 2020): machine translation (to make the modeling task monolingual) and multilingual embeddings (to let the model bridge the language gap implicitly). This yields three experimental conditions: Translate-train:We machine-translate the German training data into the target languages and fine-tune a monolingual target-language model on it, to be evaluated on the target-language test data.4 Footnote 4: We uses the DeepL translator via its web interface on a free trial of the “advanced” plan as of August 2022. Translate-test:We machine-translate the test data into German (as described above) and apply a monolingual German model fine-tuned on the original German data to it. For the DebateNet experiments in Section 3.1, we can only simulate this setting, as we do not have genuine foreign-language test data. We simulate it with a back-translation: first, we machine-translate the German DebateNet test set into the target language (EN/FR); then we translate the simulated EN/FR test sets back into German. It is only on the Guardian test set (Section 4) that we can fully evaluate our models in the translate-test configuration. Multilingual:We employ multilingual embeddings, fine-tune them on the original German data, and apply the resulting classifier on the target language test data, exploiting the model's internal alignment of the source and target languages. For both claim identification and classification, we re-implement standard Transformer-based models from the literature (Dayanik and Pado, 2020). We use BERT as well as its German, French and multilingual versions. Details on the classifier setups for both tasks follow below. #### 2.3.2 Claim identification Translate-train:For English, we select the uncased model (bert-base-uncased) based on its performance on the development set, and we set learning rate to 5e-5 and warm-up steps to 30. The same configuration is used for the German monolingual baseline. For French, we select the base version of CamemBERT, camembert-base, with a learning rate of 4e-5 with 30 warm-up steps. Translate-test:we employ a German BERT model, bert-base-german-cased, fine-tuned on the original German dataset. The hyperparameters are the same as for English translate-train. Multilingual:Based on performance on the development set, we select the cased variant of the multilingual BERT from the Huggingface transformer library, bert-base-multilingual-cased. Training this model requires a lower learning rate of 2.5e-5 and correspondingly more epochs. #### 2.3.3 Claim categorization Translate-train:For the English model, we assess both the cased and uncased versions. Since the uncased one (bert-base-uncased) again performs slightly better, we select it and use a learning rate of 5e-5. Experiments on the corresponding development sets establishes 25 warm-up steps as a reasonable choice for all configurations in Task 2. The French model - the same as for the claim identification task - requires a learning rate of 4e-5. Translate-test:We employ bert-base-german-cased with a learning rate of 4e-5. The same model is also used for the monolingual German baseline model. 
Multilingual:Based on performance on the development set, we select bert-base-multilingual-uncased with a low learning rate of 3e-5 and correspondingly more epochs. ## 3 Experiment 1: Within-outlet cross-lingual transfer ### Claim Identification on DebateNet The left-hand side of Table 1 shows results for the first main experiment, comparing the translate-train, translate-test, and the multilingual embedding approaches to claim identification to a monolingual baseline.5 For comparison, we also run the translate-train and translate-test approaches on the multilingual model (multilingual:en:de and multilingual:de:de-en). The language labels de-en and de-fr stand for German data translated into EN or FR and back-translated into German. Footnote 5: Unless indicated by a dagger \(\dagger\), reported values for all conditions are the averages of two runs to reduce variance. The main contrast of this set of experiments is the one between the translate-train approach and the multilingual embeddings approach with respect to their performance on the target languages (EN/FR). For both target languages, the translate-train approach outperforms the monolingual baseline and the multilingual embedding approach. We ascribe this (small) performance gain to the higher quality of the embeddings available for the target languages: The monolingual English model, bert-base, is trained on a much larger corpus (English Wikipedia and BookCorpus) than bert-base-german, which is only trained on the significantly smaller German Wikipedia. The French model's training corpus is also over ten times larger than the German one. This also means the translation process, albeit not perfect, has not degraded the claim "signal" in the training data. This point is also supported by the results for the "simulated" translate-test approach, which (cf. Section 2.3) can be considered a test of translation quality. Since the performance is in line with the monolingual baseline (de-en) or even slightly superior to it (de-fr)6, the claim signal is preserved \begin{table} \begin{tabular}{l c c c c} \hline \hline Setup & Train & Test & Id & Cat \\ \hline BL (mono) & de & de & 56.2 & **70.5** \\ \hline Translate-train & en & en & **57.3** & 67.8 \\ Translate-train & fr & fr & **57.4** & 69.7 \\ \hline Translate-test & de & de-en & 55.8 & 69.5 \\ Translate-test & de & de-fr & 58.3 & 69.8 \\ \hline Multilingual & de & en & 45.8 & 50.3 \\ Multilingual & de & fr & 51.1 & 51.0 \\ \hline Multilingual\(\dagger\) & de & de-en & 52.0 & 60.0 \\ Multilingual\(\dagger\) & en & de & 55.4 & 64.1 \\ \hline \hline \end{tabular} \end{table} Table 1: DebateNet test set results: F1 scores (positive class for claim identification (ID), macro average for claim categorization (Cat)). BL (mono): monolingual baseline. through the back-translation process. Footnote 1: [https://github.com/tranlate-train/](https://github.com/tranlate-train/) In contrast, the multilingual embeddings perform poorly, below the monolingual baseline. The bottom part of Table 1 shows additional experiments we carried out to better understand this result. We find that a monolingual setup with multilingual embeddings (DE-DE) still performs below the monolingual baseline, but the performance gap is narrower than for the cross-lingual setups (DE-EN and DE-FR). Reverting the direction of the mapping, contrasting the performance of English-German (55.6) vs. 
German-English (45.8), again speaks in favor of the German representations being the weak point - the training data for the English-German multilingual embeddings setup is the same as that of the translate-train approach. The confusion matrix for the best cross-lingual model for English (translate-train), Table 2, shows many fewer false negatives than false positives (i.e., a high precision). Regarding application to the (semi-)automatic extraction of discourse networks, this outcome is complementary to the high-recall approach applied by hauss2020-English to the German annotation in DebateNet, but lends itself to high-precision human-in-the-loop approaches like the one proposed by ein2019-English for argument mining. Error Analysis.The misclassified instances provide some more insight into the model. For instance, we might expect the word "fordern" ("demand", "call for") to frequently appear in claims and therefore lead the model to make a positive prediction. Indeed, in the misclassified instances of the German-French translate-test model, forms of the word "fordern" or "Forderung" are 13 times more likely to be FP than FN even though there are almost twice as many FNs. We can therefore conclude that this word influences the model in the expected way. We bolster these observations with more formal methods: using saliency-based analysis (Simonyan et al., 2014) we can assign each token a relevance for the model's prediction. The results partially confirm this: the token "fordert" gets scores above 0.9 throughout. However, other forms, like the infinitive, receive lower scores, presumably because the 3rd person singular is more highly associated with concrete claiming situations. Saliency scores are highly correlated between models and between languages. E.g., the sentence "Der bayerische Ministerprasident Horst Seehofer begruste die Plane" and its corresponding English version 'Bavaria's prime minister Horst Seehofer welcomed the plans.', are both labeled as claims. In both cases, the highest saliency is assigned to "Plane"/"plans". A systematic comparison of scores among models is however complicated by the differences in tokenizations among embedding models. Alternatively, we can compare instances misclassified by different models. Here, we observe large overlap. On one test run, the multilingual German-French model misclassified 122 out of 1007 test instances, while the monolingual English model misclassified 120 instances. These instances have an overlap of 58% (random assignment, should result in 12% overlap). This suggests that the models struggle with the same instances. A first qualitative inspection at such "difficult" instances has ruled out the impact of proper names, length of sentences as well as the type of involved actors; further analysis in this direction is required. ### Claim Categorization on DebateNet The right-hand side of Table 1 shows the results for the claim categorization task (F1 macro over all classes; Tables 6-9 in the Appendix provide per-category results). Unsurprisingly, this fine-grained task is more challenging for cross-lingual transfer. None of the experimental configurations beats the monolingual baseline. As in claim identification, translate-train outperforms multilingual embeddings. Error analysis.Inspection of sentences shows that many misclassifications arise from misleading local lexical material in the sentences. 
For example, "Die SPD findet dies konnte die Integration unterstutzen" ("The Social Democratic party believes this could support integration") includes the word 'integration' which is a strong cue for the claim category 'integration', which the model predicts. However, the correct category is'residency', as becomes clear from the broader context of the article. Another example is: "Die sollen ja auch in \begin{table} \begin{tabular}{l r r} \hline \hline & Target: yes & Target: no \\ \hline Predicted: yes & 71 & 39 \\ Predicted: no & 75 & 822 \\ \hline \hline \end{tabular} \end{table} Table 2: Claim identification (DebateNet) confusion matrix of the best model for English (translate-train) der Gesellschaft ankommen" ("They must arrive in society after all"), with misleading cue'society' indicating claim category'society' and gold category 'integration'. A saliency analysis, as before, confirmed this pattern: the "red herring" cues consistently receive the highest saliency scores in the sentences. Notably, the error pattern persists in the case of literal translations, but dispears when the translation changes the wording ('mit Sicherheheit' - "with security/certainty" \(\rightarrow\) 'certainly'). ## 4 Experiment 2: Cross-outlet cross-lingual transfer Results on the Guardian test set are shown in Table 3. For claim identification, the translate-train approach outperforms the other approaches, confirming the trend seen on the DebateNet data. For claim categorization, translate-test outperforms translate-train and multilingual embeddings. Both of these results are in line with our findings in Exp. 1. For both tasks, we see a substantial decrease of performance on the Guardian data (-30 points for claim identification, -15 points for claim categorization). Since our previous experiment also used English data, this performance drop cannot be due to cross-lingual differences, but rather to differences between the two outlets, taz and the Guardian. Indeed, we see that a British newspaper is likely to report differently on German domestic affairs than a German newspaper, which leads to differences in claim form and substance: They tend to focus on the internationally most visible actors and report claims on a more coarse-grained level. They also overreport the claim categories most relevant for the British readership: claims migration control account for 22% of all claims in DebateNet but for 34% in the Guardian. In contrast, domestic (German) residency issues make up 14% of the DebateNet claims but only 2% of the Guardian claims. See Table 5 in the Appendix for a detailed breakdown and example claims. Thus, even if the Guardian claims might be structurally easier to recognize, the cross-outlet differences in claim distribution make transferring model representations from DebateNet to the Guardian hard. The confusion matrix for claim identification in Table 4 shows a low-precision scenario, in contrast to the high precision of the cross-lingual within-DebateNet setup. It is interesting to note that claim identification suffers much more (-30 points) than claim categorization (-15 points), indicating that the model of claim topics survives the transfer to another outlet better than the model of what constitutes a claim. ## 5 Conclusion This paper explores different strategies for the cross-lingual projection of political claims analysis from German into English and French. 
Our experiments establish the potential of machine translation for both claim identification and categorization, setting the stage for further investigations on the factors affecting projection performance and on the applicability of cross-lingual transfer for similar analyses. Multilingual embeddings yielded worse results, in line with previous analyses arguing that they attempt to solve a harder (since more open-ended) task than Machine Translation (Pires et al., 2019; Barnes and Klinger, 2019). We find that the language is not the only relevant dimension, though: in fact, the differences in presentation between German and British articles on German affairs go substantially beyond the language gap (Vu et al., 2019). ## Acknowledgements This study was partially funded by Deutsche Forschungsgemeinschaft (DFG) through MARDY (Modeling Argumentation Dynamics) within SPP RATIO and by Bundesministerium fur Bildung und Forschung (BMBF) through E-DELIB (Powering up e-deliberation: towards AI-supported moderation). We are grateful to Brandon Sorensen, who \begin{table} \begin{tabular}{l r r} \hline \hline & Target: yes & Target: no \\ \hline Predicted: yes & 29 & 147 \\ Predicted: no & 83 & 1088 \\ \hline \hline \end{tabular} \end{table} Table 4: Claim identification (Guardian): confusion matrix of the best model for English (translate-train) \begin{table} \begin{tabular}{l r r r r} \hline \hline Setup & Train & Test & Id & Cat \\ \hline translate-train & en & en & **25.5** & 51.0 \\ \hline translate-test & de & de-en & 20.6 & **53.4** \\ \hline multilingual & de & en & 20.0 & 39.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Guardian test set results for claim identification (Id, F1 of positive class) and claim categorization (Cat, macro F1) annotated the Guardian test set. ## Limitations Our main experiment was limited to German, English, and French, three typologically very similar languages. Generalization to more distant languages is presumably harder, but was outside the scope of our study. Our Guardian test set is very small (albeit not significantly smaller than out-of-domain gold sets often gathered for validation purposes), and annotating it was challenging due to the need to apply a codebook developed for the German debate to an English source. We are currently working on improving the size and quality of our test set. While our experiments are reassuring as regards translation quality, we cannot exclude that translation biases may have been introduced in the data. We are also aware that DeepL is not the only option for automatic translation; evaluating different translation methods, however, falls outside the scope of this work. ## Ethical Considerations At the level of datasets and annotations, we employed an existing dataset (DebateNet2.0). Our own annotation contribution (the Guardian test set) was based on publicly available data; moreover, the annotation task was carried out following best practices. The Guardian test set is available upon request. At the modeling level, we use previously defined models that are publicly available; in this sense, our contribution does not raise new ethical questions (e.g. in terms of misuse potential). To the contrary, our focus is on understanding how these models transfer across languages and what biases can potentially arise in this transfer, as shown by our focus on error analysis.
2304.10840
IoT-Based Solution for Paraplegic Sufferer to Send Signals to Physician via Internet
Hospitals and non-profit organizations care for people with paralysis who have lost the use of all or part of their body after a paralytic attack. Because of the loss of motor control, these persons are often unable to communicate their needs: they can neither speak clearly nor use sign language. For such cases, we propose a system that enables a paralyzed person to broadcast a message on an LCD by moving any part of the body that can still move. The method also covers the situation in which the patient cannot be attended to in person, sending the message as an SMS over GSM. The proposed system works by detecting the tilt direction of the body part used for input. As a result, patients can communicate with physicians, therapists, or their loved ones at home or at work over the Internet. Patient-specific data, such as heart rate, must be continuously reported in health centres; the proposed method therefore also tracks the patient's pulse rate and similar parameters, with heart rate assessed by photoplethysmography. The decoded periodic data are transmitted continually via a microcontroller coupled to a transmitting module. The doctor's cabin contains a receiver device that obtains and decodes the data and continuously displays it in a graphical interface on a laptop. As a result, the doctor can monitor and handle multiple cases at once.
L. Srinivasan, D. Selvaraj, D. Dhinakaran, T. P. Anish
2023-04-21T09:32:50Z
http://arxiv.org/abs/2304.10840v1
# IoT-Based Solution for Paraplegic Sufferer to Send Signals to Physician via Internet ###### Abstract We come across hospitals and non-profit organizations that care for people with paralysis who have experienced all or portion of their physique being incapacitated by the paralyzing attack. Due to a lack of motor coordination by their mind, these persons are typically unable to communicate their requirements because they can speak clearly or use sign language. In such a case, we suggest a system that enables a disabled person to move any area of his body capable of moving to broadcast a text on the LCD. This method also addresses the circumstance in which the patient cannot be attended to in person and instead sends an SMS message using GSM. By detecting the user part's tilt direction, our suggested system operates. As a result, patients can communicate with physicians, therapists, or their loved ones at home or work over the web. Case-specific data, such as heart rate, must be continuously reported in health centers. The suggested method tracks the body of the case's pulse rate and other comparable data. For instance, photoplethysmography is used to assess heart rate. The decoded periodic data is transmitted continually via a Microcontroller coupled to a transmitting module. The croaker's cabin contains a receiver device that obtains and deciphers data as well as constantly exhibits it on Graphical interfaces viewable on the laptop. As a result, the croaker can monitor and handle multiple situations at once. The program also allows us to check the data collected. If any implied anomalies or changes in a case's status, a burglar alarm linked to the system will provide an audible alert message that a specific room's case needs immediate attention. The GSM modem attached to the device also transmits a signal to each of the croakers within this unit with the room number of the instance, which requires prompt attention in the event that the croaker isn't in this chamber. To solve this problem, we created a technique that enables such individuals to communicate with elementary motions. This gadget may be made to fit within a person's clothing or be put on their finger. - Android operating, Bluetooth, Health Monitoring, Wireless, GSM modem 20232022022032022022022022022022022022022022022022022022022022022022202220220222022022202220222022202220222022202220222022220222202220222202222022220222202222022222022222202 The inability to purposefully and independently operate your muscles is known as immobility. It may be either transitory or ongoing. The most frequent causes are complicated diseases, spinal cord damage, and stroke. Paresis, a severe disability, is a condition in which all mobility is lost completely. Most frequently, disruption to the neurological system, particularly the spinal cord, results in paralysis [7-9]. The nervous system is damaged or afflicted with a condition that results in paralysis, which implies that the nerve impulses going to the limbs are disrupted. Therapy aims to assist a person in adjusting to life with paralysis by keeping individuals as autonomous as practicable, even though there are cutting-edge methods for healing or managing paralysis patients. We need help with the size and cost of the machinery currently built for these gadgets. They appear restricted to medical usage and are not usable at the care facility or at their convenience. 
Our objective is to create a gadget that can retrain a patient's mobility while allowing individuals to use it independently and keeping the cost low sufficiently so they would pay for it out of pocket [10]. This technology also handles the circumstance in which no one is available to assist the patient, delivering a message over GSM of what he wishes to say via SMS. Our suggested system operates by detecting that the user part is actually tilted. This device's operation is demonstrated by keeping the knuckles of the mobile arm. To communicate a message, the user only needs to tilt the gadget at a specific angle. The message is conveyed differently depending on the way the gadget is tilted. Here, the characteristics of mobility are measured using an altimeter. This information is then transmitted to the microcomputer [11-14]. The microprocessor analyses the data and presents the specific message per the input received. The corresponding information is now shown on the LCD screen by the microprocessor. As soon as the gyroscope sends a motion indication, it also emits a buzzing and a text. The patient can opt to rotate the gadget for an additional period, which could cause an SMS to be transmitted via a Mobile phone to the authorized caregiver of the patient with the information which the patient wants to express if there is no one available to attend towards the up these issues on the LCD. In this approach, the Autonomous Paralysis Health Care System regulates the person's ability to take care of themselves, ensuring prompt attention and, in turn, the patient's overall health. Patients with paralysis can benefit from this device [50]. Whenever they require assistance, they may ask by making certain gestures. They can live in this environment like any other person using this motion detection. This technology is sturdy, lightweight, and affordable. So that they can purchase debt-free, paralyzed individuals will be able to move thanks to this device independently. Just thought this task's nature and form differ from individual to individual does not mean that it is insignificant. As technology leaders, we are responsible for creating new technology to assist patients with paralysis. Various options are thus needed to help these patients. The microprocessor can be used to build this system shortly. The chip contains every component, so because we can. The paralyzed patient can use this chip with ease. Avoid using sleeves and wristbands. However, there seems to be one drawback that will materialize: cost increases [16]. ### Objective Medical organizations would have been forced to reduce nursing personnel for patients due to rising labor costs. The goal of our initiative is to provide fresh innovations for use in routine nursing home care. This study presents a safe IOT-based system for tracking and facilitating paralyzed patients' healthcare. It enables us to manage clinical outcomes without a nurse. With individuals who have paralysis, there are many different types of devices available to keep their bodies functioning normally instead of conversing. For example, "Acurpo Care System ACS Entryway Gym Rope Workout for Elbow" and "Circular Palm and Thumb Workout to Improve Fingers." Numerous more kinds of equipment are also available for paraplegic patients undergoing physiotherapy. Physicians and nurses will be present to communicate with patients and recognize their requirements and emergencies, not technology or systems. 
In the event that a victim has a need when in a crisis, a physician cannot always be with them; otherwise, the scenario becomes dangerous. Therefore, various approaches are required to support these individuals, and it is our responsibility as aspiring engineers to create new technology to assist those who are paralyzed. Thus, individuals with paralytic in Grades A and B will benefit significantly from this gear, which combines both hardware and software. It will be constructive for communicating both emergency cases and fundamental needs. It is also affordable to buy and simple to manage. IoT-based technology can be used by both educated and uneducated folks. The suggested technique enables basic hand movements for paralyzed patients to communicate. Each sensor is connected to a certain finger because of the way the inertial sensors are positioned on the gloves. These accelerometers are attached to the Atmega 32B-powered Arduino Board with the aid of connecting cables. When the accelerometer's orientation is altered, the baseline or constant reading of the sensor alters. The pre-coded phrases, like "contact the physician" or "critical," are shown following this value. The buzzer triggers the alarm when the text is shown to inform the patient's carers. ### _Scope_ One of the more frequent problems brought on by a stroke is paralysis or the incapacity of a limb to move. Shortly after a stroke, up to 9 out of 10 patients with stroke experience some level of paralysis. Even years after a brain, stroke patients can recover spontaneous mobility with ongoing rehabilitation and treatment. Our concept is made with the purpose of someone who has had a mild hemorrhage or nerve damage. We've created a system that allows someone who has had a cerebrovascular disease or partial paralysis to converse with someone in case of emergencies by simply moving his head. This system lets the individual interact with someone without needing assistance with basic tasks like having turned on the illumination or adapting the bed. He need not even communicate to request quick assistance. Additionally, we will gather real-time data on the patient's condition metrics and deliver an alert signal to the victim's family. Since the doctor can easily watch the patient's health progress over time, this knowledge can be constructive for the practitioner in determining any assumptions and providing the patient with the appropriate medical assistance. One of the main causes of illness and fatality in adults, hemorrhage results in 17.3 million yearly fatalities. Upwards of 1.56 billion dementia patients in India are predicted to pass away as a result of something like a stroke by the year 2030. A real emergency is a stroke. Therefore, as proposed in our conceptual model, the stroke physician's patient's health evaluation, tracking, and quick responsiveness to his demands will assist in shortening the time it takes for a healthcare carer to arrive and thereby lower the death rate. ### _Problem Description_ According to the vast majority of research, hospitalized patients' top requirements are self-assurance, connection, knowledge, learning, assistance for their healthcare, and soul. Also, urgency is crucial when it concerns a patient's basic needs, such as drinks, nutrition, and bathroom access. You cannot operate perhaps partly or totally the immobilized portions of the body. Physical inactivity may be characterized by a loss of consciousness depending on how the impairment happened. Catastrophes bring on temporary immobility. 
Furthermore, the biggest issue people have is that even if their bodies work on the inside, their knee and body motions are ineffective in communicating the needs of patients. However, one advantage is that they make a tiny hand motion that allows them to communicate their wants. ## 2 Related Work Neither the nervous system nor persistent disability can heal on their own. Bell's palsy is a temporary impairment that typically goes away by itself. Orthopedic, cognitive, and cognitive therapies can offer remedies and assistive devices to alleviate immobilization as well as enhance recovery. Effective rehabilitation methods can enhance life satisfaction and enable people with all types of immobility to maintain their independence. The need for increased potential will depend upon the kind and severity of the condition. The physician might advise on rehab in adding up to: 1. Technology that seems adaptable and allows you to function or feed independently. 2. Wearable technology includes crutches, walkers, motorcycles, and batons. 3. Orthotic and prosthetics tools, including braces. 4. Vocal style pc, illumination, and communications technology. M. M. Khan et al. [17] concentrated on developing and implementing an IoT-based health surveillance system. Users can choose their health criteria using an Internet - of - things gadget, which may assist them in maintaining their well-being over time. The patients might ultimately seek medical help if they are in need. Individuals could quickly and conveniently communicate the physician's medical factor data through a single application. Any physician can keep tabs on a patient's condition from the range. Their device will take a person's temp, pulse rate, and oxygen saturation levels before transmitting the information to an app over Bluetooth. The screen also receives this data, giving the individual a fast view of their present condition. With the aid of the method, older patients, those with asthmatic, Emphysema, chronic illness patients, COVID-19 patients, and those with diabetes will be capable of maintaining their long-term health. By adopting Patient Health Tracking, which uses monitoring and the Internet to interact with loved ones in case of issues, S. R. Krishnan et al. [51] offered a creative initiative to avoid such abrupt death rates. Their system includes temperatures and pulse sensors. A microprocessor has been linked to a Display screen to monitor the health diagnosis, and a wireless connection transmits the data to a web-based server. [34]If the person's body temperature or pulse suddenly changes, IoT is utilized to notify the person. Additionally, this device transmits the Web live patient temperatures and pulse data using date stamps. S. M. Hadis et al. [19] created a patient tracking system that can electronically show the findings using Android applications and recognize the threshold of physiological parameters, assess the degree of physiological parameters based on the condition of the patient, as well as provide alerts for faulty conditions. This initiative would decrease the workload for nurses working and offer a far more practical way to check each participant's vital signs throughout the ward. The traditional system, which calls for a physician to visit each patient to check their heart rhythm, takes much time. With this method, nurses can keep an eye on individuals' conditions using Android applications that can be downloaded to any Android smartphone. 
In addition, by retrieving the records from the cloud as an Excel spreadsheet, physicians or nurses can easily review a patient's previous vital signs. Richa et al. [20] proposed a patient monitoring system that can be widely used in real emergencies because it allows regular measurement, recording, and database storage. The IoT device can also be linked to laptop computers to share the database between critical-care and therapy facilities, and such monitoring is particularly helpful in pandemic situations. The most crucial measurements for trauma patients are body temperature, heart rate, and blood oxygen. M. M. Khan et al. [21] propose an IoT-based approach that serves as a health monitoring system built around these key parameters: body temperature, heart rate, and oximetry. The device contains an LCD and can be readily synchronized with a smartphone app to provide rapid access to the measured temperature, heart rate, and blood-oxygen level. The proposed IoT-based technique is built on an Arduino Uno and was tested and validated on five people. S. Abdulmalek et al. [22] presented a review of current trends in IoT-based healthcare monitoring systems. The paper examines the benefits of IoT-based healthcare services and their importance, and it provides a comprehensive evaluation of recent research on IoT medical monitoring systems through a literature analysis. The review compares systems in terms of effectiveness, efficiency, data protection, confidentiality, privacy, and monitoring; it also investigates IoT monitoring systems based on wireless and wearable sensors and offers a taxonomy of healthcare monitoring sensors. A. Rohith et al. [23] built a patient health monitoring system using an ESP8266 and an Arduino Uno, with ThingSpeak as the IoT platform. Using the HTTP protocol, the ThingSpeak application and API store and retrieve data from connected devices over a LAN or the Internet. The device monitors pulse rate and temperature and continuously sends the readings to the IoT platform. The pulse-oximeter sensor detects the heart rate in beats per minute (BPM), and the LM35 sensor module measures body temperature. The patient must be kept in a room maintained at a suitable temperature and humidity so that they do not feel uncomfortable. Existing assistive technology also has limitations. Manual wheelchairs can provide good mobility in some situations, but their size restrictions may not suit every user; they can strain the spine, irritate the skin, and cause pressure ulcers in some users, and supply and choice are limited. Braces and other orthotic devices can cause pain in the limb, lower-back pain, poor balance, fear of falling, general exhaustion and functional decline, irritation, and skin problems, soreness, or discomfort. Voice-recognition software for phones, lights, and computers does not always transcribe phrases correctly on the monitor; it may fail to pick out the intended word, for example it sometimes struggles to distinguish homophones such as "there" and "their".
Furthermore, it may have problems with acronyms, jargon, and technical terms. ## 3 Proposed Methodology The patients considered in this work are often unable to communicate their needs because a loss of motor control prevents them from speaking clearly or gesturing. For such cases, we propose a system that enables a disabled person to broadcast a statement on the LCD by moving any part of the body that they can still move. The method also addresses the situation in which nobody is available to attend to the patient: it transmits a message via GSM, the digital cellular network used by mobile phones, so that the patient can express their needs via SMS. In addition, the paralyzed patient's movements can be monitored continuously through the website. An RF module transports data from the transmitter to the receiver, while an Arduino microcontroller handles the device's functions, as displayed in Fig. 1. We used a MEMS sensor in the transmitter kit to detect motion; the receiver kit, located at the patient's side, receives this data through the RF link. A GSM modem in the receiver kit forwards the information to the website [5, 24-27]. Furthermore, this device is less expensive than existing equipment such as exercise and analysis tools. The proposed technique allows paralyzed patients to communicate with basic hand movements. Because of how the inertial sensors are positioned on the glove, each sensor is associated with a particular finger. The sensor is attached to the ATmega328P-based Arduino UNO with connecting wires. When the accelerometer's direction changes, the initial or steady reading of the sensor changes [15, 28-33], and this value determines which of the pre-coded warnings, such as "contact the physician" or "urgent," is displayed. We propose a method that enables a disabled individual to convey text on the screen by moving any motion-capable part of the body. The proposed system also handles the situation in which no one is available to care for the patient, delivering the message via SMS over GSM. The system detects whether the body part is tilted in a given direction. The device is operated by holding it at the fingertips of the moving hand; the user positions the device at a specific angle to send a text message, and different tilt directions produce different messages [42] (see the sketch below). An accelerometer measures the characteristics of the motion, and the readings are transmitted to the microcontroller. The newly proposed system is expected to offer the following: 1. A Wi-Fi module that enables remote data collection by allowing communication with the on-board sensors over Wi-Fi. 2. Affordability, so that both rich and poor patients can use it. 3. An alphanumeric interface usable by literate and illiterate users alike. 4. A palm-sized gadget that can be carried anywhere easily. 5. Usability even when some of the patient's body parts are not functioning, as long as the patient can listen and understand how the process works. This study uses two block diagrams: one shows the blocks that make up the doctor's side, and the other shows the patient's side.
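To make the tilt-to-message mapping concrete, here is a minimal Python sketch of the decision logic described above. It is only an illustration: the thresholds, axis conventions, and the exact message table are assumptions rather than values taken from the paper, and the real system implements equivalent logic as Arduino firmware rather than Python.

```python
# Minimal sketch of the tilt-to-message decision logic (illustrative only).
# Thresholds, axis conventions, and the message table are assumptions; the
# actual device runs equivalent logic as Arduino firmware.
MESSAGES = {
    "tilt_forward":  "Emergency",
    "tilt_backward": "Wash Room",
    "tilt_left":     "Need Water",
    "tilt_right":    "Need Food",
}

TILT_THRESHOLD = 0.5  # normalized accelerometer units (assumed)

def classify_tilt(ax, ay):
    """Classify the tilt direction from the x/y accelerometer readings."""
    if ay > TILT_THRESHOLD:
        return "tilt_forward"
    if ay < -TILT_THRESHOLD:
        return "tilt_backward"
    if ax < -TILT_THRESHOLD:
        return "tilt_left"
    if ax > TILT_THRESHOLD:
        return "tilt_right"
    return None

def handle_reading(ax, ay, held_long):
    """Return (display_text, send_sms) for one accelerometer reading."""
    tilt = classify_tilt(ax, ay)
    if tilt is None:
        return None, False
    # If the tilt is held for an extended period, an SMS is also sent via GSM.
    return MESSAGES[tilt], held_long

print(handle_reading(0.1, 0.8, held_long=False))  # ('Emergency', False)
print(handle_reading(0.9, 0.0, held_long=True))   # ('Need Food', True)
```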
The power-supply circuit was built from a voltage regulator, rectifier, and filter stages. A steady DC voltage is obtained by starting from the AC mains voltage, stepping it down, rectifying it to a DC level, filtering it, and finally regulating it to the desired DC value. The regulation is usually provided by a voltage-regulator IC, which accepts a DC voltage and outputs a somewhat lower DC voltage that remains constant even if the input DC voltage or the output load changes. A transformer converts mains AC into lower-voltage AC or the other way around; here we convert mains AC to low-voltage DC, so there is certainly no need for a step-up transformer [35-39]. The step-down transformer at the input of the power supply reduces the voltage, and the ratio of turns between the primary and secondary windings determines by how much. Observing the amplitude of the oscillating signal before the rectifier stage, one finds it is much larger than the signal after the rectifier stage, which shows that the signal has passed through the transformer. One may ask why a transformer is used in this system at all. The main reasons are the following: the supply taken from the AC mains must be reduced, and transformers lower voltage levels simply and efficiently; moreover, the diodes in the rectifier block cannot withstand the high voltage coming directly from the AC mains. The mains supply is therefore first passed through the transformer, and the rectifier section is then connected to the reduced voltage. The specifications of the transformer are summarized in Table 1. An LCD screen is an electronic display with a wide range of applications and is an important module in diverse circuits and gadgets [37,40,41,43,44]; such modules are superior to conventional seven-segment LED displays and similar parts. Based on the ATmega328P, the Arduino Nano is a compact, complete, and breadboard-friendly board introduced in 2008. It offers the same interfaces and specifications as the Arduino Uno in a more convenient format. Fig. 2 represents the flow of the IoT-based solution for paralyzed patients. The Nano has 30 male I/O pins arranged in a DIP-30-like layout and is programmed using the Arduino software, which is shared by all Arduino boards and is available both online and offline. The board can be powered by a 9 V battery or a type-B micro-USB connector. The Nano board can connect to other actuators and PCs. The digital pins PIN 0 (Rx) and PIN 1 (Tx) are used for the serial port, where Rx receives data and Tx transmits it. The Arduino IDE includes a serial monitor that can send text to and receive text from the board. The software also comes with FTDI drivers, which provide a virtual serial port; when data is exchanged between the FTDI chip and the USB connection to the laptop, LEDs on the Tx and Rx pins flash. The Arduino software's serial API is used for serial communication between the device and the PC. The Wi-Fi module, also known as a serial-to-wireless component, is the data-link element of the IoT system.
Its purpose is to turn a serial port into an embedded module that complies with the Wi-Fi wireless transmission medium, with built-in TCP/IP and 802.11 a/b/g/n network protocols. \begin{table} \begin{tabular}{|c|l|c|} \hline **S. No** & **Factor** & **Range** \\ \hline 1 & Frequency & 50 Hz \\ \hline 2 & Rated power & 24 VA \\ \hline 3 & Input voltage & 230 V \\ \hline 4 & Output voltage & 12 V \\ \hline \end{tabular} \end{table} Table 1: The specifications of the transformer The sensors used in MEMS, a chip-based technology, consist of a suspended mass between two capacitive plates. This suspended mass produces a voltage difference when the sensor is tilted, and the resulting change in capacitance is measured to quantify the tilt. ## 4 Result The IoT-based paralyzed-patient health care system is designed to help the patient communicate with physicians, nurses, or family members, at work or at home, over the web. To accomplish this, the system uses microprocessor-based electronics: a receiver-plus-transmitter circuit and a hand-gesture recognition circuit. The hand-gesture circuit uses an accelerometer and gyroscope to identify arm movements and sends this data wirelessly via RF to the transceiver. The receiver system takes these instructions, interprets them, shows the outcome on the LCD screen, and communicates the information online to an IoT Gecko gateway. The IoT Gecko host then posts this data online to achieve the intended result [45-48,49,46]. The device is operated by grasping it at the fingertips of the moving hand. To communicate a message, the patient tilts the gadget; different messages are sent when the gadget is tilted at different angles. Figures 3 to 7 show the working model and the messages sent for the different tilt angles. An accelerometer measures the motion parameters, and this information is transmitted to the microcontroller. Figure 1: Architecture of the Proposed Model The microprocessor analyzes the input and displays the corresponding message on the LCD. When it receives a motion signal from the sensor, it also sounds the buzzer along with the text. If no one responds to the message on the screen, the patient can hold the tilt for an additional period, which causes an SMS with the information the patient wishes to convey to be sent over GSM to the patient's authorized caregiver. Fig. 2: Flow of IoT-Based Solution for Paraplegic Sufferer Power is first supplied through the transformer to the power-supply circuit. The transmitter then communicates the message to the transceiver, as displayed in Fig. 8. Finally, the message reaches the digital display, where the person's basic needs can be read. In addition, thanks to the Wi-Fi module, the person's basic needs can be viewed on a phone by activating the Wi-Fi connection and entering the port number in a browser; the wireless transmission device, or Wi-Fi module, is shown in Fig. 9. A siren is also installed so that those nearby notice and understand the request. This article describes how the Paralysis Patient Basic Requirements System functions. Paralysis is the inability to move the limbs purposefully and independently.
It may be either temporary or permanent. The most frequent causes are complicated diseases, spinal cord damage, and hemorrhage. In severe paralysis, all mobility is lost completely. Figure 3: Working Model Figure 4: Communicates Message as Emergency Figure 5: Communicates Message as Wash Room Figure 6: Communicates Message as Need Water Figure 7: Communicates Message as Need Food Most frequently, damage to the nervous system results in immobility: an injured or diseased central nervous system leads to paralysis, meaning that the nerve impulses going to the limbs are disrupted [6]. Therapy aims to help a person adjust to life with paralysis by keeping them as independent as practicable, alongside emerging methods for curing or managing the condition. The size and cost of the equipment currently built for this kind of rehabilitation are problematic: such devices are restricted to clinical use and cannot be used at the patient's home or at their convenience. The objective is therefore to create a gadget that helps individuals retrain their movements, that they can operate independently, and that is affordable enough to buy without incurring significant debt. ## 5 Conclusion This system gives patients with paralysis a degree of physical autonomy. Whenever they require assistance, they can ask for it by making specific movements, and with this motion tracking they can live in their environment much like anyone else--including patients whose whole body, or part of it, has been disabled by a paralytic attack. Because of the loss of motor control, such persons are typically unable to communicate their needs, since they can neither speak clearly nor use hand signals. For this case, we propose a system that enables a disabled person to show a statement on a display screen with a single movement of any part of the body capable of gesture. The system also handles the situation in which no one is available to care for the patient, delivering what the patient wishes to say via SMS over GSM. The device is operated by grasping it in the knuckles of the moving hand; the user tilts the gadget to a specific angle to communicate a message, and different tilt directions transmit different messages. These patients require a variety of support, and it is our responsibility as aspiring scientists to create innovative solutions to assist them. Individuals with a disability can benefit greatly from this device: when they require assistance they can request it with specific gestures, and with this motion control they can live in their environment like anyone else. The technology is sturdy, lightweight, and affordable, so it can be purchased without going into debt. Chronic paralysis cannot be cured, and the damaged nervous system cannot recover on its own; Bell's palsy is an example of a temporary disability that frequently resolves over time. Physical, occupational, and speech therapists can provide therapies, adaptive equipment, and orthotic devices to help patients adapt to immobility and work toward recovery. In future work, the system can be built around a single microcontroller; the kit includes every required component, and the paralyzed person can use the chip with ease, without needing to wear wristbands or elbow pads.
2303.12028
Scalar CFTs from structural phase transitions
We discuss scalar conformal field theories (CFTs) that can be realized in structural phase transitions. The Landau condition and Lifshitz condition are reviewed, which are necessary conditions for a structural phase transition to be second order. We also review the perturbative analysis in $4-\epsilon$ expansion of the corresponding Landau actions, which were already analyzed thoroughly in the 80s. By identifying the global symmetries of these fixed points, it turns out that in perturbation theory only 6 different CFTs can be realized by commensurate structural phase transitions. This is a lecture note based on a series of talks given by the author. The goal of the lecture note is to bridge the gap between condensed matter physicists and conformal field theorists. The note will be further updated in the future.
Junchen Rong
2023-03-21T17:12:42Z
http://arxiv.org/abs/2303.12028v1
# Scalar CFTs from structural phase transitions ###### Abstract We discuss scalar conformal field theories (CFTs) that can be realized in structural phase transitions. The Landau condition and Lifshitz condition are reviewed, which are necessary conditions for a structural phase transition to be second order. We also review the perturbative analysis in \(4-\epsilon\) expansion of the corresponding Landau actions, which were already analyzed thoroughly in the 80s. By identifying the global symmetries of these fixed points, it turns out that in perturbation theory only 6 different CFTs can be realized by commensurate structural phase transitions. This is a lecture note based on a series of talks given by the author. The goal of the lecture note is to bridge the gap between condensed matter physicists and conformal field theorists. The note will be further updated in the future. ###### Contents * I Introduction * II Landau theory for spontaneous symmetry breaking * II.1 Invariant polynomials and the Landau condition * II.2 Irreps of space groups and images * II.3 Fluctuations and renormalization * II.4 The Lifshitz conditions * II.5 A useful handbook and the "ISOTROPY" Software Suite * III Crystal universalities * III.1 Perturbative fixed points ## I Introduction In a structural phase transition, the crystal structure at temperatures above the transition temperature preserves a certain space group \(G\). At a temperature below the phase transition temperature, the new crystal structure preserves a different space group \(H\) which is a subgroup of \(G\)1. There exist two types of phase transitions, the continuous and the discontinuous phase transitions. The terms "continuous" and "discontinuous" refer to whether the order parameter changes continuously (see Section II) when the phase transition happens. One "surprising" result of a continuous phase transition is that it is possible for completely different physical systems to have the same universal critical behavior near the critical temperature \(T_{c}\). For example, the liquid-gas phase transition at the critical point and the lattice Ising model at its critical temperature have the exact same critical behavior (see Section II.3). Because of this, continuous phase transitions can be classified into so-called "universality classes", according to their critical behavior.
Footnote 1: For some materials, the high temperature phase may have smaller symmetry than the low temperature phase. Such exotic inverted transitions were observed in Rochelle’s salt [4; 5]. According to the modern theory of continuous phase transitions, the critical behavior near the critical temperature \(T_{c}\) is completely fixed by the behavior of thermal fluctuations exactly at \(T_{c}\). The latter can then be described by a special type of quantum field theory called a conformal field theory (CFT). As compared to regular quantum field theories, CFTs preserve more symmetry than the Euclidean group \(E(3)\). This gives field theorists extra mileage in studying them. In particular, the critical behavior of many two dimensional phase transitions can be solved exactly using CFT techniques [6]. Recently, the development of the conformal bootstrap technique [7] has greatly improved our understanding of many conformal field theories in \(d\geq 2\), especially the CFTs that can be realized by structural phase transitions (see Table 1). Conformal bootstrap, together with Monte Carlo simulation (see for example [8]), are currently among the most successful methods in studying critical phenomena. Inspired by these new developments in CFT research, we review here the early literature on structural phase transitions. There already exist nice textbooks and reviews on structural phase transitions, such as [1; 9]. We will however give a brief review of the subject, emphasising the relation to conformal field theories so as to bridge the gap between the two fields of research. In particular, for a structural phase transition to be second order, it must satisfy the following necessary conditions * the group-subgroup relation, * the Landau condition, * the (weak) Lifshitz condition, * and stability under renormalization group flow. We will discuss these conditions in the following sections. We need to warn the readers that these conditions are based on either mean field theory or perturbation theory arguments. When non-perturbative effects are taken into account, phase transitions violating these conditions may also be second order. We will mention some counterexamples in later sections. After examining the early literature on structural phase transitions (which was conveniently reported in Table 12 of the book [10]) and identifying the global symmetry groups of the perturbative fixed points, it turns out that in perturbation theory, only six different CFTs can be realized by structural phase transitions; they are summarized in Table 1. A fully non-perturbative result is still unavailable (see Section III). ## II Landau theory for spontaneous symmetry breaking ### Invariant polynomials and the Landau condition We will now explain Landau's theory of spontaneous symmetry breaking. Let us start with the Ising model. The Ising model is given by the following Hamiltonian \[H_{\text{Ising}}=-J\sum_{\langle ij\rangle}\sigma_{i}\sigma_{j}. \tag{2}\] Here \(\langle ij\rangle\) denotes nearest neighbour sites on the lattice. On each site, \(\sigma_{i}=\pm 1\). Clearly, the Hamiltonian of the Ising model preserves the \(Z_{2}\) symmetry, under which all the spins flip sign \[\sigma_{i}\rightarrow-\sigma_{i}. \tag{3}\] The partition function of the Ising model is given by \[Z(T)=\sum_{\sigma_{i}}e^{-\beta H},\quad\text{with}\quad\beta=\frac{1}{k_{B}T}. \tag{4}\] Here \(k_{B}\) is the Boltzmann constant, which we set to be equal to 1 for simplicity.
The expectation value of the spin operator is given by \[\langle\sigma_{x}\rangle_{T}=\frac{\sum_{\sigma_{i}}\sigma_{x}e^{-\beta H}}{Z (T)}. \tag{5}\] Onsager's famous solution of the Ising model on the two dimensional square lattice [11] tells us that the phase diagram is given by Fig. 1. Figure 1: The phase diagram of the Ising model. \(T>T_{c}\) is the disordered phase, while \(T<T_{c}\) is the ordered phase with spontaneously broken \(Z_{2}\) symmetry. \begin{table} \begin{tabular}{|l|l|l|} \hline No. & Name & Images \\ \hline \hline 1 & Ising & A2a \\ \hline 2 & XY & B4a, B6b, B8a, B12a, B12b, B24a \\ \hline 3 & \(N\)=3 Cubic & C24a, C24c, C48a \\ \hline 4 & XY\({}^{2}\) & D32e, D64a, D64b, D64d, D72b, D128a, D144a \\ \hline 5 & \(N\)=4 Cubic & D192a, D192c, D384a \\ \hline 6 & XY\({}^{3}\) & E96k, E192j, E768b, E768c, E1536a \\ \hline \end{tabular} \end{table} Table 1: All perturbative critical universality classes which can be realized in structural phase transitions. See Section III for details. We make this table by identifying the symmetry groups of the perturbative fixed points reported in Table 12 of the book [10]. The table is based on perturbation theory in \(4-\epsilon\) dimensions. Non-perturbative effects may change the result. When \(T>T_{c}\), the Ising model is in the disordered phase, in which \[\langle\sigma_{x}\rangle_{T}=0. \tag{6}\] When \(T<T_{c}\), the Ising model is in the so-called ordered phase, in which the \(Z_{2}\) symmetry is spontaneously broken, \[\langle\sigma_{x}\rangle_{T}=\pm v. \tag{7}\] Here \(v\) is a constant that depends on the temperature. The system has two degenerate vacua, \(\langle\sigma_{x}\rangle_{T}=\pm v\), that are related to each other by the \(Z_{2}\) transformation. Even though the Hamiltonian of the Ising model is symmetric under \(Z_{2}\), the vacuum is not. This phenomenon is called spontaneous symmetry breaking. The Landau theory is a generic theory about spontaneous symmetry breaking. In general, the theory depends on two factors, the symmetry group \(G\) and the irreducible representation that the order parameter transforms in. (Here we assume a single order parameter.) Let us first consider the simplest case, in which the symmetry group is \(Z_{2}\), which is also the symmetry group of the Ising model. We can easily write down the free energy function that is invariant under the \(Z_{2}\) operation \(\phi\rightarrow-\phi\), \[F(\phi)=a\phi^{2}+\lambda\phi^{4}+\lambda_{6}\phi^{6}+\cdots. \tag{8}\] The coupling constants \(a\), \(\lambda\), and \(\lambda_{6}\), in general, depend on the temperature. In the case of the Ising model, \(\phi\) should be understood as the vacuum expectation value \(\langle\sigma_{x}\rangle_{T}\). The location of the minima of this function depends on the sign of \(a\) (assuming \(\lambda>0\) and \(\lambda_{6}\geq 0\)), see Figure 2. Figure 2: The free energy \(F(\phi)=a\phi^{2}+\phi^{4}\). The red, orange and blue curves correspond to \(a\)=1, 0 and -1 respectively. The phase transition happens precisely when \(a\) changes sign. The parameter \(a\) depends on the temperature. Near the critical temperature, one can perform a linear expansion to get \(a\propto(T-T_{c})\). When \(T>T_{c}\), the minimum of the free energy potential is located at \[\phi=0.\] When \(T<T_{c}\), on the other hand, there are two minima located at \[\phi=\pm v,\] with the constant \(v\) depending on the temperature. The phase diagram of the Landau theory therefore agrees with the Ising model.
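As a quick numerical illustration (not part of the original text), the following Python sketch minimizes the free energy \(F(\phi)=a\phi^{2}+\lambda\phi^{4}\) for several values of \(a\) and confirms that the minimum moves continuously from \(\phi=0\) to \(\phi=\pm\sqrt{-a/(2\lambda)}\) as \(a\) changes sign, which is the mean-field behaviour behind the exponent \(\beta=1/2\) discussed next.

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam = 1.0  # quartic coupling, chosen arbitrarily for illustration

def free_energy(phi, a):
    return a * phi**2 + lam * phi**4

# Scan the quadratic coupling a through zero and locate the positive minimum.
for a in [1.0, 0.5, 0.0, -0.5, -1.0]:
    res = minimize_scalar(free_energy, bounds=(0.0, 5.0), args=(a,), method="bounded")
    analytic = np.sqrt(-a / (2 * lam)) if a < 0 else 0.0
    print(f"a = {a:+.1f}:  numerical phi_min = {res.x:.4f},  analytic = {analytic:.4f}")

# For a < 0 the minimum grows like sqrt(-a), i.e. phi ~ (T_c - T)^(1/2),
# which is the mean-field value beta = 1/2.
```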
When \(T\) is slightly below the critical temperature \(T_{c}\), the spontaneous magnetization can be approximated by \[\phi\propto(T_{c}-T)^{\beta}. \tag{9}\] The specific heat has the following power law behavior \[C_{P}=-\frac{\partial^{2}F}{\partial T^{2}}\propto|T-T_{c}|^{-\alpha}. \tag{10}\] The constants \(\alpha\) and \(\beta\) are called critical exponents, which characterize the universality class that the second-order phase transition belongs to. The Landau theory gives us the mean-field theory values \(\alpha=0\) and \(\beta=1/2\). In general, \(\alpha\) and \(\beta\) will be different from their mean field theory values. Universality means that there exist different physical systems whose critical exponents at the second-order phase transition are the same. For example, the critical point of the liquid-gas phase transition and the Curie point of the magnetization phase transition belong to the same universality class. Landau argued that for a phase transition to be second order, the symmetry of the low-temperature phase \(H\) must be a subgroup of the high-temperature group \(G\). This also means that for two phases preserving \(G\) and \(G^{\prime}\) respectively, if \(G\) is not a subgroup of \(G^{\prime}\) and vice versa, the two phases can only be connected by a discontinuous phase transition. The "group-subgroup relation" between phases is, therefore, a necessary condition for a second-order phase transition 2. Footnote 2: Quantum phase transitions beyond the Landau(-Ginzburg-Wilson) paradigm have been proposed [12]. In these cases, phases with two incompatible symmetries can be connected by second-order phase transitions. Landau also argued that if a phase transition is second order, the free energy function cannot contain cubic (\(\phi^{3}\)) terms. This is sometimes called the Landau condition. An illustration is given in Fig. 3. Figure 3: The free energy \(F(\phi)=a\phi^{2}+\phi^{3}+\phi^{4}\). The red, orange and blue curves correspond to \(a\)=1, 1/4 and 0 respectively. At \(a\)=0, the barrier between the meta-stable vacuum (at \(\phi=0\)) and the true vacuum at \(\phi<0\) disappears, and the first-order phase transition happens. In most cases, the symmetry groups we will encounter in structural phase transitions are finite groups. To construct the Landau theory which preserves a finite group \(G\), it is useful to know how many invariant polynomial terms can appear at each degree. The Molien function does precisely this job. For a finite group G, the Molien function of its representation \(\rho(g)\) is \[M(z)=\frac{1}{|G|}\sum_{g\in G}\frac{1}{Det[\mathbf{1}-z\rho(g)]}. \tag{11}\] It is a generating function that counts the number of invariant polynomials of a certain degree. Here \(|G|\) is the order of the group G, and \(\rho(g)\) is the matrix representing the group element \(g\). In the Taylor expansion of the Molien function \(M(z)\) around \(z=0\), the coefficient of \(z^{n}\) indicates the number of invariant polynomials of degree \(n\). See, for example, [10, 13], where the Molien function was used to study effective actions. As explained in [10], the Molien function can be written as \[M(z)=\frac{\beta_{0}+\beta_{1}z+\ldots\beta_{m}z^{m}}{(1-z)^{\alpha_{1}}(1-z^{ 2})^{\alpha_{2}}\ldots(1-z^{n})^{\alpha_{n}}}. \tag{12}\] A generic invariant polynomial of the group G can be written as \[P=\sum_{j=1}^{m}P_{j}^{(r)}K_{j}(P_{1}^{(b)},P_{2}^{(b)},\ldots P_{n}^{(b)}). \tag{13}\] The \(P_{i}^{(b)}\) polynomials with \(i=1\ldots n\) are called the "basic" invariant polynomials.
Basic invariant polynomials contribute to the denominator of the Molien function (12). The numbers \(\alpha_{n}\) count the number of basic invariant polynomials of degree \(n\). The \(P_{j}^{(r)}\) polynomials with \(j=1\ldots m\) are called the "relative" invariant polynomials. Relative invariant polynomials contribute to the numerator of the Molien function (12). For a finite group, the number of relative and basic invariant polynomials is finite. The function \(K_{j}\) is itself a polynomial, with \(P_{i}^{(b)}\)'s as its variables. Since basis polynomials \(P_{i}^{(b)}\)'s are invariant polynomials, \(K_{j}\) is also invariant. This explains the denominator of (12). Compared to basic invariant polynomials, relative polynomials have a special property that \((P_{j}^{(r)})^{n}\) with \(n\geq 2\) is not independent, it can be re-expressed as linear combinations of terms in (13). This explains the numerator of (12). Let us consider as an example the symmetric group \(S_{4}\). The Molien function of the standard representation of \(S_{4}\) (the permutation group of four elements) is \[M(z)=\frac{1}{(1-z^{2})(1-z^{3})(1-z^{4})}=1+z^{2}+z^{3}+\ldots. \tag{14}\] This data can be conveniently obtained using the following GAP [14] code: grp:=SymmetricGroup(4); tbl:=CharacterTable(grp); psi:=Irr(tbl); MolienSeries(psi[4]); The first line of the code specifies which group we wish to consider. The second line calculates the character table of the group. The third generates a list "psi" which contains all the irreps of the group. The fourth line calculated the Molien function of the 4th irrep, which is the standard representation. The GAP system has a library called "Smallgroup", which allows us to easily deal with finite groups with orders less than 2000. One can easily obtain the character table, a matrix representation of the generators of the finite group and even the Molien functions using this library. To get an explicit form of the invariant polynomials, we need to first get a matrix representation of the generators. The group \(S_{4}\) is generated by the permutation \[(1,2,3,4)\quad\text{and}\quad(1,2). \tag{15}\] Here we are using the cycle notation for elements of the symmetric group. By typing "IrreducibleRepresentationsDixon(grp, psi[4]: unitary);" in GAP, we get \[(1,2,3,4)=\left(\begin{array}{ccc}0&-\frac{1}{\sqrt{3}}&-\sqrt{\frac{2}{3}} \\ \frac{1}{\sqrt{3}}&-\frac{2}{3}&\frac{\sqrt{2}}{3}\\ \sqrt{\frac{2}{3}}&\frac{\sqrt{2}}{3}&-\frac{1}{3}\end{array}\right),\quad\text{ and}\quad(1,2)=\left(\begin{array}{ccc}\frac{1}{2}&\frac{1}{2\sqrt{3}}&-\sqrt{\frac{2}{3}} \\ \frac{1}{2\sqrt{3}}&\frac{5}{6}&\frac{\sqrt{2}}{3}\\ -\sqrt{\frac{2}{3}}&\frac{\sqrt{2}}{3}&-\frac{1}{3}\end{array}\right). \tag{16}\] To calculate the explicit form of the invariant polynomials, we can use, for example, the built-in functions of Mathematica. Take the degree three invariant polynomials as an example, we first use "m1=KroneckerProduct[g1,g1,g1]" to construct a \(3^{degree}\times 3^{degree}\) matrix from the generators. Here g1 is one of the generators, it is a \(3\times 3\) matrix. The matrix "m1" tells us how the generators act on the tensor product space \(V\otimes V\otimes V\), suppose \(V\) is the standard irrep of \(S_{4}\). The command "NullSpace[m1-IdentityMatrix[27]]" then calculates the 27-dimensional vectors corresponding to the invariant tensors. 
One should then solve for linear combinations of these null vectors which are also invariant under the action of "KroneckerProduct[g2,g2,g2]" (where g2 is the second generator). Converting the vectors back to the tensorial basis, we get the invariant tensors, which are equivalent to the invariant polynomials. The three basic invariant polynomials of the standard representation of the symmetric group \(S_{4}\) with degree two, three, and four are given by \[I_{2}(x_{i}) = x_{i}^{2}, \tag{17}\] \[I_{3}(x_{i}) = -\frac{x_{2}^{3}}{\sqrt{2}}-\frac{3}{2}x_{3}x_{2}^{2}+x_{3}^{3}+ \frac{3}{2}x_{1}^{2}\left(\sqrt{2}x_{2}-x_{3}\right), \tag{18}\] \[I_{4}(x_{i}) = d_{ijm}d_{klm}x_{i}x_{j}x_{k}x_{l}. \tag{19}\] (Summation over repeated indices is understood.) Here the invariant tensor \(d_{ijk}\) is defined as \[d_{ijk}=\frac{\partial^{3}I_{3}}{\partial x_{i}\partial x_{j}\partial x_{k}}. \tag{20}\] A lattice model that preserves the \(S_{4}\) symmetry is the 4-state Potts model, which is a generalization of the Ising model, allowing the spins to take 4 values instead. The Hamiltonian is \[H_{\text{Potts}}=-J\sum_{\langle ij\rangle}\delta_{s_{i},s_{j}}. \tag{21}\] Here \(\delta_{s_{i},s_{j}}\) is the Kronecker delta function, which equals 1 when \(s_{i}=s_{j}\) and 0 otherwise. The spins \(s_{i}\) can take the four values 0, 1, 2, and 3. The numerical simulation of this model shows a phase diagram that is very similar to the phase diagram of the Ising model. The difference is that the low-temperature phase is the symmetry-breaking phase of \(S_{4}\). Now we can write down the effective action of the 4-state Potts model according to (13). The leading terms are \[F(\phi)=a_{2}I_{2}(\phi_{i})+a_{3}I_{3}(\phi_{i})+a_{4,a}I_{4}(\phi_{i})+a_{4, b}\left(I_{2}(\phi_{i})\right)^{2}+\ldots \tag{22}\] (For an explicit form of the effective action of N-state Potts models with generic N, see [15].) Clearly, the effective action of the four-state Potts model contains a cubic term, which is consistent with the \(z^{3}\) term in the Molien series. According to Landau's argument, this transition should be first order. This is indeed true in three dimensions (see for example [16]). In two dimensions, however, the 4-state Potts model goes through a second-order phase transition, because the effect of thermal fluctuations is stronger in two dimensions [17; 18; 19]. In particular, the cubic operator \(I_{3}(\phi_{i})\) gets strongly renormalized and becomes irrelevant, see Section II.3. The Molien function of the standard representation of \(A_{4}\), the alternating group on four elements, is \[M(z)=\frac{1+z^{6}}{(1-z^{2})(1-z^{3})(1-z^{4})}. \tag{23}\] Notice that the Molien functions of \(S_{4}\) and \(A_{4}\) are exactly the same up to \(z^{5}\); the difference starts at order \(z^{6}\). The group \(A_{4}\) is a subgroup of \(S_{4}\), which consists only of the even permutations of four elements. The standard irrep of \(S_{4}\), when branching into \(A_{4}\), remains irreducible. The group \(A_{4}\) therefore preserves more invariant polynomials than \(S_{4}\). Suppose that we want to explicitly break the global symmetry group of the Landau theory from \(S_{4}\) to \(A_{4}\). From the discussion above, this means we will have to introduce \(\phi^{6}\) terms. In two dimensions, this operator is irrelevant at the 4-state Potts model fixed point. This means that a UV model with \(A_{4}\) symmetry will also be in the 4-state Potts model universality class.
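The Molien function of Eq. (14) can also be cross-checked without GAP. The following Python sketch (an illustration added here, not part of the original analysis) uses sympy together with the fact that the 4-dimensional permutation representation of \(S_{4}\) decomposes as the trivial representation plus the standard representation, so that \(\det(\mathbf{1}-z\rho_{\text{standard}}(g))=\det(\mathbf{1}-z\rho_{\text{perm}}(g))/(1-z)\).

```python
import itertools
import sympy as sp

z = sp.symbols('z')
perms = list(itertools.permutations(range(4)))

def perm_matrix(p):
    """4x4 permutation matrix sending basis vector e_i to e_{p(i)}."""
    m = sp.zeros(4, 4)
    for i, j in enumerate(p):
        m[j, i] = 1
    return m

# The permutation representation of S_4 is (trivial) + (standard), so the
# trivial factor contributes (1 - z) to det(1 - z*rho) and can be divided out.
molien = sum((1 - z) / (sp.eye(4) - z * perm_matrix(p)).det() for p in perms)
molien = sp.cancel(molien / len(perms))

print(sp.series(molien, z, 0, 7))
# 1 + z**2 + z**3 + 2*z**4 + z**5 + 3*z**6 + O(z**7), in agreement with
# M(z) = 1/((1 - z**2)(1 - z**3)(1 - z**4)) quoted in Eq. (14).
```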
In general, the symmetry \(G\) of the second-order phase transition point, which is described by conformal field theories (see Section II.3), can be bigger than the symmetry \(H\) of the lattice model, as long as all operators that are singlets of \(H\) but carry non-trivial quantum numbers under \(G\) are irrelevant. ### Irreps of space groups and images Bravais lattices are 3D lattices defined as the set of vectors \[\vec{R}=\sum_{i=1}^{3}n_{i}\vec{a}_{i},\quad\text{with}\quad n_{i}\in\mathbb{Z}. \tag{24}\] The three linearly independent vectors \(\vec{a}_{i}\) are called the primitive lattice vectors, and they define the so-called unit cell. The volume of the unit cell is given by \[\Omega=\epsilon_{\mu\nu\rho}a_{1}^{\mu}a_{2}^{\nu}a_{3}^{\rho}. \tag{25}\] We can define primitive reciprocal lattice vectors as \[b_{1}^{\mu}=\frac{2\pi}{\Omega}\epsilon_{\mu\nu\rho}a_{2}^{\nu}a_{3}^{\rho}, \quad b_{2}^{\mu}=\frac{2\pi}{\Omega}\epsilon_{\mu\nu\rho}a_{3}^{\nu}a_{1}^{ \rho},\quad b_{3}^{\mu}=\frac{2\pi}{\Omega}\epsilon_{\mu\nu\rho}a_{1}^{\nu}a_ {2}^{\rho}. \tag{26}\] The reciprocal lattice consists of vectors given by \(\vec{K}=\sum_{i}n_{i}\vec{b}_{i}\), with \(n_{i}\) again integers. Primitive lattice vectors and reciprocal lattice vectors satisfy \[\vec{a}_{i}\cdot\vec{b}_{j}=2\pi\delta_{ij},\quad\text{and}\quad\sum_{i}a_{i}^ {\mu}b_{i}^{\nu}=2\pi\delta^{\mu\nu}. \tag{27}\] The group that leaves the lattice invariant is called the space group \(G\) of the lattice. \(G\) is a subgroup of the three-dimensional Euclidean group \(E(3)\). Clearly, translation by a primitive lattice vector leaves the lattice invariant. The set of all such translations forms the translational group \(\mathcal{T}\), which is an Abelian normal subgroup of the space group. The group \(G\) also contains the subgroup of rotations and reflections that leave the lattice invariant, which we will denote as \(\mathcal{P}\). \(\mathcal{P}\) is also called a crystallographic point group, which is a subgroup of the three-dimensional orthogonal group \(O(3)\)3. Footnote 3: For a generic space group containing glide mirrors and screw axes, the definition of point group is more subtle. A generic space group element is \(\{g,\vec{t}\}\), which acts on a vector as \[\{g,\vec{t}\}\vec{r}=g\cdot\vec{r}+\vec{t}.\] The point group \(\mathcal{P}\) is the group of all \(g\)’s. For a Bravais lattice, all the lattice sites are made of the same type of atoms. It is also simply a tiling of the three dimensional flat space with empty unit cells. We can fill these cells with atoms that are different from the atoms living on the Bravais lattice sites. These new lattices will have a space group symmetry which is a subgroup of the space group of the underlying Bravais lattice. There are in total 32 three-dimensional crystallographic point groups, and in total 230 three-dimensional crystallographic space groups. The "International Tables for Crystallography" [3] collects the properties of these groups, with many introductory chapters explaining the classification. Interested readers may refer to these chapters for further information. The book [2] is also a nice reference to study this subject. As we mentioned, the Landau theory of spontaneous symmetry breaking says that for a second order phase transition to happen, the symmetry \(H\) of the low temperature phase must be a subgroup of the symmetry \(G\) of the high temperature phase.
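As a small numerical illustration of Eqs. (25)-(27) (added here, not from the original text), the following Python snippet computes the primitive reciprocal lattice vectors for an arbitrarily chosen set of primitive vectors and verifies the duality relation \(\vec{a}_{i}\cdot\vec{b}_{j}=2\pi\delta_{ij}\).

```python
import numpy as np

# Hypothetical primitive lattice vectors (not taken from the text),
# including a non-orthogonal third vector for generality.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.3, 0.0, 1.5])

omega = np.dot(a1, np.cross(a2, a3))        # unit-cell volume, Eq. (25)
b1 = 2 * np.pi * np.cross(a2, a3) / omega    # primitive reciprocal vectors, Eq. (26)
b2 = 2 * np.pi * np.cross(a3, a1) / omega
b3 = 2 * np.pi * np.cross(a1, a2) / omega

# Eq. (27): a_i . b_j = 2*pi*delta_ij, so the matrix below should be the identity.
A = np.array([a1, a2, a3])
B = np.array([b1, b2, b3])
print(np.round(A @ B.T / (2 * np.pi), 10))
```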
To study structural phase transitions, it is therefore desirable to classify the "group-subgroup relations" between crystallographic space groups. This has been done with the help of a computer, and a book containing the results was published [10]. The irreps of the translation group \(\mathcal{T}\) are labeled by a momentum point \(\vec{k}\) in the unit reciprocal lattice cell, which is alternatively called the Brillouin zone. Bloch's theorem tells us that the eigenfunctions of the translational group can be written as \[\rho_{\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}u(\vec{r}), \tag{28}\] with \(u(\vec{r})\) being a periodic function satisfying \[u(\vec{r}+\vec{a}_{i})=u(\vec{r}),\quad\text{for}\quad i=1,2,3. \tag{29}\] Notice that the periodic function can also be decomposed into Fourier modes with momenta belonging to the reciprocal lattice, \(u(r)=\sum_{\vec{K}}f_{K}e^{i\vec{K}\cdot\vec{r}}\). Under translations, the Bloch function picks up a phase \[\rho_{\vec{k}}(\vec{r}+\vec{a}_{i})=e^{i\vec{k}\cdot\vec{a}_{i}}\rho_{\vec{k} }(\vec{r}),\quad\text{for}\quad i=1,2,3. \tag{30}\] Notice that the Bloch functions with momenta \(\vec{k}\) and \(\vec{k}+\vec{b}_{i}\) are in the same irrep of the translational group, due to (27). Starting with a point \(\vec{k}\) in the Brillouin zone, the action of the point group \(\mathcal{P}\) brings \(\vec{k}\) to other points in the Brillouin zone. The set of all these points is called the star of \(\vec{k}\), denoted as \(\vec{k}_{*}\). For a two dimensional square lattice, in general, \(\vec{k}_{*}\) contains 8 vectors. When \(\vec{k}\) is located at some special points, such as the edge of the Brillouin zone, the star \(\vec{k}_{*}\) contains fewer vectors. This is because the Brillouin zone is a torus, so that the vectors \(\vec{k}\) and \(\vec{k}+\vec{b}_{i}\) are identical. See Fig. 4. There are two types of structural phase transitions, the order-disorder transitions and the displacive transitions [9]. In an order-disorder structural phase transition, the crystal consists of different types of atoms. In the high-temperature phase, different atoms can occupy the lattice sites with equal probability. In other words, the atoms are randomly distributed on the lattice. In the low temperature phase, on the other hand, the atoms occupying the lattice form certain structures, see Fig. 5. In a displacive structural phase transition, the location of certain atoms changes from a more symmetrical position to a position which breaks the space group symmetry, see Fig. 7. For convenience, we will use two-dimensional structural phase transitions to illustrate their difference. A type of phase transition in two dimensions that is analogous to the three dimensional structural phase transitions is the order-disorder phase transition of mono-layer atoms or molecules adsorbed on the surface of certain substrate materials [20; 21; 22; 23; 24; 25]. The adsorbed mono-layer atoms or molecules can have different phases as the temperature changes. The phase transitions are described by the spontaneous breaking of the two-dimensional space groups, also called the wallpaper groups. There are only 17 of them. As a simple example, let us consider the order-disorder transitions of adsorbed monolayers [26; 27] on a square lattice. This is essentially the two-dimensional version of the order-disorder structural phase transition in three dimensions. At high temperatures, the adsorbed atoms are randomly distributed on the lattice.
Below the critical temperature, the adsorbed atoms form commensurate super-lattice structures as in Figure 5. For simplicity, we now assume the lattice constant \(a=1\). That is, the primitive lattice vectors are \[\vec{a}_{1}=(1,0),\quad\vec{a}_{2}=(0,1). \tag{31}\] The primitive reciprocal lattice vectors are then \[\vec{b}_{1}=(2\pi,0),\quad\vec{b}_{2}=(0,2\pi). \tag{32}\] The average density of the red atoms is \[\rho(\vec{r})=Cu(\vec{r})+\phi e^{\mathrm{i}\frac{1}{2}(\vec{b}_{1}+\vec{b}_{2})\cdot \vec{r}}u(\vec{r}). \tag{33}\] The function \[u(\vec{r})=\sum_{\vec{R}}\delta(\vec{r}-\vec{R}) \tag{34}\] is periodic and is invariant under the space group transformations. Figure 4: Example stars of \(\vec{k}\). The high temperature phase corresponds to \[C=\frac{1}{2},\quad\phi=0. \tag{35}\] The low temperature phase, on the other hand, corresponds to \[C=\frac{1}{2},\quad\phi=\frac{1}{2}. \tag{36}\] In general, the coefficients of the two terms depend on the temperature. The first term is a space group singlet, so that it does not play any role in the symmetry breaking. We treat the second term, \[\eta(\vec{r})=\phi\times e^{\mathrm{i}\frac{1}{2}(\vec{b}_{1}+ \vec{b}_{2})\cdot\vec{r}}u(\vec{r}), \tag{37}\] as the order parameter of the phase transition. The momentum of the order parameter lives on a special point of symmetry of the Brillouin zone, see Fig. 5 c). At these points of symmetry, not all the space group elements are represented faithfully, which means that some group elements act trivially. The space group is generated by 90-degree rotations, reflection, and translation along the horizontal direction. The 90-degree rotations and reflections bring the Brillouin zone point to a new point which is equivalent to the original, and therefore act trivially. The translation, on the other hand, flips the sign of the order parameter as \[\eta(\vec{r}+\vec{a}_{1})=\phi e^{\mathrm{i}\frac{1}{2}(\vec{b}_{1}+\vec{b}_{ 2})\cdot\vec{a}_{1}}e^{\mathrm{i}\frac{1}{2}(\vec{b}_{1}+\vec{b}_{2})\cdot\vec {r}}u(\vec{r})=-\eta(\vec{r}), \tag{38}\] so that \[\phi\rightarrow-\phi. \tag{39}\] We have used (27). Figure 5: a) Disordered phase of adsorbed monolayer atoms on a square lattice. b) Ordered phase. c) The Brillouin zone. The momentum of the order parameter is at the corner of the Brillouin zone. The subgroup of the space group which is faithfully represented (denoted as \(G_{i}\), which stands for "image group") and the corresponding irrep of \(G_{i}\) that the order parameter transforms in together are called the image of the space group. In our case, \[G_{i}=Z_{2}, \tag{40}\] and the order parameter is in the odd representation of \(Z_{2}\). The Landau effective potential is \[F=a\phi^{2}+\lambda\phi^{4}+\cdots. \tag{41}\] Depending on the sign of \(\lambda\), the phase transition can be either first order or second order, see Fig. 2 and Fig. 11. If the transition is second order, it will be in the two-dimensional Ising universality class. The order parameter can be measured in low-energy-electron-diffraction (LEED) experiments 4. As an example, the work of [28; 29; 30] measured the intensity of the diffraction beam at the momentum corresponding to the structure of the adsorbed molecules. From the temperature dependence of the integrated intensity, one can measure the critical exponent \(\beta\) of the second order phase transition, see for example [29]. An idealized version of the pattern in the LEED experiment is given in Figure 6.
Below the critical temperature, a new Bragg peak appears at the momentum of the order parameter. Footnote 4: See [1] for a review on experimental measurements related to 3D structural phase transitions. Figure 6: Idealized pattern of Bragg peaks observed in a LEED experiment above and below the critical temperature. The blue square corresponds to the first Brillouin zone. Measuring the integrated intensities of the diffraction beam inside the red circle gives us the critical exponents of the transition. (The bright white dots are the Bragg peaks of modes that are at zero momentum.) Recently, a two-dimensional displacive structural phase transition was predicted and subsequently realized in monolayer transition-metal dichalcogenides (TMDs) [31; 32]. For a review of the experimental realization of two-dimensional structural phase transitions, see [33]. To understand the difference between displacive and order-disorder structural phase transitions, we consider the hypothetical structural phase transition given in Figure 7. We again assume the lattice constant to be \(a=1\). The red atoms, at low temperatures, change positions. The displacement of the red atoms depends on the lattice sites, and can be written as \[\vec{d}(\vec{r})=\phi_{1}\left(\begin{array}{c}1\\ 0\end{array}\right)e^{i\pi x}+\phi_{2}\left(\begin{array}{c}0\\ 1\end{array}\right)e^{i\pi y}. \tag{42}\] The phase in Fig. 7 b) corresponds to \[\phi_{1}=d,\quad\phi_{2}=0. \tag{43}\] Figure 7: A hypothetical displacive structural phase transition. a) In the high-temperature phase, the red and blue atoms are randomly distributed. b) The low-temperature phase. c) The four degenerate symmetry-breaking phases. The atom at block A can move in four directions when symmetry breaking happens. By analyzing how the space group acts on the order parameter term, we can figure out how the coefficients \((\phi_{1},\phi_{2})\) transform: \[\text{Translation }T_{x}: \phi_{1}\rightarrow-\phi_{1},\quad\phi_{2}\rightarrow\phi_{2},\] \[\text{Translation }T_{y}: \phi_{1}\rightarrow\phi_{1},\quad\phi_{2}\rightarrow-\phi_{2},\] \[\text{Four-fold rotation }R_{4}: \phi_{1}\rightarrow\phi_{2},\quad\phi_{2}\rightarrow-\phi_{1}. \tag{44}\] The above transformations form the dihedral group \(D_{4}\). Notice that certain elements of the space group act trivially on these order parameters, such as \((T_{x})^{2}\). In other words, the space group is not faithfully represented by these order parameters. From the above transformations, we get the Landau effective potential \[F=a(\phi_{1}^{2}+\phi_{2}^{2})+a_{4,1}(\phi_{1}^{4}+\phi_{2}^{4})+a_{4,2}(\phi_ {1}^{2}+\phi_{2}^{2})^{2}+\cdots. \tag{45}\] The four configurations in Fig. 7 c) correspond to the four degenerate vacua of the effective potential, \[\phi_{1}=\pm d,\phi_{2}=0,\quad\text{and}\quad\phi_{1}=0,\phi_{2}=\pm d. \tag{46}\] The order of the phase transition depends on the choice of \(a_{4,1}\) and \(a_{4,2}\). In fact, there exists a famous lattice model with \(D_{4}\) symmetry, which is called the Ashkin-Teller model [34]. The phase diagram of the Ashkin-Teller model can be found in, for example, [35]. The phase diagram has phases separated by a second-order phase transition line. Interestingly, the critical exponents of the model vary along the line. This is because, in two dimensions, there is a family of \(c=1\) conformal field theories with \(D_{4}\) symmetry. A pedagogical introduction to conformal field theories with \(c=1\), and in particular the Ashkin-Teller models, can be found in Section 8.4 of [36].
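The invariance of the effective potential in Eq. (45) under the \(D_{4}\) transformations of Eq. (44) is easy to verify symbolically. The following sympy sketch (added purely for illustration) checks the quartic truncation of Eq. (45) against the three generators.

```python
import sympy as sp

p1, p2, a, a41, a42 = sp.symbols('phi1 phi2 a a41 a42')

# Quartic truncation of the Landau potential in Eq. (45).
F = a*(p1**2 + p2**2) + a41*(p1**4 + p2**4) + a42*(p1**2 + p2**2)**2

# Generators of Eq. (44): two translations and the four-fold rotation.
Tx = {p1: -p1, p2: p2}
Ty = {p1: p1, p2: -p2}
R4 = {p1: p2, p2: -p1}

for g in (Tx, Ty, R4):
    assert sp.simplify(F.subs(g, simultaneous=True) - F) == 0
print("F is invariant under the D4 generators")
```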
We will not discuss the details of these \(c=1\) theories here. ### Fluctuations and renormalization The Landau theory of phase transitions discussed above neglects the effect of thermal fluctuations. To take the fluctuations into account, let us first discuss the hysteresis loop. The hysteresis loop plots the spontaneous magnetization as the external magnetic field changes. It can be generalized to describe the phase coexistence phenomenon in generic first order phase transitions. Let us first consider the mean field theory hysteresis loop, neglecting thermal fluctuations. Consider the following Ising action with an external magnetic field \[F=a\phi^{2}+\lambda\phi^{4}-h\phi+\cdots. \tag{47}\] In the low temperature phase (\(a<\)0), when \(h\neq 0\), the depths of the two vacua are different. Let us start with the \(h\ll 0\) configuration, given in Fig. 8 a), and slowly increase \(h\). The black dot indicates the location of the vacuum. As \(h\) becomes slightly bigger than 0, the vacuum remains stuck in the \(\phi<0\) meta-stable vacuum, since in the mean field theory approximation we have neglected fluctuations that can help the system bypass the energy barrier to the true vacuum at \(\phi>0\), see Fig. 8 b). As \(h\) increases further, at a critical \(h_{c}\), the barrier between the meta-stable and the stable vacua disappears: this is when the phase transition happens. Fig. 9 a) shows the hysteresis loop when thermal fluctuations are neglected. Notice that the hysteresis loop contains sharp edges, which get smoothed out when fluctuations are taken back into account. This is due to the fact that fluctuations can cause the vacuum to tunnel from the meta-stable vacuum to the true vacuum. Figure 8: Without fluctuations, the state can get stuck in a meta-stable vacuum. Figure 9: Hysteresis loop without and with fluctuations taken into account. Besides changing the hysteresis loop, the thermal fluctuations can also change the critical exponents of a second-order phase transition. To understand this, we consider again the two dimensional order-disorder phase transition discussed in the previous section, see Fig. 5. We now allow the critical mode to have spatially modulated fluctuations \[\eta(\vec{x})=\phi(\vec{x})\times e^{i\vec{k}\cdot\vec{x}}u(\vec{x}), \tag{48}\] with \(u\) defined in (34). See Fig. 10 for an illustration of the spatially modulated critical mode. We assume the scale of the spatial modulation is much larger than the lattice scale. For example, taking \(\phi(\vec{x})=\phi_{0}\cos(\delta\vec{k}\cdot\vec{x})\), with \(|\delta\vec{k}|\ll|\vec{k}|\), the fluctuation becomes \[\eta(\vec{x})\sim\phi_{0}e^{i(\vec{k}+\delta\vec{k})\cdot\vec{x}}+\phi_{0}e^{ i(\vec{k}-\delta\vec{k})\cdot\vec{x}}+c.c.. \tag{49}\] Clearly, a large-scale spatial modulation corresponds to shifting the momentum of the order parameter. The Lifshitz condition (which will be explained in Section II.4) makes sure that our order parameter is stable against such a shift of momentum. Spatially modulated fluctuations \(\phi(x)\) cost more energy compared to the homogeneous configuration. Therefore the effective action should contain extra terms that tend to suppress these fluctuations, such as \[\int dx^{3}\frac{1}{2}\left(\vec{\nabla}\phi(\vec{x})\right)^{2}.\] Since the scale of the fluctuations of \(\phi(x)\) is much larger than the lattice scale, we can treat \(\phi(\vec{x})\) as a continuous function in the Euclidean space \(R^{3}\). The \(\vec{\nabla}\) is simply the spatial derivative in the continuum.
The space group symmetry also restricts the possible derivative terms that can appear in the effective action. Take the Ising model as an example, the full Landau action is \[S=\int dx^{3}\frac{1}{2}\left(\vec{\nabla}\phi(\vec{x})\right)^{2}+a\phi(x)^{2 }+\lambda\phi(x)^{4}+\ldots. \tag{50}\] Landau theory gives us the mean-field theory values of the critical exponents \(\alpha=0\) and \(\beta=1/2\). When thermal fluctuations are taken into account, the critical exponents can deviate from their mean-field theory values. From a modern point of view, second-order phase transitions are described by a special type of quantum field theory called conformal field theories (CFTs). Different universality classes correspond to different CFTs, for a review see [37]. Another important concept of quantum field theory is the renormalization group. We will review the basic concepts of the renormalization group theory, omitting many details. Interested readers should refer to [37; 38; 39]. Renormalization theory tells us that physics at different length scales is controlled by a set of equations called the renormalization group equations. Take the Ising model (50) as an example, the couplings constants of the action depend on the length scale \(l\) through \[l\frac{da}{dl}=\beta_{1}(a,\lambda),\] \[l\frac{d\lambda}{dl}=\beta_{2}(a,\lambda). \tag{51}\] The length scale \(l\) can be understood as a cutoff scale. In the Wilsonian picture, we coarse grain out all the microscopic physics smaller than this scale, by integrating out modes with momentum higher than \(1/L\)[38]. The beta functions are in general complicated, and can only be calculated in certain perturbative limits [40]. The RG equations have fixed points, at which the coupling Figure 10: The modulated critical mode \(\eta(x)=\phi(x)\times e^{\mathrm{i}kx}\), with \(\phi(x)=\cos(\delta kx)\). We choose \(k=\pi\) and \(\delta k=0.04\pi\). The function is only evaluated on the one dimensional lattice \(x\in\mathbb{Z}\), as shown by the red dots. Here \(k\) stands for the momentum of the critical mode, which depends on the UV details of the material. The \(\delta k\) is the characteristic momentum of spatial modulated large-scale fluctuation. Typically, \(\delta k\ll k\). constants are scale invariant. This means \[\beta_{1}(a,\lambda)=\beta_{2}(a,\lambda)=0. \tag{52}\] At these fixed points, the quantum field theory invariant under the Euclidean group becomes also scale invariant. Usually, the Euclidean symmetry and scaling symmetry get enhanced to a bigger symmetry called the conformal group [41, 42]. The corresponding quantum field theory is therefore a conformal field theory (CFT). The terms "fixed point" and "conformal field theory" are often used interchangeably. Near these fixed points, one can linearize the equation by the ansatz \(a=a_{*}+\delta a\) and \(\lambda=\lambda_{*}+\delta\lambda\), to get \[l\frac{d}{dl}\left(\begin{array}{c}\delta a\\ \delta\lambda\end{array}\right)=\left(\begin{array}{cc}\frac{\partial\beta _{1}}{\partial a}&\frac{\partial\beta_{1}}{\partial\lambda}\\ \frac{\partial\beta_{2}}{\partial a}&\frac{\partial\beta_{2}}{\partial \lambda}\end{array}\right)_{a=a_{*},\lambda=\lambda_{*}}\left(\begin{array}[] {c}\delta a\\ \delta\lambda\end{array}\right). \tag{53}\] Here \(a_{*}\) and \(\lambda_{*}\) satisfies (52). We care about large-scale physics, therefore the \(l\to\infty\) limit. The matrix in the above equations is sometimes called the stability matrix. 
These equations can be solved by diagonalizing the stability matrix, \[l\frac{d}{dl}\left(\begin{array}{c}\delta a^{\prime}\\ \delta\lambda^{\prime}\end{array}\right)=\left(\begin{array}{cc}\omega_{1} &0\\ 0&\omega_{2}\end{array}\right)\left(\begin{array}{c}\delta a^{\prime}\\ \delta\lambda^{\prime}\end{array}\right). \tag{54}\] Here \(\delta a^{\prime}\) and \(\delta\lambda^{\prime}\) are linear combinations of \(\delta a\) and \(\delta\lambda\) constructed from the eigenvectors of the stability matrix. The solutions to these equations are \[\delta a^{\prime}=c_{1}l^{\omega_{1}},\quad\delta\lambda^{\prime}=c_{2}l^{ \omega_{2}}. \tag{55}\] Clearly, as \(l\) increases, coupling constants grow if \(\omega>0\), while decay if \(\omega<0\). The terms in the action whose coupling constants grow (decay) as the scale increases are called relevant (irrelevant) operators. The RG flow of the Ising model has two fixed points with no relevant terms. They are the so called high temperature fixed point located at \(a=+\infty\) and \(\lambda=0\), and the low temperature fixed point located at \(a=-\infty\) and \(\lambda=0\). For the Ising model in three dimensions, there is another fixed point, which has only one relevant operator. This is the famous Wilson-Fisher fixed point [40] describing the lattice Ising model at the critical temperature. The number of relevant operators at the fixed point corresponds to the number of physical parameters that we need to tune to reach the fixed point. The critical point at \(T_{c}\) can be reached by tuning the temperature alone, this is because the critical Ising CFT has only one relevant operator (which preserves the \(Z_{2}\) symmetry). We can deform the Ising model to study tri-critical phenomena. We first allow the Ising spins to take value in \(\sigma_{i}=0,\pm 1\), and then add a coupling that favors the \(\sigma_{i}=0\) state. The Hamiltonian becomes \[H=-J\sum_{\langle ij\rangle}\sigma_{i}\sigma_{j}+g\sum_{i}(\sigma_{i})^{2}. \tag{56}\] This is the famous Blume-Capel model. The \(Z_{2}\) symmetry that flipps all the spins are still preserved. A Monte Carlo study of this model was performed in [43]. Figure 12 is a schematic phase diagram of this model. The tri-critical point is described by the tri-critical Ising CFT. As is clear from the phase diagram, the tri-critical Ising CFT can only be reached by tuning two physical parameters together, the temperature and \(g\). This is because the tri-critical Ising CFT has two relevant operators, \(\epsilon\) and \(\epsilon^{\prime}\), which are analogous to the \(\phi^{2}\) and \(\phi^{4}\) operators of free theory. Moving from the tri-critical Ising point to the Ising critical point on the fixed line of the phase diagram, corresponds to perturbing the tri-critical Ising point by the \(\lambda\epsilon^{\prime}\), which triggers an RG flow towards the Ising model if \(\lambda>0\). If \(\lambda<0\), on the other hand, the phase transition becomes first order. This is shown schematically in Fig. 11. This phase diagram in Fig. 12 is not restricted to the Blume-Capel model, see for example [44; 45]. The reason that the (tri-)critical Ising CFT appears in different lattice models is again deeply related to the concept of "universality". Lattice models with different ultra-violet (short distance) details, after renormalization, can flow to the same infra-red (long distance) CFT. 
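To make the linearized analysis of (53)–(55) more concrete, here is a toy sketch (the beta functions below are schematic stand-ins, not the actual one-loop Ising beta functions, and the value of \(\epsilon\) is an arbitrary choice) that finds the zeros of a two-coupling system, builds the stability matrix, and counts the relevant directions at each fixed point:

```python
import sympy as sp

a, g = sp.symbols("a g", real=True)
eps = sp.Rational(1, 2)          # a stand-in value for epsilon = 4 - D

beta_a = 2*a - a*g               # schematic flow of the "mass" coupling
beta_g = eps*g - g**2            # schematic flow of the "quartic" coupling

fixed_points = sp.solve([beta_a, beta_g], [a, g], dict=True)
stability = sp.Matrix([[sp.diff(beta_a, a), sp.diff(beta_a, g)],
                       [sp.diff(beta_g, a), sp.diff(beta_g, g)]])
for fp in fixed_points:
    eigs = list(stability.subs(fp).eigenvals())
    relevant = sum(1 for w in eigs if w > 0)
    print(fp, "eigenvalues:", eigs, "-> relevant directions:", relevant)
# In this toy model the Gaussian-like fixed point (a, g) = (0, 0) has two
# relevant directions, while the Wilson-Fisher-like fixed point (0, eps)
# has a single relevant one (the mass), which is why such a critical point
# can be reached by tuning the temperature alone.
```

We now return to the fate of different lattice models under this flow.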
Since the critical behavior of a second order phase transition is completely fixed by the corresponding CFT, their critical exponents will be the same. Figure 11: The free energy \(F(\phi)=a\phi^{2}-\phi^{4}+\phi^{6}\). The red, orange, and blue curves correspond to \(a\)=1, 1/4 and 0 respectively. Notice that at \(a=1/4\), there exists a free energy barrier between the \(\phi=0\) state and the \(\phi=\pm v\) states. This is a sign of first-order phase transitions. Figure 12: A schematic phase diagram of the Blume-Capel model. The solid line corresponds to second-order phase transitions, while the dashed line is first order. The red dot is a tri-critical point which is given by the tri-critical Ising CFT. ### The Lifshitz conditions In general, the mass order parameter \(a(T,P)\) in the Landau theory \[F=a(T,P)\sum_{i}\phi_{i}\phi_{i}+\ldots \tag{57}\] depends on physical parameters, where \(T\) and \(P\) stands for temperature and pressure. Here \(\phi_{i}\)'s denote a single faithful irreducible representation of the space group, as explained in Section II.2. The subscript \(i\) enumerates vectors in this representation. Notice the temperature \(T\) and the pressure \(P\) do not explicitly break the space group symmetry of the material, we use them as examples of such experimental parameters. In reality, instead of considering a single mode (irrep of space group), one has to consider the effect of other modes nearby. In particular, the modes with momentum close to the Brillouin zone point we consider are important. The mass of these modes, in addition to the physical parameters, also depends on their momentum. That is \[a=a(T,P,\vec{k}). \tag{58}\] For the transition to be driven by the critical mode at a specific momentum, we need \[a(T,P,\vec{k})=0,\quad\text{and}\quad\frac{\partial a(T,P,\vec{k})}{\partial k _{1}}=0,\quad\frac{\partial a(T,P,\vec{k})}{\partial k_{2}}=0,\quad\frac{ \partial a(T,P,\vec{k})}{\partial k_{3}}=0. \tag{59}\] In the five-dimensional parameter space given by \(\{T,P,\vec{k}\}\), we can at most find a one-dimensional family of solutions. Projection of the one-dimensional solutions onto the \((T,P)\) plane gives us a critical line so that we can reach the second-order phase transition by tuning a single parameter, see Figure 13. We discussed the Landau condition before, which says that for the phase transition to be second order, the corresponding Landau effective action should not contain cubic terms. Suppose the space group of the crystal is \(G\) and the order parameter that drives the phase transition lives in the irreducible representation \(\mathcal{R}\). The Landau condition then simply means that the symmetric product of three copies of \(\mathcal{R}\) should not contain the singlet representation: \[\text{Landau condition:}\qquad\mathbf{1}\notin[\mathcal{R}\otimes\mathcal{R }\otimes\mathcal{R}]_{s}. \tag{60}\] Figure 13: Critical solution in the \(\{T,P,\vec{k}\}\) space and its projection on the \(T\)-\(P\) plane. The subscript "s" stands for the symmetric product. The symbol \(\mathbf{1}\) stands for the singlet representation, in which all the group elements of \(G\) act trivially. The number of singlet representations contained in \([\mathcal{R}\otimes\mathcal{R}\otimes\mathcal{R}]_{s}\) is sometimes called the Landau frequency. The Lifshitz condition is another necessary condition for commensurate structural phase transition to be second order. 
Let us denote the representation of \(G\) that the lattice derivative \(\vec{\nabla}\) transforms in as \(\mathcal{V}\), then the Lifshitz condition is \[\text{Lifshitz condition:}\qquad\mathcal{V}\notin[\mathcal{R}\otimes \mathcal{R}]_{a}. \tag{61}\] Here the subscript "a" stands for the anti-symmetric product. Notice that the lattice derivative \(\vec{\nabla}\) is invariant under spatial translations. The irrep \(\mathcal{V}\) is also an irrep of the point group: it is the representation of the point group in which the three-dimensional vector \(\vec{r}\) transforms. The number of \(\mathcal{V}\) representations contained in \([\mathcal{R}\otimes\mathcal{R}]_{a}\) is sometimes called the Lifshitz frequency. The Lifshitz frequency counts the number of one derivative terms allowed by the space group symmetry in the Landau effective action, such as \[\int dx^{3}\phi_{i}(x)\nabla_{k}\phi_{j}(x). \tag{62}\] Notice the above term is anti-symmetric with respect to an interchange of the \(i\) and \(j\) index. This explains why the Lifshitz condition puts constraints on the anti-symmetric product representation \([\mathcal{R}\otimes\mathcal{R}]_{a}\). Another way of understanding the Lifshitz condition is that it tells us that the mass of the critical mode should be located at a local minimum in the momentum space. That is \[\frac{\partial a(T,P,\vec{k})}{\partial k_{1}}=0,\quad\frac{ \partial a(T,P,\vec{k})}{\partial k_{2}}=0,\quad\frac{\partial a(T,P,\vec{k}) }{\partial k_{3}}=0. \tag{63}\] We leave the derivation to Appendix A. From a renormalization group point of view, the Lifshitz terms of the form (62) are likely to be relevant operators for conformal field theories. If the space group symmetry allows such terms, without fine-tuning their couplings to zero, critical points are hard to reach. Clearly, the Lifshitz frequency is a property specific to the representations of the space group. A complete list of the Lifshitz frequency of all space groups' irreps is given in [10]. The points in the Brillouin zone are classified into so-called points of symmetry, lines of symmetry, planes of symmetry, and generic points. They denote a certain zero, one, two, and three-dimensional domain of points respectively, see Fig. 14. For a point in the Brillouin zone, there exists a subgroup \(H\) (the little group) of the space group \(G\) that leaves this \(\vec{k}\) point invariant. Within one domain, the subgroup \(H\) does not change. It can be proven that the Lifshitz conditions are satisfied only by representations whose momentum is at "points of symmetry" of the Brillouin zone. If we allow incommensurate structural phase transitions, the order parameter driving the phase transition is not restricted to the points of symmetry. The incommensurate phase will have Bragg peaks whose momentum is located inside the Brillouin zone domains. The order parameter forms a superstructure that is incommensurate with the lattice structure. The locations of these Bragg peaks change with temperature. One maybe worry about whether we should still identify incommensurate crystals at different temperatures as a single phase since different momentum points of the Brillouin zone clearly correspond to different irreps of the space group. As explained by Michelson in [46], even though the momentum of the order parameter changes as temperature varies, the corresponding space group of this symmetry-breaking phase does not change so that they can still be identified as a single phase. 
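Since the Landau and Lifshitz frequencies are just multiplicities of particular irreps inside symmetrized or antisymmetrized tensor powers, they can be counted with standard character formulas. The following sketch is a deliberately simplified illustration (a point-group toy using the two-dimensional irrep of \(D_{4}\), not a genuine space-group computation of the kind tabulated in [10]); it evaluates the symmetrized-cube and antisymmetrized-square characters and projects onto the trivial and onto the two-dimensional "vector-like" representation.

```python
import numpy as np

# The eight 2x2 matrices of D_4 (four rotations and four reflections).
R4 = np.array([[0, 1], [-1, 0]])
Mx = np.array([[-1, 0], [0, 1]])
rotations = [np.linalg.matrix_power(R4, k) for k in range(4)]
D4 = rotations + [r @ Mx for r in rotations]

chi = lambda g: np.trace(g)                      # character of the 2d irrep
sym3 = lambda g: (chi(g)**3 + 3*chi(g)*chi(g @ g) + 2*chi(g @ g @ g))/6
asym2 = lambda g: (chi(g)**2 - chi(g @ g))/2

# Landau frequency: multiplicity of the trivial irrep in [R x R x R]_s.
landau = sum(sym3(g) for g in D4)/len(D4)
# Lifshitz-type count: multiplicity of the two-dimensional "vector" irrep
# in [R x R]_a (the characters are real, so no conjugation is needed).
lifshitz = sum(asym2(g)*chi(g) for g in D4)/len(D4)
print(landau, lifshitz)                          # both vanish for this toy
```

We now return to the incommensurate case.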
In the case of incommensurate structural phase transitions, the Lifshitz conditions can be slightly relaxed. A weaker condition called the weak Lifshitz condition was introduced by Michelson [46], which will be discussed in Appendix B. In general, the irreps from points of symmetry, lines of symmetry, planes of symmetry, and even generic points may compete with each other. Each of these "critical" modes corresponds to a "critical" line in the \((T,P)\) plane. The actual mode that drives the phase transition corresponds to the critical line at the highest temperature. This is shown schematically in Fig. 15. The intersection of these lines may give us tri-critical points which correspond to CFTs with order parameters from different irreps of the space group coupled together. As far as the author is aware, no such fixed points have been observed yet in structural phase transitions. They are, however, natural predictions of Landau's theory. Recently, such tri-critical CFTs have been studied using both \(4-\epsilon\) expansion and conformal bootstrap techniques [47; 48; 49; 50]. Figure 14: The Brillouin zone of a square lattice. The Brillouin zone consists of points of symmetry, lines of symmetry, planes of symmetry, and generic points. Figure 15: Schematic form of the “critical” lines from different irreps of the space group. The segments at the highest temperature correspond to where the 2nd phase transition happens. The intersection of these pre-critical lines may give us tri-critical points which correspond to CFTs with order parameters from different irreps of the space group coupled together. ### A useful handbook and the "ISOTROPY" Software Suite The 230 crystallographic space groups have 4777 irreps whose momenta are at points of symmetry of the Brillouin zone (so that they can potentially satisfy the Lifshitz condition). The corresponding Landau theories, which are often called images, were classified in [51]. Surprisingly, there are only 132 inequivalent images. In the book [10], the images of all representations were listed, together with the generators of the image group \(G_{i}\) (see Table 2), the Molien function (see Table 11), and the invariant polynomials of the corresponding Landau potential (see Table 10). The book also contains the "group-subgroup relations" among the 230 crystallographic space groups. That is, given an irrep of a space group, the book lists all the possible subgroups (of the low-temperature phases) that this irrep can break the symmetry into. This information is given together with the Landau and Lifshitz frequencies of the irrep (see Table 1). Much more recently, a software suite called "ISOTROPY" collecting all of the above was introduced [52]. The software is also available through an online interface ([https://iso.byu.edu/iso/isowww.php](https://iso.byu.edu/iso/isowww.php)). By specifying successively the space group, the location of the momentum in the Brillouin zone, and the representation of the space group at this momentum, the software automatically generates the corresponding Landau effective action. One can also easily check the Landau frequency and the Lifshitz frequency of this representation. For further details, the readers should consult the user manual. ## III Crystal Universalities As we mentioned, there are only 132 inequivalent images, which can describe commensurate structural phase transitions. The corresponding Landau actions are scalar field theories with \(N\) scalars coupled together. There are images with \(N=1,2,3,4,6,8,12,16\) and \(24\). 
We will follow the convention of [10] to name the images. Take the image "B4a" as an example. The letter B tells us the number of scalars, or the dimension of the representation of the image group (A=1, B=2, C=3, D=4, E=6, F=8, G=12, H=16, J=24). The number 4 in "B4a" is the order of the image group \(G_{i}\). The letter a in "B4a" is used to distinguish images with the same \(N\) and the same order. The image group \(G_{i}\) of "B4a" is generated by the following matrix, \[B_{2}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right). \tag{64}\] This group is isomorphic to the cyclic group \(Z_{4}\). The Landau action of the image "B4a" is \[F(\phi)=a(\phi_{1}^{2}+\phi_{2}^{2})+\lambda(\phi_{1}^{2}+\phi_{2}^{2})^{2}+\lambda^{\prime}(\phi_{1}^{4}+\phi_{2}^{4})+\lambda^{\prime\prime}(\phi_{1}^{3}\phi_{2}-\phi_{1}\phi_{2}^{3})+\cdots. \tag{65}\] Define \(\chi=\phi_{1}+\mathrm{i}\phi_{2}\), then the potential can also be written as \[F(\chi)=a\chi\chi^{*}+\lambda(\chi\chi^{*})^{2}+\lambda^{\prime}\frac{1}{8}\left(\chi^{4}+\chi^{*4}+6(\chi\chi^{*})^{2}\right)-\lambda^{\prime\prime}\frac{\mathrm{i}}{8}\left(\chi^{4}-\chi^{*4}\right)+\cdots \tag{66}\] The first two terms preserve the \(O(2)\) symmetry, the third term breaks \(O(2)\) to \(D_{4}\), and the last term then breaks the \(D_{4}\) group to \(Z_{4}\). For a representation, if both the Landau and the Lifshitz condition are satisfied, the representation is called "active". For a certain image, if there exists a representation satisfying both the Landau and Lifshitz conditions that maps to this image, one also calls the image "active". See Table 8 of [10] for images and whether they are active or not. After knowing the images and the Landau theories, one can then study these scalar field theories in \(4-\epsilon\) dimensions, to analyze the renormalization group flow of these theories. The results are conveniently summarized in Table 12 of the book [10], where the images were listed together with whether there is a perturbatively stable fixed point. To be more precise, the table lists the possible subgroups of the image groups that the Landau potential (truncated to quartic order) with proper coupling constants can break the symmetry into. If the subgroup also lives in the attractor basin of a stable fixed point, then this phase transition is second order 5, at least perturbatively. These are the universality classes we can get from structural phase transitions. Footnote 5: These subgroups are denoted with a double asterisk “**” in Table 12 of [10]. For a recent discussion about the structural transition of perovskites, in particular, the role of the attractor basins of CFT fixed points, see [53]. To have a better understanding of the results, we will review how they were obtained. It turns out that there are no active \(N>8\) images [10]. This means that the corresponding \(N>8\) Landau theories can never give us a second-order structural phase transition. We will focus on \(N\leq 8\) images. For images with \(N=8\), only four of the images are active. The corresponding \(4-\epsilon\) fixed points with \(N=8\) were studied in [54], and it turns out that none of these fixed points are RG stable. This means that the corresponding \(N=8\) Landau theories cannot describe second-order structural phase transitions either. For \(N\leq 6\) images, we start to have stable fixed points that can be realized in structural phase transitions. We list them in Table 1. 
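A quick symbolic check of the "B4a" statements above (an illustrative sketch, not from the original text; the symbols lam1 and lam2 stand for \(\lambda'\) and \(\lambda''\)): the quartic potential (65) is invariant under the cyclic group generated by \(B_{2}\), while its \(\lambda''\) term is odd under a reflection, so that term indeed reduces \(D_{4}\) to \(Z_{4}\).

```python
import sympy as sp

p1, p2 = sp.symbols("phi1 phi2")
a, lam, lam1, lam2 = sp.symbols("a lambda lam1 lam2")
F = (a*(p1**2 + p2**2) + lam*(p1**2 + p2**2)**2
     + lam1*(p1**4 + p2**4) + lam2*(p1**3*p2 - p1*p2**3))

B2 = sp.Matrix([[0, 1], [-1, 0]])
g = sp.eye(2)
for _ in range(4):                              # the four elements of Z_4
    g = g * B2
    q1, q2 = g * sp.Matrix([p1, p2])
    assert sp.expand(F.subs([(p1, q1), (p2, q2)], simultaneous=True) - F) == 0

# The lambda'' term is odd under the reflection phi_2 -> -phi_2, an element
# of D_4 that is not in Z_4, so it breaks D_4 down to Z_4.
last = p1**3*p2 - p1*p2**3
assert sp.expand(last.subs(p2, -p2) + last) == 0
print("Z_4 invariance of (65) and the D_4-breaking of its last term verified")
```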
#### iv.2.1 Perturbative fixed points We will now explain the perturbative fixed points in Table 1. * The fixed point No. 1 is the Ising CFT, which is the Wilson-Fisher fixed point of \[\mathcal{L}=\frac{1}{2}(\partial\phi)^{2}+\lambda\phi^{4}.\] (67) The global symmetry of the CFT is the \(Z_{2}\) group. See also Chapter 3 of [37]. * The fixed point No. 2 is the XY model fixed point, which is the Wilson-Fisher fixed point of the following Lagrangian \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{2}(\partial\phi_{i})^{2}+\lambda(\phi_{1}^{2}+\phi_{2}^{2})^{2}.\] (68) The global symmetry of the CFT is the \(O(2)\) group. See also Chapter 4 of [37]. * The fixed point No. 3 is the stable fixed point of the following \(N=3\) Cubic model, \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{3}(\partial\phi_{i})^{2}+u(\phi_{1}^{2}+\phi_{2}^{2}+\phi_{3}^{2})^{2}+v(\phi_{1}^{4}+\phi_{2}^{4}+\phi_{3}^{4}).\] (69) The global symmetry of the CFT is the Cubic group \((Z_{2})^{3}\rtimes S_{3}\). The three \(Z_{2}\) symmetries flip the sign of \(\phi_{1}\), \(\phi_{2}\) and \(\phi_{3}\) respectively. The permutation group \(S_{3}\), on the other hand, permutes the three scalar fields. The 1-loop renormalization group flow equations of the above Lagrangian have four fixed points, corresponding to the zeros of the beta functions of the two coupling constants, \[l\frac{d}{dl}u=\beta_{u}(u,v)=0,\quad l\frac{d}{dl}v=\beta_{v}(u,v)=0.\] (70) Among them, only one is stable, that is the CFT which has only one relevant operator \(\sum_{i}\phi^{i}\phi^{i}\). In other words, one has to make sure that the stability matrix \[\left(\begin{array}{cc}\frac{\partial\beta_{u}}{\partial u}&\frac{\partial\beta_{u}}{\partial v}\\ \frac{\partial\beta_{v}}{\partial u}&\frac{\partial\beta_{v}}{\partial v}\end{array}\right)\] (71) has no positive eigenvalues, so that there is no relevant operator coming from the \(\phi^{4}\) terms. See also Chapter 11.3 of [37]. * The fixed point No. 4 is a fixed point given by two copies of XY models coupled together, which we will denote as the "XY\({}^{2}\)" fixed point. It is the stable fixed point of the following Lagrangian \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{4}(\partial\phi_{i})^{2}+u(\sum_{i=1}^{4}\phi_{i}^{2})^{2}+v\left((\phi_{1}^{2}+\phi_{2}^{2})^{2}+(\phi_{3}^{2}+\phi_{4}^{2})^{2}\right).\] (72) The global symmetry of the CFT is the group \(O(2)^{2}\rtimes Z_{2}\). The first copy of the \(O(2)\) group rotates \((\phi_{1},\phi_{2})\), while the second \(O(2)\) rotates \((\phi_{3},\phi_{4})\). The \(Z_{2}\) symmetry, on the other hand, does \[\phi_{1}\longleftrightarrow\phi_{3},\quad\phi_{2}\longleftrightarrow\phi_{4}.\] (73) Similarly, the RG flow equations of the above Lagrangian have four fixed points. At the critical temperature, only the stable fixed point is realized. * The fixed point No. 5 is the \(N=4\) Cubic fixed point, which is also the stable fixed point of \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{4}(\partial\phi_{i})^{2}+u(\sum_{i=1}^{4}\phi_{i}^{2})^{2}+v(\sum_{i=1}^{4}\phi_{i}^{4}).\] (74) The global symmetry of the CFT is the Cubic group \((Z_{2})^{4}\rtimes S_{4}\). * The fixed point No. 6 is given by three copies of XY models coupled together, which we will denote as "XY\({}^{3}\)". 
It is the stable fixed point of the following Lagrangian \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{6}(\partial\phi_{i})^{2}+u(\sum_{i=1}^{6}\phi_{i}^{2})^{2}+v\left((\phi_{1}^{2}+\phi_{2}^{2})^{2}+(\phi_{3}^{2}+\phi_{4}^{2})^{2}+(\phi_{5}^{2}+\phi_{6}^{2})^{2}\right).\] (75) The global symmetry of the CFT is the group \(O(2)^{3}\times S_{3}\). #### iv.2.2 N=6 We now come back to the perturbative RG analysis of the images. The images with \(N=6\) were studied in [55] up to 1-loop order in the \(4-\epsilon\) expansion. A careful analysis of the results shows that the only stable fixed point is the XY\({}^{3}\) model. #### iv.1.3 N\(\leq\)4 In [56], all irreducible subgroups of O(4) were classified. Irreducible subgroups are groups under which the four-dimensional vector representation of O(4) remains irreducible. This also means that the corresponding Landau theory has only one quadratic mass term. This constraint is related to the requirement that a potential second-order transition can be reached by tuning a single physical parameter. In other words, if the Landau action has more than one quadratic polynomial, all of them need to be tuned to zero to reach criticality. The corresponding CFTs can at most be tri-critical points. Based on this result, the paper [57] studied the perturbative RG flow of these Landau theories. The RG flows were studied up to two-loop order. It was found that there are only four stable fixed points. They are the \[\text{XY}^{2},\quad N{=}4\text{ Cubic} \tag{76}\] fixed points we already discussed, and two extra fixed points which will not be realized in structural phase transitions. * The first extra fixed point is the tetrahedron fixed point of the Lagrangian \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{4}(\partial\phi_{i})^{2}+u(\sum_{i=1}^{4}\phi_{i}^{2})^{2}+v\sum_{ijklm}d_{ijm}d_{klm}\phi_{i}\phi_{j}\phi_{k}\phi_{l}.\] (77) The model was introduced in [15]. The invariant tensor \(d_{ijk}\) is an invariant tensor of the \(S_{5}\) group, which can be calculated using the procedure discussed in [15]. The global symmetry of this CFT is the group \(S_{5}\times Z_{2}\). This is the symmetry group of a (hyper)tetrahedron with five vertices, which can be embedded in the four-dimensional Euclidean space. In group theory language, the symmetric group \(S_{5}\) has a four-dimensional irreducible representation, sometimes called the "standard" representation. We denote the CFT "the (hyper)tetrahedral" in Table 2. * The second extra fixed point is the \(O(4)\) vector model \[\mathcal{L}=\frac{1}{2}\sum_{i=1}^{4}(\partial\phi_{i})^{2}+\lambda(\sum_{i=1}^{4}\phi_{i}^{2})^{2}.\] (78) The global symmetry of the CFT is clearly \(O(4)\). \begin{table} \begin{tabular}{|l|l|} \hline number of scalars & fixed points \\ \hline 1 & Ising \\ \hline 2 & XY \\ \hline 3 & the O(3) vector model, the \(N{=}3\) Cubic model \\ \hline 4 & the O(4) vector model, XY\({}^{2}\), the \(N{=}4\) Cubic model, the (hyper)tetrahedral \\ \hline \end{tabular} \end{table} Table 2: The stable perturbative fixed points with up to four scalars. (Tri-critical fixed points with two mass terms are not considered.) In the paper [56], the possible Landau theories with N\(\leq 3\) scalar fields were also listed, again based on knowledge of irreducible subgroups of O(2) and O(3). One can similarly work out the possible irreducible fixed points, which are also listed in Table 2. 
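As another elementary check (again only a sketch, not part of the original text), one can verify directly that the quartic potential of the \(N=4\) Cubic model (74) is invariant under all \(2^{4}\cdot 4!=384\) signed permutations of the fields, i.e. under \((Z_{2})^{4}\rtimes S_{4}\), whereas the \(v\)-term is not invariant under a generic \(O(4)\) rotation:

```python
import itertools
import sympy as sp

phi = sp.symbols("phi1:5")
u, v = sp.symbols("u v")
V = u*sum(p**2 for p in phi)**2 + v*sum(p**4 for p in phi)

def transform(expr, images):
    return expr.subs(list(zip(phi, images)), simultaneous=True)

count = 0
for perm in itertools.permutations(range(4)):
    for signs in itertools.product((1, -1), repeat=4):
        images = [signs[i]*phi[perm[i]] for i in range(4)]
        assert sp.expand(transform(V, images) - V) == 0
        count += 1
print(count, "signed permutations leave the potential (74) invariant")  # 384

# A rotation by pi/6 in the (phi1, phi2) plane preserves the u-term but not
# the v-term, so a nonzero v breaks O(4) down to the Cubic group.
c, s = sp.cos(sp.pi/6), sp.sin(sp.pi/6)
rot = [c*phi[0] - s*phi[1], s*phi[0] + c*phi[1], phi[2], phi[3]]
quartic = sum(p**4 for p in phi)
print(sp.simplify(transform(quartic, rot) - quartic) == 0)  # False
```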
Notice even though the main topic of our paper is structural phase transitions, the classification in [55; 56] applies to all phase transitions which can be described by up to four scalars coupled together. Many of the irreducible subgroups and Landau theories may seem purely theoretical at the beginning, but they later do appear in interesting condensed matter systems. As an example, the group \(GL(2,\mathbb{Z}_{3})\) (or \([D_{3}/C_{2};O/D_{2}]\) in the convention of [55; 56]) is the symmetry group of the effective action that describes a certain frustrated Ising model on the Kagome lattice [58]. #### iv.1.4 Comments on non-perturbative results Notice that the renormalization group flow analysis we discussed above is perturbative in \(\epsilon=4-D\). The non-perturbative RG flow in three dimensions (\(\epsilon=1\)) can be different in many aspects. For example, perturbatively stable fixed points can become unstable and vice versa. The attractor basin of the stable fixed points may change. In general, non-perturbative physics is hard to attack. However, certain methods such as Monte Carlo simulation (see for example [8]) and conformal bootstrap [7] have greatly improved our understanding of non-perturbative RG flow. Let us take the XY\({}^{2}\) model as an example. For an RG space with \(O(2)^{2}\rtimes Z_{2}\) symmetry, at two-loop order, the perturbatively stable fixed point is the fixed point where two O(2) vector models are interactively coupled. There exists another fixed point with \[u=0,\quad v\neq 0 \tag{79}\] in (72). Notice since \(u=0\), the model can be written as two copies of XY models decoupled, one involves the scalars \(\phi_{1}\) and \(\phi_{2}\), another involves \(\phi_{3}\) and \(\phi_{4}\). Even though this decoupled fixed point is perturbatively unstable, the non-perturbative results from both Monte Carlo simulation and conformal bootstrap suggest that it is non-perturbatively stable. Let us denote \(\epsilon_{1}\sim(\phi_{1}^{2}+\phi_{2}^{2})\) and \(\epsilon_{2}\sim(\phi_{3}^{2}+\phi_{4}^{2})\) as the mass operators of the O(2) vector models. The \(\epsilon\) operator of the O(2) model has scaling dimension \(\Delta_{\epsilon}=1.51124(22)\)[59]. Since the two copies of the O(2) models are decoupled, the operator \(O=I_{23}=\epsilon_{1}\epsilon_{2}\) will not be re-normalized. We get \(\Delta_{O}=2\Delta_{\epsilon}>3\). This means that the decoupled fixed point is stable when perturbed by this operator. In fact, all operators preserving the \(O(2)^{2}\rtimes Z_{2}\) symmetry of the decoupled O(2) models are irrelevant (except for the mass operator \(\epsilon_{1}+\epsilon_{2}\) whose coupling will be tuned to zero at the critical temperature). The decoupled O(2) fixed point is non-perturbatively stable. Similarly, for the XY\({}^{3}\) model, the true stable fixed point is the three copies of decoupled XY CFTs. The Ising and XY fixed points were known to be non-perturbatively stable for a long time. The non-perturbative stability of the \(N\)=3 and \(N\)=4 Cubic fixed points is related to a recent conformal bootstrap study in an interesting way. The model was first introduced in [60]. The relative stability of two of the fixed points was under debate for a long time (see Section 11.3 of [37]). The model can be generalized to an arbitrary number of scalar fields. The two competing fixed points are the O(\(N\)) invariant fixed point, and the fixed point where with \((Z_{2})^{N}\rtimes S_{N}\) symmetry, which we will call the \(N\)-state Cubic fixed point. 
As noted in [60], there exists a critical \(N_{c}\), above which the \(N\)-state Cubic fixed point becomes more stable than the O(\(N\)) invariant fixed point. Determining \(N_{c}\) has been the subject of many theoretical works; see [53] for a review of the early ones. The bootstrap result of [61] proves non-perturbatively that \(N_{c}<3\), so both the 3-state and the 4-state Cubic models should be non-perturbatively stable (see [62; 63; 64; 65] for earlier works which attempt to bootstrap the Cubic CFT directly). Images with the same universality class can have a different critical exponent \(\omega\), which controls the leading correction to the critical behavior [37]. This critical exponent is related to the scaling dimension of the leading irrelevant operator allowed by the image group \(G_{i}\). The critical fixed point can have an enhanced symmetry, which is bigger than the symmetry group of the RG flow, the image symmetry group \(G_{i}\). Take the B4a image (65) as an example, for which the image group is \(Z_{4}\). At the critical point, the \(Z_{4}\) symmetry is enhanced to \(O(2)\). The quartic polynomials allowed by \(Z_{4}\) are \[O_{1}=(\chi\chi^{*})^{2},\quad O_{2}=Re[\chi^{4}],\quad\text{and}\quad O_{3}=\mathrm{i}Im[\chi^{4}]. \tag{80}\] The operator \(O_{1}\) has scaling dimension \(\Delta_{O_{1}}=3.794(8)\), while the operators \(O_{2}\) and \(O_{3}\) are the leading charge-4 operators in the spectrum, with scaling dimension \(\Delta_{O_{2}}=\Delta_{O_{3}}=3.11535(73)\)[66]. A nice review of the conformal data of the O(N) vector models is [67]. Since the leading irrelevant operators allowed by the image group \(Z_{4}\) are \(O_{2}\) and \(O_{3}\), the "B4a" image has \(\omega=0.11535(73)\). The image "B6a" (whose image group is \(Z_{6}\)), on the other hand, only allows the \(O_{1}\) deformation, which has \(\omega=0.794(8)\). Many of the universalities listed in Table 1 have small \(\omega\). This makes the corresponding second-order phase transitions "dirty". For Monte Carlo simulations, this means that the finite size effects are hard to get rid of. For real experimental measurements, this means that one has to be very close to the critical temperature to observe a good scaling behavior of the physical quantities. ## IV Future directions _Incommensurate structural phase transition universalities._ We discussed briefly in Section II.4 and Appendix B the incommensurate structural phase transitions. The order parameter of the symmetry-breaking phase is incommensurate with the lattice structure. This has been observed in materials such as Rb\({}_{2}\)ZnCl\({}_{4}\)[68], and the transition was shown to be in the three-dimensional XY model universality class. The irreps of the 230 crystallographic space groups satisfying the weak Lifshitz condition and also the Landau condition are classified in [69]. It will be interesting to write down the corresponding Landau effective action and then use the knowledge of three-dimensional conformal field theory to determine the order of the phase transition, and which universality class the phase transition is in. A full list of possible CFTs that can be realized in incommensurate transitions will be interesting, which we leave for future work. _Two-dimensional structural phase transitions._ We briefly mentioned the two-dimensional structural phase transitions of adsorbed monolayers in Section II.2. 
This type of phase transition is controlled by the spontaneous breaking of two dimension space groups, which are also called wallpaper groups. There are only 17 wallpaper groups. Classifying the irreps and analyzing the corresponding Landau theory is less laborious than in three dimensions. The order-disorder type of phase transitions was classified in [26; 27]. The irreps of the wallpaper at points of symmetry, their Landau and Lifshitz frequencies, and the subgroups that are preserved by these irreps were later worked out in [70], which therefore include a classification of possible displacive type transitions. The effective actions of all irreps of the wallpaper group satisfying both Landau and Lifshitz con ditions were worked out in [71]. The effective actions appearing in these studies include the actions of the two-dimensional Ising model, the three-state Potts model, the four-state Potts model, the clock model, the three-state Cubic model and etc. One should, however, be careful when using the Landau and Lifshitz conditions to infer the order of the transitions because of the strong thermal fluctuations. Typical second-order phase transitions which violate the Landau condition include the three-state and four-state Potts models. We believe the (weak) Lifshitz condition should also be used with caution. It will be interesting to study more carefully these two dimensional Landau actions more using the knowledge of two-dimensional CFTs in the future. _Magnetic Phase Transitions._ Table 1 lists the conformal field theories than can be realized in commensurate structural phase transitions. One can potentially do a similar analysis for magnetic transitions. This type of phase transition can be analyzed using the representation theory of the so-called "magnetic space group (Shubnikov groups)" [9], which describes magnetic phase transitions through group-subgroup relations just like the crystallographic space group that describes structural phase transition, see for example [72]. A full list of possible CFTs that can be realized in magnetic phase transitions will be an interesting problem to attack in the future. _Quantum phase transitions._ Recently, it was noticed that certain 2+1 dimensional scalar CFTs can be realized in quantum phase transitions. These include the Ising model [73, 74, 75], the O(2) CFT [76, 77, 78], the \(N=3\) Cubic CFT [79, 80, 81, 82], and the O(4) CFT [83, 84, 85]. These quantum transitions typically happen on two-dimensional lattices at zero temperature. The thermal fluctuations that drive the phase transition are replaced by quantum fluctuations. The effective actions of these transitions are also closely related to the representations of space groups, more precisely, projective representations of the space group [86, 58]. The phase transitions are usually driven by an order parameter charged under a U(1) gauge symmetry (or its subgroup). The order parameter will experience a background gauge field on the lattice, which causes them to pick up a Berry phase when moving on the lattice. This explains the appearance of the projective representations of the space group. It will be interesting to study these phase transitions systematically using group theory methods. **Acknowledgments** This lecture note is based on a series of talks given by the author in a few places, including a journal club talk given at the Institut des Hautes Etudes Scientifiques, and also lectures given in the Department of Physics, the University of Hong Kong. 
The author would like to thank the audience of these lectures and many other friends for valuable questions and comments, in particular, Slava Rychkov, Balt van Rees, Gregory Korchemsky, Jiaxin Qiao, Benoit Sirois, Marten Reehorst, Fidel Ivan Schaposnik, Zechuan Zheng and Aditya Hebbar, Martin Hasenbusch, Johan Henriksson, Stefanos Kousvos, Andreas Stergiou, Ziyang Meng, Weilun Jiang, Yang Qi and Xiaoyan Xu. The manuscript was partially finished during the workshop "Bootstrapping Nature: Non-perturbative Approaches to Critical Phenomena" and also the "Hong Kong Computational and Theoretical Physics Study Group 2023". We thank the Galileo Galilei Institute and the University of Hong Kong respectively for their hospitality. The research of the author is supported by the Huawei Young Talents Program at IHES. ## Appendix A The Lifshitz condition in momentum space The Lifshitz condition is equivalent to the condition that the mass of the critical mode should be located at a local minimum in momentum space, that is \[\frac{\partial a(T,P,\vec{k})}{\partial k_{1}}=0,\quad\frac{\partial a(T,P,\vec{k})}{\partial k_{2}}=0,\quad\frac{\partial a(T,P,\vec{k})}{\partial k_{3}}=0. \tag{106}\] We review here a derivation given in [9]. The free energy is a functional of the density function \[F[\rho_{0}+\delta\rho(\vec{r})]=F[\rho_{0}]+\int d\vec{r}d\vec{r^{\prime}}G(\vec{r},\vec{r^{\prime}})\delta\rho(\vec{r})\delta\rho(\vec{r^{\prime}}). \tag{107}\] The density fluctuation can be expanded in irreps of the space group \(G\) (for small \(\vec{q}\)), \[\delta\rho(\vec{r})=\sum_{\vec{k}\in\vec{k}_{*}}\sum_{\vec{\mathcal{R}}}\sum_{i}^{dim(\vec{\mathcal{R}})}\int d\vec{q}\phi_{i}^{\vec{\mathcal{R}}}(\vec{k}+\vec{q})\eta_{i,\vec{k}+\vec{q}}^{\vec{\mathcal{R}}}(\vec{r}). \tag{108}\] In (33) and (42) we focused on a single mode in the expansion. The vector \(\vec{k}\) is a point in the Brillouin zone and \(\vec{q}\) is a small deviation of the momentum. Here \(\vec{\mathcal{R}}\) is the irrep of the little group that keeps the vector \(\vec{k}\) invariant. The dimension of the irrep of the space group equals \(dim(\vec{\mathcal{R}})\) times the number of vectors in \(\vec{k}_{*}\). We will drop the dependence on \(k_{*}\) and \(\vec{\mathcal{R}}\) for simplicity. The second term in (107) is therefore \[F_{2}=\sum_{ij}\int d\vec{q}\phi_{i}(\vec{k}+\vec{q})\phi_{j}(-\vec{k}-\vec{q})A_{i,j}(\vec{k}+\vec{q}), \tag{109}\] with the kernel \[A_{i,j}(\vec{k}+\vec{q})=\int d\vec{r}d\vec{r^{\prime}}G(\vec{r},\vec{r^{\prime}})\eta_{i,\vec{k}+\vec{q}}(\vec{r})\eta_{j,-\vec{k}-\vec{q}}(\vec{r^{\prime}}). \tag{110}\] The kernel can be expanded in \(\vec{q}\), \[A_{i,j}(\vec{k}+\vec{q}) = A_{i,j}(\vec{k})+\vec{q}\cdot\vec{B}_{i,j}(\vec{k})+\ldots\] \[A_{i,j}(\vec{k}) = \int d\vec{r}d\vec{r^{\prime}}\left(\eta_{i,\vec{k}}(\vec{r})\eta_{j,-\vec{k}}(\vec{r^{\prime}})+\eta_{i,\vec{k}}(\vec{r^{\prime}})\eta_{j,-\vec{k}}(\vec{r})\right)G(\vec{r},\vec{r^{\prime}}),\] \[\vec{B}_{i,j}(\vec{k}) = -\mathrm{i}\int d\vec{r}d\vec{r^{\prime}}\vec{r}\left(\eta_{i,\vec{k}}(\vec{r})\eta_{j,-\vec{k}}(\vec{r^{\prime}})-\eta_{i,\vec{k}}(\vec{r^{\prime}})\eta_{j,-\vec{k}}(\vec{r})\right)G(\vec{r},\vec{r^{\prime}}). \tag{111}\] We used the relation \(\eta_{i,\vec{k}+\vec{q}}(\vec{r})=e^{\mathrm{i}\vec{q}\cdot\vec{r}}\eta_{i,\vec{k}}(\vec{r})\). Diagonalizing \(A_{i,j}(\vec{k})\) and picking the lowest eigenvalue gives us the mass of the critical mode \(a(T,P,\vec{k})\). To get a vanishing derivative, we need \[\vec{B}_{i,j}(\vec{k})=0. 
\tag{112}\] This is possible if \(G(\vec{r},\vec{r^{\prime}})\) takes certain special forms. However, if the term vanishes due to symmetry reasons, second order phase transitions have a better chance to happen. The function \(\left(\eta_{i,\vec{k}}(\vec{r})\eta_{j,-\vec{k}}(\vec{r^{\prime}})-\eta_{i,\vec{k}}(\vec{r^{\prime}})\eta_{j,-\vec{k}}(\vec{r})\right)\) lives in the \([\mathcal{R}\otimes\mathcal{R}]_{a}\) representation. Notice also \[\vec{B}_{i,j}(\vec{k})=-\mathrm{i}\int d\vec{r}d\vec{r^{\prime}}(\vec{r}+\vec{t})\left(\eta_{i,\vec{k}}(\vec{r})\eta_{j,-\vec{k}}(\vec{r^{\prime}})-\eta_{i,\vec{k}}(\vec{r^{\prime}})\eta_{j,-\vec{k}}(\vec{r})\right)G(\vec{r},\vec{r^{\prime}}), \tag{113}\] for arbitrary \(\vec{t}\). Requiring that \(\vec{B}_{i,j}(\vec{k})\) vanish for generic \(G(\vec{r},\vec{r^{\prime}})\) therefore gives us precisely the Lifshitz condition, \[\text{Lifshitz condition:}\qquad\mathcal{V}\notin[\mathcal{R}\otimes\mathcal{R}]_{a}. \tag{114}\] ## Appendix B Incommensurate phase transitions and the weak Lifshitz condition The incommensurate structural phase is controlled by a weaker version of the Lifshitz condition, which was introduced by Michelson in [46]. The Brillouin zone contains points of symmetry, lines of symmetry, planes of symmetry, and generic points. For incommensurate transitions, the momenta of the critical mode are not located at points of symmetry. In general, the momentum is an irrational multiple of the reciprocal lattice vectors. This means the order parameter forms spatially modulated waves that are incommensurate with the lattice structure. Let us denote the dimension of the symmetry domain that \(\vec{k}\) lives in as \(m(\vec{k})\). It can be proven that the number of Lifshitz invariants at \(\vec{k}\) is always bigger or equal to \(m(\vec{k})\), with \(m(\vec{k})=0,1,2,3\) for points of symmetry, lines of symmetry, planes of symmetry, and generic points respectively. Take a plane of symmetry as an example: if the space group allows two Lifshitz invariants, the effective action will contain two terms of the form (62). The two coupling constants of these terms, \[c_{1}(T,P,\vec{k}),\quad c_{2}(T,P,\vec{k}), \tag{66}\] need to be zero for the transition to be second order. The conditions \(c_{1}(T,P,\vec{k})=c_{2}(T,P,\vec{k})=0\) correspond to a line in the Brillouin zone, which might intersect with the plane of symmetry at isolated points: let us denote the intersection point as \(\vec{k}_{0}\). The location of \(\vec{k}_{0}\) depends on temperature and pressure \((T,P)\). We demonstrate the weak Lifshitz condition for planes of symmetry in Figure 16. The mass gap of this \(\vec{k}_{0}\) mode is therefore a function of \((T,P)\), \[a(T,P)=a\left(T,P,\vec{k}_{0}(T,P)\right). \tag{67}\] The second order phase transition line corresponds to \(a(T,P)=0\), which is a one-dimensional line in the \((T,P)\) plane, and can therefore be reached without fine-tuning. Unlike the points of symmetry which allow no Lifshitz invariants, the lines of symmetry allow at most one Lifshitz invariant, the planes of symmetry allow two Lifshitz invariants, and a generic momentum point allows three Lifshitz invariants. In summary, the number of allowed Lifshitz invariants should be equal to \(m(\vec{k})\). The irreps of the 230 crystallographic space groups which satisfy these weak Lifshitz conditions are classified in [69]. Figure 16: Weak Lifshitz condition for planes of symmetry.
2308.07031
Continuous and discrete universality of zeta-functions: Two sides of the same coin?
In 1975 Voronin proved the universality theorem for the Riemann zeta-function $\zeta(s)$ which roughly says that any admissible function $f(s)$ is approximated by $\zeta(s)$. A few years later Reich proved a discrete analogue of this result. The proofs of these theorems are almost identical but it is not known whether one of them implies the other. We will see that if we translate the question in the language of linear dynamics then there is a link which we exploit to obtain in a straightforward way a big variety of discrete universality results appearing in the literature.
Athanasios Sourmelidis
2023-08-14T09:45:25Z
http://arxiv.org/abs/2308.07031v1
# Continuous and discrete universality of zeta-functions: two sides of the same coin? ###### Abstract. In 1975 Voronin proved the universality theorem for the Riemann zeta-function \(\zeta(s)\) which roughly says that any admissible function \(f(s)\) is approximated by \(\zeta(s)\). A few years later Reich proved a discrete analogue of this result. The proofs of these theorems are almost identical but it is not known whether one of them implies the other. We will see that if we translate the question in the language of linear dynamics then there is a link which we exploit to obtain in a straightforward way a big variety of discrete universality results appearing in the literature. Key words and phrases:Riemann zeta-function, Hurwitz zeta-function, universality, linear dynamics, strong recurrence 2020 Mathematics Subject Classification: Primary 11M06, 11M35; Secondary 47Axx, 37B20 The author is supported by FWF project M 3246-N ## 1. Introduction and Main results ### Introduction The Riemann zeta-function \(\zeta(s)\) is defined for every complex number \(s:=\sigma+it\) with \(\sigma>1\) by the following two expressions \[\zeta(s):=\sum_{n\geq 1}\frac{1}{n^{s}}=\prod_{p}\biggl{(}1-\frac{1}{p^{s}} \biggr{)}^{-1}.\] In his pathbreaking memoir [21], Riemann showed that \(\zeta(s)\) has a meromorphic continuation to the whole complex plane \(\mathbb{C}\) with a simple pole at \(s=1\) and that it satisfies a certain functional equation connecting \(\zeta(s)\) with \(\zeta(1-s)\). This in turn implies that the only real zeros of \(\zeta(s)\), the so called _trivial zeros_, are the negative even integers and Riemann conjectured that all complex zeros of \(\zeta(s)\), the so called _non-trivial zeros_, lie on the vertical line \(1/2+i\mathbb{R}\) (Riemann hypothesis). In 1975 Voronin [26] proved the following: _If \(0<r<1/4\) and \(f:\{s\in\mathbb{C}:|s|\leq r\}\to\mathbb{C}\) is a non-vanishing continuous function which is analytic in the interior of the disk, then for every \(\varepsilon>0\) there is \(\tau>0\) such that_ \[\max_{|s|\leq r}|\zeta(s+3/4+i\tau)-f(s)|<\varepsilon. \tag{1.1}\] In the proof it is already implicit that the set of those \(\tau>0\) satisfying (1.1) has positive lower density in the non-negative real numbers \(\mathbb{R}_{+}\). Recall that a set \(A\subseteq\mathbb{R}_{+}\) has _positive lower density_ if \[\liminf_{T\to\infty}\frac{1}{T}\mathrm{meas}(A\cap[0,T])>0,\] where \(\mathrm{meas}(A)\) denotes the Lebesgue measure of the set \(A\). Since \(\zeta(s)\) approximates a large family of analytic functions, the above result has been known ever since as the _(continuous) universality theorem_ for \(\zeta(s)\). Voronin's theorem has been refined and generalized within the decades for other zeta- and \(L\)-functions. There is an exhaustive literature of results indicating that the universality property is rather the norm than the exception when one deals with a Dirichlet series satisfying some natural (for applications to number theory) conditions. We refer to [11, 16, 19, 24] for a survey of such results. In 1979/1981 Gonek [7] and (independently) Bagchi [1, 2] generalized Voronin's theorem in several directions. For instance, they improved his theorem by replacing the disc centered at \(3/4\) and having radius \(0<r<1/4\), by any compact set \(K\) of the strip \(\mathcal{D}:=\{s\in\mathbb{C}:1/2<\sigma<1\}\) that has connected complement. _Remark_.: We take the chance here to introduce some notation in order to simplify the upcoming exposition. 
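Universality is an asymptotic statement and the first admissible shift \(\tau\) can be astronomically large, so no finite computation can verify (1.1); still, a crude scan already illustrates what the theorem asserts. The following sketch (all numerical choices, i.e. the tiny radius, the constant target function and the range of \(\tau\), are ours and purely illustrative) samples a few points of a small disc around \(3/4\) and records how close the translates \(\zeta(s+3/4+i\tau)\) come to a prescribed non-vanishing target:

```python
import mpmath as mp

mp.mp.dps = 12
r = 0.05                                        # a (very) small disc |s| <= r
target = lambda s: mp.mpf("0.5")                # a non-vanishing target function
pts = [r*mp.exp(2j*mp.pi*k/8) for k in range(8)] + [mp.mpc(0)]

def sup_dist(tau):
    # sup-distance between zeta(. + 3/4 + i*tau) and the target on the sample
    return max(abs(mp.zeta(s + mp.mpf(3)/4 + 1j*tau) - target(s)) for s in pts)

best = min((sup_dist(tau), tau) for tau in mp.arange(10, 200, 0.5))
print("smallest sampled sup-distance:", best[0], "attained at tau =", best[1])
```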
A tuple \((K,f,\varepsilon)\) will be called _admissible_ if \(K\subseteq\mathcal{D}\) is compact with connected complement, \(f:K\to\mathbb{C}\) is continuous in \(K\) and analytic in its interior and \(\varepsilon>0\). A tuple \((K,f,\varepsilon)^{*}\) will also be called admissible if the previous assumptions hold in addition to \(f\) being non-vanishing in \(K\). Gonek considered another generalization of \(\zeta(s)\), namely the Hurwitz zeta-function, which is defined by \[\zeta(s;\alpha):=\sum_{n\geq 0}\frac{1}{(n+\alpha)^{s}},\quad\sigma>1,\quad \alpha\in(0,1].\] Like \(\zeta(s)\), the Hurwitz zeta-function has a meromorphic continuation to \(\mathbb{C}\) with a simple pole at \(s=1\). Gonek showed that if the parameter \(\alpha\) is a fixed transcendental or rational number \(\neq 1/2,1\) then the following holds: _If \((K,f,\varepsilon)\) is admissible, then_ \[\liminf_{T\to\infty}\frac{1}{T}\mathrm{meas}\left\{\tau\in[0,T]:\max_{s\in K} \left|\zeta(s+i\tau;\alpha)-f(s)\right|<\varepsilon\right\}>0. \tag{1.2}\] Observe that in this case the target function \(f\) is allowed to have zeros in \(K\) and this is why we talk about _(continuous) strong universality_. We will see later on that this property of the Hurwitz zeta-function is equivalent to saying that the set \(\{\zeta(\cdot+i\tau;\alpha):\tau\geq 0\}\) is dense in the space of analytic functions \(f:\mathcal{D}\to\mathbb{C}\), \(H(\mathcal{D})\), equipped with the topology of uniform convergence on compact sets. In particular, this set intersects any open subset of \(H(\mathcal{D})\)_frequently often_ as the positive lower density statement implies (we will give a precise formulation of what this means in the following section). In this context and in view of Voronin's theorem, we also have that \(\{\zeta(\cdot+i\tau):\tau\geq 0\}\) intersects frequently often any open subset of the subspace \(H^{*}(\mathcal{D}):=\{f\in H(\mathcal{D}):f\text{ is non-vanishing}\}\) which is endowed with the induced topology. At the same time with Gonek and Bagchi, Reich [20] showed the discrete analogue of Voronin's theorem: _Let \(h>0\). If \((K,f,\varepsilon)^{*}\) is admissible, then_ \[\liminf_{N\to\infty}\frac{1}{N}\#\left\{n\leq N:\max_{s\in K}|\zeta(s+ihn)-f( s)|<\varepsilon\right\}>0, \tag{1.3}\] _where \(\#A\) denotes the cardinality of the set \(A\subset\mathbb{N}\)._ In other words, the set \(\{\zeta(\cdot+ihn):n\geq 1\}\) intersects frequently often any open subset of \(H^{*}(\mathcal{D})\). In this case we talk about \(h\)_-discrete universality_ while for the Hurwitz zeta-function one has \(h\)_-discrete strong universality_ (see for example [12]). ### Main results The proofs of continuous and \(h\)-discrete (strong) universality coincide up to some point and then it seems that the two methods, though similar, are not directly connected. As a matter of fact, in most cases (including the one of the Hurwitz zeta-function) the \(h\)-discrete universality is obtained for all \(h>0\) except for a set of zero Lebesgue measure. As a consequence there are many results which treat these phenomena separately. Particularly, if the universality property for a zeta- or an \(L\)-function is established, then shortly after its discrete analogue appears in the literature as well. So it is natural to ask if there is a connection between these concepts and whether one of them implies the other. 
We will see that if we translate this question into the language of linear dynamics, then there is an affirmative answer which we can employ to prove the following theorems. **Theorem 1.1**.: _Let \(h>0\). The continuous strong universality of the Hurwitz zeta-function implies its \(h\)-discrete strong universality. Conversely, the \(h\)-discrete strong universality implies (1.2) with \(\limsup\) in place of \(\liminf\)._ **Theorem 1.2**.: _Let \(h>0\). Assuming the Riemann hypothesis, the continuous universality of the Riemann zeta-function is equivalent to its \(h\)-discrete universality._ It will be seen that in the case of strong universality the theorem is in fact a consequence of the Conejero-Muller-Peris theorem and a classic result on strong recurrence due to Gottschalk and Hedlund. The theorem for the Riemann zeta-function will follow from the Conejero-Muller-Peris theorem and an equivalent formulation of the Riemann hypothesis in terms of the strong recurrence property of \(\zeta(s)\) due to Bagchi. In the end we present some heuristics on why the Conejero-Muller-Peris theorem is not applicable if we assume the existence of hypothetical non-trivial zeros of \(\zeta(s)\) off the vertical line \(1/2+i\mathbb{R}\). Nevertheless, we will employ an older result due to Oxtoby and Ulam to obtain a much weaker result for \(\zeta(s)\) unconditionally. **Theorem 1.3**.: _The continuous universality of the Riemann zeta-function implies the existence of a dense \(G_{\delta}\)-set \(J\subseteq\mathbb{R}_{+}\) such that if \(t_{0}\in J\), then for any admissible tuple \((K,f,\varepsilon)^{*}\), there is a sequence \((n_{k})_{k\geq 1}\subseteq\mathbb{N}\) such that_ \[\max_{s\in K}|\zeta(s+it_{0}n_{k})-f(s)|<\varepsilon,\quad k\geq 1.\] The proofs of the theorems are based on properties of dynamical systems. Hence, we prove them for the simplest cases of \(\zeta(s)\) and \(\zeta(s;\alpha)\) with the purpose of exhibiting the method. The main point of the present work is to show that any form of continuous universality for zeta-functions implies in one way or the other its discrete analogue. ### Structure and notations In Section 2 we present the necessary material from the theory of linear dynamics that will be repeatedly employed in the sequel. In Section 3 we show how the questions we want to address on zeta-functions can be translated into the language of linear dynamics and provide the proofs of the main results. In the last two sections we briefly state further generalizations of the method introduced in earlier sections. ## 2. Linear dynamics and strong recurrence The interested reader can turn to [5, 9] for a contemporary treatment of the theory of linear dynamics. In the sequel \(X\) will denote a topological vector space over \(\mathbb{C}\). A continuous linear map \(T:X\to X\) is called an _operator_ and a vector \(x\in X\) is called a _hypercyclic vector of \(T\)_ if its orbit \[\operatorname{orb}(x,T):=\left\{T^{n}x:n\geq 0\right\},\] is dense in \(X\); here and in the sequel we use the usual convention from operator theory \(Tx:=T(x)\) and \[T^{n}x:=\underbrace{T\circ\cdots\circ T}_{n\text{ times}}x.\] The set of hypercyclic vectors is denoted by \(\operatorname{HC}(T)\). Hence, a vector \(x\) is hypercyclic if for every open set \(U\subseteq X\), there is \(n\in\mathbb{N}\) such that \(T^{n}x\in U\). 
If additionally we can show that \[\liminf_{N\to\infty}\frac{1}{N}\#\left\{n\leq N:T^{n}x\in U\right\}>0\] for every open set \(U\subseteq X\), then \(x\) is called a _frequently hypercyclic vector of \(T\)_ and the set of frequently hypercyclic vectors will be denoted by \(\operatorname{FHC}(T)\subseteq\operatorname{HC}(T)\). In the continuous case, a one-parameter family \(\mathcal{T}=(T_{t})_{t\geq 0}\) of operators defined on \(X\) is called a _strongly continuous semigroup_ (or \(C_{0}\)-semigroup) if \(T_{0}=I\), \(T_{t}T_{s}=T_{t+s}\) for all \(t,s\geq 0\), and \(\lim_{t\to s}T_{t}x=T_{s}x\) for all \(s\geq 0\), \(x\in X\). In this setting, a vector \(x\in X\) is called a _hypercyclic vector of \(\mathcal{T}\)_ if \(\operatorname{orb}(\mathcal{T},x):=\left\{T_{t}x:t\geq 0\right\}\) is dense in \(X\) and a _frequently hypercyclic vector of \(\mathcal{T}\)_ if \[\liminf_{T\to\infty}\frac{1}{T}\mathrm{meas}\left\{t\in[0,T]:T_{t}x\in U\right\}>0\] for every open set \(U\subseteq X\). Completely analogously to the discrete case, the set of hypercyclic, resp. frequently hypercyclic, vectors of the family \(\mathcal{T}\) is denoted by \(\operatorname{HC}(\mathcal{T})\), resp. \(\operatorname{FHC}(\mathcal{T})\subseteq\operatorname{HC}(\mathcal{T})\). From the above definitions, it follows that \(\operatorname{HC}(T_{t})\subseteq\operatorname{HC}(\mathcal{T})\) for every \(t>0\) because \(T_{t}^{n}=T_{nt}\). Whether the converse implication \(\operatorname{HC}(\mathcal{T})\subseteq\operatorname{HC}(T_{t})\) is true for some \(t>0\) was first partially answered by Oxtoby and Ulam [18]. We also refer to [9, Theorem 7.22] for a proof. **Theorem** (Oxtoby-Ulam).: _If \(\mathcal{T}=(T_{t})_{t\geq 0}\) is a \(C_{0}\)-semigroup on a separable space \(X\) and \(x\in\operatorname{HC}(\mathcal{T})\), then there is a dense \(G_{\delta}\)-set \(J\subseteq(0,+\infty)\) such that \(x\in\operatorname{HC}(T_{t})\) for any \(t\in J\)._ Whether one has \(\operatorname{HC}(\mathcal{T})\subseteq\operatorname{HC}(T_{t})\) for every \(t>0\) has been answered recently by Conejero, Muller and Peris [6]. **Theorem** (Conejero-Muller-Peris).: _If \(\mathcal{T}=(T_{t})_{t\geq 0}\) is a \(C_{0}\)-semigroup of operators that is locally equicontinuous, then \(\operatorname{HC}(\mathcal{T})\subseteq\operatorname{HC}(T_{h})\) and \(\operatorname{FHC}(\mathcal{T})\subseteq\operatorname{FHC}(T_{h})\) for any \(h>0\)._ We close the section with the notion of strong recurrence. A vector \(x\in X\) will be called _strongly recurrent_ for the operator \(T\), respectively for the \(C_{0}\)-semigroup \(\mathcal{T}\), if for every open set \(U\subseteq X\) with \(x\in U\) \[\limsup_{N\to\infty}\frac{\#\left\{n\leq N:T^{n}x\in U\right\}}{N}>0,\] respectively \[\limsup_{T\to\infty}\frac{\operatorname{meas}\left\{t\in[0,T]:T_{t}x\in U\right\}}{T}>0.\] There is a connection between the continuous and the discrete versions of strong recurrence, which was established by Gottschalk and Hedlund [8, Theorem 2]. **Theorem** (Gottschalk-Hedlund).: _Let \(\mathcal{T}=(T_{t})_{t\geq 0}\) be a \(C_{0}\)-semigroup of operators and assume that \(h>0\). Then a vector \(x\in X\) is strongly recurrent for \(\mathcal{T}\) if and only if it is strongly recurrent for \(T_{h}\)._ The notion of strong recurrence in the context of zeta- and \(L\)-functions has been realized by Bagchi in his thesis [1] and then in subsequent work in [2, 3]. 
**Theorem** (Bagchi).: _The following statements are equivalent:_ \((1)\) _The Riemann hypothesis is true._ \((2)\) _For any admissible \((K,\zeta,\varepsilon)\)_ \[\liminf_{T\to\infty}\frac{1}{T}\mathrm{meas}\left\{\tau\in[0,T]:\max_{s\in K} \left|\zeta(s+i\tau)-\zeta(s)\right|<\varepsilon\right\}>0.\] \((3)\) _Statement_ \((2)\) _holds with \(\limsup\) in place of \(\liminf\)._ It should be noted that Bagchi proves only \((1)\Leftrightarrow(3)\) but the statement cited mostly in the relevant literature is \((1)\Leftrightarrow(2)\). We give a sketch of the proof. The implication \((1)\Rightarrow(2)\) follows from Voronin's theorem while \((2)\Rightarrow(3)\) is obvious. Lastly, assuming that \((3)\) holds but \((1)\) does not, it would imply by Rouche's theorem the existence of at least \(cT\) many zeros in the rectangle \(1/2<\sigma<1\), \(0<t<T\), for some \(c>0\) and infinitely many \(T\). But these would contradict classic zero-density estimates of \(\zeta(s)\) in the half-plane \(\sigma>1/2\) (see for example [25, Chapter IX]). ## 3. Proofs of the main results Before giving the proofs we construct the space \(X\) and the one-parameter family of operators \(\mathcal{T}\) which are connected to the universality of zeta-functions. Let \((K_{n})_{n\geq 1}\) be an exhaustion of \(\mathcal{D}=\{s\in\mathbb{C}:1/2<\sigma<1\}\) by compact sets, that is \(K_{n}\subseteq K_{n+1}\subseteq\mathcal{D}\), \(n\geq 1\), and for every compact set \(K\subseteq U\), there is \(N\geq 1\) such that \(K\subseteq K_{N}\). Since \(\mathcal{D}\) is simply connected we can construct \(K_{n}\) to have connected complement. We equip \(H(\mathcal{D})=\{f:\mathcal{D}\to\mathbb{C}\mid f\text{ is analytic}\}\) with the sequence of norms \[p_{n}(f):=\max_{s\in K_{n}}|f(s)|,\quad f\in H(\mathcal{D}),\quad n\geq 1,\] and it becomes a (separable) Frechet space, i.e. a complete metric space with the metric \[d(f,g):=\sum_{n\geq 1}\frac{1}{2^{n}}\min\left(1,p_{n}(f-g)\right),\quad f,g \in H(\mathcal{D}).\] Lastly, we define the sequence of _translation operators_\(\mathcal{T}:=(T_{\tau})_{\tau\geq 0}\) by \[T_{\tau}f:=f(\cdot+i\tau),\quad f\in H(\mathcal{D}),\quad\tau\geq 0,\] and it can be quickly verified that it is a well-defined \(C_{0}\)-semigroup on \(H(\mathcal{D})\). The family is also locally equicontinuous because \(H(\mathcal{D})\) is a Frechet space. Proof of Theorem 1.1.: Let \(h>0\). If \(U\) is an open subset of \(H(\mathcal{D})\), then there is \(n\geq 1\) and an analytic function \(h:\mathcal{D}\to\mathbb{C}\) such that \[V:=\left\{g\in H(\mathcal{D}):\max_{s\in K_{n}}|g(s)-h(s)|<\frac{1}{n}\right\} \subseteq U.\] If the Hurwitz zeta-function is continuous strongly universal for some fixed parameter \(\alpha\in(0,1]\), then \[\liminf_{T\to\infty}\frac{1}{T}\text{meas}\left\{\tau\in[0,T]:\max_{s\in K_{n }}|\zeta(s+i\tau;\alpha)-h(s)|<\frac{1}{n}\right\}>0,\] which implies in the language of linear dynamics that \[\liminf_{T\to\infty}\frac{1}{T}\text{meas}\left\{\tau\in[0,T]:T_{\tau}\zeta( \cdot;\alpha)\in U\right\}>0.\] Since \(U\) is arbitrary and \(\zeta(\cdot;\alpha)\) is an element of \(H(\mathcal{D})\), we get that \(\zeta(\cdot;\alpha)\in\operatorname{FHC}(\mathcal{T})\). In view of the Conejero-Muller-Peris theorem it follows that also \(\zeta(\cdot;\alpha)\in\operatorname{FHC}(T_{h})\). Let now \((K,f,\varepsilon)\) be admissible. 
By Mergelyan's theorem there is a polynomial \(P(s)\) with \[\max_{s\in K}|P(s)-f(s)|<\frac{\varepsilon}{2}.\] On the other hand \[W:=\left\{g\in H(\mathcal{D}):\max_{s\in K}|g(s)-P(s)|<\frac{\varepsilon}{2}\right\}\] is an open subset of \(H(\mathcal{D})\). Since \(\zeta(\cdot;\alpha)\in\operatorname{FHC}(T_{h})\), we have that \[\liminf_{N\to\infty}\frac{1}{N}\#\left\{n\leq N:T_{h}^{n}\zeta(\cdot;\alpha)\in W\right\}>0.\] Therefore, by the triangle inequality and the equation \(T_{h}^{n}=T_{hn}\) we conclude that \[\liminf_{N\to\infty}\frac{1}{N}\#\left\{n\leq N:\max_{s\in K}|\zeta(s+ihn;\alpha)-f(s)|<\varepsilon\right\}>0.\] Hence, \(\zeta(s;\alpha)\) is \(h\)-discrete strongly universal as well. Now we assume that the converse statement is true, i.e. the Hurwitz zeta-function is \(h\)-discrete strongly universal. If \(U\subseteq H(\mathcal{D})\) is open with \(\zeta(\cdot;\alpha)\in U\), then by assumption \[\liminf_{N\to\infty}\frac{1}{N}\#\left\{n\leq N:T_{h}^{n}\zeta(\cdot;\alpha)\in U\right\}>0.\] Hence, \(\zeta(\cdot;\alpha)\) is strongly recurrent for the operator \(T_{h}\) and, consequently, for the family \(\mathcal{T}\), as can be seen by the Gottschalk-Hedlund theorem. If \((K,f,\varepsilon)\) is admissible, then the \(h\)-discrete strong universality implies that there is an integer \(N\) such that \[\max_{s\in K}|\zeta(s+ihN;\alpha)-f(s)|<\frac{\varepsilon}{2}. \tag{3.1}\] Since \(\zeta(\cdot;\alpha)\) is strongly recurrent for \(\mathcal{T}\), we know that \[\limsup_{T\to\infty}\frac{1}{T}\text{meas}\left\{\tau\in[0,T]:\max_{s\in ihN+K}|\zeta(s+i\tau;\alpha)-\zeta(s;\alpha)|<\frac{\varepsilon}{2}\right\}>0.\] From the triangle inequality and (3.1), we now obtain the second statement of the theorem. Proof of Theorem 1.2.: Let \(h>0\). It should be noted that if \((K,f,\varepsilon)\) is admissible (\(f\) may also have zeros), then \[\liminf_{T\to\infty}\frac{1}{T}\text{meas}\left\{\tau\in[0,T]:\max_{s\in K}|\log\zeta(s+i\tau)-f(s)|<\varepsilon\right\}>0. \tag{3.2}\] For, we know that hypothetical zeros of \(\zeta(s)\) off the vertical line \(1/2+i\mathbb{R}\) are distributed more sparsely as we move higher in the strip \(\mathcal{D}\). Therefore, for any compact set \(K\), it is possible to show that the measure of those \(\tau\in[0,T]\) such that \(i\tau+K\) does not contain any zeros of \(\zeta(s)\), is asymptotically equal to \(T\). On the other hand, Voronin's theorem implies that the set of \(\tau\) such that \(\zeta(s+i\tau)\) is close to \(e^{f(s)}\) has positive lower density. Relation (3.2) follows by combining these two facts. It is only now that we have to assume the Riemann hypothesis in order to ensure that \(\log\zeta\in H(\mathcal{D})\). We then have the same setting as for the Hurwitz zeta-function in the previous theorem. The continuous universality of \(\zeta(s)\) implies the continuous strong universality of \(\log\zeta(s)\) which in turn (by the Conejero-Muller-Peris theorem) implies the \(h\)-discrete strong universality of \(\log\zeta(s)\) and by exponentiation the \(h\)-discrete universality of \(\zeta(s)\). For the converse statement, however, we employ additionally Bagchi's theorem after applying the Gottschalk-Hedlund theorem. Of course it is again essential to ensure that \(\log\zeta\in H(\mathcal{D})\) as follows from the Riemann hypothesis. 
By repeating now the same steps as in the previous proof but moving from \(\zeta(s)\) to its logarithm and preserving the \(\liminf\) notation, we obtain that the \(h\)-discrete universality of \(\zeta(s)\) implies its continuous universality. _Remark_.: One of the main arguments in the proof of the Conejero- Muller-Peris theorem is that the (frequently) hypercyclic vector can approximate itself in the space \(X\). This may not be possible in our case unless we assume the Riemann hypothesis in which case Voronin's theorem is applicable. Another approach relies on working in a slit half-plane \(\Omega\) which is defined as the whole \(\mathcal{D}\) without the segments \((1/2+i\gamma,\beta+i\gamma]\) for any zero \(\rho=\beta+i\gamma\) of \(\zeta(s)\). Since \(\zeta(1+it)\neq 0\), \(t\in\mathbb{R}\), \(\Omega\) is simply connected and we can show as in the case of \(H(\mathcal{D})\), that \(H(\Omega)\) is a Frechet space with the induced topology. The disadvantage of this approach is that the family \(\mathcal{T}\) of vertical shifts in not anymore well-defined in \(H(\Omega)\). Nevertheless, if we leave aside the notion of operators then Voronin's theorem implies the strong recurrence of \(\zeta(s)\) in the space \(H(\Omega)\) (in the form given by Bagchi). However, later in the proof of the Conejero-Muller-Peris theorem, a certain continuity argument is needed that involves the elements \(x+\lambda T_{\tau}x\), \(x\in HC(\mathcal{T})\), \(\lambda\in\mathbb{C}\) and \(\tau\) from some suitable interval. This argument does not seem that can be adjusted to our case if we drop some defining properties of a \(C_{0}\)-semigroup. Instead, to prove the weaker statement of Theorem 1.3, we modify the proof of [9, Theorem 7.22], where no self-approximation is needed. Proof of Theorem 1.3.: Let \(U_{m}\), \(m\geq 1\), be an enumeration of the sets \[\left\{f\in H^{*}(\mathcal{D}):\max_{s\in K_{N}}|f(s)-e^{P(s)}|<\frac{1}{N} \right\},\quad N\geq 1,\,P\in\mathbb{Q}[X],\] which form a countable base of \(H^{*}(\mathcal{D})\) and for each \(m\geq 1\) define \[J_{m}=\left\{\tau\in(0,+\infty):\exists n\in\mathbb{N}\text{ with }\max_{s\in K_{N}}|\zeta(s+in\tau)-e^{P(s)}|<\frac{1}{N}\right\},\] which is an open subset of \((0,+\infty)\). By Voronin's theorem it follows that \(J_{m}\) is dense. For, if \(0<a<b<+\infty\), then there is \(n_{0}\in\mathbb{N}\) such that \(n_{0}b>(n_{0}+1)a\) and, thus, \((n_{0}a,+\infty)\subseteq\cup_{n\geq n_{0}}(na,nb)\) The universality theorem implies that \(T_{s}\zeta\in U_{m}\) for (infinitely many) \(s\geq n_{0}\). Hence, \(s\in(na,nb)\) for some \(n\geq n_{0}\) and if we set \(t_{0}=s/n\), then \(t_{0}\in J_{m}\cap(a,b)\). Therefore, the Baire category theorem yields that the set \[J:=\bigcap_{m\geq 1}J_{m}\] is a dense \(G_{\delta}\)-set in \((0,+\infty)\) whose elements satisfy the desired property of \(\zeta(s)\). ## 4. Multidimensional case There is no real advantage on considering only \(\zeta(s)\) or \(\zeta(s;\alpha)\) since any other function defined in a similar space as \(H(\mathcal{D})\) could take their place. In fact, we introduce the following generalization, which has also been realized by Bagchi [3]: Let \(N\in\mathbb{N}\), \(\mathbf{h}:=(h_{1},\ldots,h_{N})\) be a vector of positive real numbers and \(\sigma_{1n}<\sigma_{2n}\), \(n\leq N\). 
If \(\mathcal{D}_{n}:=\left\{s\in\mathbb{C}:\sigma_{1n}<\sigma<\sigma_{2n}\right\}\), \(n\leq N\), and \(\mathcal{C}:=\prod_{n\leq N}\mathcal{D}_{n}\) is their Cartesian product, then \(H(\mathcal{C})\) is a separable Frechet space (with the product topology) and the family of operators \(T_{\mathbf{h}}=(T_{\tau})_{\tau\geq 0}\) defined by \[T_{\tau}(f_{1},\ldots,f_{N})=(f_{1}(\cdot+ih_{1}\tau),\ldots,f_{N}(\cdot+ih_{N} \tau)),\quad(f_{1},\ldots,f_{N})\in H(\mathcal{C}),\quad\tau\geq 0,\] is a well-defined \(C_{0}\)-semigroup. Lastly, \(H^{*}(\mathcal{C})\subseteq H(\mathcal{C})\) will consist of the vectors whose entries are zero-free functions. A vector of **zeta-functions**\((\zeta_{1},\ldots,\zeta_{N})\in H(\mathcal{C})\) will be called: 1. _continuous jointly universal_ if for any admissible \((K_{n},f_{n},\varepsilon)^{*}\), \(n\leq N\), \[\liminf_{T\to\infty}\frac{1}{T}\mathrm{meas}\left\{\tau\in[0,T]:\max_{n\leq N }\max_{s\in K_{i}}|\zeta_{n}(s+ih_{n}\tau)-f_{n}(s)|<\varepsilon\right\}>0; \tag{4.1}\] 2. _continuous joint strongly universal_ if for any admissible \((K_{n},f_{n},\varepsilon)\), \(n\leq N\), relation (4.1) holds; 3. \(h\)_-discrete jointly universal/joint strongly universal_ if in (4.1) we substitute the continuous variable \(\tau\in[0,T]\) with a discrete variable \(hm\), \(m\in[0,T]\cap\mathbb{N}\), and the Lebesgue measure notation meas with the notation of the cardinality of a set \(\#\). With the above notations, we can adjust the proofs from the previous sections to obtain similar results for the multidimensional case. **Theorem 4.1**.: _If \((\zeta_{1},\ldots,\zeta_{N})\in H(\mathcal{C})\) is continuous joint strongly universal, then it is \(h\)-discrete joint strongly universal for every \(h>0\). If \((\zeta_{1},\ldots,\zeta_{N})\in H^{*}(\mathcal{C})\) is continuous jointly universal, then it is \(h\)-discrete joint universal for every \(h>0\). If \((\zeta_{1},\ldots,\zeta_{N})\in H(\mathcal{C})\) is continuous jointly universal, then there is a dense \(G_{\delta}\)-set \(J\subseteq(0,+\infty)\) such that if \(t_{0}\in J\), then for any admissible tuple \((K_{n},f_{n},\varepsilon)^{*}\), \(n\leq N\), there is a sequence \((n_{k})_{k\geq 1}\subseteq\mathbb{N}\) such that_ \[\max_{n\leq N}\max_{s\in K_{n}}|\zeta_{n}(s+it_{0}n_{k})-f_{n}(s)|<\varepsilon,\quad k\geq 1.\] ## 5. Zeta-functions The reason we highlighted the term "zeta-functions" is because in analytic number theory there is no rigorous definition of what a zeta- or an \(L\)-function should look like. They are usually considered as Dirichlet series \[L(s)=\sum_{n\geq 1}\frac{a_{n}}{e^{\lambda_{n}s}},\quad\sigma>\sigma_{0},\] where \((a_{n})_{n\geq 1}\subseteq\mathbb{C}\) and \((\lambda_{n})_{n\geq 1}\subseteq\mathbb{R}\) are sequences of number-theoretic interest (for example \(\lambda_{n}=\log n\) and \(a_{n}=1\)) and \(\sigma_{0}<\infty\) is the abscissa of absolute convergence. Naturally, \(L(s)\) can not be universal in the half-plane \(\sigma>\sigma_{0}\). If it can, however, be analytically continued to a vertical strip \(\sigma_{1}<\sigma<\sigma_{0}\) with the exception of finitely many poles, then it will most likely be universal as well. Here we can make two distinctions. 
If \(L(s)\) has also an Euler product representation \[L(s)=\prod_{n\geq 1}\biggl{(}1-\frac{b_{n}}{e^{\mu_{n}s}}\biggr{)}^{-1},\quad \sigma>\sigma_{0},\] for some \((b_{n})_{n\geq 1}\subseteq\mathbb{C}\) and \((\mu_{n})_{n\geq 1}\subseteq\mathbb{R}\), then similar zero-density estimates as in the case of \(\zeta(s)\) could possibly be attained, allowing us to say that \(L(s)\) can not be strongly universal. If, on the other hand, such representation does not exist, then \(L(s)\) will be strongly universal. A good candidate of a zeta-function having an Euler product representation are elements from the so-called _Selberg class_ introduced by Selberg [22]. For instance, the Riemann zeta-function, Dirichlet \(L\)-functions, Dedekind zeta-functions and Hecke \(L\)-functions belong to this class. For a detailed survey we refer to [24], while sufficient conditions for an \(L(s)\) from this class to be universal are given in [17]. If additionally we assume some sort of independence between elements \(L_{1},\ldots,L_{N}\) of the Selberg class, then we can also have joint universality. For example if \(\chi_{1},\ldots,\chi_{N}\) are Dirichlet characters that are pairwise nonequivalent, then the associated Dirichlet \(L\)-functions \((L(s,\chi_{1}),\ldots,L(s,\chi_{N}))\) are jointly universal. This was proved independently by Bagchi [1, 2], Gonek [7] and Voronin [27]. A more general framework in the context of the Selberg class is given in [15]. Zeta-functions without an Euler product are usually occurring when \(\lambda_{n}=\log\kappa_{n}\) with \(\kappa_{n}\in\mathbb{R}_{+}\setminus\mathbb{N}\) or when they can be expressed as a linear combination of two or more zeta-functions which have an Euler product. A classic example of the first case is the Lerch zeta-function [11] which is a generalization of the Hurwitz zeta-function, while an example of the second case are Dirichlet series with periodic coefficients [23]. In both cases we have strong universality while in [4, 13] a more general framework is given. If a tuple of such zeta-functions has some sort of independence between them, then they will also be joint strongly universal [10, 14]. We only presented a selection of results. Their discrete analogues can also be found in the literature. On the other hand, Theorem 4.1 implies in a strong or a weaker sense that studying the continuous universality may suffice.
2303.04892
Some New Results on the Maximum Growth Factor in Gaussian Elimination
This paper combines modern numerical computation with theoretical results to improve our understanding of the growth factor problem for Gaussian elimination. On the computational side we obtain lower bounds for the maximum growth for complete pivoting for $n=1:75$ and $n=100$ using the Julia JuMP optimization package. At $n=100$ we obtain a growth factor bigger than $3n$. The numerical evidence suggests that the maximum growth factor is bigger than $n$ if and only if $n \ge 11$. We also present a number of theoretical results. We show that the maximum growth factor over matrices with entries restricted to a subset of the reals is nearly equal to the maximum growth factor over all real matrices. We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. Finally, through numerical search, and stability and extrapolation results, we provide improved lower bounds for the maximum growth factor. Specifically, we find that the largest growth factor is bigger than $1.0045n$ for $n>10$, and the lim sup of the ratio with $n$ is greater than or equal to $3.317$. In contrast to the old conjecture that growth might never be bigger than $n$, it seems likely that the maximum growth divided by $n$ goes to infinity as $n \rightarrow \infty$.
Alan Edelman, John Urschel
2023-03-08T21:16:42Z
http://arxiv.org/abs/2303.04892v4
# Some new results on the maximum growth factor in Gaussian elimination

###### Abstract.

This paper combines modern numerical computation with theoretical results to improve our understanding of the growth factor problem for Gaussian elimination. On the computational side we obtain lower bounds for the maximum growth for complete pivoting for \(n=1:75\) and \(n=100\) using the Julia JuMP optimization package. At \(n=100\) we obtain a growth factor bigger than \(3n\). The numerical evidence suggests that the maximum growth factor is bigger than \(n\) if and only if \(n\geq 11\). We also present a number of theoretical results. We show that the maximum growth factor over matrices with entries restricted to a subset of the reals is nearly equal to the maximum growth factor over all real matrices. We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. Finally, through numerical search, and stability and extrapolation results, we provide improved lower bounds for the maximum growth factor. Specifically, we find that the largest growth factor is bigger than \(1.0045n\), and the lim sup of the ratio with \(n\) is greater than or equal to \(3.317\). In contrast to the old conjecture that growth might never be bigger than \(n\), it seems likely that the maximum growth divided by \(n\) goes to infinity as \(n\to\infty\).

Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA. 2020 _Mathematics Subject Classification._ Primary 65F05, 15A23. Accompanying software and data may be found in the online repository.

## 1. Introduction

### A brief history

**The 1960s:** The study of the maximum growth factor under complete pivoting goes back to the 1960s, with early work establishing the largest possible pivots for small matrices (see [28]).

**The 1980s:** Day and Peterson [5] were notably the first to explore the growth problem for complete pivoting with numerical optimization software, specifically the NPSOL Library out of Stanford (Nonlinear Programming, Stanford Optimization Laboratory). In particular they were the first to observe the number \(4.1325\) when \(n=5\) as an output of the optimization software. In 1989, Higham and Higham [18] pointed out that many common matrices can have growth factors of order \(n\) (for any pivoting strategy).

**The early 1990s:** Interest in the growth factor was substantially rejuvenated when Trefethen and Schreiber [34] performed average case analyses of the growth factor in 1990. One year later, Nick Gould [15] surprised everyone by finding a 13x13 matrix with growth bigger than 13 in finite precision using his LANCELOT software. The solution was confirmed to be near a true example in exact arithmetic in 1992 [7].

**1993-Present:** In the over 30 years since, there has been no progress whatsoever in improving Gould's numbers for complete pivoting through computation (which would raise a lower bound) or lowering any mathematical upper bounds. This is a testament to the difficulty of the problem.

### Other pivoting analyses

**No Pivoting:** In 2006, the celebrated smoothed analysis4 of Sankar, Spielman, and Teng [26] showed that large growth is unlikely from a probabilistic perturbative viewpoint with no pivoting, and pointed out that such an analysis could be possible for partial and complete pivoting. 
Footnote 4: Incidentally “smoothed analysis” was named by the first author in his car while driving Spielman and Teng in Cambridge, MA. **Partial Pivoting:** In 1994, Foster [11] pointed out that practical problems can bump into the unacceptable \(2^{n-1}\) bound for partial pivoting. The first author remarked in [9] that numerical experiments suggested in contrast to [34] that the growth might be more like \(O(n^{1/2})\) than \(O(n^{2/3})\) on average. In addition to the smoothed analysis for no pivoting, Sankar [25] also performed a smoothed analysis of partial pivoting with sub-exponential bounds. Very recently Huang and Tikhomirov [19] obtained new results exploring the average case analysis for partial pivoting. **Complete Pivoting for Hadamard Matrices:** It remains unknown, though perhaps it seems unlikely, that a Hadamard matrix could have an earlier pivot bigger than \(n\), given that the last three pivots can only be \(n/2,n/2\) and \(n\), and the fourth from the end is at most \(n/2\). Nonetheless, complete pivot patterns for Hadamard matrices remain a fascinating topic of research. A comprehensive review of the topic including new progress written in 2013 by Kravvaritis may be found in [21]. Of note are the investigations by Seberry [27] and also [8, 9]. ### Technical Background The solution of a linear system, i.e., given a matrix \(A\) and vector \(b\), finding a vector \(x\) satisfying \(Ax=b\), is one of the oldest problems in mathematics. Gaussian elimination, a technique in which a matrix is factored into the product of a lower and upper triangular matrix, is one of the most fundamental and important techniques for solving linear systems. The algorithm proceeds by converting \(A\) into upper triangular form through row operations. In particular, given an \(n\times n\) matrix \(A=(a_{i,j})\), Gaussian elimination performs the iteration \[a_{i,j}^{(1)} :=a_{i,j} \text{for}\quad i,j=1,...,n,\] \[a_{i,j}^{(k+1)} :=a_{i,j}^{(k)}-\frac{a_{i,k}^{(k)}a_{k,j}^{(k)}}{a_{k,k}^{(k)}} \text{for}\quad i,j=k,...,n,\;k=1,...,n-1.\] This can be equivalently written as successive rank one updates of sub-matrices of \(A\), i.e., \[A^{(k+1)}:=A_{k+1:n,k+1:n}^{(k)}-\frac{1}{a_{k,k}^{(k)}}\,A_{k+1:n,k}^{(k)}\, A_{k,k+1:n}^{(k)}\text{for}\quad k=1,...,n-1,\] where \(A^{(k)}=(a_{i,j}^{(k)})_{i,j\geq k}\) and \(A_{i_{1}:i_{2},j_{1},j_{2}}\) is defined as the sub-matrix of \(A\) containing only rows \(\{i_{1},...,i_{2}\}\) and columns \(\{j_{1},...,j_{2}\}\). The resulting LU factorization of \(A\) is given by \[L(i,j)=\frac{a_{i,j}^{(j)}}{a_{j,j}^{(j)}}\quad\text{for}\quad i\geq j,\quad \text{and}\quad U(i,j)=a_{i,j}^{(i)}\quad\text{for}\quad j\geq i,\] and this factorization is unique (up to scaling, i.e., \(A=(LD)(D^{-1}U)\) for any invertible diagonal matrix \(D\)). Not all matrices have an LU factorization (issues arise if \(a_{k,k}^{(k)}=0\) for some \(k<n\)), and may require a permutation of the rows (or, equivalently, columns) of the matrix in order for such a factorization to exist. In addition, when computations are performed in finite precision, issues due to round-off error can occur. The backward error due to rounding in Gaussian elimination can be estimated by the number of bits of precision, the condition number of the matrix \(A\), and the growth factor of the Gaussian elimination algorithm (see [20, Theorem 2.6] or [17, Theorem 9.5] for details). For this reason, understanding the growth factor under different permutation strategies is of both theoretical and practical importance. 
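The iteration above is compact enough to state directly in code. The following Julia sketch (illustrative only, and not the authors' software from the online repository [10]) performs the updates on a working copy of \(A\) with no pivoting, extracts \(L\) and \(U\) as defined above, and records the largest intermediate entry, the numerator of the growth factor defined in the next paragraph; it assumes every pivot \(a_{k,k}^{(k)}\) is nonzero.

```julia
using LinearAlgebra

# Gaussian elimination without pivoting, following the update
#   a_{i,j}^{(k+1)} = a_{i,j}^{(k)} - a_{i,k}^{(k)} a_{k,j}^{(k)} / a_{k,k}^{(k)},
# returning L, U, and the largest magnitude of any intermediate entry.
# Illustrative sketch only; it assumes every pivot a_{k,k}^{(k)} is nonzero.
function lu_no_pivoting(A::AbstractMatrix{<:Real})
    n = size(A, 1)
    B = float(copy(A))                        # B holds the current iterate A^(k)
    L = Matrix{Float64}(I, n, n)
    U = zeros(n, n)
    maxentry = maximum(abs, B)                # running max_{i,j,k} |a_{i,j}^{(k)}|
    for k in 1:n-1
        U[k, k:n] = B[k, k:n]                 # U(k, j) = a_{k,j}^{(k)} for j >= k
        L[k+1:n, k] = B[k+1:n, k] / B[k, k]   # L(i, k) = a_{i,k}^{(k)} / a_{k,k}^{(k)}
        # rank-one update of the trailing (n-k) x (n-k) block
        B[k+1:n, k+1:n] .-= B[k+1:n, k] * B[k, k+1:n]' / B[k, k]
        maxentry = max(maxentry, maximum(abs, @view B[k+1:n, k+1:n]))
    end
    U[n, n] = B[n, n]
    return L, U, maxentry
end
```

Dividing the returned `maxentry` by the largest magnitude entry of \(A\) gives the growth factor defined below; the pivoting strategies formalized next permute rows and/or columns precisely to control this quantity.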
Using exact arithmetic, the growth factor of Gaussian elimination is defined as \[g(A):=\frac{\max_{i,j,k}|a_{i,j}^{(k)}|}{\max_{i,j}|a_{i,j}|}.\] When performing Gaussian elimination in finite precision, say, using only numbers that can be represented in base \(\beta\) with a length \(t\) mantissa, the algorithm suffers from round-off error, and the growth factor in this setting may be larger than \(g(A)\). However, as we will see in Section 4, when \(t=\omega(\log_{\beta}^{2}n)\), the maximum growth factors in exact and floating point arithmetic are nearly identical (up to a \(1-o(1)\) multiplicative factor) under complete pivoting (see Theorem 4.2). For this reason, we focus almost exclusively (save for Section 4) on exact arithmetic. The most popular and well-studied methods for permuting a matrix in Gaussian elimination are partial pivoting (requiring \(|a_{i,k}^{(k)}|\leq|a_{k,k}^{(k)}|\)), complete pivoting (requiring \(|a_{i,j}^{(k)}|\leq|a_{k,k}^{(k)}|\)), and the slightly less well-known rook pivoting (requiring \(|a_{i,k}^{(k)}|,|a_{k,j}^{(k)}|\leq|a_{k,k}^{(k)}|\)). The growth factor for partial pivoting is well understood in the worst case, and so, in this work, we primarily focus on complete pivoting and, to some extent, rook pivoting as well. Let \(\mathbf{GL}_{n}(\mathbb{C})\) be the set of \(n\times n\) non-singular complex matrices. For simplicity, when considering a given pivoting strategy, we simply restrict ourselves to the set of matrices that satisfy the constraints of the pivoting procedure. In particular, we define \[\mathbf{PP}_{n}(S) =\{A\in\mathbf{GL}_{n}(\mathbb{C})\cap S^{n\times n}\,|\,|a_{i,k} ^{(k)}|\leq|a_{k,k}^{(k)}|\text{ for all }i\geq k\},\] \[\mathbf{CP}_{n}(S) =\{A\in\mathbf{GL}_{n}(\mathbb{C})\cap S^{n\times n}\,|\,|a_{i,j} ^{(k)}|\leq|a_{k,k}^{(k)}|\text{ for all }i,j\geq k\},\] \[\mathbf{RP}_{n}(S) =\{A\in\mathbf{GL}_{n}(\mathbb{C})\cap S^{n\times n}\,|\,|a_{i,k} ^{(k)}|,|a_{k,j}^{(k)}|\leq|a_{k,k}^{(k)}|\text{ for all }i,j\geq k\},\] where \(S\) is some arbitrary subset of \(\mathbb{C}\) (typically \(\mathbb{R}\) or \(\mathbb{C}\)). We denote the supremum of the growth factor for a set \(\mathbf{X}\subset\mathbb{C}^{n\times n}\) by \(g\big{[}\mathbf{X}\big{]}\), e.g., \(g\big{[}\mathbf{CP}_{n}(\{0,1\})\big{]}\) is the maximum growth factor of a non-singular \(n\times n\) binary matrix under complete pivoting. For all sets \(\mathbf{X}\) under consideration in this work, this supremum is a maximum. ### Related Work The maximum growth factor for partial pivoting is well understood. This quantity is known to be exactly \(2^{n-1}\) for \(n\times n\) complex matrices, achieved by the famous example matrix [36, p.212] (see [18] for all such real matrices): \[A=\begin{pmatrix}1&0&\cdots&0&1\\ -1&\ddots&\ddots&\vdots&\vdots\\ \vdots&\ddots&1&0&1\\ -1&\cdots&-1&1&1\\ -1&\cdots&-1&-1&1\end{pmatrix}. \tag{1.1}\] For complete pivoting, much less is known. A classical result due to Wilkinson bounds the growth factor using only Hadamard's inequality [35, Equation 4.15], and produces the estimate \[g\big{[}\mathbf{CP}_{n}(\mathbb{C})\big{]}\leq\sqrt{n}\big{(}2\;3^{1/2}\;... \;n^{1/(n-1)}\big{)}^{1/2}\leq 2\sqrt{n}\,n^{\ln(n)/4}. \tag{1.2}\] Minor improvements to this estimate are possible using the inexactness of Hadamard's inequality, but to date no non-trivial improvement (say, in the exponential constant) is known, even when restricted to real numbers. 
This estimate has historically been considered quite pessimistic, and the conjecture that we attribute to Wilkinson reasonably states (now known not to hold) that the growth factor for real matrices under complete pivoting is at most \(n\): **Conjecture 1.1** (Folklore?5).: \(g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\leq n\)_, with equality achieved only by Hadamard matrices._ The complex analogue of this conjecture is clearly not true, as illustrated by the dimension three example [29, 31] \[A=\begin{pmatrix}1&1&1\\ 1&z&z^{-1}\\ 1&z^{-1}&z\end{pmatrix},\] which, when \(z=\big{(}-1+2\sqrt{2}i\big{)}/3\), has growth factor \(16/(3\sqrt{3})\approx 3.07\). As noted by Higham, Conjecture 1.1 was one of the most famous conjectures in numerical analysis [17]. Attempting to bound or numerically compute the growth factor for small values of \(n\) was a reasonably active area of research. For instance, there are a number of proofs that the maximum third and fourth pivots are \(2.25\) and \(4\), respectively (see Cryer [4], Tornheim [28, 29, 30, 31], Cohen [2], and Day and Peterson [5]). Tornheim also showed that the maximum fifth pivot is bounded above by \(4\,\frac{17}{18}\) [30, 31]. Conjecture 1.1 was eventually shown to be false in dimension \(13\) by Gould in IEEE double precision floating point arithmetic [15], and soon after by Edelman in exact arithmetic [7]. Since these results, very little progress has been made on the asymptotic behavior of the maximum growth factor under complete pivoting or the exact values of growth for small choices of \(n\).

Rook pivoting is relatively understudied compared to partial and complete pivoting, despite, in some sense, containing the best characteristics of both methods. In practice, the expected number of comparisons required should be roughly the same order of computation as partial pivoting; see [12, 24] for empirical results and theorems of this type for certain restrictive classes of random matrices. In addition, rook pivoting has a quasi-polynomial upper bound on the maximal growth factor of \[g\big{[}\mathbf{RP}_{n}(\mathbb{C})\big{]}\leq\frac{3}{2}\,n^{3\ln(n)/4}, \tag{1.3}\] as shown by Foster [12]. Similar to complete pivoting, the gap between worst-case constructions and upper bounds is quite large.

The growth factor has also been studied in a variety of other contexts. Trefethen and Schreiber studied the average growth factor over some distributions and numerically observed that for complete pivoting the growth factor appeared to exhibit an \(n^{1/2}\) type behavior [34]. Higham and Higham have given numerous examples of matrices from practical applications with order \(n\) growth factor [18], and recently produced a class of random matrices with growth of order \(n/\log n\) [16] (both for any pivoting strategy). Sankar, Spielman, and Teng provided a smoothed analysis of the growth factor without pivoting, proving that if a matrix is perturbed, it is unlikely to have a large growth factor [26] (in Sankar's thesis, the more complicated case of partial pivoting was also considered [25]). Recently, Huang and Tikhomirov obtained new results exploring the average case analysis for partial pivoting [19]. Parker proved that, using random butterfly matrices, any non-singular matrix can be transformed into one that does not require pivoting [22]; Peca-Medlin and Trogdon further analyzed the benefits of butterfly matrices for a variety of pivoting strategies in [23]. Townsend produced bounds for the growth factor when non-optimal pivots are used [32]. 
### Contributions of this paper In this work, we prove a number of results regarding the maximum growth factor under complete pivoting, strengthen various conjectures, provide strong evidence for some results, and perform extensive numerical computations. Through numerical search, and stability and extrapolation results, we provide improved lower bounds for the maximum growth factor: **Theorem 1.2**.: \(g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\geq 1.0045\,n\) _for all \(n\geq 11\), and \(\limsup_{n}\big{(}g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}/n\big{)}\geq 3.317\)._ This is the first proof that Conjecture 1.1 is false for all \(n>10\), and also the first proof that illustrates a multiplicative gap away from \(n\). In addition, we also provide asymptotic lower bounds for rook pivoting. By noting that the set of rook pivoted matrices are closed under Kronecker products, we convert finite results into lower bounds for the exponent of the growth factor, showing that rook pivoting can exhibit super-linear growth: **Theorem 1.3**.: \(g\big{[}\mathbf{RP}_{n}(\mathbb{R})\big{]}>\frac{1}{641}n^{1.669}\) _for all \(n\in\mathbb{N}\)._ Numerical search is a key ingredient in the proofs of both Theorems 1.2 and 1.3, and our numerical results also provide insights beyond the aforementioned theorems, which we briefly summarize through the following figures and tables: * Table 1 shows improvements compared to previously known data. * Table 2 outlines the implications of our results for low order Hadamard matrices. * Table 3 tabulates our numerical results for every \(n=1:75\) and also \(n=100\). * Figure 2 plots the numerical values from Table 3. The reported numerical computations were performed in software that is entirely open source in Julia using the modern JuMP (Julia for Mathematical Programming) [6] package. We note that, when \(n=52\) we have found a matrix for which the growth factor is greater than \(2n\), and at \(n=100\) the growth factor is well above \(3n\). We also found a matrix for which the growth factor with rook pivoting is \(955\) at \(n=48\). We discuss our methodology for the computation of these results and state a pair of natural conjectures in Subsection 1.6. We also outline our more theoretical results: We show that the maximum growth factor over matrices with entries restricted to a subset of \(\mathbb{R}\) is nearly equal to the maximum growth factor over all real matrices **Theorem 1.4** (Informal Version of Theorem 3.3).: _For any \(S\subset\mathbb{R}\), \(g\big{[}\mathbf{CP}_{14n^{2}}(S)\big{]}\geq\big{(}\mathrm{diam}(S)/2\max(S) \big{)}\,g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\) for all \(n\in\mathbb{N}\)._ This implies that understanding the growth factor for any restricted set, say, binary matrices, is equivalent (up to polynomial factors) to understanding growth for all real matrices (i.e., if growth for binary matrices is polynomial, then it is polynomial for all matrices, and if growth for all matrices is super-polynomial, then it also is for binary matrices). We note that the \(O(n^{2})\) relationship is certainly pessimistic for many sets \(S\) of interest; our purpose here is merely to show it is possible within small polynomial factors to find such sweeping results. 
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
\(n\) & Ours & Gould [15] \\
\hline
\multicolumn{3}{|c|}{Mathematically known} \\
\hline
1 & **1** & \\
2 & **2** & \\
3 & **2.25** & \\
4 & **4** & \\
\hline
\multicolumn{3}{|c|}{Ours = same as [5, 15]} \\
\hline
5 & 4.1325 & \\
6 & 5 & \\
8 & 8 & \\
\hline
\multicolumn{3}{|c|}{Ours / Gould [14]} \\
\hline
18 & 21.25 & 20.45 \\
20 & 24.71 & 24.25 \\
25 & 33.67 & 32.99 \\
\hline
\end{tabular}
\(\longleftarrow\) as documented in [7]
\end{table}
Table 1. Maximum Growth Factors Encountered. In 1991, Gould [15] presented a table of maximum growth factors encountered; we thought it would be of interest to present the maximum growth factors we encountered over 30 years later side by side. The **blue** number 13.0205 is Gould's 1991 first surprising example of a matrix with \(g(A)>n\). The **red** numbers show that it is possible to find examples even when \(n=11\) and \(12\). The **red** and **magenta** numbers are improvements over previously computed results. Only the bold face **black** numbers are mathematical certainties.

Table 2. Hadamard matrices: For decades, Hadamard matrices, interesting in their own right, seemed relevant to the growth factor problem. Gould [15] shattered that notion with his computation for \(n=16\). We observed that the notion can already be shattered partially at \(n=8\) and fully at \(n=12\). \(n=4\): Mathematics shows \(g_{4}=4\) and the optimum is Hadamard. \(n=8\): \(g_{8}=8\) remains a conjecture; however, one new observation is that the matrix need not be Hadamard. \(n=12\): We are the first to report the discovery of a 12x12 matrix with \(g_{12}>12\), hence not Hadamard. \(n=16\): Gould reported the discovery of a 16x16 matrix with \(g_{16}>16\), hence not Hadamard. We observed a slightly more optimal matrix.

We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. 
\begin{table} \begin{tabular}{|c|r|r|r|r|r|r|r|r|r|} \hline \(n=\) & \(g\geq\downarrow\) & \(n=\) & \(g\geq\downarrow\) & \(n=\) & \(g\geq\downarrow\) & \(n=\) & \(g\geq\downarrow\) & \(n=\) & \(g\geq\downarrow\) \\ \hline 1 & 1 & 16 & 18.46 & 31 & 45.43 & 46 & 85.85 & 61 & 137.55 \\ 2 & 2 & 17 & 19.86 & 32 & 47.74 & 47 & 87.54 & 62 & 141.83 \\ 3 & 9/4 & 18 & 21.25 & 33 & 50.36 & 48 & 91.44 & 63 & 144.72 \\ 4 & 4 & 19 & 22.85 & 34 & 52.78 & 49 & 94.72 & 64 & 148.05 \\ 5 & 4.13 & 20 & 24.71 & 35 & 54.84 & 50 & 97.24 & 65 & 153.98 \\ \hline 6 & 5 & 21 & 26.21 & 36 & 57.66 & 51 & 101.82 & 66 & 157.05 \\ 7 & 6.05 & 22 & 28.01 & 37 & 59.91 & 52 & 104.61 & 67 & 162.20 \\ 8 & 8 & 23 & 29.72 & 38 & 63.18 & 53 & 108.09 & 68 & 166.89 \\ 9 & 8.69 & 24 & 31.63 & 39 & 64.87 & 54 & 111.19 & 69 & 171.33 \\ 10 & 9.96 & 25 & 33.67 & 40 & 67.52 & 55 & 114.76 & 70 & 174.45 \\ \hline 11 & 11.05 & 26 & 34.96 & 41 & 70.44 & 56 & 118.18 & 71 & 182.98 \\ 12 & 12.55 & 27 & 36.88 & 42 & 73.49 & 57 & 121.90 & 72 & 184.91 \\ 13 & 13.76 & 28 & 39.05 & 43 & 77.68 & 58 & 126.23 & 73 & 190.57 \\ 14 & 15.25 & 29 & 41.46 & 44 & 79.25 & 59 & 129.42 & 74 & 193.28 \\ 15 & 16.92 & 30 & 43.40 & 45 & 82.56 & 60 & 134.27 & 75 & 196.79 \\ \hline \end{tabular} \(\vdots\) \end{table} Table 3. GECP Data computed by JuMP for \(n=1:75\) and \(100\) Figure 1. We compare the determinant and pivots for GECP ( matrices of size \(n=100\)). **Red: Wilkinson’s bound.** Yellow: a particular \(n=100\) Hadamard matrix. **Blue: our observed maximum matrix.** (a) reveals that at least on an admittedly muted log scale, the observed determinant curve qualitatively is bending in a manner resembling Wilkinson’s bound, while the Hadamard data feels qualitatively different, and thus, less relevant. (b) suggests the same conclusions as those of (a) and also suggests that “slow and steady wins the race” rather than “greedy.” **Theorem 1.5** (Informal Version of Theorem 4.2).: _Let_ \[t\geq 1+\log_{\beta}\big{[}5n^{3}g^{2}\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]} \big{]}.\] _Then the maximum growth factor for a real \(n\times n\) matrix under floating point arithmetic with base \(\beta\) and mantissa length \(t\) is at most \((1+1/n)\,g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\)._ A more precise version of this statement is given by Theorem 4.2. This treats a longstanding gap in the numerical analysis literature, a field where so much energy is often devoted to the distinction between floating point and exact computations, but somehow in the context of growth factors, this has not been analyzed to date. ### Maximum Growth Factors Encountered With modern software and architecture we were able to find growth factors for matrices well beyond \(n=25\) found by Gould [15] and also we were able to find larger growth for matrices as small as \(n=7\). No doubt future researchers will be able to improve our results in the same manner. As the optimal growth problem is a constrained optimization problem it is natural to run optimization software. In 1988, Day and Peterson [5] posed the problem as a function of the \(n^{2}\) elements of the matrix and reported some success with the FORTRAN77 nonlinear programming package NPSOL [13]. By contrast, Gould considered the advantages of posing the problem as an optimization over \(n^{2}+(n-1)^{2}+\ldots+1\) variables with constraints. He used the FORTRAN77 LANCELOT package that he Figure 2. The ratio between numerically observed growth factors and matrix size for \(n\) equals \(1\) to \(75\) and \(100\). 
Only the values for sizes \(n=1,2,3,4\) are known mathematically to be the exact maximal growth factor though we suspect at least for the smaller values of \(n\) we are achieving the maximum with our JuMP software. This data leads us to make Conjecture 1.7. codeveloped [3]. (LANCELOT is an acronym for "Large And Nonlinear Constrained Extended Lagrangian Optimization Techniques). We chose to follow Gould's approach but chose to use the modern JuMP (acronym: Julia for Mathematical Programming) [6] software library with IPOPT (acronym: Interior Point Optimizer) to formulate and solve the minimization problem. The software advantage of using Julia is that the problem can be naturally formulated in a manner very similar to the mathematics [1]. The optimization engine was the COIN-OR Ipopt (interior point optimizer) package called through Ipopt.jl. The software and results may be found in the online repository [10]. JuMP was run in parallel with 64 randomly chosen starting points on 64 separate threads and the winner, the largest growth factor, was saved. Computations were performed on a server located at MIT consisting of two AMD EPYC 7502 32-Core Hyperthreading Processors, typically using 64 of the 128 hyperthreads at a time so that others could use the machine for their own work. We studied \(n=10\) extensively numerically and never exceeded 9.96, so we feel the evidence is very strong to state the following conjecture: **Conjecture 1.6**.: _The growth factor for complete pivoting: \(g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\geq n\) if and only if \(n\geq 11\)._ In addition, though exact asymptotic estimates for growth factor remain elusive, we feel that we have seen sufficient numerical evidence (see Figure 2) to conjecture that the growth factor is super-linear: **Conjecture 1.7**.: \(g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}=\omega(n)\)6_._ Footnote 6: Recall, \(f(n)=\omega(g(n))\) if \(\lim_{n\to\infty}f(n)/g(n)=\infty\). ### Remainder of Paper The remainder of the paper is organized as follows. In Section 2, we prove a stability lemma, which, when combined with numerical experiments and extrapolation results, imply lower bounds for both complete and rook pivoting. In Section 3, we show that the maximum growth factor for matrices with entries in an arbitrary non-trivial set \(S\) is nearly as large as the maximum growth factor over all real matrices. In Section 4, we consider the growth factor in finite precision, and show that only polylogarithmically many bits (in \(n\)) are needed for this quantity to be at most a constant times the growth factor in exact arithmetic. In Section 5, we describe the numerical programs used to search for large growth factors, prove extrapolation results, and report our mathematically verified results. Finally, in Section 6, we study the growth factor for rook pivoting. ## 2. Key Stability Lemma: Almost Completely Pivoted is Almost Completely Pivoted The "stability lemma" in this section is a critical technical ingredient in the majority of the theorems that follow. One immediate application follows a longstanding tradition of numerical analysis : backward error analysis. The lemma shows that if a numerical computation, such as the computations described in Sections 1 and 5, provides a computed growth factor for a "nearly" completely pivoted matrix, then there is a "nearby" matrix which has a "nearby" growth factor for complete pivoting. 
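To make the optimization formulation of Section 1.6 concrete, here is a minimal JuMP + Ipopt sketch in the spirit of Gould's approach of optimizing over the entries of every iterate. The variable bounds, the normalization \(a_{1,1}=1\), the positivity of the pivots (harmless after sign changes of rows and columns), the random starts, and the choice to maximize the final pivot are illustrative assumptions and not a description of the authors' code in the repository [10]. Note also that a solver returns a matrix satisfying the pivoting constraints only to within its tolerances, which is precisely the situation treated by the stability lemma of this section.

```julia
# Sketch of a complete-pivoting growth maximization in JuMP + Ipopt.
# Variables a[i,j,k] model the iterate entries a_{i,j}^{(k)} (only i,j >= k are used).
using JuMP, Ipopt

function search_growth(n::Int)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, -(n^2) <= a[1:n, 1:n, 1:n] <= n^2)
    foreach(x -> set_start_value(x, 2 * rand() - 1), a)   # one random starting point
    @constraint(model, a[1, 1, 1] == 1)                   # normalization of the first pivot
    for k in 1:n-1
        @constraint(model, a[k, k, k] >= 1e-6)            # positive pivot keeps the division well defined
        for i in k:n, j in k:n
            # complete pivoting: |a_{i,j}^{(k)}| <= a_{k,k}^{(k)}
            @constraint(model,  a[i, j, k] <= a[k, k, k])
            @constraint(model, -a[i, j, k] <= a[k, k, k])
        end
        for i in k+1:n, j in k+1:n
            # one elimination step links consecutive iterates
            @NLconstraint(model, a[i, j, k+1] ==
                a[i, j, k] - a[i, k, k] * a[k, j, k] / a[k, k, k])
        end
    end
    # maximize the final pivot; its value is a lower bound on the growth of the returned matrix
    @objective(model, Max, a[n, n, n])
    optimize!(model)
    return objective_value(model), value.(a[:, :, 1])
end
```

In practice one would run many such local solves from random starting points in parallel, as described above, and keep the largest value found.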
For a given \(\boldsymbol{\varepsilon}=(\varepsilon_{1},...,\varepsilon_{n-1})\in\mathbb{ R}^{n-1}\), \(\varepsilon_{i}>-1\) for \(i=1,...,n-1\), we define \[\mathbf{CP}_{n}^{\boldsymbol{\varepsilon}}(S)=\{A\in\mathbf{GL}_{n}(\mathbb{C })\cap S^{n\times n}\,|\,|a_{i,j}^{(k)}|\leq(1+\varepsilon_{k})|a_{k,k}^{(k)}| \text{ for all }i,j\geq k\},\] where \(S\) is some arbitrary subset of \(\mathbb{C}\) (typically \(\mathbb{R}\) or \(\mathbb{C}\)), e.g., the set of matrices that are "almost" completely pivoted (or, for \(\varepsilon_{k}<0\), "overly" completely pivoted) up to a multiplicative error of \(\varepsilon_{k}\) at the \(k^{th}\) step of Gaussian elimination. When \(\varepsilon_{1}=...=\varepsilon_{n-1}>0\), these sets are generally referred to as threshold-pivoted matrices. We first prove the following lemma, showing that every matrix in \(\mathbf{CP}_{n}^{\boldsymbol{\varepsilon}}(S)\) is close to a matrix in \(\mathbf{CP}_{n}^{\boldsymbol{\delta}}(S)\), where \(\varepsilon_{i}\geq 0\geq\delta_{i}\) for all \(i=1,...,n-1\). The more general case in which \(\varepsilon_{i}\) and \(\delta_{i}\) may have the same sign is similar, but is slightly more complicated and not needed for our purposes. We also give an algorithmic description of the procedure in the proof of Lemma 2.1 for the case \(\boldsymbol{\delta}=0\) in Algorithm 1, as this subroutine is a crucial part of converting numerically computed results to mathematical proofs of lower bounds. **Lemma 2.1**.: _For every \(A\in\mathbf{CP}_{n}^{\boldsymbol{\varepsilon}}(S)\), where \(S\) equals \(\mathbb{R}\) or \(\mathbb{C}\), and \(\boldsymbol{\delta}=(\delta_{1},...\delta_{n-1})\) satisfying \(-1<\delta_{i}\leq 0\leq\varepsilon_{i}\) for \(i=1,...,n-1\), there exists a matrix \(B\in\mathbf{CP}_{n}^{\boldsymbol{\delta}}(S)\) such that \(b_{n,n}^{(k)}=a_{n,n}^{(k)}\) for all \(k=1,...,n\), and_ \[\big{|}b_{i,j}^{(k)}-a_{i,j}^{(k)}\big{|}\leq\max_{\min\{i,j\}\leq\ell\leq n-1 }\frac{\bigg{[}\bigg{(}\frac{1+\varepsilon_{\ell}}{1+\delta_{\ell}}\bigg{)}^{ 2}-1\bigg{]}\big{|}a_{\ell,\ell}^{(\ell)}\big{|}}{\prod_{p=\min\{i,j\}}^{ \ell-1}1+\delta_{p}}+\sum_{m=\min\{i,j\}}^{\ell-1}\frac{(\varepsilon_{m}- \delta_{m})\big{|}a_{m,m}^{(m)}\big{|}}{\prod_{p=\min\{i,j\}}^{m}1+\delta_{p}}.\] Proof.: To construct \(B\in\mathbf{CP}_{n}^{\boldsymbol{\delta}}(S)\), we iteratively define the entries \(b_{i,j}^{(k)}\), starting with \(k=n\) and working backwards from \(k=n\) to \(k=1\). The key to this construction is that we scale the row and column of the pivot entry of each matrix \(A^{(k)}\) by a fixed multiplicative factor. This operation leaves entries \(a_{i,j}^{(\ell)}\), \(i,j>k\), unchanged, and so during our procedure each entry is changed at most once. The factor depends on both the maximum magnitude entry \(|a_{i,j}^{(k)}|\) over all \(i,j>k\) and the maximum over \(i=k,j>k\) and \(j=k,i>k\). This allows error to propagate additively rather than multiplicatively. 
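Algorithm 1 is not reproduced in this text; as one concrete reading of the construction carried out in the remainder of the proof below, the following sketch implements the \(\boldsymbol{\delta}=0\) case. It computes the iterates of a matrix that is only approximately completely pivoted (assumed already in pivot order, with nonzero pivots), the violations \(\varepsilon_{k}\), the factors \(\gamma_{k}\) from the recursion in the proof, and then rescales pivot rows and columns working backwards from \(k=n\). Function and variable names are illustrative.

```julia
# Sketch of the delta = 0 procedure from the proof of Lemma 2.1: given a matrix A that is
# only approximately completely pivoted (in pivot order, nonzero pivots), rescale pivot rows
# and columns, backwards from k = n, to obtain a nearby exactly completely pivoted matrix.
function repair_complete_pivoting(A::AbstractMatrix{<:Real})
    n = size(A, 1)
    # iterates[k] stores the trailing (n-k+1) x (n-k+1) block A^(k)
    iterates = Vector{Matrix{Float64}}(undef, n)
    iterates[1] = float(copy(A))
    for k in 1:n-1
        Ak = iterates[k]
        iterates[k+1] = Ak[2:end, 2:end] - Ak[2:end, 1] * Ak[1, 2:end]' / Ak[1, 1]
    end
    pivots = [iterates[k][1, 1] for k in 1:n]
    # viol[k]: by how much (multiplicatively) step k violates complete pivoting
    viol = [max(0.0, maximum(abs, iterates[k]) / abs(pivots[k]) - 1) for k in 1:n-1]
    # gamma recursion from the proof, specialized to delta = 0
    gam = zeros(n)
    for k in n-1:-1:1
        tail = maximum(gam[i] * abs(pivots[i]) for i in k+1:n)
        gam[k] = max((1 + viol[k])^2 - 1, viol[k] + tail / abs(pivots[k]))
    end
    # rebuild B^(k) backwards; Bk is the current trailing block B^(k)
    Bk = copy(iterates[n])
    for k in n-1:-1:1
        Ak   = iterates[k]
        Bnew = similar(Ak)
        Bnew[1, 1]         = (1 + gam[k]) * Ak[1, 1]
        Bnew[1, 2:end]     = sqrt(1 + gam[k]) .* Ak[1, 2:end]
        Bnew[2:end, 1]     = sqrt(1 + gam[k]) .* Ak[2:end, 1]
        Bnew[2:end, 2:end] = Ak[2:end, 2:end] .+ Bk .- iterates[k+1]
        Bk = Bnew
    end
    return Bk   # completely pivoted by Lemma 2.1, up to floating point round-off
end
```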
Let \(B^{(n)}:=A^{(n)}\) and \[B^{(k)}:=\begin{pmatrix}(1+\gamma_{k})\,a_{k,k}^{(k)}&\sqrt{1+\gamma_{k}}\,A_{k,k +1:n}^{(k)}\\ \sqrt{1+\gamma_{k}}\,A_{k+1:n,k}^{(k)}&A_{k+1:n,k+1:n}^{(k)}+B^{(k+1)}-A^{(k+1)} \end{pmatrix}\] for \(k=1,...,n-1\), where \(\gamma_{n}:=0\) and \[\gamma_{k}:=\max\bigg{\{}\bigg{(}\frac{1+\varepsilon_{k}}{1+\delta_{k}}\bigg{)} ^{2}-1,\frac{\varepsilon_{k}-\delta_{k}+\max_{i>k}\gamma_{i}|a_{i,i}^{(i)}|/|a _{k,k}^{(k)}|}{1+\delta_{k}}\bigg{\}}.\] The quantity \(\gamma_{k}|a_{k,k}^{(k)}|\) is monotonically decreasing with \(k\) (as \(\delta_{k}\leq 0\)), and so we may equivalently write \[\gamma_{k}|a_{k,k}^{(k)}|=\max\bigg{\{}\bigg{[}\bigg{(}\frac{1+\varepsilon_{k }}{1+\delta_{k}}\bigg{)}^{2}-1\bigg{]}|a_{k,k}^{(k)}|,\frac{(\varepsilon_{k}- \delta_{k})|a_{k,k}^{(k)}|+\gamma_{k+1}|a_{k+1,k+1}^{(k)}|}{1+\delta_{k}} \bigg{\}},\] or \[\gamma_{k}|a_{k,k}^{(k)}|=\max_{\ell\geq k}\bigg{[}\bigg{(}\frac{1+\varepsilon _{\ell}}{1+\delta_{\ell}}\bigg{)}^{2}-1\bigg{]}|a_{\ell,\ell}^{(\ell)}|\,\prod _{p=k}^{\ell-1}\frac{1}{1+\delta_{p}}+\sum_{m=k}^{\ell-1}(\varepsilon_{m}- \delta_{m})|a_{m,m}^{(m)}|\,\prod_{p=k}^{m}\frac{1}{1+\delta_{p}}.\] Our definitions of \(B^{(k)}\) are consistent with one another, as \[B_{k+1:n,k+1:n}^{(k)}-\frac{B_{k+1:n,k}^{(k)}B_{k,k+1:n}^{(k)}}{b_{k,k}^{(k)}} =A_{k+1:n,k+1:n}^{(k)}+B^{(k+1)}-A^{(k+1)}-\frac{A_{k+1:n,k}^{(k)}A_{k,k+1:n}^{ (k)}}{a_{k,k}^{(k)}}=B^{(k+1)}.\] Furthermore, as \(\varepsilon_{k}\leq\sqrt{1+\gamma_{k}}\), for \(i\leq j\) (\(j>i\) is similar), \[|b_{i,j}^{(k)}-a_{i,j}^{(k)}|=|b_{i,j}^{(i)}-a_{i,j}^{(i)}| \leq\max\{\gamma_{i}|a_{i,i}^{(i)}|,(\sqrt{1+\gamma_{i}}-1)|a_{i,j }^{(i)}|\}\] \[\leq\max\{\gamma_{i},(1+\varepsilon_{i})(\sqrt{1+\gamma_{i}}-1) \}|a_{i,i}^{(i)}|\] \[=\gamma_{i}|a_{i,i}^{(i)}|.\] What remains is to verify that \(B\in\mathbf{CP}_{n}^{\delta}(S)\). We proceed by induction from \(k=n-1\) to \(k=1\). We need only consider entries in the lower right block of \(B^{(k)}\), as, for \(i>k\), \[|b_{i,k}^{(k)}|=\sqrt{1+\gamma_{k}}|a_{i,k}^{(k)}|\leq(1+\varepsilon_{k}) \sqrt{1+\gamma_{k}}|a_{k,k}^{(k)}|=\frac{1+\varepsilon_{k}}{\sqrt{1+\gamma_{k }}}|b_{k,k}^{(k)}|\leq(1+\delta_{k})|b_{k,k}^{(k)}|,\] and the same bound holds for \(b_{k,j}^{(k)}\), \(j>k\). When \(k=n-1\), \[|b_{n,n}^{(n-1)}|=|a_{n,n}^{(n-1)}|\leq(1+\varepsilon_{n-1})|a_{n-1,n-1}^{(n-1 )}|=\frac{1+\varepsilon_{n-1}}{1+\gamma_{n-1}}|b_{n-1,n-1}^{(n-1)}|\leq(1+ \delta_{n-1})|b_{n-1,n-1}^{(n-1)}|.\] Suppose the statement holds for \(k=\ell+1,...,n-1\), \(\ell<n-1\), and consider \(b_{i,j}^{(\ell)}\), \(\ell<i\leq j\) (\(\ell<j<i\) is similar). We have \[|b_{i,j}^{(\ell)}| \leq|a_{i,j}^{(\ell)}|+|b_{i,j}^{(i)}-a_{i,j}^{(i)}|\] \[\leq(1+\varepsilon_{\ell})|a_{\ell,\ell}^{(\ell)}|+\gamma_{i}|a_{ i,i}^{(i)}|\] \[=|a_{\ell,\ell}^{(\ell)}|\big{(}(1+\varepsilon_{\ell})+\gamma_{i} |a_{i,i}^{(i)}|/|a_{\ell,\ell}^{(\ell)}|\big{)}\] \[=(1+\delta_{\ell})|b_{\ell,\ell}^{(\ell)}|\frac{(1+\varepsilon_{ \ell})+\gamma_{i}|a_{i,i}^{(i)}|/|a_{\ell,\ell}^{(\ell)}|}{(1+\delta_{\ell})( 1+\gamma_{\ell})}\] \[\leq(1+\delta_{\ell})|b_{\ell,\ell}^{(\ell)}|,\] and therefore \(B\in\mathbf{CP}_{n}^{\delta}(S)\). A simpler, but weaker version of the above result, relating the maximum growth factor under threshold complete pivoting to that of complete pivoting, is as follows. **Corollary 2.2**.: _Let \(S\) equal \(\mathbb{R}\) or \(\mathbb{C}\), and \(\boldsymbol{\varepsilon}=(\epsilon,...,\epsilon)\), \(\epsilon>0\). 
Then_ \[g\big{[}\mathbf{CP}_{n}(S)\big{]}\geq\frac{g\big{[}\mathbf{CP}_{n}^{\boldsymbol{ \varepsilon}}(S)\big{]}}{1+\epsilon(2+\epsilon)g\big{[}\mathbf{CP}_{n-1}^{ \boldsymbol{\varepsilon}}(S)\big{]}+\epsilon\sum_{i=1}^{n-2}g\big{[}\mathbf{ CP}_{i}^{\boldsymbol{\varepsilon}}(S)\big{]}}\] _for all \(n\in\mathbb{N}\)._ Proof.: The result follows from choosing an \(A\in\mathbf{CP}_{n}^{\boldsymbol{\varepsilon}}(S)\) that achieves the maximum growth factor and choosing \(B\in\mathbf{CP}_{n}(S)\) from Lemma 2.1 as a lower bound for \(g\big{[}\mathbf{CP}_{n}(S)\big{]}\). By Lemma 2.1, we have \[g(A)=\frac{|a_{n,n}^{(n)}|}{\max_{i,j}|a_{i,j}|}=\frac{|b_{1,1}|}{\max_{i,j}|a_ {i,j}|}\,g(B)\leq\frac{|b_{1,1}|}{|a_{1,1}|}\,g(B)=(1+\gamma_{1})\,g(B),\] where \[\gamma_{1}\leq\max_{\ell<n}\epsilon(2+\epsilon)\frac{|a_{\ell,\ell}^{(\ell)}|}{|a_ {1,1}|}+\epsilon\sum_{m=1}^{\ell-1}\frac{|a_{m,m}^{(m)}|}{|a_{1,1}|}\leq\epsilon (2+\epsilon)g\big{[}\mathbf{CP}_{n-1}^{\boldsymbol{\varepsilon}}(S)\big{]}+ \epsilon\sum_{m=1}^{n-2}g\big{[}\mathbf{CP}_{m}^{\boldsymbol{\varepsilon}}(S) \big{]}.\] In addition to being a crucial ingredient for our results, Corollary 2.2 also has some historical significance. This result, and the associated algorithm illustrates a way to convert almost completely pivoted matrices into matrices that are completely pivoted, without losing much in the growth factor. This has key similarities to Edelman's exact arithmetic extension of Gould's finite precision counterexample to Conjecture 1.1, and provides some answers to Edelman's perturbation question for growth factor [7]. ## 3. Growth Factor for Constrained Entries In this section, we study the maximum growth factor of matrices in \(\mathbf{GL}_{n}(\mathbb{R})\cap S^{n\times n}\) when \(S\) is a small set (e.g., \(\{0,1\}\)). In particular, we aim to show that the maximum growth factor for matrices with entries restricted to some subset \(S\subset\mathbb{R}\) is nearly the same as the growth factor over \(\mathbb{R}\), up to a quadratic factor in the input \(n\). To do so, we proceed as follows: First, we show that the maximum growth factor for matrices at least some prescribed distance from the boundary is almost as large as the maximum growth factor over the entire set (Lemma 3.1). Combining this result with the stability lemma of the previous section (Lemma 2.1) produces a lower bound for the maximum growth factor of sets of matrices that cover \(\mathbf{CP}_{n}(S)\) sufficiently well (Lemma 3.2). Finally, using this lower bound, we show that if our restricted set \(S\) is non-trivial (i.e., \(|S|>1\)), then we can almost achieve the maximum growth factor, up to a quadratic factor in \(n\). We begin by characterizing a subset of \(\mathbf{CP}_{n}(S)\) that is stable under entry-wise perturbations of size at most \(\varepsilon\), i.e., matrices \(A\) such that \(\{B\in S^{n\times n}\,|\,|a_{i,j}-b_{i,j}|\leq\varepsilon\}\subset\mathbf{CP}_ {n}(S)\). We have the following lemma. **Lemma 3.1**.: _Let \(A\in\mathbf{CP}_{n}(S)\), \(S\) equal \(\mathbb{R}\) or \(\mathbb{C}\), and \(\varepsilon>0\). 
If \(|a_{i,j}^{(k)}|\leq|a_{k,k}^{(k)}|-2\times 4^{k-1}\varepsilon\) for all \(i,j=k,...,n\) (except \(i=j=k\)), \(k=1,...,n-1\), then_ \[\{B\in S^{n\times n}\,|\,|a_{i,j}-b_{i,j}|\leq\varepsilon\}\subset\mathbf{CP}_ {n}(S),\] _and_ \[g\big{[}\{B\in S^{n\times n}\,|\,|a_{i,j}-b_{i,j}|\leq\varepsilon\}\big{]} \geq g(A)-\varepsilon(4^{n-1}+g(A))/|a_{1,1}|.\] Proof.: Let \(B\in S^{n\times n}\) satisfy \(b_{i,j}^{(1)}=a_{i,j}^{(1)}+\theta_{i,j}^{(1)}\), where \(|\theta_{i,j}^{(1)}|\leq\varepsilon\). Then \[b_{i,j}^{(2)}=\bigg{[}(a_{i,j}^{(1)}+\theta_{i,j}^{(1)})-\frac{(a_{i,1}^{(1)}+ \theta_{i,1}^{(1)})(a_{i,1}^{(1)}+\theta_{1,j}^{(1)})}{(a_{1,1}^{(1)}+\theta_ {1,1}^{(1)})}\bigg{]}+a_{i,j}^{(2)}-\bigg{[}a_{i,j}^{(1)}-\frac{a_{i,1}^{(1)}a _{1,j}^{(1)}}{a_{1,1}^{(1)}}\bigg{]}=a_{i,j}^{(2)}+\theta_{i,j}^{(2)},\] where \[\theta_{i,j}^{(2)}:=\theta_{i,j}^{(1)}+\theta_{1,1}^{(1)}\frac{a_{i,1}^{(1)}a _{1,j}^{(1)}}{a_{1,1}^{(1)}(a_{1,1}^{(1)}+\theta_{1,1}^{(1)})}-\frac{\theta_{ i,1}^{(1)}a_{1,j}^{(1)}+\theta_{1,j}^{(1)}a_{i,1}^{(1)}+\theta_{i,1}^{(1)} \theta_{1,j}^{(1)}}{a_{1,1}^{(1)}+\theta_{1,1}^{(1)}}.\] Since \(|a_{i,1}^{(1)}|,|a_{1,j}^{(1)}|\leq|a_{1,1}^{(1)}|-2\varepsilon<|a_{1,1}^{(1)}|- \varepsilon\leq|a_{1,1}^{(1)}+\theta_{1,1}^{(1)}|\), we have \[|\theta_{i,j}^{(2)}|\leq\varepsilon\bigg{(}1+\frac{2|a_{1,j}^{(1)}|+|a_{i,1}^{ (1)}+\theta_{i,1}^{(1)}|}{|a_{1,1}^{(1)}+\theta_{1,1}^{(1)}|}\bigg{)}\leq 4\varepsilon.\] Repeating this estimate for \(k=3,...,n\) with \(\varepsilon\) replaced by \(4^{k-2}\varepsilon\), we have \(|a_{i,j}^{(k)}-b_{i,j}^{(k)}|\leq 4^{k-1}\varepsilon\) for all \(i,j,k\). Suppose that \(g(A)\) is achieved by entry \(a_{i,j}^{(k)}\). Then \[g(B)\geq\frac{|a_{i,j}^{(k)}|-4^{k-1}\varepsilon}{|a_{1,1}|+\varepsilon}=g(A)- \frac{4^{k-1}\varepsilon|a_{1,1}|+\varepsilon|a_{i,j}^{(k)}|}{(|a_{1,1}|+ \varepsilon)|a_{1,1}|}\geq g(A)-\varepsilon(4^{n-1}+g(A))/|a_{1,1}|.\] Combining Lemmas 2.1 and 3.1, we are now prepared to prove a lemma regarding the maximum growth factor over sets that cover \(\mathbb{R}^{n\times n}\) (or \(\mathbb{C}^{n\times n}\)) sufficiently well. **Lemma 3.2**.: _Let \(n>1\), \(0<\varepsilon<2^{-(2n-1)}\), and \(X\subset S^{n\times n}\), \(S\) equal to \(\mathbb{R}\) or \(\mathbb{C}\), be such that for all \(A\in\mathbf{CP}_{n}(S)\) there exists an \(\alpha\in S\) and \(B\in X\) satisfying \(|a_{i,j}-\alpha\,b_{i,j}|\leq\varepsilon|a_{1,1}|\) for all \(i,j=1,...,n\). Then_ \[g\big{[}\mathbf{CP}_{n}(S)\cap X\big{]}\geq\big{(}1-\varepsilon n4^{n-1}g \big{[}\mathbf{CP}_{n}(S)\big{]}/(2\varepsilon;4)_{n}\big{)}\,g\big{[} \mathbf{CP}_{n}(S)\big{]},\] _where \((\cdot;\cdot)_{n}\) is the q-Pochhammer symbol._ Proof.: The main idea of the proof is as follows. We consider a matrix \(A\in\mathbf{CP}_{n}(S)\), \(a_{1,1}=1\), that maximizes growth factor (i.e., \(g(A)=g\big{[}\mathbf{CP}_{n}(S)\big{]}\)) and, using Lemma 2.1 applied to \(\mathbf{CP}_{n}(S)\) and \(\mathbf{CP}_{n}^{\boldsymbol{\delta}}(S)\) for \(\boldsymbol{\delta}\) entry-wise negative, find a nearby matrix \(C\in\mathbf{CP}_{n}^{\boldsymbol{\delta}}(S)\). Then, we find a matrix \(B\in X\) nearby \(C\) and, using Lemma 3.1, conclude that \(B\in\mathbf{CP}_{n}(S)\). Finally, using the bounds on \(|a_{i,j}^{(k)}-c_{i,j}^{(k)}|\) and \(|b_{i,j}-c_{i,j}|\) we argue that \(g(B)\) is fairly large. So that we may apply Lemma 3.1, we define \(\delta_{k}=-2\times 4^{k-1}\varepsilon\) and let \(C\in\mathbf{CP}_{n}^{\boldsymbol{\delta}}(S)\) be the matrix resulting from the proof of Lemma 2.1. 
Because \(A\) maximizes \(g(A)\), \(|a_{k,k}^{(k)}|\geq 1\) for \(k=1,...,n\) and therefore \(|c_{k,k}^{(k)}|\geq 1\) for \(k=1,...,n\) as well. In this case, \(C\) satisfies \[|c_{i,j}^{(k)}|\leq(1+\delta_{k})|c_{k,k}^{(k)}|=|c_{k,k}^{(k)}|-2\times 4^{k-1 }\varepsilon|c_{k,k}^{(k)}|\leq|c_{k,k}^{(k)}|-2\times 4^{k-1}\varepsilon,\] and so, by Lemma 3.1 combined with our lemma hypothesis, there exists a matrix \(B\in\mathbf{CP}_{n}(S)\cap X\) (w.l.o.g. \(\alpha=1\)) with \(|b_{i,j}-c_{i,j}|\leq\varepsilon\). What remains is to bound the differences \(|a_{1,1}-b_{1,1}|\) and \(|a_{n,n}^{(n)}-b_{n,n}^{(n)}|\), and compute a lower bound for \(g(B)\). By Lemmas 2.1 and 3.1, \[|a_{1,1}-b_{1,1}| \leq|a_{1,1}-c_{1,1}|+|b_{1,1}-c_{1,1}|\] \[\leq g(A)\bigg{[}\frac{\big{(}1-2\times 4^{n-2}\epsilon\big{)}^{-2 }-1}{\prod_{p=1}^{n-2}(1-2\times 4^{p-1}\varepsilon)}+\sum_{m=1}^{n-2}\frac{2 \times 4^{m-1}\varepsilon}{\prod_{p=1}^{m}(1-2\times 4^{p-1}\varepsilon)} \bigg{]}+\epsilon\] \[=\varepsilon\bigg{(}1+g(A)\bigg{[}\frac{2\times 4^{n-2}}{1-2 \times 4^{n-2}\varepsilon}(2\varepsilon;4)_{n-1}^{-1}+\sum_{m=1}^{n-1}2 \times 4^{m-1}(2\varepsilon;4)_{m}^{-1}\bigg{]}\bigg{)}\] \[\leq\varepsilon\big{(}1+2\,n\,4^{n-2}g(A)/(2\varepsilon;4)_{n} \big{)},\] and \[|a_{n,n}^{(n)}-b_{n,n}|\leq|a_{n,n}^{(n)}-c_{n,n}^{(n)}|+|b_{n,n}^{(n)}-c_{n,n} ^{(n)}|\leq 4^{n-1}\varepsilon.\] Therefore, \[\frac{|b_{n,n}^{(n)}|}{|b_{1,1}|} \geq\frac{g(A)-4^{n-1}\varepsilon}{1+\varepsilon\big{(}1+2\,n\,4^ {n-2}g(A)/(2\varepsilon;4)_{n}\big{)}}\] \[=g(A)-\varepsilon\ \frac{4^{n-1}+g(A)\big{(}1+2\,n\,4^{n-2}g(A)/(2 \varepsilon;4)_{n}\big{)}}{1+\varepsilon\big{(}1+2\,n\,4^{n-2}g(A)/(2 \varepsilon;4)_{n}\big{)}}\] \[\geq g(A)-\varepsilon\ \big{(}4^{n-1}+g(A)\big{(}1+2\,n\,4^{n-2}g(A)/(2 \varepsilon;4)_{n}\big{)}\big{)}\] \[=g(A)\big{(}1-\varepsilon(4^{n-1}/g(A)+1+2\,n\,4^{n-2}g(A)/(2 \varepsilon;4)_{n}\big{)}\big{)}\] \[\geq g(A)\big{(}1-\varepsilon n4^{n-1}g(A)/(2\varepsilon;4)_{n} \big{)}.\] The requirement on the cover that \(X\) provides in the previous lemma is quite strong; for a non-trivial result we require \(\varepsilon\) to be exponentially small in \(n\). However, given a set of matrices \(S^{n\times n}\), where \(S\) is finite, after performing many steps of Gaussian elimination, a matrix can be approximated with exponentially small error. By formalizing this concept, we prove the following theorem, relating the maximum growth of \(\mathbf{CP}_{m}(S)\), \(S\subset\mathbb{R}\), \(|S|>1\), to that of \(\mathbf{CP}_{n}(\mathbb{R})\). **Theorem 3.3**.: _If \(S\subset\mathbb{R}\), then_ \[g\big{[}\mathbf{CP}_{m}(S)\big{]}\geq\frac{\text{diam(S)}}{2\max_{s\in S}|s|}g \big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\quad\text{for all }m>4n(3n+1).\] Proof.: The main idea of the proof is to build a matrix \(B\in S^{m\times m}\), \(m=n+p\), such that iterates \(B^{(i)}\), \(i=1,...,p\), are completely pivoted, \(|b_{p+1,p+1}^{(p+1)}|\geq|b_{1,1}|\), and \(B^{(p+1)}\) approximates an arbitrary \(A\in\mathbf{CP}_{n}(\mathbb{R})\) exponentially well. 
If we can approximate an arbitrary \(A\) up to error \(2^{-3n}\), i.e., \(|a_{i,j}-\alpha b_{i,j}^{(k+1)}|\leq 2^{-3n}|a_{1,1}|\) for some fixed \(\alpha\), then, by Lemma 3.2 combined with Wilkinson's bound (Inequality 1.2) for \(g\big{[}\mathbf{CP}_{n}(\mathbb{C})\big{]}\) (for \(n>1\)), \[g\big{[}\mathbf{CP}_{m}(S)\big{]}\geq(1-2^{-(n+1)}n^{\ln(n)/4+3/2}/(2^{1-3n};4 )_{n})g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\geq\frac{1}{2}g\big{[} \mathbf{CP}_{n}(\mathbb{R})\big{]}.\] What remains is to construct the matrix \(B\). Given any \(s_{1},s_{2}\in S\), \(|s_{1}|<|s_{2}|\), and matrix \(C\in\mathbf{CP}_{m-1}(\{0,1\})\), the matrix \[B=\begin{pmatrix}s_{2}&s_{2}\mathbf{1}^{T}\\ s_{2}\mathbf{1}&s_{2}\mathbf{1}\mathbf{1}^{T}+(s_{1}-s_{2})C\end{pmatrix}\] is in \(\mathbf{CP}_{m}(S)\) and satisfies \(B^{(2)}=(s_{1}-s_{2})C\). Therefore, we may assume that \(S=\{0,1\}\) at the cost of one step of Gaussian elimination and a multiplicative factor of \(\operatorname{diam}(S)/\max_{s\in S}|s|\) in the growth factor. However, we would like a matrix with entries in \(\{0,1/2,1\}\). To do so, we note that three steps of Gaussian elimination applied to the \((m-1)\times(m-1)\) block matrix \[C=\begin{pmatrix}1&1&0&\mathbf{0}^{T}&0\\ 1&0&1&\mathbf{0}^{T}&0\\ 0&1&1&\mathbf{0}^{T}&1\\ \mathbf{0}&\mathbf{0}&x&E&y\end{pmatrix}\] where \(x\in\{0,1\}^{m-4}\), \(E\in\{0,1\}^{(m-4)\times(m-5)}\), and \(y\in\{0,1\}^{m-4}\), produces a \((m-4)\times(m-4)\) matrix with its first \(m-5\) columns given by \(E\) and its last column given by \(y-x/2\). Performing this operation \(\ell\) times produces a \(\ell\times\ell\)\(\{0,1/2,1\}\) matrix, where \(\ell\) must be such that \(4\ell+1\leq m\). We are now prepared to approximate an arbitrary matrix \(A\in\mathbf{CP}_{n}(\mathbb{R})\) using matrices in \(\mathbf{CP}_{\ell}(\{0,1/2,1\})\). Suppose (w.l.o.g.) that \(a_{1,1}=1\), and let \(r_{i,j,k}\) denote the \(k^{th}\) bit in the binary expansion of \(\operatorname{ceil}(a_{i,j})-a_{i,j}\) (we write \(-1\) as \(-0.\bar{1}\) in binary), and set \(r_{i,j,0}\) to be the integer part of \(a_{i,j}\) (i.e., either \(0\) or \(1\)). To obtain an approximation of \(A\) of order \(2^{-3n}\), we set \(\ell=3n^{2}+n\) and define \(E\) as follows \[E=\begin{pmatrix}I&\mathbf{0}&\cdots&\mathbf{0}&\frac{1}{2}I\\ \frac{1}{2}I&I&\ddots&\vdots&\frac{1}{2}I\\ \vdots&\ddots&\ddots&\mathbf{0}&\vdots\\ \frac{1}{2}I&\cdots&\frac{1}{2}I&I&\frac{1}{2}I\\ R_{1}&R_{2}&\cdots&R_{3n}&R_{0}\end{pmatrix},\] where each block is \(n\times n\), and \(R_{k}=(r_{i,j,k})_{i,j=1}^{n}\) for \(k=0,1,...,3n\). After \(n\) steps of Gaussian elimination, we have \[E^{(n+1)}=\begin{pmatrix}I&\mathbf{0}&\cdots&\mathbf{0}&\frac{1}{4}I\\ \frac{1}{2}I&I&\ddots&\vdots&\frac{1}{4}I\\ \vdots&\ddots&\ddots&\mathbf{0}&\vdots\\ \frac{1}{2}I&\cdots&\frac{1}{2}I&I&\frac{1}{4}I\\ R_{2}&R_{3}&\cdots&R_{3n}&R_{0}-\frac{1}{2}R_{1}\end{pmatrix},\] and finally, after \(3n^{2}\) steps we have that \(E^{(3n^{2}+1)}=R_{0}-\frac{1}{2}R_{1}-\frac{1}{4}R_{2}-...-\frac{1}{2^{3n}}R_{ 3n}\) and approximates \(A\) up to error \(2^{-3n}\). We have \(\ell=3n^{2}+n\) and require \(4\ell+1\leq m\), so we set \(m=4n(3n+1)+1\). A similar result (with a worse multiplicative constant) holds for \(\mathbb{C}\) given a set \(S\) which either contains \(\{0,1,i\}\), or can be converted to such a set after relatively few iterates of Gaussian elimination (e.g., \(\{-1,1,1+i\}\)). We leave the details to the motivated reader. ## 4. 
Growth Factor in Floating Point Arithmetic In this section, we aim to bound the growth factor encountered in practice in floating point arithmetic. The term "growth factor" in the literature is used ambiguously to refer to two closely related quantities: growth factor under exact arithmetic or under floating point arithmetic, leading to some confusion. The exact case is clear, and shows up in theoretical discussions. The floating point arithmetic case, by contrast, refers to the largest element (in absolute value) seen during a floating point computation. As previously mentioned in Section 1, error estimates for Gaussian elimination typically involve the growth factor under floating point arithmetic rather than exact arithmetic. In this section, we show that when using sufficiently high precision (\(\omega(\log^{2}n)\) bits), the maximum growth factors for exact and floating point arithmetic are identical up to a \(1+o(1)\) multiplicative factor (Theorem 4.2). We consider the maximum growth factor when performing Gaussian elimination in base \(\beta\) with \(t\) digits of precision. For simplicity, we ignore issues of overflow and underflow. Here, we focus exclusively on real-valued matrices, but the analogous theorem for complex matrices follows quickly from the below analysis by simply adjusting the error due to multiplication and division for a given base and mantissa. We leave further details to the interested reader. Under floating point arithmetic, the procedure of Gaussian elimination is given by \[\hat{a}_{i,j}^{(1)}:=a_{i,j}(1+\phi_{i,j}^{(0)})\quad\text{for}\quad i,j=1,...,n,\] \[\hat{a}_{i,j}^{(k+1)}:=\big{[}\hat{a}_{i,j}^{(k)}-s_{i,k}\hat{a}_{k,j}^{(k)}(1+\theta_{i,j}^{(k)})\big{]}(1+\phi_{i,j}^{(k)})\quad\text{for}\quad i,j=k,...,n,\ k=1,...,n-1,\] where \[s_{i,k}=\frac{\hat{a}_{i,k}^{(k)}}{\hat{a}_{k,k}^{(k)}}(1+\varphi_{i,k}),\] and \(|\theta_{i,j}^{(k)}|,|\phi_{i,j}^{(k)}|,|\varphi_{i,k}|\leq u:=\beta^{1-t}/2\) for all \(i,j,k\) (\(u\) is commonly referred to as the unit round-off). When partial pivoting is employed, we may assume that \(|s_{i,k}|\leq 1\) and \(|s_{i,k}(1+\theta_{i,j}^{(k)})|\leq 1\) for all \(i,j,k\). Similar to the sets \(\mathbf{CP}_{n}(S)\) and \(\mathbf{PP}_{n}(S)\) defined in Section 1, we define \[\widehat{\mathbf{CP}}_{n}(S)=\{A\in\mathbf{GL}_{n}(\mathbb{C})\cap S^{n\times n}\,|\,\hat{a}_{k,k}^{(k)}\neq 0\text{ for all }k,\,|\hat{a}_{i,j}^{(k)}|\leq|\hat{a}_{k,k}^{(k)}|\text{ for all }i,j\geq k\},\] \[\widehat{\mathbf{PP}}_{n}(S)=\{A\in\mathbf{GL}_{n}(\mathbb{C})\cap S^{n\times n}\,|\,\hat{a}_{k,k}^{(k)}\neq 0\text{ for all }k,\,|\hat{a}_{i,k}^{(k)}|\leq|\hat{a}_{k,k}^{(k)}|\text{ for all }i\geq k\}.\] To avoid a proliferation of indices, here and in what follows the dependence of the above sets and the growth factor on \(\beta\) and \(t\) is implicit. We note that, for any partially pivoted matrix, we may assume that \(|s_{i,k}|\leq 1\) for all \(i,k\). The growth factor under finite arithmetic is denoted by \[G(A):=\frac{\max_{i,j,k}|\hat{a}_{i,j}^{(k)}|}{\max_{i,j}|\hat{a}_{i,j}^{(1)}|},\] and we define \(G[\mathbf{X}]\) to be the maximum growth factor under finite arithmetic (with base \(\beta\) and length \(t\) mantissa) over all matrices in \(\mathbf{X}\). The quantity \(G[\mathbf{X}]\) is a key ingredient in stability theorems of Gaussian elimination (see [20, Theorem 2.6] or [17, Theorem 9.5]).
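To make the definition of \(G(A)\) concrete, the following minimal NumPy sketch (our own illustration, not code from this paper or its repository [10]) runs Gaussian elimination with complete pivoting in IEEE double precision (\(\beta=2\), \(t=53\)) and returns the largest intermediate entry divided by the largest entry of the input matrix.

```python
import numpy as np

def growth_factor_cp(A):
    """Gaussian elimination with complete pivoting in double precision;
    returns max_{i,j,k} |a^(k)_{i,j}| divided by max_{i,j} |a^(1)_{i,j}|."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    largest_initial = np.abs(A).max()
    largest_seen = largest_initial
    for k in range(n - 1):
        # Complete pivoting: swap the largest entry of the trailing block into (k, k).
        block = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(block), block.shape)
        A[[k, k + i], :] = A[[k + i, k], :]
        A[:, [k, k + j]] = A[:, [k + j, k]]
        # One elimination step; the trailing block is the next iterate.
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:]) / A[k, k]
        largest_seen = max(largest_seen, np.abs(A[k + 1:, k + 1:]).max())
    return largest_seen / largest_initial

# Example: the 4x4 Hadamard matrix has growth factor 4 under complete pivoting.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
print(growth_factor_cp(np.kron(H2, H2)))  # expected output: 4.0
```

For the Hadamard example every intermediate quantity is a small integer, exactly representable in double precision, so the value returned also equals the exact-arithmetic growth factor \(g(A)=4\).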
In general, the best known bounds for partial, rook, and complete pivoting is given by \[G(A)\leq\big{[}1+(1+u)^{2}\big{]}^{n-1}=2^{n-1}+O(nu), \tag{4.1}\] and when \(\beta=2\), this bound can simply be replaced by \(2^{n-1}\) (see [20, Section 1.2] for details). For rook and complete pivoting, \(2^{n-1}\) is much more pessimistic than Inequalities 1.2 and 1.3 for exact arithmetic. As the mantissa length \(t\) tends to infinity, intuitively, the maximum growth factor under floating point arithmetic will converge to its exact arithmetic counterpart. However, given a single matrix, the growth factor in floating point can be very different from exact arithmetic due to "near ties" causing the elimination to follow a different branch. That branch, however, is the exact branch of some nearby matrix, as the following lemma illustrates (for partial pivoting). **Lemma 4.1**.: _For every \(A\in\widehat{\mathbf{PP}}_{n}(\mathbb{R})\), there exists a matrix \(B\in\mathbf{PP}_{n}(\mathbb{R})\) with \(b_{i,j}^{(k)}=\hat{a}_{i,j}^{(k)}\) for \(i=k\) or \(j=k\), and_ \[\big{|}\hat{a}_{i,j}^{(k)}-b_{i,j}^{(k)}\big{|}\leq u\sum_{\ell=k}^{\min\{i,j \}-1}\bigg{[}|\hat{a}_{i,j}^{(\ell)}|+|\hat{a}_{\ell,j}^{(\ell)}|(3+u)\bigg{]}\] _otherwise._ Proof.: The main idea is to iteratively update the lower right block of each matrix \(\hat{A}^{(k)}\) so that successive matrices agree exactly, i.e., \(B^{(k+1)}=B_{k+1:n}^{(k)}-B_{k+1:n,k}^{(k)}B_{k,k+1:n}^{(k)}/b_{k,k}^{(k)}\). To this end, we iteratively define \(B\) so that \(B^{(n)}=\hat{A}^{(n)}\) and \[B^{(k)}=\begin{pmatrix}\hat{a}_{k,k}^{(k)}&\hat{A}_{k,:}^{(k)}\\ \hat{A}_{:,k}^{(k)}&B^{(k+1)}+\hat{A}_{k+1:n,k}^{(k)}\hat{A}_{k,k+1:n}^{(k)}/ \hat{a}_{k,k}^{(k)}\end{pmatrix}\quad\text{for }k=1,...,n-1.\] Clearly, successive iterates of \(B\) agree with each other, and \(b_{i,j}^{(k)}=\hat{a}_{i,j}^{(k)}\) for \(i=k\) or \(j=k\). What remains is to bound the error in the lower right block. Consider the entry \(b_{i,j}^{(k)}\), where \(i,j>k\) and let \(m=\min\{i,j\}\). 
We have \[b_{i,j}^{(k)} =\hat{a}_{i,j}^{(m)}+\sum_{\ell=k}^{m-1}\hat{a}_{i,\ell}^{(\ell) }\hat{a}_{\ell,j}^{(\ell)}/\hat{a}_{\ell,\ell}^{(\ell)}\] \[=\big{[}\hat{a}_{i,j}^{(m-1)}-s_{i,m-1}\hat{a}_{m-1,j}^{(m-1)}(1+ \theta_{i,j}^{(m-1)})\big{]}(1+\phi_{i,j}^{(m-1)})\] \[\quad+\hat{a}_{m-1,j}^{(m-1)}\big{(}s_{i,m-1}-\varphi_{i,m-1}\hat {a}_{i,m-1}^{(m-1)}/\hat{a}_{m-1,m-1}^{(m-1)}\big{)}+\sum_{\ell=k}^{m-2}\hat{ a}_{i,\ell}^{(\ell)}\hat{a}_{\ell,j}^{(\ell)}/\hat{a}_{\ell,\ell}^{(\ell)}\] \[=\bigg{[}\phi_{i,j}^{(m-1)}\hat{a}_{i,j}^{(m-1)}-s_{i,m-1}\hat{a }_{m-1,j}^{(m-1)}(\theta_{i,j}^{(m-1)}+\phi_{i,j}^{(m-1)}+\theta_{i,j}^{(m-1)} \phi_{i,j}^{(m-1)})\] \[\quad-\hat{a}_{m-1,j}^{(m-1)}\varphi_{i,m-1}\hat{a}_{i,m-1}^{(m-1 )}/\hat{a}_{m-1,m-1}^{(m-1)}\bigg{]}+\bigg{[}\hat{a}_{i,j}^{(m-1)}+\sum_{\ell=k }^{m-2}\hat{a}_{i,\ell}^{(\ell)}\hat{a}_{\ell,j}^{(\ell)}/\hat{a}_{\ell,\ell}^{ (\ell)}\bigg{]}.\] Repeating this procedure, we have \[b_{i,j}^{(k)}=\hat{a}_{i,j}^{(k)}+\sum_{\ell=k}^{m-1}\bigg{[}\phi_{i,j}^{(\ell)} \hat{a}_{i,j}^{(\ell)}-s_{i,\ell}\hat{a}_{\ell,j}^{(\ell)}(\theta_{i,j}^{(\ell) }+\phi_{i,j}^{(\ell)}+\theta_{i,j}^{(\ell)}\phi_{i,j}^{(\ell)})-\hat{a}_{\ell, j}^{(\ell)}\varphi_{i,\ell}\hat{a}_{i,\ell}^{(\ell)}/\hat{a}_{\ell,\ell}^{( \ell)}\bigg{]}.\] Because our matrix is partially pivoted, \(|s_{i,\ell}|\) and \(|\hat{a}_{i,\ell}^{(\ell)}|/|\hat{a}_{\ell,\ell}^{(\ell)}|\) are at most one, and so \[\big{|}\hat{a}_{i,j}^{(k)}-b_{i,j}^{(k)}\big{|}\leq u\sum_{\ell=k}^{\min\{i,j \}-1}\bigg{[}|\hat{a}_{i,j}^{(\ell)}|+|\hat{a}_{\ell,j}^{(\ell)}|(3+u)\bigg{]}.\] By combining the above lemma with Lemma 2.1, we obtain a bound on growth factor for complete pivoting. **Theorem 4.2**.: _Let \(0<C<1\) and_ \[t\geq 1+\log_{\beta}\bigg{[}\frac{(1+C)(4+5C)}{C}\sum_{m=1}^{n-1}\sum_{\ell=1}^{ n-m}g\big{[}\mathbf{CP}_{\ell}(\mathbb{R})\big{]}g\big{[}\mathbf{CP}_{m}( \mathbb{R})\big{]}\bigg{]}.\] _Then \(G\big{[}\widehat{\mathbf{CP}}_{n}(\mathbb{R})\big{]}\leq(1+C)\,g\big{[} \mathbf{CP}_{n}(\mathbb{R})\big{]}\)._ Proof.: Suppose \(A\in\widehat{\mathbf{CP}}_{n}(\mathbb{R})\) maximizes growth, i.e., \(G(A)=G\big{[}\widehat{\mathbf{CP}}_{n}(\mathbb{R})\big{]}\), and let \(B\in\mathbf{PP}_{n}(\mathbb{R})\) be a matrix satisfying \(b_{k,k}^{(k)}=\hat{a}_{k,k}^{(k)}\) for all \(k\), and the bounds of Lemma 4.1. Then \(B\in\mathbf{CP}_{n}^{\boldsymbol{\varepsilon}}(\mathbb{R})\), \(\boldsymbol{\varepsilon}=(\varepsilon_{1},...,\varepsilon_{n-1})\), for \(\epsilon_{k}:=(4+u)u\sum_{\ell=1}^{n-k}G\big{[}\widehat{\mathbf{CP}}_{\ell}( \mathbb{R})\big{]}\), as \[|b_{i,j}^{(k)}|\leq|\hat{a}_{i,j}^{(k)}|+u\,\sum_{\ell=k}^{n-1}\bigg{[}|\hat{ a}_{i,j}^{(\ell)}|+|\hat{a}_{\ell,j}^{(\ell)}|(3+u)\bigg{]}\leq|b_{k,k}^{(k)}| \bigg{(}1+u\,\sum_{\ell=k}^{n-1}\frac{|\hat{a}_{i,j}^{(\ell)}|}{|\hat{a}_{k,k} ^{(k)}|}+\frac{|\hat{a}_{\ell,j}^{(\ell)}|}{|\hat{a}_{k,k}^{(k)}|}(3+u)\bigg{)}.\] In addition, \(G\big{[}\widehat{\mathbf{CP}}_{n}(\mathbb{R})\big{]}=G(A)=|b_{n,n}^{(n)}|/|b_ {1,1}|\). 
Using Lemma 2.1 applied to \(B\), we can find a matrix \(C\in\mathbf{CP}_{n}(\mathbb{R})\) that satisfies \(b_{n,n}^{(n)}=c_{n,n}^{(n)}\) and \[|c_{1,1}|=\bigg{(}1+\max_{\ell\leq n-1}\varepsilon_{\ell}(2+\varepsilon_{\ell})|\hat{a}_{\ell,\ell}^{(\ell)}|/|\hat{a}_{1,1}|+\sum_{m=1}^{\ell-1}\varepsilon_{m}|\hat{a}_{m,m}^{(m)}|/|\hat{a}_{1,1}|\bigg{)}.\] For the sake of space, we define \(\gamma:=(4+u)u\), \(g(n):=g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\), and \(G(n):=G\big{[}\widehat{\mathbf{CP}}_{n}(\mathbb{R})\big{]}\), and note that \[G(n)\leq g(n)\bigg{(}1+\max_{\ell\leq n-1}\varepsilon_{\ell}(2+\varepsilon_{\ell})G(\ell)+\sum_{m=1}^{\ell-1}\varepsilon_{m}G(m)\bigg{)} \tag{4.2}\] \[\leq g(n)\bigg{(}1+\gamma\,\max_{\ell\leq n-1}G(\ell)\bigg{[}1+\gamma\sum_{p=1}^{n-\ell}G(p)\bigg{]}\sum_{p=1}^{n-\ell}G(p)+\sum_{m=1}^{\ell}\sum_{p=1}^{n-m}G(p)G(m)\bigg{)}\] \[\leq g(n)\bigg{(}1+\gamma\,\bigg{[}2+\gamma\sum_{\ell=1}^{n-1}G(\ell)\bigg{]}\sum_{m=1}^{n-1}\sum_{p=1}^{n-m}G(p)G(m)\bigg{)}.\] The result follows from noting that if \[\frac{1}{\gamma}\geq\frac{(1+C)^{2}}{C}\big{(}2+C/2(1+C)\big{)}\sum_{m=1}^{n-1}\sum_{\ell=1}^{n-m}g(\ell)g(m) \tag{4.3}\] for some \(C>0\), then \(G(k)\leq(1+C)g(k)\) for all \(k=1,...,n\). Indeed, we have \(G(1)=g(1)\), and, assuming \(G(\ell)\leq(1+C)g(\ell)\) for \(\ell=1,...,k\), \[\frac{G(k+1)}{g(k+1)}\leq 1+\gamma(1+C)^{2}\left[2+\gamma(1+C)\sum_{\ell=1}^{n-1}g(\ell)\right]\sum_{m=1}^{n-1}\sum_{p=1}^{n-m}g(p)g(m)\] \[\leq 1+\gamma(1+C)^{2}\left[2+\frac{C}{2(1+C)}\right]\sum_{m=1}^{n-1}\sum_{p=1}^{n-m}g(p)g(m)\leq 1+C.\] The above theorem is incredibly pessimistic, but nevertheless still provides some useful information. First, by using Wilkinson's bound, we note that \(G\big{[}\widehat{\mathbf{CP}}_{n}(\mathbb{R})\big{]}\leq(1+1/\text{Poly(n)})\,g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\) for \(t=\omega(\log_{\beta}^{2}(n))\), and, under the assumption that \(g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\) is bounded by a polynomial, only a \(t=\omega(\log_{\beta}(n))\) length mantissa is required. In Table 4, we include a number of possible bounds on the growth factor (including Wilkinson's Inequality 1.2), and list lower bounds on the largest value of \(n\) for which Theorem 4.2 guarantees that \(G\big{[}\widehat{\mathbf{CP}}_{n}(\mathbb{R})\big{]}\leq(3/2)\,g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\). ## 5. Computer-Assisted Lower Bounds In this section, we detail lower bounds for growth factor found using computer search, and discuss how such computer-generated matrices in finite arithmetic lead to mathematically provable lower bounds for growth factor in exact arithmetic (Theorem 5.2). ### Computer-Assisted Lower Bounds for Small Dimension We are indebted to the early pioneering numerical optimization given by Day & Peterson [5] and Gould [15]. We are the beneficiaries of more readily usable quality software (JuMP [6]), the ready availability of faster processors, and also modern parallel computing. Our methodology is to run 64 threads each with a random \(n\times n\) starting matrix of standard normals which has rows and columns permuted so that the matrix is completely pivoted. We then normalize by dividing by the \((1,1)\) element. Our optimization is over the \(1+2^{2}+\ldots+n^{2}\) elements that are seen by Gaussian Elimination as suggested by Gould [15]. Therefore the starting point requires using all of these \(\approx n^{3}/3\) elements.
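The search code in the repository [10] is written in Julia/JuMP; purely to illustrate the starting point just described, here is a small NumPy sketch of our own (an illustrative assumption, not the repository code) that draws a standard normal matrix, applies the row and column swaps chosen by complete pivoting, and normalizes by the \((1,1)\) element.

```python
import numpy as np

def random_cp_start(n, rng=None):
    """Draw a standard normal matrix, apply the row/column swaps chosen by
    complete pivoting so that the permuted matrix is completely pivoted,
    and normalize by the (1,1) entry.  Intended only as a feasible start."""
    rng = np.random.default_rng() if rng is None else rng
    A = rng.standard_normal((n, n))   # permuted copy that is returned
    W = A.copy()                      # working copy that is actually eliminated
    for k in range(n - 1):
        block = np.abs(W[k:, k:])
        i, j = np.unravel_index(np.argmax(block), block.shape)
        for M in (A, W):              # apply identical swaps to both copies
            M[[k, k + i], :] = M[[k + i, k], :]
            M[:, [k, k + j]] = M[:, [k + j, k]]
        W[k + 1:, k + 1:] -= np.outer(W[k + 1:, k], W[k, k + 1:]) / W[k, k]
    return A / A[0, 0]
```

A matrix produced this way is feasible for an optimization over the \(\approx n^{3}/3\) eliminated entries, with the complete-pivoting inequalities imposed as constraints at every step.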
We store the variables in a 3-d array \(x_{i,j,k},1\leq k\leq n,k\leq i,j\leq n\). Thus \(k=1\) is the original matrix, \(k=2\) is the \((n-1)\times(n-1)\) matrix obtained after one step of Gaussian elimination. We include a listing of the high level function from the online repository [10] that performs the optimization in Figure 3; it is quite easy to read as it is similar to the mathematics. We invite readers to note the six lines of code that indicate the nonlinear constraints (@NLconstraint) and linear constraints (@constraint), the first one of which is the constraint of Gaussian elimination:

Figure 3. The run_model function.

Very importantly, we also wish to discuss the line towards the bottom that begins B = convert_to_cp(Rational... as this line turns a floating point answer into a rigorous mathematical answer. A simple observation is that an output of optimization software is not yet a theoretical lower bound because of floating point effects. In particular, it is possible that an output of a program is not completely pivoted. The examples from Gould [15, 14] were close in the floating point sense to being optima and some minor tweaking was needed [7] for the purpose of exact mathematics. In [7] the first author asked if there would always be a nearby floating point matrix. In this paper, we show via Lemma 2.1 that there is always such a nearby matrix, and Algorithm 1, as embodied in the convert_to_cp function working on a rational form, allows us to state that our computer assisted solutions constitute exact rigorous mathematics rather than a floating point approximation. For the smaller values of \(n\), we tend to believe that the lower bounds found may well be close to the true values, as we have on occasion rerun these values and found the same answers. For larger values of \(n\), we imagine that the lower bounds are just that, lower bounds. ### Provable Lower Bounds Next, we prove global lower bounds for the growth factor under complete pivoting by converting estimates for growth factor for small \(n\) into estimates for all \(n\). We begin with the following extrapolation lemma. **Lemma 5.1**.: _Let \(S\) equal \(\mathbb{R}\) or \(\mathbb{C}\). Then_ 1. \(g\big{[}\mathbf{CP}_{n}(S)\big{]}\) _is non-decreasing,_ 2. \(g\big{[}\mathbf{CP}_{2n}(S)\big{]}\geq 2\,g\big{[}\mathbf{CP}_{n}(S)\big{]}\) _for all_ \(n\in\mathbb{N}\)_,_ 3. _if_ \(g\big{[}\mathbf{CP}_{n}(S)\big{]}\geq Cn\) _for_ \(n=k,...,2k-1\)_, then_ \(g\big{[}\mathbf{CP}_{n}(S)\big{]}\geq\frac{(1/k;1/2)_{\infty}}{1-1/k}\,Cn\) _for all_ \(n\geq k\)_, where_ \((\cdot;\cdot)_{\infty}\) _is the q-Pochhammer symbol._

Figure 4. The above figure shows that we can go from matrices that are completely pivoted in floating point to matrices that are completely pivoted in exact arithmetic. Lemma 2.1 proves that this is theoretically possible and Algorithm 1 provides a pseudocode implementation while a Julia implementation may be found in the online repository [10].

Proof.: Properties \((i)\) and \((ii)\) follow simply from the operations \[\begin{pmatrix}1&0_{n}^{T}\\ 0_{n}&A\end{pmatrix}\qquad\text{and}\qquad A\otimes H_{1},\text{ where }H_{1}:=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix},\] applied to a matrix \(A\in\mathbf{CP}_{n}(S)\), respectively (Property \((ii)\) is also proved in [31]).
If \(g\big{[}\mathbf{CP}_{n}(S)\big{]}\geq C\,n\) for all \(n\in[k,2k)\), then by Properties \((i)\) and \((ii)\), \[g\big{[}\mathbf{CP}_{2n+1}(S)\big{]}\geq g\big{[}\mathbf{CP}_{2n}(S)\big{]} \geq 2Cn=\frac{2n}{2n+1}\,C(2n+1)\geq\frac{2k}{2k+1}\,C(2n+1)\] for all \(n\in[k,2k)\), i.e., \(g\big{[}\mathbf{CP}_{n}(S)\big{]}\geq\frac{2k}{2k+1}\,Cn\) for all \(n\in[k,4k)\). Repeating this argument, we obtain the lower bound \[g\big{[}\mathbf{CP}_{n}(S)\big{]}\geq Cn\,\prod_{i=1}^{j}\frac{2^{i}k}{2^{i}k+ 1}\geq Cn\,\prod_{i=1}^{j}\left(1-\frac{1}{2^{i}k}\right)=\frac{(1/k;1/2)_{j+1 }}{1-1/k}\,Cn\] for \(n\in\big{[}k,2^{j+1}k\big{)}\), where \((\cdot;\cdot)_{j}\) is the \(q\)-Pochhammer symbol. Noting that \((\cdot;\cdot)_{j}\) is monotonically non-increasing with respect to \(j\) for non-negative inputs of magnitude at most one completes the proof of Property \((iii)\). Combining Lemma 5.1 with the computer-assisted (and mathematically provable) lower bounds of Table 3 immediately implies a lower bound for all values of \(n\). **Theorem 5.2** (Restatement of Theorem 1.2).: \(g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}\geq 1.0045\,n\) _for all \(n\geq 11\), and \(\limsup_{n}\big{(}g\big{[}\mathbf{CP}_{n}(\mathbb{R})\big{]}/n\big{)}\geq 2.525\)._ Proof.: The lower bound for all \(n\geq 11\) follows from checking \(n=11,12,13\) by hand and applying Property \((iii)\) of Lemma 5.1 to \(k=14\) (with \(C=1.08\)). The asymptotic bound follows directly from our lower bound for \(n=100\) combined with Property \((ii)\) of Lemma 5.1. ## 6. Rook Pivoting The majority of this work focuses on complete pivoting, due its theoretical and practical importance. Rook pivoting by comparison is relatively understudied, yet the quasi-polynomial bound on growth factor combined with a reduced computational complexity compared to complete pivoting in practice makes this an attractive technique. Many of the results of this paper also apply to rook pivoting, sometimes leading to even stronger results. These details are left to the interested reader. Through a stability lemma, tensor argument, and numerically computed lower bounds for a fixed value of \(n\), we provide improved lower bounds for the maximum growth factor with rook pivoting. Let \[\mathbf{RP}_{n}^{\boldsymbol{\varepsilon}}(S)=\{A\in\mathbf{GL}_{n}(\mathbb{C}) \cap S^{n\times n}\,|\,|a_{i,k}^{(k)}|,|a_{k,j}^{(k)}|\leq(1+\varepsilon_{k})| a_{k,k}^{(k)}|\text{ for all }i,j\geq k\}.\] We have the following proposition (in the spirit of Lemma 2.1). **Proposition 6.1**.: _For every \(A\in\mathbf{RP}_{n}^{\boldsymbol{\varepsilon}}(S)\), where \(S\) equals \(\mathbb{R}\) or \(\mathbb{C}\) and \(\varepsilon_{i}\geq 0\) for \(i=1,...,n-1\), there exists a matrix \(B\in\mathbf{RP}_{n}(S)\) such that_ \[a_{n,n}^{(k)}=b_{n,n}^{(k)}\qquad\text{for}\quad k=1,...,n,\] _and_ \[\big{|}a_{i,j}^{(k)}-b_{i,j}^{(k)}\big{|}\leq(2+\varepsilon_{\ell})\varepsilon _{\ell}\,\big{|}a_{\ell,\ell}^{(\ell)}\big{|},\quad\ell:=\min\{i,j\},\] _for all \(i,j=k,...,n\), \(k=1,...,n-1\)._ Proof.: Given \(A\in\mathbf{RP}_{n}^{\boldsymbol{\varepsilon}}(S)\), the result follows immediately from the construction \(B^{(n)}:=A^{(n)}\) and \[B^{(k)}:=\begin{pmatrix}(1+\varepsilon_{k})^{2}\,a_{k,k}^{(k)}&(1+\varepsilon _{k})\,A_{k,k+1:n}^{(k)}\\ (1+\varepsilon_{k})\,A_{k+1:n,k}^{(k)}&A_{k+1:n,k+1:n}^{(k)}+B^{(k+1)}-A^{(k+ 1)}\end{pmatrix}\] for \(k=1,...,n-1\). 
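To make the rook-pivoting rule concrete, the sketch below (our own NumPy illustration, not code from the repository [10], and assuming a generic tie-free matrix) performs rook-pivoted Gaussian elimination and reports the growth factor. A routine of this kind is what one would use to evaluate candidate matrices numerically before converting them, via Proposition 6.1, into exact lower bounds such as the one stated in Proposition 6.2 below.

```python
import numpy as np

def growth_factor_rook(A):
    """Gaussian elimination with rook pivoting (pivot maximal in both its row
    and its column of the trailing block); assumes a generic, tie-free matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    largest_initial = np.abs(A).max()
    largest_seen = largest_initial
    for k in range(n - 1):
        # Rook search: alternate between column and row maxima until stationary.
        i = k + int(np.argmax(np.abs(A[k:, k])))
        while True:
            j = k + int(np.argmax(np.abs(A[i, k:])))
            i_next = k + int(np.argmax(np.abs(A[k:, j])))
            if i_next == i:
                break
            i = i_next
        A[[k, i], :] = A[[i, k], :]
        A[:, [k, j]] = A[:, [j, k]]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:]) / A[k, k]
        largest_seen = max(largest_seen, np.abs(A[k + 1:, k + 1:]).max())
    return largest_seen / largest_initial
```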
Similar to Lemma 2.1, the construction of \(B\in\mathbf{RP}_{n}(S)\) is algorithmic in nature, and this procedure (a variant of Algorithm 1) converts inexact numerically computed instances of large growth into provable lower bounds. In particular, through the combination of numerical computation and an algorithmic implementation of the procedure of Lemma 6.1, we have the following lower bound (see Subsection 5.1 and our repository [10] ). **Proposition 6.2**.: \(g\big{[}\mathbf{RP}_{48}(\mathbb{R})\big{]}>640.4861\)_._ Next, we prove the following extrapolation lemma, from which lower bounds for rook pivoting immediately follow. **Lemma 6.3**.: _Let \(S\) equal \(\mathbb{R}\) or \(\mathbb{C}\). Then_ 1. \(g\big{[}\mathbf{RP}_{n}(S)\big{]}\) _is non-decreasing,_ 2. _for all_ \(m,n\in\mathbb{N}\)_,_ 3. _if_ \(g\big{[}\mathbf{RP}_{k}(S)\big{]}\geq k^{\alpha}\) _for some_ \(k\)_, then_ \(g\big{[}\mathbf{RP}_{n}(S)\big{]}\geq k^{-\alpha}n^{\alpha}\) _for all_ \(n\in\mathbb{N}\)_._ Proof.: Property \((i)\) follows from the construction \(\begin{pmatrix}1&0_{n}^{T}\\ 0_{n}&A\end{pmatrix}\). Property \((ii)\) follows from the fact that if \(A\in\mathbf{RP}_{m}(S)\), \(B\in\mathbf{RP}_{n}(S)\), and \(S\) is closed under addition and multiplication, then \(A\otimes B\in\mathbf{RP}_{m\,n}(S)\), where \(\otimes\) is the matrix Kronecker product, which we now prove. Let \(C=A\otimes B\), and, for the sake of space, define the following three auxillary matrices, consisting of \(B^{(k)}\) for some \(k=2,...,n\) and some zeros: \[B_{r}^{(k)}=\begin{pmatrix}0_{n-k+1,k-1}&B^{(k)}\end{pmatrix},\quad B_{c}^{(k )}=\begin{pmatrix}0_{k-1,n-k+1}\\ B^{(k)}\end{pmatrix},\quad B_{f}^{(k)}=\begin{pmatrix}0_{k-1,k-1}&0_{k-1,n-k+1 }\\ 0_{n-k+1,k-1}&B^{(k)}\end{pmatrix},\] so that \(B_{r}^{(k)}\in S^{n\times(n-k+1)}\), \(B_{c}^{(k)}\in S^{(n-k+1)\times n}\), and \(B_{f}^{(k)}\in S^{n\times n}\). It suffices to complete \(n\) steps of Gaussian elimination, show that at each step the rook pivoting condition holds \((|c_{k,k}^{(k)}|\geq|c_{i,k}^{(k)}|,|c_{k,j}^{(k)}|\) for \(k=1,...,n\)), and note that \(C^{(n+1)}=A^{(2)}\otimes B\). Initially, we have \[C^{(1)}=A\otimes B=\begin{pmatrix}a_{1,1}B&\cdots&a_{1,m}B\\ \vdots&\ddots&\vdots\\ a_{m,1}B&\cdots&a_{m,m}B\end{pmatrix},\] and the rook pivoting condition holds initially for any Kronecker product \(A\otimes B\) of rook pivoted matrices \(A\) and \(B\), as \[|a_{1,1}b_{1,1}|=|a_{1,1}|\,|b_{1,1}|\geq\max_{i,j=1,...,m}\{|a_{i,1}|,|a_{1,j} |\}\ \max_{i,j=1,...,n}\{|b_{i,1}|,|b_{1,j}|\}.\] On the \(k^{th}\) step of Gaussian elimination, we have \[C^{(k)}=\begin{pmatrix}a_{1,1}B^{(k)}&a_{1,2}B^{(k)}_{r}&\cdots&a_{1,m}B^{(k)} _{r}\\ a_{2,1}B^{(k)}_{c}&a_{2,2}^{(2)}B+(a_{2,2}-a_{2,2}^{(2)})B^{(k)}_{f}&\cdots&a_ {2,m}^{(2)}B+(a_{2,m}-a_{2,m}^{(2)})B^{(k)}_{f}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m,1}B^{(k)}_{c}&a_{m,2}^{(2)}B+(a_{m,2}-a_{m,2}^{(2)})B^{(k)}_{f}&\cdots&a_ {m,m}^{(2)}B+(a_{m,m}-a_{m,m}^{(2)})B^{(k)}_{f}\end{pmatrix},\] and still the rook pivoting condition holds, as both \(A\) and \(B^{(k)}\) are rook pivoted. Finally, after the \(n^{th}\) step, we note that the remainder term \((a_{i,j}-a_{i,j}^{(2)})B^{(n)}_{f}\) disappears, as \[(a_{i,j}-a_{i,j}^{(2)})B^{(n)}_{f}-\frac{a_{i,1}a_{1,j}}{a_{1,1}}\frac{B^{(n)} _{c}\,B^{(n)}_{r}}{b^{(n)}_{n,n}}=0_{n\times n},\] and so \(C^{(n+1)}=A^{(2)}\otimes B\). Property \((iii)\) follows quickly from Properties \((i)\) and \((ii)\). 
Let \(n>k\) (if \(n\leq k\), the result trivially holds), and let \(\ell\in\mathbb{N}\) be the largest number such that \(k^{\ell}\leq n\). We have \[g\big{[}\mathbf{RP}_{n}(S)\big{]}\geq g\big{[}\mathbf{RP}_{k^{\ell}}(S)\big{]}\geq k^{\alpha\ell}=\big{[}k^{\ell}/n\big{]}^{\alpha}\,n^{\alpha}\geq k^{-\alpha}n^{\alpha}.\] Using Proposition 6.2 and Lemma 6.3, we obtain our desired lower bound. **Theorem 6.4** (Restatement of Theorem 1.3).: \(g\big{[}\mathbf{RP}_{n}(\mathbb{R})\big{]}>\frac{1}{641}n^{1.669}\) _for all \(n\in\mathbb{N}\)._ ## Acknowledgements This material is based upon work supported by the Institute for Advanced Study and the National Science Foundation under Grant No. DMS-1926686. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. This material is based upon work supported by the National Science Foundation under grant no. OAC-1835443, grant no. SII-2029670, grant no. ECCS-2029670, grant no. OAC-2103804, and grant no. PHY-2021825. We also gratefully acknowledge the U.S. Agency for International Development through Penn State for grant no. S002283-USAID. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Numbers DE-AR0000121 and DE-AR0001222. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. This material was supported by The Research Council of Norway and Equomer ASA through Research Council project "308817 - Digital wells for optimal production and drainage". Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA87570-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. We further thank Juan-Pablo Vielma for very helpful discussions. In addition, we would like to thank Nick Higham for useful comments on an earlier draft.
2309.00477
Privacy Attacks and Defenses for Digital Twin Migrations in Vehicular Metaverses
The gradual fusion of intelligent transportation systems with metaverse technologies is giving rise to vehicular metaverses, which blend virtual spaces with physical space. As indispensable components for vehicular metaverses, Vehicular Twins (VTs) are digital replicas of Vehicular Metaverse Users (VMUs) and facilitate customized metaverse services to VMUs. VTs are established and maintained in RoadSide Units (RSUs) with sufficient computing and storage resources. Due to the limited communication coverage of RSUs and the high mobility of VMUs, VTs need to be migrated among RSUs to ensure real-time and seamless services for VMUs. However, during VT migrations, physical-virtual synchronization and massive communications among VTs may cause identity and location privacy disclosures of VMUs and VTs. In this article, we study privacy issues and the corresponding defenses for VT migrations in vehicular metaverses. We first present four kinds of specific privacy attacks during VT migrations. Then, we propose a VMU-VT dual pseudonym scheme and a synchronous pseudonym change framework to defend against these attacks. Additionally, we evaluate average privacy entropy for pseudonym changes and optimize the number of pseudonym distribution based on inventory theory. Numerical results show that the average utility of VMUs under our proposed schemes is 33.8% higher than that under the equal distribution scheme, demonstrating the superiority of our schemes.
Xiaofeng Luo, Jinbo Wen, Jiawen Kang, Jiangtian Nie, Zehui Xiong, Yang Zhang, Zhaohui Yang, Shengli Xie
2023-09-01T14:14:33Z
http://arxiv.org/abs/2309.00477v1
# Privacy Attacks and Defenses for Digital Twin Migrations in Vehicular Metaverses ###### Abstract The gradual fusion of intelligent transportation systems with metaverse technologies is giving rise to vehicular metaverses, which blend virtual spaces with physical space. As indispensable components for vehicular metaverses, Vehicular Twins (VTs) are digital replicas of Vehicular Metaverse Users (VMUs) and facilitate customized metaverse services to VMUs. VTs are established and maintained in RoadSide Units (RSUs) with sufficient computing and storage resources. Due to the limited communication coverage of RSUs and the high mobility of VMUs, VTs need to be migrated among RSUs to ensure real-time and seamless services for VMUs. However, during VT migrations, physical-virtual synchronization and massive communications among VTs may cause identity and location privacy disclosures of VMUs and VTs. In this article, we study privacy issues and the corresponding defenses for VT migrations in vehicular metaverses. We first present four kinds of specific privacy attacks during VT migrations. Then, we propose a VMU-VT dual pseudonym scheme and a synchronous pseudonym change framework to defend against these attacks. Additionally, we evaluate average privacy entropy for pseudonym changes and optimize the number of pseudonym distribution based on inventory theory. Numerical results show that the average utility of VMUs under our proposed schemes is 33.8\(\%\) higher than that under the equal distribution scheme, demonstrating the superiority of our schemes. Metaverse, vehicular twin, privacy protection, migration, inventory theory. ## I Introduction The emergence of advanced technologies, such as Web 3.0 and metaverses, has spurred an increased interest in intelligent transportation systems from both industry and academia [1]. Especially, the synergy between transportation systems and metaverses has given rise to the concept of vehicular metaverses. The vehicular metaverse is regarded as a blended immersive realm that integrates extended reality technologies and real-time vehicular data to provide diverse and personalized in-vehicle services for Vehicular Metaverse Users (VMUs) (i.e., drivers and passengers within vehicles) [1]. Based on the role of digital twins in metaverses, which are virtual representations of real-world entities, Vehicular Twins (VTs), highly accurate and large-scale digital replicas that cover the lifecycle of the vehicle and VMUs, serve as the foundation for vehicular metaverses, enabling emerging vehicular applications such as Augmented Reality (AR) navigation [2]. To achieve physical-virtual synchronization, VTs are continuously updated with real-time sensing data from surrounding environments, making vehicular metaverses autonomous and sustainable [1]. Since the construction and maintenance of VTs require significant computing resources at the network edge [3], VMUs normally offload large-scale rendering tasks of creating and updating VTs to nearby edge servers (e.g., RoadSide Units (RSUs)) [2]. However, due to the limited communication coverage of RSUs and the high mobility of VMUs, VTs need to be migrated among RSUs along the moving trajectories of their associated VMUs to provide real-time and uninterrupted metaverse services. 
Traditionally, in vehicular metaverses, vehicles communicate with others and periodically broadcast safety messages (including pseudonyms and location information) to ensure driving security, with the pseudonyms serving as temporary identifiers for identity anonymization [4]. Despite the use of pseudonyms, privacy and security threats remain a big concern for VT migrations in vehicular metaverses. To be specific, since VTs constantly interact with other VTs and request immersive services from Virtual Service Providers (VSPs) in virtual spaces [5], attackers can observe the location information of VTs before and after migrations among RSUs. Combined with safety messages in the physical space, they can establish mapping relationships between VMUs and VTs. In this case, the identity and location privacy of VMUs and VTs may be leaked and easily exploited by attackers for malicious purposes, potentially compromising the security of VMUs. Therefore, it is necessary to study privacy issues and develop efficient defense schemes for VT migrations in vehicular metaverses. Some efforts have been conducted to investigate privacy issues in the metaverse [5, 6]. For example, the authors in [6] comprehensively summarized the privacy and security threats in the Internet of digital twins from several perspectives, including data-related, communication-related, and privacy threats, and then discussed key research challenges to defend them. In addition, they examined effective countermeasures against these threats and assessed their feasibility in the Internet of digital twins. However, the existing work ignores potential security and privacy threats during digital twin migrations in metaverses, especially in vehicular metaverses. To address the aforementioned challenges, we aim to investigate the privacy issues and develop reliable defense strategies for VT migrations in vehicular metaverses. _To the best of our knowledge, this is the first research work to study the privacy issues and defenses for VT migrations in vehicular metaverses_. Our contributions are summarized as follows: * We introduce the VT migration process in vehicular metaverses and present four kinds of new attacks that can compromise the identity and location privacy of VMUs and VTs during VT migrations. * To defend against these attacks, we propose an efficient VMU-VT dual pseudonym scheme, in which we use VT pseudonyms to achieve identity anonymization of VT communications in virtual spaces. * Furthermore, to combat a special threat resulting from asynchronous pseudonym changes between VMUs and VTs, we further propose a synchronous pseudonym change framework to resolve the privacy leakage issues during VT migrations. * We derive average privacy entropy to quantify the increased degree of privacy protection after pseudonym changes, and then utilize inventory theory to optimize the number of pseudonym distribution. Numerical results demonstrate that our proposed schemes can effectively ensure the privacy preservation of VMUs during VT migrations in vehicular metaverses. ## II Privacy Attacks for Vehicular Twin Migrations and Corresponding Defenses In this section, we first introduce the VT migration process in vehicular metaverses and study potential privacy attacks. Then, we present our defense schemes to counter these attacks. 
### _Vehicular Twin Migrations_ We first introduce four key components of the vehicular metaverse as follows: * **Vehicular Twins (VTs):** As highly accurate and large-scale digital replicas of vehicles and VMUs, VTs can analyze the status of vehicles and VMUs and facilitate vehicle decision making through real-time interactions between virtual spaces and the physical space [2]. Moreover, VTs can interact with other VTs for data sharing, helping VMUs obtain global environment information [6]. Therefore, VTs make the vehicular metaverses autonomous and durable. * **Vehicular Metaverse Users (VMUs):** By using lightweight devices like Head-Mounted Displays (HMDs), VMUs can access vehicular metaverses to obtain immersive and lower-latency metaverse services, such as AR navigation and virtual games [1, 2]. For real-time updates of VTs in the virtual space, VMUs collect real-time sensing data (e.g., real-time vehicular status and traffic condition information) from surrounding environments by vehicular sensors [2]. * **Roadside Units (RSUs):** RSUs are generally treated as vehicular communication devices mounted along the roadside. Empowered by edge computing technology [3], RSUs have sufficient computing and storage resources to construct VTs and deliver ultra-reliable and low-latency metaverse services to VMUs [2]. To ensure seamless and immersive experiences for VMUs, VTs in the virtual space are migrated from the source RSUs to the destination RSUs along driving trajectories of corresponding VMUs in the physical space. In addition, RSUs can serve as pseudonym caching stations responsible for the storage, management, and distribution of pseudonyms [7]. * **Virtual Service Providers (VSPs):** VSPs are third-party entities (e.g., companies) that can provide high-quality metaverse services for VTs [5]. For instance, VSPs can provide location-based metaverse services for VTs based on their personalized demands, such as AR games and navigation. In this case, VSPs would collect the private information of VTs, including their previous contents of interest and current locations of corresponding VMUs.

As shown in Fig. 1, vehicles periodically broadcast safety messages to ensure driving security during VT migrations. The safety message generally includes the VMU pseudonym and real-time sensing data from surrounding environments.

Fig. 1: An overview of privacy attacks during VT migrations in vehicular metaverses. The left part introduces the VT migration process in vehicular metaverses. The right part provides a detailed description of four kinds of privacy attacks during VT migrations.

When communicating with other VMUs, VMUs leverage pseudonyms to conceal their true identities and constantly change their pseudonyms while driving to ensure privacy protection in vehicular metaverses. For the convenience of explanation, we take \(VMU_{1}\) as an illustration. We consider that the Certificate Authority (CA) and RSUs are trusted entities in line with the assumption in [8, 9]. The CA, maintained by government agencies, first generates a specific number of pseudonyms and allocates them to RSUs in the form of pseudonym sets [4, 10]. When VMU pseudonyms are running out, \(VMU_{1}\) requests a specific number of pseudonyms from the nearest RSU. Then, the RSU distributes a pseudonym set \(\{PID^{w}_{VMU_{1}}\}\), where \(w\) represents the number of pseudonyms and \(PID^{l}_{VMU_{1}}\) is one of the pseudonyms in \(\{PID^{w}_{VMU_{1}}\}\).
To ensure driving security, the vehicle of \(VMU_{1}\) broadcasts safety messages {_Pseudonym_, _Location_, _Velocity_, _Content_, _Time_} to nearby vehicles and RSUs [4]. After driving for a while, \(VMU_{1}\) decides to change the current pseudonym \(PID^{j}_{VMU_{1}}\) to avoid being tracked by attackers. The \(VMU_{1}\) selects a new pseudonym \(PID^{j+1}_{VMU_{1}}\) from the pseudonym set \(\{PID^{w}_{VMU_{1}}\}\) and changes its pseudonym from \(PID^{j}_{VMU_{1}}\) to \(PID^{j+1}_{VMU_{1}}\). Finally, \(VMU_{1}\) uses the changed pseudonym to communicate with other VMUs and repeats this process throughout the journey, thus reducing the risk of identity leakage. ### _Privacy Attacks for Vehicular Twin Migrations_ Despite the application of VMU pseudonyms, VT migrations can still arouse several unprecedented privacy concerns in vehicular metaverses. As shown in the right part of Fig. 1, we specifically present four kinds of privacy attacks during VT migrations as follows: * _VMU to VT (V2T) Attack_: The _V2T attack_ occurs during authentication between VMUs and VTs. Attackers exploit system flaws to gain unauthorized access to transmitted data between VMUs and VTs [11], posing a serious threat to the identity privacy of VMUs. Specifically, the vehicle of \(VMU_{1}\) collects real-time sensing data, e.g., the distance from the vehicle in front, and uploads collected data to \(VT_{1}\) to forecast \(VMU_{1}\)'s actions through a trained machine learning model [3]. However, the attackers lurking in the open transmission channel can capture and tamper with these sensing data, e.g., modifying the safe following distance from \(100\:\mathrm{m}\) to \(10\:\mathrm{m}\), and then send the forged data to the virtual space resulting in the miscalculation of \(VT_{1}\). In this case, \(VT_{1}\) will send incorrect feedback to misguide the driving decision making of \(VMU_{1}\), which probably leads to a severe traffic accident. * _VT to VT (T2T) Attack_: The _T2T attack_ is launched by the purposeful interaction of malicious VTs during inter-twin communications (i.e., communications between VTs) [6] in the virtual space, which can lead to unwitting identity privacy leakages of legitimate VTs. Given the low-expense nature of socializing in vehicular metaverses, malicious VTs are prone to interact with legitimate VTs and steal their private information [6] to conduct purposeful activities such as precisely advertising or even committing crimes with stolen identities. To be specific, a malicious VT may impersonate an intimate friend of target VMUs to deliberately interact with target VTs, thereby defrauding their sensitive privacy illegally. Besides, malicious VTs can provide fake news to satisfy their own needs. For instance, a malicious VT broadcasts a non-existent accident to \(VT_{1}\), compelling \(VMU_{1}\) to take an alternate route. * _VMU to VMU (V2V) Attack_: Ever-changing VMU pseudonyms are used to protect privacy when vehicles broadcast safety messages. However, attackers can still leverage radio equipment installed on roadside infrastructures (e.g., traffic lights) to launch _V2V attacks_[9]. Specifically, an attacker first eavesdrops safety messages of \(VMU_{1}\) from \(Location_{1}\) and then eavesdrops safety messages again from \(Location_{2}\) after a period of \(VMU_{1}\) migration. 
Although the pseudonym of \(VMU_{1}\) has been changed from \(PID^{l}_{VMU_{1}}\) to \(PID^{p}_{VMU_{1}}\), the attacker can still track \(VMU_{1}\) by analyzing similar features of safety messages (e.g, velocity and time) from two geographically adjacent locations. Therefore, the V2V attack causes a sharp decline in the location privacy of VMUs. * _VT to VSP (T2VSP) Attack_: To immerse themselves in location-based metaverse services (e.g., AR video games), VTs would supply location-related information to VSPs (e.g., AR game companies). This process may trigger _T2VSP attacks_ launched by malicious VSPs. The communication coverage of the source RSU and the destination RSU are denoted as \(Region_{1}\) and \(Region_{2}\), respectively. The malicious VSPs first steal the privacy of \(VT_{1}\) deployed in the source RSU within \(Region_{1}\). When \(VMU_{1}\) leaves from \(Region_{1}\) to \(Region_{2}\), \(VT_{1}\) is migrated from the source RSU to the destination RSU correspondingly. Afterwards, the malicious VSP can obtain the privacy of target \(VT_{1}\) within \(Region_{2}\). By analyzing the spatio-temporal factors of information (e.g., driving directions and timestamp) from different regions, the malicious VSP can locate and track the target \(VT_{1}\), indicating that the T2VSP attack seriously violates the location privacy of VTs. If multiple colluded attackers not only eavesdrop safety messages including pseudonyms and locations of target VMUs but also obtain identity information of VTs, the attackers can infer mapping relationships between VMUs and their associated VTs [4]. Under this circumstance, the attackers can keep track of the target VMUs due to the immutability of VT identities, causing critical damage to the identity and location privacy of both VMUs and VTs. Furthermore, these external attackers can exploit the privacy to conduct targeted advertising, or even use stolen identities to commit crimes to avoid liability. Consequently, it is necessary to design a reliable defense scheme against the four kinds of attacks enumerated above to safeguard the privacy of both VMUs and VTs. ### _Proposed Defenses: A VMU-VT Dual Pseudonym Scheme_ To defend against the aforementioned attacks, we design a VMU-VT dual pseudonym scheme, in which the VT pseudonym is used to assure the identity anonymity of VTs in virtual spaces as well. As shown in Fig. 2, VMU and VT pseudonyms are stored in VMU pseudonym pools and VT pseudonym pools within RSUs, respectively. Our proposed scheme uses varying VMU and VT pseudonyms to conceal the real identities of VMUs and VTs. However, if the pseudonyms are single-use, i.e., VMUs and VTs discard the old pseudonyms after pseudonym changes, the pseudonyms will quickly run out. Once pseudonyms in pools are exhausted, the CA needs to allocate pseudonyms again, which leads to high pseudonym generation and communication overhead. To tackle this challenge, the proposed scheme adopts a blockchain-based VMU-VT dual pseudonym management approach, where shuffling operations enable the reuse of both VMU and VT pseudonyms [7]. Firstly, the source RSUs distribute VMU and VT pseudonyms to VMUs and VTs. Moreover, our proposed scheme encompasses four corresponding modules to defend against the four kinds of privacy attacks. 
More details are described as follows: * _Module 1: Mutual authentication: The mutual authentication module can defend against V2T attacks._ We consider that \(VMU_{1}\) and \(VT_{1}\) first receive a VMU pseudonym set \(\{PID_{VMU_{1}}^{w}\}\) and a VT pseudonym set \(\{PID_{VT_{1}}^{u}\}\), respectively. Here \(u\) is the number of VT pseudonyms. Then, \(VMU_{1}\) and \(VT_{1}\) initiate the mutual authentication process, where they verify the pseudonym of their counterpart in \(\{PID_{VT_{1}}^{u}\}\) and \(\{PID_{VMU_{1}}^{w}\}\), respectively [10]. When completing the mutual authentication, both \(VMU_{1}\) and \(VT_{1}\) obtain a shared secret key and create a secure communication channel for subsequent data transmission [10]. In this case, \(VT_{1}\) receives physical sensing data uploaded only by \(VMU_{1}\), while \(VMU_{1}\) receives feedback sent only from \(VT_{1}\). Therefore, external attackers can neither capture data from \(VMU_{1}\) nor send erroneous data to \(VT_{1}\). * _Module 2: VT blacklist: The VT blacklist module can defend against T2T attacks._ Malicious VTs often impersonate legitimate VTs to perpetrate misbehaviors such as defrauding or sharing fake news. However, malicious VTs can be accused by legitimate VTs and then reported by RSUs to the CA [7]. By examining the evidence in reports and historical requests of pseudonym changes in log files, the CA is competent to evaluate the validity of these reports. If the reported behaviors are genuine, the CA will revoke the use of malicious VTs' pseudonyms while revealing their true identities to all VTs in the virtual space [7]. Finally, the malicious VTs will be added to the VT blacklist to prevent them from interacting with legitimate VTs. * _Module 3: Group pseudonym change for VMUs: The group pseudonym change for VMUs module can defend against V2V attacks._ After driving for a while, the privacy levels of VMUs decrease to the anticipated threshold. Thus, VMUs decide to change to new VMU pseudonyms to improve their privacy levels. We consider that attackers have eavesdropped safety messages with \(PID_{VMU_{1}}^{1}\) broadcast by the vehicle of \(VMU_{1}\). \(VMU_{1}\) chooses to change the pseudonym in a social hot spot (e.g., a busy intersection), where more legitimate VMUs jointly change their VMU pseudonyms with a higher frequency for enhancing the overall privacy level [9]. Then, \(VMU_{1}\) replaces the pseudonym with \(PID_{VMU_{1}}^{2}\). Since masses of nearby VMUs in the group that change pseudonyms together have similar features (e.g., locations and velocity), attackers will lose track of the target \(VMU_{1}\). * _Module 4: Group pseudonym change for VTs: The group pseudonym change for VTs module can defend against T2VSP attacks._ With the aid of VT groups, our scheme has a positive effect on defending against malicious VSPs in the virtual space. Specifically, \(VT_{1}\) deployed in the source RSU first utilizes VT pseudonym \(PID_{VT_{1}}^{1}\) to request location-based metaverse services. To maintain seamless experiences for \(VMU_{1}\), \(VT_{1}\) is migrated from the source RSU to the destination one. Meanwhile, \(VT_{1}\) is qualified to join a VT group formed on the destination RSU, where legitimate VTs within the communication coverage assemble for collective pseudonym changes, and then changes its pseudonym from \(PID_{VT_{1}}^{1}\) to \(PID_{VT_{1}}^{2}\) together with the other members of the group. In this scenario, malicious VSPs will lose the target \(VT_{1}\).

Fig. 2: A VMU-VT dual pseudonym scheme consisting of four modules for defending against four kinds of privacy attacks illustrated in Fig. 1. Note that blockchain technology is utilized to securely manage both VMU and VT pseudonyms by recording pseudonym shuffling transactions.

After pseudonym changes, both VMUs and VTs return used or expired pseudonyms to the corresponding pseudonym pools in the destination RSUs when the pseudonyms stored in their sets are about to run out [7]. Furthermore, these recycled pseudonyms are shuffled and allocated by distributed consensus (e.g., Proof-of-Pseudonym [7]) among different RSUs for reuse. Specifically, as RSUs are confidential and authorized, consortium blockchains [12] can be leveraged to ensure the security of pseudonym management and distribution, relying on encryption technologies and consensus algorithms. The pseudonym shuffling transactions are packed into the blocks (i.e., distributed ledgers) among RSUs, guaranteeing the immutability and integrity of both VMU and VT pseudonyms. Therefore, the blockchain-based VMU-VT dual pseudonym management approach contributes to identity traceability and accountability in vehicular metaverses whenever a dispute or a report occurs. ## III Linkage Mapping Threat and Synchronous Pseudonym Change Framework ### _VMU-VT Linkage Mapping Threat_ Although the VMU-VT dual pseudonym scheme plays a significant role in defending against the attacks, there still exists a latent safety hazard that leads to severe location privacy breaches. Specifically, we further consider an underlying threat resulting from asynchronous VMU-VT pseudonym changes. The attackers may eavesdrop safety messages from the vehicles of target VMUs in the physical space while stealing sensitive information, including VT pseudonyms, from target VTs in virtual spaces. By analyzing spatio-temporal background information of both, the attackers can establish mapping relationships between the identities of target VMUs and VTs [4]. As VMUs and VTs change pseudonyms asynchronously, the attackers can re-identify the target by linking VMU pseudonyms with VT pseudonyms, which is called the _VMU-VT linkage mapping threat_ in this article. As shown in Fig. 3, we use a timeline of pseudonym changes to describe the threat in detail. Without loss of generality, attackers are prone to launch attacks in vehicular metaverses, because they are restricted by spatial locations in the physical world but can easily access boundless virtual spaces anywhere. Additionally, since VTs are deployed in RSUs while VT pseudonym pools are also stored in RSUs, changing VT pseudonyms incurs lower communication overhead and is more cost-effective than changing VMU pseudonyms. Therefore, we consider that \(VT_{1}\) changes its pseudonyms four times as often as \(VMU_{1}\) does within a certain time period to reduce the risk of being tracked. Besides, for ease of expression, we consider that both \(VMU_{1}\) and \(VT_{1}\) change their pseudonyms evenly, namely with \(VMU_{1}\) changing VMU pseudonyms every time period \(T_{1}\) while \(VT_{1}\) changes VT pseudonyms every time period \(T_{2}\) [4]. Here we present a concrete example to introduce the VMU-VT linkage mapping threat in Fig. 3. We consider that attackers have observed the pseudonym of target \(VMU_{1}\) (i.e., \(PID_{VMU_{1}}^{1}\)) by eavesdropping safety messages before \(t_{0}\).
During the first time period \(T_{1}\) (i.e., from \(t_{0}\) to \(t_{1}\)), \(VT_{1}\) changes its pseudonym sequentially from \(PID_{VT_{1}}^{1}\) to \(PID_{VT_{1}}^{5}\), while \(PID_{VMU_{1}}^{1}\) remains unchanged. In a limited road network, the location-related features of VMUs and VTs (e.g., geographical locations and surrounding landscape) partially overlap. By analyzing these common features, attackers can establish a mapping relationship between VMU pseudonyms and VT pseudonyms (i.e., \(PID_{VMU_{1}}^{1}\) corresponding to \(\{PID_{VT_{1}}^{1}\), \(PID_{VT_{1}}^{2}\), \(PID_{VT_{1}}^{3}\), \(PID_{VT_{1}}^{4}\), \(PID_{VT_{1}}^{5}\}\)), allowing them to track the target \(VT_{1}\) [4]. Likewise, even though \(VMU_{1}\) replaces its pseudonym with \(PID_{VMU_{1}}^{2}\) at \(t_{1}\), the VT pseudonym \(PID_{VT_{1}}^{5}\) stays invariable. As attackers already know the correspondence between \(PID_{VMU_{1}}^{1}\) and \(PID_{VT_{1}}^{5}\), they can easily re-identify \(VMU_{1}\) by building a mapping relationship between \(PID_{VT_{1}}^{5}\) and \(\{PID_{VMU_{1}}^{1}\), \(PID_{VMU_{1}}^{2}\}\). Therefore, the attackers can keep track of the target by continually linking VMU pseudonyms with VT pseudonyms, such as linking \(PID_{VMU_{1}}^{2}\) with \(\{PID_{VT_{1}}^{5},\ldots,PID_{VT_{1}}^{9}\}\). As long as the VMU pseudonyms and VT pseudonyms are changed asynchronously, strong attackers can always follow up their targets precisely [4], resulting in serious location privacy disclosures of both VMUs and VTs in vehicular metaverses.

Fig. 3: VMU-VT linkage mapping threat.

### _Synchronous VMU-VT Pseudonym Change Framework_ To address this threat, we propose a synchronous VMU-VT pseudonym change framework based on intra-twin communications (i.e., data synchronization between VMUs and VTs) [6] in Fig. 4. Notably, VMUs are equipped with various vehicular sensors (e.g., in-car cameras, Light Detection and Ranging (LiDAR), and Inertial Measurement Unit (IMU) suits) for real-time data acquisition and immersive devices (e.g., HMDs, windshields, and side windows) for metaverse service displays. Besides, communication, computing, and storage resources within vehicles also facilitate the processes of pseudonym changes and metaverse service experiences. To promote immersion and satisfaction in vehicular metaverses, the low-latency data flowing between VMUs and VTs is vital. Empowered by global navigation satellite system receivers in vehicles, VMUs can pre-synchronize their internal clocks with the master clocks in nearby RSUs, thus realizing accurate time synchronization with their VTs in virtual spaces [13]. Under the premise of time synchronization, VMUs can experience immersive metaverse services and conduct synchronous VMU-VT pseudonym changes via intra-twin communications, as shown in Fig. 4. VMUs upload metaverse service requests along with real-time sensing data to VTs for updates. Afterwards, the VTs process these data and provide feedback to instruct the performances of VMUs, by which VMUs can immerse themselves in splendid metaverse services through immersive devices [1, 3]. In addition to helping VMUs enjoy metaverse services, the intra-twin communication also supports the synchronous VMU-VT pseudonym change framework. The key steps are listed as follows: * **Step 1. Initialization and request record:** \(VMU_{i}\), which is ready to perform a synchronous VMU-VT pseudonym change, first checks whether there are available pseudonyms in the pseudonym set \(\{PID_{VMU_{i}}^{w}\}\).
If yes, the \(VMU_{i}\) will send a VMU pseudonym change request together with the current timestamp to the nearest RSU through a secure channel established by mutual authentication (see 1 in Fig. 4). Then, the RSU transfers this request and the timestamp to the CA for recording. Logging this information enables the CA to trace VMUs' true identities in the event of disputes or accusations in the future, thus maintaining accountability in vehicular metaverses [4]. If no, the \(VMU_{i}\) will apply for a new pseudonym set from the nearest RSU. * **Step 2. Preparation for synchronous pseudonym changes:** When receiving the pseudonym change request, the RSU starts preparing pseudonym changes for both VMUs and VTs (see 2 in Fig. 4). The RSU first updates the number of VT pseudonyms in \(\{PID_{VT_{i}}^{w}\}\). If there are no extra pseudonyms, \(VT_{i}\) will request a new set from the RSU where it is deployed. If adequate pseudonyms are available, \(VT_{i}\) will choose a suitable pseudonym for replacement. To ensure synchronous changes of VMU and VT pseudonyms, the RSU presets a pseudonym change time \(t^{*}\) for \(VT_{i}\) in the timer [4], which is also output as feedback to instruct \(VMU_{i}\) to change pseudonyms. * **Step 3. Synchronous pseudonym changes:** After receiving feedback from the RSU through the intra-twin communication (see 3 in Fig. 4), the \(VMU_{i}\) selects a proper pseudonym from the VMU pseudonym set stored in the vehicle to perform the pseudonym change task (see 4 in Fig. 4). Under the guidance of the pseudonym change time included in the feedback, both \(VMU_{i}\) and \(VT_{i}\) synchronously change their respective pseudonyms at the predetermined time \(t^{*}\) [4]. As the VMU and VT pseudonyms of targets are changed in synchronization, attackers lose the physical and virtual identities simultaneously, thus losing track of their targets. Therefore, the proposed synchronous VMU-VT pseudonym change framework can resist the VMU-VT linkage mapping threat, protecting the location privacy of legitimate participants in vehicular metaverses effectively. Fig. 4: The synchronous VMU-VT pseudonym change framework. Both changing pseudonyms and experiencing metaverse services are realized by intra-twin communications. ## IV Case Study In this section, we investigate a scenario where VMUs request a specific number of pseudonyms for change. We first derive the average privacy entropy to quantify the increased degree of privacy protection after a pseudonym change. Then, we optimize the number of distributed pseudonyms based on inventory theory. ### _Scenario Description_ As shown in Fig. 5, we consider that RSUs obtain pseudonyms from the CA at a constant rate \(\theta\). At the beginning of a time period \(T\), VMUs first request pseudonyms from the RSU in which their VTs are deployed. Then, the RSU distributes a certain number of pseudonyms to VMUs according to the estimated future pseudonym demands of VMUs, where future pseudonym demands can be estimated from the pseudonym change frequency of VMUs based on historical observation records. After receiving pseudonyms, VMUs need to change their pseudonyms in a timely manner to ensure privacy protection. ### _On-Demand Pseudonym Distribution based on Inventory Theory_ We formulate the pseudonym distribution problem between the RSU and VMUs by inventory theory. Inventory theory aims to optimize the inventory management of a business by determining the appropriate timing and quantity of orders for specific goods [14]. 
In our pseudonym distribution model, the RSU aims to develop an optimal pseudonym distribution strategy by maximizing the sum of VMU utilities. #### IV-B1 Average privacy entropy of VMUs As an effective metric that measures the degree of privacy protection for VMUs, the privacy entropy of \(VMU_{i}\) is defined as \(H_{i}=-\log_{2}p_{i}\), where \(p_{i}\in(0,1]\) is the probability of \(VMU_{i}\) being tracked after a pseudonym change [4]. We consider that the privacy entropy of \(VMU_{i}\) decreases linearly over time with slope \(\alpha\) before it reaches the minimum privacy entropy \(H_{min}\). After \(VMU_{i}\) synchronously changes pseudonyms with \(VT_{i}\), the privacy entropy of \(VMU_{i}\) increases to \((H_{max}-p_{i}H_{0})\), where \(H_{max}\) is the maximum privacy entropy. Therefore, the privacy entropy function over time is sawtooth. To better assess the increased degree of privacy protection after a pseudonym change, we derive the average privacy entropy \(\overline{H}\) for VMUs. As shown in Fig. 5, the average privacy entropy is the area under the sawtooth function normalized by the time interval. #### IV-B2 VMU utility We denote \(R_{i}^{T}\) as the number of pseudonyms requested by \(VMU_{i}\) at the beginning of the time period \(T\) and \(D_{i}^{T}\) as the future pseudonym demand in the time period \(T\), respectively [15]. As shown in Fig. 5, the utility of \(VMU_{i}\) is denoted as \(U_{i}^{T}\), which consists of pseudonym change profits, pseudonym storage costs, and insufficient change penalties. Specifically, \(VMU_{i}\) can obtain profits from the increased degree of privacy protection after each pseudonym change. However, if \(R_{i}^{T}>D_{i}^{T}\), the redundant pseudonyms have to be stored in vehicles for a certain time, leading to storage costs [15]. Note that the storage cost per pseudonym is less than the change cost per pseudonym. If \(VMU_{i}\) cannot satisfy its pseudonym demands, i.e., \(R_{i}^{T}<D_{i}^{T}\), \(VMU_{i}\) will bear the penalties of being exposed to privacy risk due to the reduction in average privacy entropy. Fig. 5: The pseudonym distribution problem between the RSU and VMUs based on the inventory theory. #### IV-B3 Problem formulation To obtain the optimal pseudonym distribution set \(\mathcal{R}_{T}^{*}=\{R_{i}^{T}\}\), we maximize the global utility \(\sum_{i=1}^{m}U_{i}^{T}\), where \(\mathcal{R}_{T}^{*}\) exists only if \(\sum_{i=1}^{m}R_{i}^{T}\leq\theta T\). Note that the global utility function is concave, indicating that there exists a maximum value of this function, which can be solved approximately using a genetic algorithm [15]. ### _Numerical Results_ To prove the efficiency of the on-demand pseudonym distribution scheme, we compare the proposed scheme with an equal distribution scheme, where pseudonyms are equally distributed to VMUs. Fig. 6: The performance of the proposed on-demand pseudonym distribution scheme. The pseudonym change strategy complies with the ETSI TR 103 415 standard1. Specifically, we set the unit time to one minute and consider that the RSU obtains \(10\) pseudonyms per minute, i.e., \(\theta=10\), and suppose there exist six VMUs requesting pseudonyms from the RSU, where the process of pseudonym requests from VMUs follows a Poisson process within an observation time period [15], set to \(1\) hour. 
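For illustration, the sawtooth privacy-entropy model of Sect. IV-B1 can be evaluated numerically. The following minimal Python sketch is not part of the framework itself: it integrates the sawtooth curve on a uniform grid, assumes that the entropy sits at \(H_{min}\) before the first change, and uses example change times chosen only for demonstration, while the constants follow the case-study values quoted below.

```python
import numpy as np

def average_privacy_entropy(change_times, T, p,
                            H_max=1.5, H_0=1.0, H_min=0.25, alpha=1.0):
    """Average of the sawtooth privacy-entropy curve over [0, T].

    After each synchronous pseudonym change the entropy jumps to
    (H_max - p * H_0) and then decays linearly with slope alpha,
    never dropping below H_min.  The pre-change level H_min and the
    uniform integration grid are illustrative assumptions.
    """
    t = np.linspace(0.0, T, 10_000)
    H = np.full_like(t, H_min)
    for t_c in sorted(change_times):
        elapsed = t - t_c
        mask = elapsed >= 0.0
        H[mask] = np.maximum(H_min, (H_max - p * H_0) - alpha * elapsed[mask])
    return H.mean()   # mean over a uniform grid approximates the normalized area

# Example: five evenly spaced pseudonym changes within T = 3 time units
print(average_privacy_entropy([0.5, 1.0, 1.5, 2.0, 2.5], T=3.0, p=0.3))
```

More frequent changes keep the curve closer to its post-change level and therefore raise the average entropy, which is the effect exploited by the on-demand distribution scheme.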
Besides, we consider that \(p_{i}\) follows a uniform distribution in \((0,0.5]\) and the factors \(H_{max}\), \(H_{0}\), \(H_{min}\), \(\alpha\), \(h\), and \(r\) are set to \(1.5\), \(1\), \(0.25\), \(1\), \(0.1\), and \(0.3\), respectively. Footnote 1: [https://www.etsi.org/deliver/etsi_tr/103400_103499/10341501.01.01_60v_103415v010101p.pdf](https://www.etsi.org/deliver/etsi_tr/103400_103499/10341501.01.01_60v_103415v010101p.pdf) Figure 6(a) presents the respective utility of six VMUs under the on-demand pseudonym distribution scheme and the equal distribution scheme. Without loss of generality, the larger the serial number of a VMU, the more frequently its pseudonym is changed. We can observe that, for each VMU, the utility under the proposed scheme is higher than that under the equal distribution scheme, and the average utility of VMUs under the proposed scheme is \(33.8\%\) higher than that under the equal distribution scheme. The reason is that the utilization of pseudonyms is maximized by distributing pseudonyms to VMUs based on their actual demands. Figure 6(b) illustrates the global utility of three VMU groups under the proposed scheme, where each group consists of three VMUs. We can find that, for a fixed unit profit for pseudonym changes \(\beta\), the global utility of VMU group 3 with the highest average pseudonym change frequency is the largest, indicating that VMUs can better enhance the degree of privacy protection by changing pseudonyms in VMU groups with a higher average pseudonym change frequency. ## V Conclusion In this article, we studied privacy attacks and defenses for Vehicular Twin (VT) migrations in vehicular metaverses. We systematically introduced the VT migration process and presented four kinds of specific privacy attacks compromising the identity and location privacy of both Vehicular Metaverse Users (VMUs) and VTs. To defend against these attacks, we proposed a VMU-VT dual pseudonym scheme consisting of four corresponding modules. Furthermore, we proposed a synchronous VMU-VT pseudonym change framework to address an underlying threat resulting from asynchronous pseudonym changes between VMUs and VTs. Finally, we carried out a case study to demonstrate the efficiency of the on-demand pseudonym distribution strategy compared with the equal distribution strategy. In the future, we will further explore the average privacy entropy model to better quantify the degree of privacy protection for pseudonym changes, and delve into the use of artificial intelligence tools (e.g., deep reinforcement learning) to optimize the pseudonym distribution in vehicular metaverses.
2301.12889
The rotational disruption of porous dust aggregates from ab-initio kinematic calculations
Context: The sizes of dust grains in the interstellar medium follow a distribution where most of the dust mass is in smaller grains. However, the re-distribution from larger grains towards smaller sizes, especially by means of rotational disruption, is poorly understood. Aims: We aim to study the dynamics of porous grain aggregates under accelerated rotation. In particular, we determine the deformation of the grains and the maximal angular velocity up to the rotational disruption event caused by centrifugal forces. Methods: We pre-calculate aggregates by means of ballistic aggregation analogous to the interstellar dust as input for subsequent numerical simulations. In detail, we perform three-dimensional N-body simulations mimicking the radiative torque spin-up process up to the point where the grain aggregates become rotationally disrupted. Results: Our simulation results are in agreement with theoretical models predicting a characteristic angular velocity $\omega_{\mathrm{disr}}$ of the order of ${ 10^8 - 10^9\ \mathrm{rad\ s^{-1}} }$, where grains become rotationally disrupted. In contrast to theoretical predictions, we show that for large porous aggregates ($\gtrsim 300\ \mathrm{nm}$) $\omega_{\mathrm{disr}}$ reaches a lower asymptotic value. Hence, such grains can withstand accelerated rotation more efficiently, up to a factor of 10, because the displacement of mass by centrifugal forces and the subsequent mechanical deformation supports the buildup of new connections within the aggregate. Furthermore, we report that the rapid rotation of grains deforms an ensemble with initially 50:50 prolate and oblate shapes, respectively, preferentially into oblate shapes. Finally, we present a best fit formula to predict the average rotational disruption of an ensemble of porous dust aggregates dependent on internal grain structure, total number of monomers, and applied material properties.
Stefan Reissl, Philipp Nguyen, Lucas M. Jordan, Ralf S. Klessen
2023-01-30T13:48:53Z
http://arxiv.org/abs/2301.12889v1
# The rotational disruption of porous dust aggregates from ab-initio kinematic calculations ###### Abstract Context:The sizes of dust grains in the interstellar medium follows a distribution where most of the dust mass is in smaller grains. However, the re-distribution from larger grains towards smaller sizes especially by means of rotational disruption is poorly understood. Aims:We aim to study the dynamics of porous grain aggregates under accelerated ration. Especially, we determine the deformation of the grains and the maximal angular velocity up to the rotational disruption event by caused by centrifugal forces. Methods:We pre-calculate porous grain aggregate my means of ballistic aggregation analogous to the interstellar dust as input for subsequent numerical simulations. In detail, we perform three-dimensional N-body simulations mimicking the radiative torque spin-up process up to the point where the grain aggregates become rotationally disrupted. Results:Our simulations results are in agreement with theoretical models predicting a characteristic angular velocity \(\omega_{\rm flux}\) of the order of \(10^{8}-10^{9}\) rad s\({}^{-1}\), where grains become rotationally disrupted. In contrast to the theoretical predictions, we show that for large porous grain aggregates (\(\gtrsim 300\) nm) \(\omega_{\rm dust}\) does not strictly decline but reaches a lower asymptotic value. Hence, such grains can withstand an accelerated ratio more efficiently up to a factor of 10 because the displacement of mass by centrifugal forces and the subsequent mechanical deformation supports the build up of new connections within the aggregate. Furthermore, we report that the rapid rotation of grains deforms an ensemble with initially 50:50 prolate and oblate shapes, respectively, preferentially into oblate shapes. Finally, we present a best fit formula to predict the average rotational disruption of an ensemble of porous dust aggregates dependent on internal grain structure, total number of monomers, and applied material properties. ## 1 Introduction Dust is a key component of the interstellar medium (ISM). It is important for regulating the properties of astrophysical objects across a wide range of scales: from the cooling of collapsing molecular clouds and subsequent star-formation down to the formation of planetary systems (Spitzer & Arny 1978; Dorschner & Henning 1995). However, the origin of dust, its initial physical properties and the redistribution of grain sizes is still a field of ongoing research (O'Donnell & Mathis 1997; Ormel et al. 2009; Birnstiel et al. 2010; Guillet et al. 2018; Draine & Hensley 2021a). Dust composition and grain size distribution may be derived from the observed interstellar extinction curve and starlight polarization (Mathis et al. 1977; Draine & Lee 1984; Guillet et al. 2018; Draine & Hensley 2021a). Initially, the ISM is enriched by intermediate-mass stars at the asymptotic giant branch (AGB) supernova ejecta coming with a certain grain size distribution (Nozawa et al. 2007; Gail et al. 2009; Barlow et al. 2010; Matsuura 2011; Karovicova et al. 2013; Zhukovska et al. 2015; Bevan & Barlow 2016). Later, the grains grow in dense molecular clouds by accretion of abundant elements and coagulation (Spitzer & Arny 1978; Chokshi et al. 1993; O'Donnell & Mathis 1997). Naturally, this process results in porous dust aggregates rather than solid bodies (Ossenkopf 1993; Dominik & Tielens 1997; Wada et al. 2007). 
Dust destruction processes such as gas-grain sputtering or grain-grain collision (shattering) may redistribute the grown grains towards smaller sizes (Dwek & Scalo 1980; Tielens et al. 1994; Hirashita & Yan 2009). More recently, the pioneering work presented by Hoang et al. (2019) describes a new dust destruction mechanism. Here, a radiation field causes radiative torques (RAT) acting on dust grains which lead to an angular acceleration(see e.g. Lazarian & Hoang 2007a; Hoang et al. 2014). Given a sufficiently luminous environment the grains would inevitably be disrupted by the emerging centrifugal force (Silbsee & Draine 2016; Hoang et al. 2019; Hoang 2020). The disruption process is usually quantified by the maximal tensile strength \(\mathcal{S}_{\rm max}\) which is a measure how material as respond to stretching (Silbsee & Draine 2016; Tatsuuma & Kataoka 2021). For porous materials and dust grain analogs the tensile strengths may be determined by numerical N-body simulations (Kataoka et al. 2013b; Seizinger et al. 2013b; Tatsuuma et al. 2019). However, what simulations of \(\mathcal{S}_{\rm max}\) are missing is that individual building blocks (so called monomers) do not just feel the stretching force between its neighbours, but additionally the global centrifugal force acting on the entire aggregate. Hence, up to this point, it remains unclear if the parameter \(\mathcal{S}_{\rm max}\) describes accurately the internal processes of material displacement within the rotating aggregate. The rotational disruption of fractal grain aggregates was already studied indirectly in (see Reissl et al. 2022, RMK22 hereafter) by evaluating \(\mathcal{S}_{\rm max}\) in the context of rapid grain ration caused by a differential gas-dust velocity. However, in this paper we aim to simulate the dynamics of rapidly rotating interstellar grains directly by taking the time evolution of the internal aggregate structure into account. The aim is to develop a model of the average disruption process of large ensembles of porous grains. This paper is structured as follows: In Sect. 2 we discuss the most likely composition of elements and minerals to be present in dust aggregates. An algorithm to mimic the growth of porous dust aggregates is outlined in Sect. 3 in detail. Here, we also introduce the methods to quantify the grain shape and porosity. In Sect. 4 we discuss the processes and forces acting between the monomers connected within the aggregates. The spin-up process of grains by means of RATs is outlined in Sect. 5. In Sect. 6 we discus the numerical implementation of the set of equations that governs the internal aggregate dynamics under rapid rotation. In Sect. 7 we present and discuss our N-body simulation results. Finally, in Sect. 8 we summarize our findings. ## 2 Dust grain composition The exact composition of dust remains an open question, with a large variety of observed materials and sizes within the galaxy being similarly possible. Interstellar dust is usually modelled by grains with a spectrum of different sizes with silicate and carbonaceous components (Mathis et al., 1977; Weingartner & Draine, 2001; Zhukovska et al., 2008; Voshechinnikov & Henning, 2010; Guillet et al., 2018). Small spherical monomers condensate by depletion of the most abundant elements C, Mg, Si, Fe, and O from their immediate surrounding (see e.g. Kim et al., 2021). Elements such Ti, Al, S Ca, Ni, respectively, are less abundant and may only contribute a few percent of the total dust mass (Hensley & Draine, 2021). 
The abundance of molecules within the grains may be determined by observing the spectral dust absorption features. Silicate minerals from the olivine and pyroxene group are the most likely candidates to be present in order to account for the observed characteristic features (Mathis et al., 1977; Wada et al., 1999; Li & Draine, 2001; Hensley & Draine, 2021). However, it remains yet inconclusive if these minerals form dust in a crystalline or amorphous structure and to what extend iron is present in its pure form (Draine & Li, 2007; Rogantini et al., 2019; Do-Duy et al., 2020). Carbonaceous grains may consist of a regular graphite lattice or amorphous structures and be partly hydrogenated (Wada et al., 1999; Goto et al., 2003; Mennella, 2006). Possibly, carbonaceous and silicate grains are not even separate dust populations but baked into a single composite material (Draine & Hensley, 2021). For mimicking the optical properties of dust grains models based on the refractive indices of various materials are developed and the data is publicly available1(see e.g. Draine & Li, 2007; Jones, 2012; Draine & Hensley, 2021). However, complementary well constrained models of the mechanical properties particularly for the composition of interstellar dust grain materials are still missing. Commonly, grain analogs consisting exclusively of icy or pure quartz (SiO\({}_{2}\)) materials are utilized to simulate grain growth processes (e.g. Dominik & Tielens, 1997; Wada et al., 2007; Seizinger et al., 2012). This is simply due to the fact that quartz is an easily available material on the market with well constrained properties by laboratory experiments (Kendall et al., 1987; Heim et al., 1999; Israelachvili, 2011; Krijt et al., 2013). Footnote 1: For the interested reader we refer to the website e.g. of Bruce Draine [https://www.astro.princeton.edu/](https://www.astro.princeton.edu/) draine/dust/dust.dielt.html and the THEMIS model [https://www.ias.u-psud.fr/themis/](https://www.ias.u-psud.fr/themis/) In this paper, however, we explore the rotational disruption of three distinct grain materials labeled a-C, q-S, and co-S, respectively. Carbonaceous grains are represented by the mechanical parameters of amorphous carbon (a-C) and for compression silicate grains are considered built of pure quartz (q-S). In addition, we aim to approximate the mechanical properties of composite silicate (co-S) grains more precisely. Here, we assume the minerals forsterite (Mg\({}_{2}\)SiO\({}_{4}\)) and fayalite (Fe\({}_{2}\)SiO\({}_{4}\)) of the olivine series as well as enstatite (MgSiO\({}_{3}\)) and ferrosilite (FeSiO\({}_{3}\)) of the pyroxene series to be among the most likely ingredients of silicate grains (Petrovic, 2001; Zolensky et al., 2006; Zhukovska et al., 2008; Gail et al., 2009; Takigawa & Tachibana, 2012; Min et al., 2007; Kimura et al., 2015; Fogery et al., 2016; Hoang et al., 2019; Escatlar et al., 2019; Kimura et al., 2020; Hensley & Draine, 2021; Draine & Hensley, 2021). In order to get an approximation of the material mixture, we match the abundance of individual elements within the considered minerals with the abundance of elements typical for the ISM (Min et al., 2007; Voshechinnikov & Henning, 2010; Compiegne et al., 2011; Hensley & Draine, 2021). In Table 1 we present the relative abundances of elements within the ISM in comparison with the composition of our co-S model. 
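As a cross-check of this matching procedure, the element ratios of a candidate mineral mixture can be computed directly from the stoichiometry of the four minerals. The short Python sketch below assumes that the mixture is specified by number of formula units and uses the best-fit fractions quoted in the next paragraph; it approximately reproduces the "this work" row of Table 1, with small residuals that stem from rounding the mixture fractions.

```python
# Stoichiometry (atoms per formula unit) of the four candidate silicate minerals
minerals = {
    "forsterite":  {"Mg": 2, "Fe": 0, "Si": 1, "O": 4},   # Mg2SiO4
    "fayalite":    {"Mg": 0, "Fe": 2, "Si": 1, "O": 4},   # Fe2SiO4
    "enstatite":   {"Mg": 1, "Fe": 0, "Si": 1, "O": 3},   # MgSiO3
    "ferrosilite": {"Mg": 0, "Fe": 1, "Si": 1, "O": 3},   # FeSiO3
}

def element_ratios(fractions):
    """Element number ratios of a mixture given as fractions of formula units."""
    atoms = {el: sum(f * minerals[m][el] for m, f in fractions.items())
             for el in ("Mg", "Fe", "Si", "O")}
    return {"Mg/O": atoms["Mg"] / atoms["O"],
            "Si/O": atoms["Si"] / atoms["O"],
            "Fe/O": atoms["Fe"] / atoms["O"],
            "Mg/Fe": atoms["Mg"] / atoms["Fe"],
            "(Mg+Fe)/Si": (atoms["Mg"] + atoms["Fe"]) / atoms["Si"]}

# Best-fit co-S mixture (31/29/20/20 per cent, see below)
print(element_ratios({"forsterite": 0.31, "fayalite": 0.29,
                      "enstatite": 0.20, "ferrosilite": 0.20}))
```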
For our best fit co-S model we get that each individual silicate monomer consist of a mixture of 31 % forsterite, 29 % fayalite, 20 % enstatite, 20 % ferrosilite, respectively, and average the mechanical material properties accordingly. The abundance of elements within the co-S agrees well with the overall observations within the Milky Way but our co-S model shows a slight overabundance of Si. Naturally, the composition of co-S may locally be vastly different e.g. in the vicinity of oxygen or carbon rich AGB stars where dust grains are newly formed (Zhukovska & Henning, 2013). Thus, we remain agnostic concerning the actual composition of interstellar dust but consider our co-S model to be an improvement compared to simulations using pure quartz monomers. ## 3 Dust grain growth and aggregation Dust grain aggregates may grow by ballistic hit-and-stick processes of monomers onto a grain's surface. This process is usually called ballistic particle-cluster aggregation (BCPA) (Kozasa et al., 1992; Bertini et al., 2007). Newly formed grains may then grow to even larger aggregates by ballistic cluster-cluster aggregation (BCCA) Ossenkopf (1993). Such grain-grain collisions result significantly compressed aggregates and, subsequently, gas pressure may compress an aggregate even more (see e.g Dominik & Tielens, 1997; Kataoka et al., 2013; Michoulier & Gonzalez, 2022, and references therein). In our study, we model such compression effects by the ballistic aggregation with migration (BAM) model introduced in Shen et al. (2008). Here, grains simply grow by means of ballistic aggregation (BA) 2 of monomers hitting the aggregate from random directions. In the BAM model grain model monomers may migrate along the surface once (BAM1) or twice (BAM2). Consequently, for BAM1 each monomer has at least two connections with the aggregate and for BAM2 each monomer has at least three Shen et al. (2008); Seizinger et al. (2013). Footnote 2: BA and BCPA are used synonymous in literature. Commonly, dust aggregation models utilize only a constant monomer size (see e.g. Kozasa et al., 1992; Shen et al., 2008; Wada et al., 2007; Bertini et al., 2007; Seizinger et al., 2012). However, it seems unlikely that a condensation process of elements in nature would lead to exactly one monomer size. In fact, numer ous laboratory experiments clearly indicate that grown aggregates consist of monomers with a variable radius (Karasev et al., 2004; Chakrabarty et al., 2007; Slobodrian et al., 2011; Kandilian et al., 2015; Salamch et al., 2017; Paul et al., 2017; Baric et al., 2018; Kelesidis et al., 2018; Bauer et al., 2019; Wu et al., 2020; Zhang et al., 2020; Kim et al., 2021). In this study we focus on a polydisperse system of monomers, where the exact distribution of the monomer radii within an aggregate may be approximated by a log-normal distribution (Koxylu & Faeth, 1994; Lehre et al., 2003; Slobodrian et al., 2011; Bescond et al., 2014; Kandilian et al., 2015; Liu et al., 2015; Bauer et al., 2019; Wu et al., 2020; Zhang et al., 2020). We realize the BAM model with a Monte-Carlo approach in order to create an ensemble of pre-calculated grain analogs resembling the observed parameters of dust in the circumstellar and interstellar medium. The radius of the i-th monomer \(a_{\rm mon,i}\) is sampled from a range of \(a_{\rm mon,i}\in[10\ {\rm nm},100\ {\rm nm}]\). The log-normal distribution has a typical average of \(\langle a_{\rm mon}\rangle=20\ {\rm nm}\) and a standard deviation of 1.65 nm. 
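A minimal sketch of this sampling step is given below. It assumes that the quoted mean and standard deviation refer to the linear-space moments of the log-normal distribution and that radii outside the allowed range are simply redrawn; both are implementation choices for illustration rather than a description of our actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_monomer_radius(mean_nm=20.0, sigma_nm=1.65, lo=10.0, hi=100.0):
    """Draw one monomer radius (nm) from a log-normal distribution,
    rejecting draws outside the allowed range [lo, hi]."""
    var = sigma_nm**2
    s = np.sqrt(np.log(1.0 + var / mean_nm**2))   # shape parameter of the log-normal
    mu = np.log(mean_nm) - 0.5 * s**2              # location parameter
    while True:
        r = rng.lognormal(mean=mu, sigma=s)
        if lo <= r <= hi:
            return r

radii = np.array([sample_monomer_radius() for _ in range(1000)])
print(radii.mean(), radii.std())   # close to 20 nm and 1.65 nm
```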
Successively, each newly sampled monomer is shot on a random trajectory into the simulation domain until it hits the aggregate. For simplicity we assume that each monomer sticks onto the surface when colliding. For BAM1 and BAM2, respectively, the i-th monomer migrates along the the surface of the initially hit monomer in a random direction to establish additional connections. In order to create an aggregate in equilibrium the monomer position \(\@vec{X}_{\rm i}\) is corrected in such a way that each overlap between connected monomers agrees with the material dependent equilibrium compression length \(\delta_{0}\) (see Sect. 4 for details). The aggregation process is repeated until the dust aggregate reaches a certain volume of \[V_{\rm agg}=\frac{4\pi}{3}\sum_{\rm i=1}^{N_{\rm mon}}a_{\rm mon,i}^{3}\,. \tag{1}\] Ballistic aggregates are usually quantified by the total number of monomers \(N_{\rm mon}\)(see e.g. Wada et al., 2007; Shen et al., 2008; Seizinger et al., 2012). However, dust observations are tightly connected to the effective size of the grains (Mathis et al., 1977; Weingartner & Draine, 2001). Hence, in our study we rather control for an exact effective radius of \[a_{\rm eff}=\left(\frac{3\chi_{\rm agg}}{4\pi}\right)^{\frac{1}{3}} \tag{2}\] by an biased sampling of the last three monomer radii instead of getting the grain size indirectly from \(a_{\rm eff}\approx N_{\rm mon}^{1/3}\ \langle a_{\rm mon}\rangle\). Finally, we calculate the inertia tensor for each aggregate in order to determine the characteristic moments of inertia \(I_{\rm a1}>I_{\rm a2}>I_{\rm 3}\), along the unit vectors \(\hat{a}_{1}\), \(\hat{a}_{2}\), and \(\hat{a}_{3}\) (we refer t to RMK22 for the exact procedure). In order to connect the rotational disruption of the grain ensemble to distinct quantities associated with the internal structure of fluffy dust grains we introduce the porosity \(\mathcal{P}\), volume filling factor \(\phi\), and fractal dimension \(D_{\rm f}\) as well as the semi major axes \(a<b<c\) unique for each individual aggregate. The porosity quantifies the empty space within an aggregate where \(\mathcal{P}=0\) is for a solid object and \(\mathcal{P}=1\) for vacuum. Its value for an individual object depends on the definition of the surface that envelopes the aggregate. In our study we follow the procedure of determining \(\mathcal{P}\) by utilizing the moments of inertia of an aggregate as outlined in Shen et al. (2008). Here, the quantity \[\alpha_{\rm i}=\frac{5}{4\pi}\frac{I_{\rm ai}}{\rho_{\rm mat}a_{\rm eff}^{5}} \tag{3}\] is the ratio of the moment of inertia \(I_{\rm ai}\) to that of an sphere with equivalent volume where the index \({\rm i}\in\{1,2,3\}\) denotes the three spatial directions. The corresponding semi major axes are \[a=a_{\rm eff}\sqrt{\alpha_{2}+\alpha_{3}-\alpha_{1}}\,, \tag{4}\] \[b=a_{\rm eff}\sqrt{\alpha_{3}+\alpha_{1}-\alpha_{2}}\,, \tag{5}\] and \[c=a_{\rm eff}\sqrt{\alpha_{1}+\alpha_{2}-\alpha_{3}}\,, \tag{6}\] respectively. Finally, the porosity of an aggregate may then be written as \[\mathcal{P}=1-\frac{a_{\rm eff}^{3}}{abc}\,, \tag{7}\] whereas the complementary quantity \(\phi=1-\mathcal{P}\) is the volume filling factor Shen et al. (2008). The fractal dimension is a measure of the shape of an aggregate where \(D_{\rm f}=1\) represents a one dimensional line and \(D_{\rm f}=3\) a compact sphere. 
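To make the shape characterization explicit, the following Python sketch evaluates the semi major axes and the porosity from the principal moments of inertia. Here \(\alpha_{\rm i}\) is computed directly as the stated ratio of \(I_{\rm ai}\) to the moment of inertia of a volume-equivalent sphere, \((2/5)\,M\,a_{\rm eff}^{2}\) with \(M=(4\pi/3)\,\rho_{\rm mat}a_{\rm eff}^{3}\); this is an illustrative implementation, not our production code.

```python
import numpy as np

def shape_and_porosity(I_principal, a_eff, rho_mat):
    """Semi major axes a < b < c and porosity P of an aggregate (Sect. 3)."""
    M = 4.0 / 3.0 * np.pi * rho_mat * a_eff**3
    I_sphere = 0.4 * M * a_eff**2                       # equivalent-volume sphere
    I1, I2, I3 = sorted(I_principal, reverse=True)      # I_a1 > I_a2 > I_a3
    alpha1, alpha2, alpha3 = I1 / I_sphere, I2 / I_sphere, I3 / I_sphere
    a = a_eff * np.sqrt(alpha2 + alpha3 - alpha1)       # Eq. (4)
    b = a_eff * np.sqrt(alpha3 + alpha1 - alpha2)       # Eq. (5)
    c = a_eff * np.sqrt(alpha1 + alpha2 - alpha3)       # Eq. (6)
    P = 1.0 - a_eff**3 / (a * b * c)                    # Eq. (7)
    return (a, b, c), P

# Sanity check: a compact sphere returns a = b = c = a_eff and P = 0
a_eff, rho = 100e-9, 3000.0
M = 4.0 / 3.0 * np.pi * rho * a_eff**3
print(shape_and_porosity([0.4 * M * a_eff**2] * 3, a_eff, rho))
```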
We determine the fractal dimension \(D_{\rm f}\) of each dust aggregate by the correlation function \[C\chi^{D_{\rm f}-3}=\frac{n(\chi)}{4\pi\chi^{2}lN_{\rm mon}} \tag{8}\] as outlined in Skorupski et al. (2014). Here, \(C\) is a scaling factor, \(\chi\) is the distance from the center of mass, \(l\) is a length with \(l\ll a_{\rm eff}\), and \(n(\chi)\) is the number density of connected monomers within the shell \([\chi-l/2;\chi+l/2]\). We create dust grains with effective radii in the range \(a_{\rm eff}=50\ {\rm nm}-550\ {\rm nm}\) in steps of \(50\ {\rm nm}\). For each \(a_{\rm eff}\) we repeat the MC dust growth process with 30 random seeds for the BA, BAM1, and BAM2 configurations and the a-C, q-S, co-S materials. In total we pre-calculate an ensemble of 2970 individual grains as input for our three-dimensional N-body simulations. \begin{table} \begin{tabular}{|c c c c c|c|} \hline Mg/O & Si/O & Fe/O & Mg/Fe & \(({\rm Mg+Fe})/{\rm Si}\) & References \\ \hline 0.31 & 0.26 & 0.15 & 2.03 & 1.82 & Min et al. (2007) \\ X & X & X & 1.09 & 2.25 & Voshchinnikov \& Henning (2010) \\ 0.23 & 0.19 & 0.19 & 1.25 & 2.25 & Compiègne et al. (2011) \\ 0.19 & 0.15 & 0.17 & 1.07 & 2.34 & Hensley \& Draine (2021) \\ 0.23 & 0.27 & 0.21 & 1.08 & 1.60 & this work \\ \hline \end{tabular} \end{table} Table 1: Ratio of different element abundances as observed in the ISM in comparison with the composition of the co-S material applied in this work. We note that the grain growth by BA may heavily be impacted by grain charge (Matthews et al., 2012), high impact velocities (Dominik & Tielens, 1997; Ormel et al., 2009), a preferential impact direction by means of grain alignment with the magnetic field (Lazarian & Hoang, 2007; Hoang, 2022), or a gas-dust drift (RMK22). Figure 1: Exemplary selection from the total ensemble of porous BA (top row), BAM1 (middle row), and BAM2 (bottom row) aggregates for the effective grain radii \(a_{\rm eff}=200\) nm (left column), \(a_{\rm eff}=350\) nm (middle column), and \(a_{\rm eff}=500\) nm (right column) with the corresponding numbers of monomers \(N_{\rm mono}\) and neighbourhood connections \(N_{\rm con}\). Monomers are sampled to guarantee an exact effective radius \(a_{\rm eff}\). The radius \(a_{\rm out}\) is associated with the smallest sphere enclosing the entire aggregate. The axes \(\hat{a}_{1}\), \(\hat{a}_{2}\), and \(\hat{a}_{3}\) are defined by the grain’s moments of inertia \(I_{\rm a1}>I_{\rm a2}>I_{\rm a3}\), where \(\hat{a}_{1}\) is the designated axis of grain rotation. Hence, the resulting shapes in our grain ensemble may not be representative of grain growth processes e.g. in the vicinity of AGB stars (Zhukovska & Henning 2013) or in protostellar envelopes (Galametz et al. 2019). However, our grain models cover a sufficiently broad variety of grain shapes to allow conclusions about their rotational stability even though each individual shape is not equally likely to be realized in nature. An exemplary selection of the grains is shown in Fig. 1. In Fig. 2 we present the characteristic quantities of the entire grain ensemble as introduced above. The number of monomers \(N_{\rm mon}\) as well as the number of connections \(N_{\rm con}\) within each aggregate increase with effective radius \(a_{\rm eff}\). By design BA grains have generally fewer connections compared to BAM grains. Compared to the BAM grains with a fixed monomer size (Shen et al. 
2008) the porosity \(\mathcal{P}\) is not strictly increasing but stagnates for higher \(a_{\rm eff}\) because smaller monomers may easily migrate towards the center, making our aggregates overall more compact. The fractal dimension \(D_{\rm f}\) shown in Fig. 2 increases slightly towards larger grains, but the resulting shapes of the BA, BAM1, and BAM2 grains of different sizes in this work are not well correlated with the fractal dimension \(D_{\rm f}\). This is in contrast to the dust models of RMK22 where the grains are explicitly constructed to get an exact pre-determined \(D_{\rm f}\) and are not to be compared with the BAM grains as depicted in Fig. 1. Figure 2: The distribution of the characteristic quantities of total number of monomers \(N_{\rm mon}\) (top left), number of connected neighbours \(N_{\rm con}\) (top right), porosity \(\mathcal{P}\) (bottom left), and fractal dimension \(D_{\rm f}\) (bottom right), for all BA (red), BAM1 (green), and BAM2 (blue) aggregates, respectively, dependent on effective radius \(a_{\rm eff}\). Dots are the ensemble average over all shapes and materials while vertical bars represent the minima and maxima. Note that the data points have a small offset for better visibility. ## 4 Inter-monomer contact effects In this section we outline the forces and torques acting between the monomers of an aggregate in detail. In Fig. 3 we provide a schematic illustration of all considered monomer interactions. Two individual monomers in physical contact establish a common contact surface area and experience an attraction because of the van der Waals force. For some materials stronger attractions such as Coulomb forces between charged monomers, dipole-dipole interaction within ices, or metallic binding between iron pellets may become of relevance. The attraction is quantified by the material dependent energy per surface area \(\gamma\). Assuming monomers act like elastic spheres with radii of \(a_{\rm mon,i}\) and \(a_{\rm mon,j}\), respectively, their elastic deformation causes a repulsive force. An analytical description of these forces was first presented in Johnson et al. (1971) with the so called JKR model (see also Hertz 1896) where the equilibrium radius of the contact surface is \[r_{0}=\left(\frac{9\pi\gamma R^{2}}{E^{*}}\right)^{1/3} \tag{9}\] when no external forces are acting between monomers i.e. the attractive and repulsive forces are in balance. Here, the quantity \(R=a_{\rm mon,i}a_{\rm mon,j}/(a_{\rm mon,i}+a_{\rm mon,j})\) is the reduced monomer radius whereas the elastic parameter \(E^{*}=E/(2-2\nu^{2})\) is determined by the material specific constants of the Poisson number \(\nu\) and Young's modulus \(E\), respectively (we refer to Johnson 1987, for further details). Utilizing that a monomer contact breaks for the characteristic pulling force of \[F_{\rm C}=3\pi\gamma R\,, \tag{10}\] as outlined in (Johnson et al. 1971), the normal force along the unit vector \[\mathbf{n}_{\rm c}=\frac{\mathbf{X}_{\rm i}-\mathbf{X}_{\rm j}}{\left|\mathbf{X}_{\rm i}-\mathbf{X}_{\rm j}\right|} \tag{11}\] may be written as \[\mathbf{F}_{\rm N,ij}=4F_{\rm C}\left[\left(\frac{r_{\rm ij}}{r_{0}}\right)^{3}-\left(\frac{r_{\rm ij}}{r_{0}}\right)^{3/2}\right]\mathbf{n}_{\rm c}\,. \tag{12}\] Consequently, for \(F_{\rm N,ij}>0\) the i-th monomer is exerting a pushing force onto its j-th neighbour. Otherwise, for \(F_{\rm N,ij}<0\), the j-th monomer is pulling on the i-th monomer. 
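A small numerical example of Eqs. (9), (10), and (12) is given below for two touching quartz monomers, using the q-S parameters from Table 2. The chosen monomer radius is arbitrary and the snippet is meant only to illustrate the order of magnitude of the contact quantities, not to reproduce our simulation code.

```python
import numpy as np

# Quartz (q-S) parameters from Table 2 and two assumed a_mon = 20 nm monomers (SI units)
gamma, E, nu = 20e-3, 54e9, 0.17
a_i = a_j = 20e-9
R = a_i * a_j / (a_i + a_j)                                  # reduced monomer radius
E_star = E / (2.0 * (1.0 - nu**2))                           # elastic parameter E*
r0 = (9.0 * np.pi * gamma * R**2 / E_star) ** (1.0 / 3.0)    # equilibrium contact radius, Eq. (9)
F_C = 3.0 * np.pi * gamma * R                                # pull-off force, Eq. (10)

def normal_force(r):
    """Magnitude of the JKR normal force of Eq. (12); > 0 pushes the monomers apart."""
    x = r / r0
    return 4.0 * F_C * (x**3 - x**1.5)

print(r0, F_C)                 # nm-scale contact radius, nN-scale pull-off force
print(normal_force(r0))        # zero at the equilibrium contact radius
print(normal_force(0.8 * r0))  # negative: a stretched contact pulls back
```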
Later, the JKR model was extended by Dominik & Tielens (1997) considering the mechanics of rolling (Dominik & Tielens 1995, 1997), sliding (Dominik & Tielens 1996, 1997), and twisting motions (Dominik & Tielens 1997) in between connected monomers. \begin{table} \begin{tabular}{|c|c|c|c c c c c|} \hline & carbon: & silicate: & silicate: & forsterite & fayalite & enstatite & ferrosilite \\ & amorphous (a-C) & quartz (q-S) & composite (co-S) & 31 \% & 29 \% & 20 \% & 20 \% \\ \hline \(\gamma\) [mJ m\({}^{-2}\)] & 50 (11) & 20 (1) & 70 & 70 (4) & 70 (4) & 70 (4) & 70 (4) \\ \(E\) [GPa] & 168 (9,10) & 54 (1,3) & 169 & 204 (6,8) & 140 (6) & 180 (6) & 142 (7) \\ \(\nu\) & 0.16 (9) & 0.17 (1,3) & 0.27 & 0.24 (2,6,8) & 0.32 (2,6) & 0.21 (2,6) & 0.31 (7) \\ \(\rho_{\rm mat}\) [kg m\({}^{-3}\)] & 2368 (9,11) & 2650 (1,3) & 3707 & 3213 (2,5,14) & 4393 (2,5,14) & 3209 (2,5,14) & 4014 (5,7,14) \\ \hline \end{tabular} \end{table} Table 2: Material parameters of the surface energy \(\gamma\), Young’s modulus \(E\), Poisson number \(\nu\), and material density \(\rho_{\rm mat}\) for the monomer materials of amorphous carbon (a-C), pure quartz (q-S), and composite silicate (co-S) considered in our N-body simulations. The co-S material is assumed to consist of a mixture of the different minerals of forsterite, fayalite, enstatite, and ferrosilite, respectively. For all the materials we assume a critical rolling displacement of \(\xi_{\rm crit}=0.2\) nm (1,3) and a viscous damping time of \(T_{\rm vis}=5\) ps (1,12,13). Figure 3: Schematic representation of the forces and torques acting in between the i-th and the j-th monomer at the positions \(\mathbf{X}_{\rm i}\) and \(\mathbf{X}_{\rm j}\), respectively. The common contact surface with radius \(r_{\rm ij}\) is shaded in red. (a) Each individual monomer experiences external centrifugal \(\mathbf{F}_{\rm cent}\), Coriolis \(\mathbf{F}_{\rm cor}\), and Euler forces \(\mathbf{F}_{\rm eul}\) as a result of the aggregate’s accelerated rotation with angular velocity \(\omega_{\rm agg}\). (b) The normal force \(\mathbf{F}_{\rm N,ij}\) acts normal to the contact surface of two monomers because of surface attraction and mechanical deformation with the contact pointers remaining anti-parallel \(\mathbf{n}_{\rm i}=-\mathbf{n}_{\rm j}\). (c) The sliding force \(\mathbf{F}_{\rm S,ij}\) is parallel to the sliding displacement \(\zeta\). Both \(\mathbf{F}_{\rm S,ij}\) and the corresponding sliding torque \(\mathbf{\Gamma}_{\rm S,ij}\) work tangential to the contact surface. (d) The same for the rolling of monomers within the aggregate: the rolling displacement \(\mathbf{\xi}\) and the corresponding rolling torque \(\mathbf{\Gamma}_{\rm R,ij}\) are tangential to the contact surface. (e) The twisting of monomers in contact is not associated with a net force. The resulting torque \(\mathbf{\Gamma}_{\rm T,ij}\) is parallel to the normal vector \(\mathbf{\phi}\) of the twisting. We note that the depicted quantities are not to scale since \(r_{\rm ij}\ll a_{\rm mon}\). In order to track the relative motion of monomers over time we use the formulation of the contact pointers \(\mathbf{n}_{\rm i}\) and \(\mathbf{n}_{\rm j}\) as outlined in Dominik & Nubold (2002). These vectors point initially towards the centers of neighboring monomers when a new contact is established (see Fig. 3). 
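As a minimal illustration of this bookkeeping, the snippet below initializes the contact pointers for a newly established contact and verifies that the rolling displacement \(\mathbf{\xi}=R(\mathbf{n}_{\rm i}+\mathbf{n}_{\rm j})\) introduced below vanishes for a fresh, undisturbed contact; the specific monomer positions are arbitrary example values.

```python
import numpy as np

def init_contact_pointers(X_i, X_j):
    """Contact pointers at the moment a new contact forms:
    n_i points from monomer i towards monomer j and n_j = -n_i."""
    n_c = (X_i - X_j) / np.linalg.norm(X_i - X_j)   # normal unit vector of Eq. (11)
    return -n_c, n_c                                # n_i, n_j

X_i, X_j = np.zeros(3), np.array([40e-9, 0.0, 0.0])   # two touching 20 nm monomers
n_i, n_j = init_contact_pointers(X_i, X_j)
R = 10e-9                                             # reduced radius of two equal monomers
xi = R * (n_i + n_j)                                  # rolling displacement of a fresh contact
print(n_i, n_j, np.linalg.norm(xi))                   # anti-parallel pointers, |xi| = 0
```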
The force acting on the i-th monomer because of the sliding motion of the j-th monomer may the be written as \[\mathbf{F}_{\rm S,ij}=-8r_{0}G^{*}\mathbf{\zeta}\frac{\left(a_{\rm mon,i}\mathbf{n}_{\rm i }+a_{\rm mon,i}\mathbf{n}_{\rm i}\right)\mathbf{n}_{\rm c}}{\left|\mathbf{X}_{\rm i}-\mathbf{X} _{\rm j}\right|} \tag{13}\] where \[\mathbf{\zeta}=a_{\rm mon,i}\mathbf{n}_{\rm i}+a_{\rm mon,i}\mathbf{n}_{\rm j}-\left(a_{ \rm mon,i}\mathbf{n}_{\rm i}\mathbf{n}_{\rm c}-a_{\rm mon,i}\mathbf{n}_{\rm c}\mathbf{n}_{\rm i }\right)\mathbf{n}_{\rm c} \tag{14}\] is the sliding displacement and \[\mathbf{\Gamma}_{\rm S,ij}=-8r_{0}G^{*}a_{\rm mon,i}\mathbf{n}_{\rm i}\times\mathbf{\zeta} \tag{15}\] is the associated sliding torque (Johnson 1987; Dominik & Tielens 1996, 1997). Here, the material constant \(G^{*}=G/(2-2\nu_{\rm i}^{2})\) depends on the shear modulus \(G=E/(2+2\nu)\) (see e.g. Cardarelli 2008). In contrast to sliding a rolling motion does not result in a force (see e.g. Dominik & Tielens 1997; Wada et al. 2007) but only in a torque of \[\mathbf{\Gamma}_{\rm R,ij}=-4F_{\rm c}\mathbf{n}_{\rm i}\times\mathbf{\xi}\,, \tag{16}\] where the rolling displacement depends on the contact pointers via \(\mathbf{\xi}=R(\mathbf{n}_{\rm i}+\mathbf{n}_{\rm i})\). The same for the twisting between monomers with a torque of \[\mathbf{\Gamma}_{\rm T,ij}=\frac{16}{3}G^{*}r_{0}^{3}\mathbf{\phi}\,. \tag{17}\] whereas the motion of the twisting follows the direction of the vector \[\mathbf{\phi}=\mathbf{n}_{\rm c}(t)\int_{0}^{t}\left(\omega_{\rm i}(t^{\prime})- \omega_{\rm j}(t^{\prime})\right)\mathbf{n}_{\rm c}(t^{\prime}){\rm d}t^{\prime}\,. \tag{18}\] Here, the angular velocities \(\omega_{\rm i}(t)\) and \(\omega_{\rm j}(t)\), respectively, describe the relative rotation of two monomers in contact (Dominik & Tielens 1997). In this study we extend the monomer contact physics pioneered by Johnson et al. (1971), Dominik & Tielens (1995, 1996, 1997), and Wada et al. (2007) by introducing additional forces emerging from the accelerated rotation of the grain aggregate. In the notation of our grain model the centrifugal force acting on each monomer may be write as \[\mathbf{F}_{\rm cent,i}=-m_{\rm mon,i}\;\omega_{\rm agg}\times\left(\omega_{ \rm agg}\times\mathbf{X}_{\rm i}\right) \tag{19}\] where \(m_{\rm mon,i}=4\pi/3\;\rho_{\rm mat}a_{\rm mon,i}^{3}\) is the mass of the i-th monomer. We emphasize that our numerical setup operates in a co-rotating coordinate system with its origin coinciding with the center of mass of the most massive fragment. Hence, the Coriolis effect acts as an additional fictive force on each individual monomers via \[\mathbf{F}_{\rm cor,i}=-2m_{\rm mon,i}\;\omega_{\rm agg}\times\frac{{\rm d}\mathbf{ X}_{\rm i}}{{\rm d}t}\,. \tag{20}\] Furthermore, our aggregates are initially at rest and are gradually spun-up. Consequently, the acceleration of each monomer within the aggregate leads to an Euler force of \[\mathbf{F}_{\rm eul,i}=-m_{\rm mon,i}\frac{{\rm d}\omega_{\rm agg}}{{\rm d}t} \times\mathbf{X}_{\rm i} \tag{21}\] acting on each monomer where \({\rm d}\omega_{\rm agg}/{\rm d}t\) is the angular acceleration. When monomers establish a new contact they start to oscillate around the equilibrium position driven by the surface attraction and monomer deformation. In nature it is expected that the oscillation becomes dampened because the deformation dissipates energy. A weak damping force may be introduced based on the relative velocity of the monomers (Kataoka et al. 2013b; Seizinger et al. 
2012) to artificially reduce oscillations. However, as shown in Seizinger et al. (2012), such a force may come with numerical instabilities. However, the damping model presented in Krijt et al. (2013) shows that the elastic dampening can significantly increase the dissipation of energy. In this study we apply the viscoelastic damping force \[\mathbf{F}_{\rm D,ij}=\frac{2E^{*}}{\nu_{\rm i}^{2}}\frac{{\rm d}\delta}{{\rm d}t }r_{\rm ij}T_{\rm vis}\mathbf{n}_{\rm c}\,, \tag{22}\] as introduced by Seizinger et al. (2013a). Here, the dampening depends not on the relative velocity, but instead on the time evolution of the compression lengths \[\delta=a_{\rm i}+a_{\rm j}-\left|\mathbf{X}_{\rm i}-\mathbf{X}_{\rm j}\right|\,, \tag{23}\] i.e. the overlap between two spherical monomers with a distance of \(\left|\mathbf{X}_{\rm i}-\mathbf{X}_{\rm j}\right|\) apart from each other. For an aggregate in equilibrium the compression length is \(\delta_{0}=r_{0}/(3R)\). The characteristic viscoelastic timescale \(T_{\rm vis}\) in the order of \(1\,{\rm ps}-10\,{\rm ps}\)(Krijt et al. 2013) but poorly constrained for specific grain materials. For smaller values of \(T_{\rm vis}\) the material effectively behaves elastically and no damping is expected. For simplicity, we adapt a fixed value of \(T_{\rm vis}=5\,{\rm ps}\) for all considered grains independent of material. The full set of material parameters applied in our simulations is listed in Table 2. We emphasize that all parameters are best estimates for ISM conditions i.e. assuming a cold and dry environment. In general, these parameters (especially the surface energy \(\gamma\)), have a wide range because they are highly sensitive to temperature (Bogdan et al. 2020), monomer size (Bauer et al. 2019), and humidity (Heim et al. 1999; Fuji et al. 1999; Kimura et al. 2015; Steinpila et al. 2019; Bogdan et al. 2020). Subsequent studies on this matter must complement the mechanical parameters utilized in this paper. ## 5 Radiative torque disruption (RAT) In this section we briefly outline the radiative torques (RAT) that lead to rapid grain rotation and potentially to the disruption by centrifugal forces. It is well established that grains with an irregular shape acquire a certain amount of angular velocity when exposed to directed radiation (Dolginov & Silantev 1976; Draine & Weingartner 1996, 1997; Weingartner & Draine 2003; Lazarian & Hoang 2007a). Here, the gain of angular momentum over time follows \[\omega_{\rm agg}(t)=\omega_{\rm RAT}\left[1-\exp\left(-\frac{t}{\tau_{\rm drag }}\right)\right]\,, \tag{24}\] where \(\tau_{\rm drag}\) is the characteristic timescale of the rotational drag by means of gas collisions (Draine 1996) and photon emission (Draine & Lazarian 1998) and \[\omega_{\rm RAT}=\frac{\Gamma_{\rm RAT}\tau_{\rm drag}}{I_{\rm a1}} \tag{25}\] is the terminal angular velocity. The torque \(\Gamma_{\rm RAT}\) is characteristic for individual grains and depends on the spectrum of the radiation field as well as the shape of a particular grain and its material composition (see e.g. Hoang et al., 2014). A solution of \(\Gamma_{\rm RAT}\) may be calculated by numerical approximations (Draine, 1996; Draine & Flatau, 2013; Herranen et al., 2019) or analytical toy models (Lazarian & Hoang, 2007). However, we emphasize that we do not evaluate \(\omega_{\rm RAT}\) explicitly within the scope of this paper but assume a maximal grain rotation to guarantee the disruption of all aggregates. 
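For orientation, the spin-up curve of Eq. (24) and its rate can be tabulated as in the brief Python sketch below. The numbers are purely illustrative: the terminal angular velocity is of the order adopted later in this work, and the strongly shortened drag time mirrors the rescaling discussed in the numerical setup.

```python
import numpy as np

def spin_up(t, omega_rat, tau_drag):
    """Angular velocity of Eq. (24) and its time derivative."""
    omega = omega_rat * (1.0 - np.exp(-t / tau_drag))
    domega_dt = (omega_rat / tau_drag) * np.exp(-t / tau_drag)
    return omega, domega_dt

# Illustrative values only: omega_RAT in rad/s and a strongly shortened drag time in s
omega_rat, tau_drag = 3e10, 3600.0
for t in (0.0, tau_drag, 5.0 * tau_drag):
    print(t, *spin_up(t, omega_rat, tau_drag))
```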
For the corresponding angular acceleration follows then \[\frac{{\rm d}\omega_{\rm agg}(t)}{{\rm d}t}=\frac{\omega_{\rm RAT}}{\tau_{\rm drag }}\exp\left(-\frac{t}{\tau_{\rm drag}}\right)\,. \tag{26}\] In principle, grains would also spin-up when exposed to an gaseous flow (Lazarian & Hoang, 2007; Das & Weingartner, 2016; Hoang et al., 2018,, RMK22) leading to an mechanical torque (MET) \(\Gamma_{\rm MET}\). However, the time evolution and acceleration of the angular velocity would still follow the same curves governed by Eq. 24 and Eq. 26, respectively, but evaluated with \(\Gamma_{\rm MET}\) instead of \(\Gamma_{\rm RAT}\). A dust aggregate cannot simply be modelled as a rigid body. Internal relaxation processes such as Barnett relaxation (Purcell, 1979; Lazarian & Roberge, 1997), nuclear relaxation (Lazarian & Draine, 1999) or inelastic relaxation (Purcell, 1979; Lazarian & Efroimsky, 1999) would dissipate rotational energy. Subsequently, the dissipation would re-orient the direction of the angular velocity \(\omega_{\rm agg}\). For typical interstellar conditions the total relaxation time is much smaller than the drag time \(\tau_{\rm drag}\)(Weingartner & Draine, 2003) and the grain axis \(\hat{a}_{\rm i}\) becomes the most likely axis of grain rotation (compare Fig. 1). For simplicity we assume in this study that the angular velocity points always in the direction of \(\hat{a}_{\rm i}\) i.e. \(\omega_{\rm agg}(t)=\hat{a}_{\rm i}\omega(t)\) for all of our three-dimensional N-body simulations. Potentially, the aggregate becomes ripped apart by centrifugal forces long before reaching the terminal angular velocity \(\omega_{\rm RAT}\)(Silsbee & Draine, 2016; Hoang et al., 2019). A mathematical framework for the radiative torque disruption (RATD) of grains was presented in Hoang et al. (2019). Here, the critical angular velocity for grain suggested for RATD is \[\omega_{\rm S}=\frac{2}{a_{\rm eff}}\left(\frac{\mathcal{S}_{\rm max}}{\rho_{ \rm mat}}\right)^{1/2}\,. \tag{27}\] Any dust aggregate exceeding the rotational limit \(\omega_{\rm agg}>\omega_{\rm S}\) becomes inevitably destroyed. This theoretical upper limit is derived from the material density \(\rho_{\rm mat}\), the effective radius \(a_{\rm eff}\), and the maximal tensile strength \(\mathcal{S}_{\rm max}\) of the aggregate (see Hoang 2020, for further details). However, the tensile strength of porous dust aggregates is not well constrained and may be lower by several orders of magnitude compared to solid bodies because individual substructures are not fully connected. In Greenberg et al. (1995) a means to estimate the tensile strength of aggregates was provided by evaluating \[\mathcal{S}_{\rm max}=\frac{3}{2}\left(\mathcal{N}_{\rm con}\right)\frac{ \phi E}{a_{\rm mon}h} \tag{28}\] (see also Li & Greenberg, 1997). Here, \(E\) is the binding energy, \(h\) is related to the overlap of monomers, \(\left<N_{\rm con}\right>\) is the average number of connections between all monomers, and \(a_{\rm mon}\) is the monomer size of a monodisperse aggregate. Complementary N-body simulations by Seizinger et al. (2013) suggest that the tensile strength is not directly related to the initial volume filling factor \(\phi\) as assumed e.g. by Greenberg et al. (1995) or Blum et al. (2006). Instead, the tensile strength is \(\mathcal{S}_{\rm max}\propto\phi^{1.6}\) or \(\mathcal{S}_{\rm max}\propto\phi^{1.9}\), respectively, for different quartz BA, BAM, and hexagonal aggregates. Comparable results are presented by Tatsuuma et al. 
(2019) where the relation \(\mathcal{S}_{\rm max}\propto\phi^{1.8}\) was suggested in particular for icy and quartz BCCA aggregates and \(\mathcal{S}_{\rm max}\propto\phi^{2/(1-\omega_{\rm S})}\) for arbitrary grain shapes with fractal dimension \(D_{\rm f}\). We emphasize that in these simulations the aggregates got merely stretched or compressed, respectively, upon breaking, but do not experience any effects associated with rotation at all. ## 6 Numerical setup We implement the physical effects as outlined above into a C++ code that combines the physics of Seizinger et al. (2012, 2013) with the code presented in RMK22 for the growth of aggregates. The time evolution of the net force acting on each monomer, \[m_{\rm mon,i}\frac{{\rm d}\mathbf{v}_{\rm i}}{{\rm d}t}=\mathbf{F}_{\rm cent,i}+\mathbf{F}_{\rm cor,i}+\mathbf{F}_{\rm eul,i}+\sum_{{\rm j}=1,{\rm j}\neq{\rm i}}^{N_{\rm mon}}\left(\mathbf{F}_{\rm N,ij}+\mathbf{F}_{\rm S,ij}\right), \tag{29}\] and that of the corresponding torque \[I_{\rm mon,i}\frac{{\rm d}\omega_{\rm i}}{{\rm d}t}=\sum_{{\rm j}=1,{\rm j}\neq{\rm i}}^{N_{\rm mon}}\left(\mathbf{\Gamma}_{\rm S,ij}+\mathbf{\Gamma}_{\rm R,ij}+\mathbf{\Gamma}_{\rm T,ij}\right), \tag{30}\] is calculated with second order accuracy by a symplectic Leap-Frog integration scheme. Here, the quantity \(\mathbf{v}_{\rm i}\) is the velocity of an individual monomer within the simulation domain, and \(\omega_{\rm i}\) is the angular velocity caused by sliding, rolling, and twisting motions of its neighbors, whereas \(I_{\rm mon,i}=2/5\)\(m_{\rm mon,i}a_{\rm mon,i}^{2}\) is the moment of inertia of a particular monomer. In order to account for the dissipation of energy we assume that sliding, rolling, and twisting operate in the elastic limit until the corresponding displacement reaches some characteristic critical limit. The theoretical limits are \(\zeta_{\rm c}=r_{0}(1-\nu)/(16\pi)\) for sliding and \(\phi_{\rm c}=1/(16\pi)\) for twisting (compare Eq. 14 and Eq. 18). However, the critical limit of rolling \(\xi_{\rm c}\) (see Eq. 16) is still highly debated. For silicate monomers of different sizes this limit is within the range of \(\xi_{\rm c}\in[0.2\,{\rm nm},3.2\,{\rm nm}]\) (Dominik & Tielens, 1997; Heim et al., 1999; Paszun & Dominik, 2008). In this study we apply the conservative value of \(\xi_{\rm c}=0.2\) nm for the entire ensemble of aggregates independent of size and material. When the displacements because of the relative motion of monomers in contact exceed their critical values, energy is dissipated and our code modifies the contact pointers \(\mathbf{n}_{\rm i}\) and \(\mathbf{n}_{\rm j}\) to ensure that \(|\mathbf{\zeta}|=\zeta_{\rm c}\), \(|\mathbf{\phi}|=\phi_{\rm c}\), and \(|\mathbf{\xi}|=\xi_{\rm c}\), respectively (see Dominik & Nubold, 2002; Wada et al., 2007, for details). Individual monomers operate on a characteristic time scale that may be written as (Wada et al., 2007) \[t_{\rm dis,i}=\frac{1}{6^{2/3}}\,\sqrt{\frac{m_{\rm mon,i}r_{0}^{2}}{\pi\,\gamma R^{2}}}\,. \tag{31}\] Furthermore, it is not allowed for a monomer to move a distance larger than its own radius, limiting the time step further by \(t_{\rm rot,i}=a_{\rm mon,i}/v_{\rm i}\). The final time step for our Leap-Frog integration scheme is then the minimum of these characteristic times \(\Delta t=0.02\,\min\left(t_{\rm dis,i},t_{\rm rot,i}\right)\) of all the monomers as well as monomer connections within the simulation domain. The factor of \(\Delta t\) is a little smaller than the one suggested by Wada et al. 
(2007) that guarantees conservation of energy below an error of \(10^{-3}\). However, monomers may roll along the surface of the rapidly rotating aggregate and choosing a smaller time step allows for a more accurate spatial resolution of such displacements of monomers within the aggregate. The rotational damping acts usually on short timescales \(\tau_{\rm drag}\) in the order of days up to several hundred years compared to typical astronomical processes in the ISM (see e.g. Weingartner & Draine 2003; Tazaki et al. 2017; Hoang 2022). A computational burden arises because the integration time step \(\Delta t\) is only a few ns. Consequently, some simulations may take up to \(10^{18}\) time steps to terminate. Hence, a much smaller drag time \(\tau_{\rm drag}\) in the order of hours instead of years is selected to reduce the total run-time of the code. However, this does not impact the result of the rotational disruption simulations as long as the change in centrifugal forces per time step remains much smaller than the displacement processes for individual monomers to reach a new equilibrium position within the rotating aggregate. Even for a \(\tau_{\rm drag}\) of a few hours it is still guaranteed that centrifugal forces are the dominant cause of grain disruption because the spin-up remains slow enough that the shear within the aggregate by Euler forces remains negligible. In detail, a \(\tau_{\rm drag}\) is selected such that the aggregate reaches an angular velocity where the destruction is guaranteed within a few hours of simulation time but the centrifugal force remains always the dominant force exerted on each monomer i.e. \(F_{\rm eul,i}\ll F_{\rm cent,i}\). The simulation setup starts at \(\omega_{\rm agg}=0\) rad s\({}^{-1}\) and follows the curve of Eq. 24 up to a given terminal angular velocity \(\omega_{\rm RAT}\) by default. Alternatively the code terminates for the condition \(N_{\rm con}=1\) i.e. only two connected monomers remain within the simulation domain. For detecting monomer collisions with variable sizes and the tracking of contact breaking events we use a loose octree data structure (see e.g. Ulrich 2000; Raschdorf & Kolonko 2009, for further details) in order to optimize the runtime of the code even more. A new contact is established when two moving monomers touch each other, which means \(\delta\geq 0\). When individual monomers or even entire sub-structures of the initial aggregate become unconnected we only track the evolution and spin-up process of the most massive remaining fragment while smaller fragments are allowed to leave the simulation domain. After each time step the remaining connected fragments are identified with a recursive flood fill algorithm on an undirected graph that represents the monomer connection relations. For every thousandth time step, the affiliation of monomers to a certain fragment, the number of monomers \(N_{\rm mono}\) remaining within the simulation domain and the monomer positions \(\mathbf{X}_{\rm i}\) as well as the forces and connections in between monomers are recorded. In order to quantify the total stress in between all connected monomers within the largest fragment we introduce the total stress \[\sigma_{\Sigma}=\sum_{\rm i=1}^{N_{\rm mon}}\sum_{\rm j>i}^{N_{\rm mon}}\left[\frac{F_{\rm N,ij}}{\pi r_{\rm ij}^{2}}\right], \tag{32}\] where \(F_{\rm N,ij}=0\) for non-connected monomers. 
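A compact Python sketch of these two bookkeeping steps, the flood-fill identification of connected fragments and the total stress of Eq. (32), is given below. The breadth-first traversal is one possible realization of the recursive flood fill; the toy contact list and force values are placeholders for demonstration only.

```python
from collections import defaultdict, deque
import numpy as np

def connected_fragments(n_mon, contacts):
    """Return the sets of monomer indices that form connected fragments,
    given the currently intact contacts as (i, j) index pairs."""
    graph = defaultdict(set)
    for i, j in contacts:
        graph[i].add(j)
        graph[j].add(i)
    unvisited, fragments = set(range(n_mon)), []
    while unvisited:
        seed = unvisited.pop()
        frag, queue = {seed}, deque([seed])
        while queue:
            k = queue.popleft()
            for m in graph[k]:
                if m in unvisited:
                    unvisited.remove(m)
                    frag.add(m)
                    queue.append(m)
        fragments.append(frag)
    return fragments

def total_stress(contacts, F_N, r_contact, members):
    """Total stress of Eq. (32): normal force per contact area, summed over
    all intact contacts whose monomers belong to the given fragment."""
    return sum(F / (np.pi * r**2)
               for (i, j), F, r in zip(contacts, F_N, r_contact)
               if i in members and j in members)

# Toy example: five monomers, one contact already broken -> two fragments
contacts = [(0, 1), (1, 2), (3, 4)]
frags = connected_fragments(5, contacts)
largest = max(frags, key=len)
print(frags, total_stress(contacts, [1e-9, -2e-10, 5e-10], [1e-9] * 3, largest))
```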
## 7 Results and discussion We perform numerical N-body simulations for each individual pre-calculated grain aggregate with our numerical N-body setup upon rotational disruption. All grains are disrupted at an angular velocity of \(\omega_{\rm disr}\) before the overall spin-up process reaches the terminal angular velocity of \(\omega_{\rm RAT}=3\cdot 10^{10}\) rad s\({}^{-1}\). We emphasize once again that the exact values of \(\omega_{\rm RAT}\) is of minor relevance for our N-body simulations as long as the disruption of each aggregate is guaranteed and the centrifugal force and the Euler force follow strictly the relation \(F_{\rm eul,i}\ll F_{\rm cent,i}\) for the entire spin-up process as governed by Eq. 24 and Eq. 26, respectively. ### Time evolution of the rotational disruption process A typical simulation result for a q-S BAM2 grain with an effective radius of \(a_{\rm eff}=350\) nm is shown in Fig. 4. The evolution of the mass \(M(\omega_{\rm agg})\) during the spin up process remains constant up to \(\omega_{\rm agg}\approx 3\cdot 10^{8}\) rad s\({}^{-1}\). Once the grain spins even faster, contacts break and individual monomers as well as smaller fragments start to break from the aggregates surface. Simultaneously, monomers wander outwards driven by centrifugal forces. For a small period the total number connections exceeds even the initial value and drops then steadily. This features is most pronounced for BAM2 grains. Eventually, the simulations reach the condition of \(N_{\rm con}=1\) and terminate. The breaking of contacts and the subsequent mass loss is continuous in nature while the angular velocity \(\omega_{\rm agg}\) increases by roughly one order of magnitude. To designate a characteristic angular velocity \(\omega_{\rm disr}\) of rotational disruption based on mass loss or broken connections would be arbitrary. In Fig. 4 we show also the time evolution of the total stress \(\sigma_{\Sigma}(\omega_{\rm agg})\) of each aggregate. In contrast to the mass loss and the braking of contacts the stress \(\sigma_{\Sigma}(\omega_{\rm agg})\) rises steadily but drops then abruptly. Hence, we define this unique feature in the time evolution of each aggregate to be associated with the angular velocity of \(\omega_{\rm disr}\) of rotational disruption. The value of \(\omega_{\rm disr}\) is close to but not identical with a mass loss of 25 %. ### The fragmentation of rotating grains In Fig. 5 we show snapshots of a typical spin-up process and rotational disruption event for one representative grain. The grain is identical to the BAM2 with \(a_{\rm eff}=500\) nm depicted to the bottom right of Fig. 1. Once the grain has spun up to an angular velocity of \(\omega_{\rm agg}/\omega_{\rm disr}=0.1\) the aggregate starts to experience a stretching force and first connections break. However, all monomers are still connected to the same aggregate. Compared to its original configuration as depicted in Fig. 1 the aggregate changed its shape already, because monomers are rearranged within the aggregate by means of rolling. The aggregate fragments for \(\omega_{\rm agg}/\omega_{\rm disr}=0.8\) and new connections are established as monomers move into a new equilibrium position driven by pulling forces and pushing forces between connected neighbours. We report that in this phase small disconnected fragments may rarely reconnect to the most massive fragment. For an angular velocity of \(\omega_{\rm agg}/\omega_{\rm disr}=1.0\) larger fragments are separated from the most massive cluster. 
The remaining cluster goes through a phase of relaxation, where its total stress \(\sigma_{\Sigma}\) declines while the mass loss is still an ongoing process. We note that if the spin-up process of the grain were to stop at an angular velocity \(\omega_{\rm agg}/\omega_{\rm disr}\leq 1.0\) the aggregate would still fragment, but the mass loss would eventually stop as the remaining most massive fragment reaches a new equilibrium configuration. Dust destruction processes such as shattering efficiently redistribute larger grain sizes towards smaller ones (Dwek & Scalo 1980; Tielens et al. 1994). In modelling this process it is usually assumed that the new size distribution of the fragments follows a power-law (Hellyer 1970; Hirashita & Yan 2009; Kirchschlager et al. 2019). A similar power-law is assumed in models of the grain redistribution by means of RATD (e.g. Giang et al. 2020). However, the fragmentation process of rapidly rotating porous dust aggregates remains poorly constrained. In fact, our N-body simulation results so far indicate that the considered BAM aggregates almost completely break down into their individual building blocks. Hence, the resulting size distribution of the fragments is expected to be almost identical to the initial size distribution of the monomers. However, we note that our numerical setup does not track smaller fragments once they become separated from the most massive fragment. Furthermore, each individual fragment will experience its own individual spin-up and drag (see Sect. 5). Smaller fragments may also eventually become too small for RATs to remain effective in a further spin-up to the disruption point (Lazarian & Hoang 2007a). Such effects need to be taken into account in forthcoming studies to answer questions about the resulting size distribution of an ensemble of rotationally disrupted grains conclusively.

### Rotational deformation of grain shapes

In Fig. 6 we present the time evolution of the shapes of a-C grains with different sizes up to the breaking point. Initially, for \(\omega_{\rm agg}/\omega_{\rm disr}=0.0\), the ensemble of grains with an effective radius of \(a_{\rm eff}=200\) nm consists of oblate and prolate shapes in almost equal parts. Larger grains with \(a_{\rm eff}=350\) nm and \(a_{\rm eff}=500\) nm are slightly more prolate, with most of the axis ratios being \(c/b<1.7\). At the breaking point, i.e. \(\omega_{\rm agg}/\omega_{\rm disr}=1.0\), the a-C grains become deformed, where oblate shapes are the most likely outcome with an axis ratio of up to \(b/a\leq 5.0\) for BAM grains. The exception is the ensemble of small BA grains with \(a_{\rm eff}=200\) nm, where a prolate shape seems to be the more favourable configuration. We note that BA grains reach smaller axis ratios because they break more easily with a much shorter deformation phase compared to BAM1 and BAM2 grains. We report similar trends for the deformation of grains with q-S and co-S materials, respectively, where oblate grains are the most likely shape at rotational disruption. Quantifying the grain shape by the fractal dimension \(D_{\rm f}\) during the spin-up process reveals no clear trend. As depicted in Fig. 2 the fractal dimension is not well correlated with the grain shapes of the initial ensemble. Up to the breaking point the variation of \(D_{\rm f}\) becomes even larger. The same holds for the porosity \(\mathcal{P}\).
At the beginning of the spin-up process \(\mathcal{P}\) starts to increase slightly for BAM1 and BAM2 grains while the BA ones break without an increase in \(\mathcal{P}\). Close to the breaking point the porosity distribution has a large variation and becomes virtually identical for BAM1 and BAM2 grains. Eventually, the analysis of the fractal dimension \(D_{\rm f}\) as well as the porosity \(\mathcal{P}\) no longer applies because the calculation of these quantities fails as the grain aggregates break down into their individual monomers (see Sect. 3). The shape and porosity of interstellar dust is still a matter of debate. For example, BA grain growth processes favor roundish grain aggregates with a fractal dimension of about \(D_{\rm f}=2.0\) where the principal axes are \(a\approx b\approx c\). Guillet et al. (2018) developed a grain model based on spheroids of amorphous silicate and amorphous carbon to reproduce both starlight polarization and polarized sub-millimeter emission. The best fit model suggests prolate grains with an axis ratio of \(b/a=1/3\) and a porosity of \(\mathcal{P}=0.2\). More recently, Draine & Hensley (2021b) suggest prolate grains with an axis ratio of \(b/a=0.6\) or oblate grains with \(b/a=1.5\) with a porosity of about \(\mathcal{P}=0.4\). Comparing the initial porosity of our ensemble grown by BAM (see Fig. 2) reveals that only the BAM2 ensemble would match this porosity constraint, with values \(\mathcal{P}=0.4-0.5\). Elongated grains may be the result of hit and stick processes of BCPA with a preferential direction, e.g. for magnetohydrodynamic turbulence (Yan & Lazarian 2003) or for grain aggregates aligned with the magnetic field direction (Hoang 2022). However, the latter effect requires rapid grain rotation. Subsequently, the spin-up process would increase the relative velocity between the surface of the rotating aggregate and impinging monomers, which may potentially destroy the entire aggregate in a catastrophic disruption event (Benz & Asphaug 1999; Morris & Burchell 2017; Schwartz et al. 2018).

Figure 4: Evolution of the mass \(M(\omega_{\rm agg})\) (top panel), number of connections \(N_{\rm con}(\omega_{\rm agg})\) (middle panel), and total stress \(\sigma_{\Sigma}(\omega_{\rm agg})\) (bottom panel) within the largest fragment dependent on the increasing angular velocity \(\omega_{\rm agg}\). The blue lines represent the exemplary ensemble of q-S BAM2 grains with \(a_{\rm eff}=350\) nm. An arrow points to the peak value of the number of connections \(N_{\rm con}\). Red dots indicate the characteristic angular velocity \(\omega_{\rm agg}\) up to which an individual aggregate has lost 25 % of its initial mass or of its initial number of connections, or reached its peak stress.

What we find with our N-body simulations is that accelerated rotation deforms initially more roundish aggregates preferentially into oblate shapes. For RATs the axis ratio of the grains is linked to the radiation field via the maximal angular velocity \(\omega_{\rm RAT}\). The more extreme cases, as depicted in Fig. 6 with \(b/a=4-5\), would require the grains to be in close vicinity of a supernova or active galactic nucleus (Hoang et al. 2019; Giang et al.
2020; Giang & Hoang 2021).

Figure 5: Three exemplary snapshots of a BAM2 grain aggregate with \(a_{\rm eff}=500\) nm from a particular rotational disruption simulation with the corresponding number of connections \(N_{\rm con}\) for the angular velocities \(\omega/\omega_{\rm disr}=0.1\) (top row), \(\omega/\omega_{\rm disr}=0.8\) (middle row), and \(\omega/\omega_{\rm disr}=1.0\) (bottom row). We emphasize that the exemplary grain is identical to the one in the lower right corner of Fig. 1. _Left column:_ Color coded is the affiliation of the distinct connected fragments. Smaller fragments are arbitrarily colored whereas the most massive fragment is always depicted in blue. _Right column:_ The monomers are colored according to the largest magnitude of the normal force \(F_{\rm N}\) exerted by their connected neighbours, where \(F_{\rm N}>0\) (green) represents pushing forces while \(F_{\rm N}<0\) (red) represents pulling forces.

We emphasize that the torques \(\Gamma_{\rm RAT}\) and \(\Gamma_{\rm MET}\) (see Sect. 5) are tightly connected to the grain shape (Lazarian & Hoang 2007a; Das & Weingartner 2016, RMK22). Consequently, the terminal angular velocity \(\omega_{\rm RAT}\) would be marginally increased during the deformation phase, accelerating the grain rotation even more (see Eq. 26). However, this effect would be continuous over a long time span and would not considerably impact the internal grain dynamics as far as the final value of \(\omega_{\rm disr}\) is concerned. More severe is the change in \(\omega_{\rm RAT}\) during the fragmentation phase. Here, the RAT would decrease as the aggregate fragments, and subsequently the rotation would slow down and the grain stabilizes by reaching a new equilibrium configuration. However, considering a dynamical spin-up process in our N-body simulation would require calculating not only the torque \(\Gamma_{\rm RAT}\) but also the rotational drag timescale \(\tau_{\rm drag}\) at each time step by means of time-consuming approximate numerical methods (Draine & Flatau 2013, RMK22). Potentially, both quantities \(\Gamma_{\rm RAT}\) and \(\tau_{\rm drag}\) may be parameterized based on the shape and material parameters of each individual BAM grain. For now, a dynamical spin-up is beyond the scope of this paper.

### Average rotational disruption of the grain ensemble

In Fig. 7 we present the characteristic angular velocity \(\omega_{\rm disr}\) for the entire set of simulation results. The magnitude of \(\omega_{\rm disr}\) is roughly in the range of \(5\cdot 10^{8}-5\cdot 10^{9}\) rad s\({}^{-1}\) for the different materials and grain sizes. This result agrees well with the predictive model presented in Hoang et al. (2019). However, the exact value of \(\omega_{\rm disr}\) depends on the exact material properties and the internal structure of the initial grain. While rapidly rotating solid bodies break into a few fragments driven by centrifugal forces, a porous material may fragment at a lower angular velocity and into a larger number of pieces. The maximal tensile strength \(\mathcal{S}_{\rm max}\) is usually utilized to quantify the response of a material during stretching. However, we find that utilizing the maximal tensile strength \(\mathcal{S}_{\rm max}\) as introduced in Eq. 28 to calculate the critical angular velocity \(\omega_{\rm S}\) with Eq. 27 does not reproduce the average rotational disruption velocity \(\omega_{\rm disr}\) resulting from our N-body simulations.
We argue that \(\mathcal{S}_{\rm max}\) is insufficient to describe the dynamical behavior of rotating aggregates because in a stretching aggregate each monomer experiences only the local forces between its connected neighbours. In a rotating aggregate, however, each individual monomer also experiences the centrifugal force. The maximal tensile strength \(\mathcal{S}_{\rm max}\) is determined by stretching granular materials along a distinct axis, whereas the centrifugal force is radial with respect to the rotation axis. Furthermore, considering only stretching processes acting on an aggregate does not lead to a large displacement of monomers compared to the aggregate's scale. It is important to note that our N-body simulations reveal that monomers are rolling outwards within the grain aggregates. This movement of \(N_{\rm mono}\) monomers can potentially establish up to \(N_{\rm mono}(N_{\rm mono}-1)\approx N_{\rm mono}^{2}\) new connections. Naturally, some monomers are already connected and only a small fraction of all unconnected monomers is close enough within the aggregate to newly connect. Hence, the relation cannot be quadratic but must have a much smaller exponent \(\alpha\ll 2\). In order to match our data we suggest extending the definition of Eq. 28 by the number of monomers \(N_{\rm mono}\). By utilizing Eq. 27, the characteristic angular velocity of rotational disruption for a polydisperse grain aggregate then reads \[\omega_{\rm disr}=\frac{A}{a_{\rm eff}}\,\sqrt{\frac{\gamma}{\rho_{\rm mat}\, \langle a_{\rm mono}\rangle}}\Big{(}\,\langle N_{\rm con}\rangle\,N_{\rm mono} \Phi\,\Big{)}^{\alpha} \tag{33}\] where \(\langle a_{\rm mono}\rangle\) is the average monomer radius and \(A\) and \(\alpha\), respectively, are fit parameters3 chosen to match the simulation results. Assigning a separate exponent to each of the individual quantities \(\langle N_{\rm con}\rangle\), \(N_{\rm mono}\), and \(\Phi\), respectively, does not improve the accuracy of the fit. Best fit results of \(\omega_{\rm disr}\) are plotted in Fig. 7. The fit matches the ensemble averages of the different considered grain sizes very well.

Footnote 3: Note that \(\langle N_{\rm con}\rangle=N_{\rm con}/N_{\rm mono}\) and thus Eq. 33 may also be written in an equivalent form with a factor \((\langle N_{\rm con}\rangle\,N_{\rm mono}\Phi\,)^{\alpha}=(N_{\rm con}\,\Phi\,)^ {\alpha}\)

\begin{table} \begin{tabular}{|l|l l l|l l l|l l l|} \hline & \multicolumn{3}{c|}{a-C} & \multicolumn{3}{c|}{q-S} & \multicolumn{3}{c|}{co-S} \\ \cline{2-10} & BA & BAM1 & BAM2 & BA & BAM1 & BAM2 & BA & BAM1 & BAM2 \\ \cline{2-10} A & 1.97 & 2.28 & 2.25 & 2.13 & 2.18 & 2.22 & 1.50 & 1.65 & 1.57 \\ \(\alpha\) & 0.10 & 0.12 & 0.13 & 0.10 & 0.14 & 0.14 & 0.10 & 0.14 & 0.15 \\ \hline \end{tabular} \end{table} Table 3: Best fit parameters of \(\omega_{\rm disr}\) based on our N-body disruption simulation results.

Figure 6: Evolution of grain shapes for a-C BA (red), BAM1 (green), and BAM2 (blue) grains with an effective radius of \(a_{\rm eff}=200\) nm (left panel), \(a_{\rm eff}=350\) nm (middle panel), and \(a_{\rm eff}=500\) nm (right panel), respectively. Dots represent the initial grain shape, i.e. \(\omega_{\rm agg}/\omega_{\rm disr}=0\), whereas crosses are the grain shape at disruption for \(\omega_{\rm agg}/\omega_{\rm disr}=1\). We note the clear tendency of the porous grains to become oblate in shape with increasing \(\omega_{\rm agg}\).
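To illustrate how the two fit parameters \(A\) and \(\alpha\) of Eq. 33 might be obtained from the per-aggregate simulation results, a least-squares fit could be set up along the following lines. This is only a sketch: the data arrays (effective radii, surface energies, densities, monomer statistics, and the measured disruption velocities) are placeholders for the actual ensemble data, not the original fitting script.

```python
import numpy as np
from scipy.optimize import curve_fit

def omega_disr_model(data, A, alpha):
    """Eq. 33: omega_disr = (A / a_eff) * sqrt(gamma / (rho_mat * <a_mono>))
    * (<N_con> * N_mono * Phi)**alpha."""
    a_eff, gamma, rho_mat, a_mono_mean, n_con_mean, n_mono, phi = data
    return (A / a_eff) * np.sqrt(gamma / (rho_mat * a_mono_mean)) \
        * (n_con_mean * n_mono * phi) ** alpha

# data = (a_eff, gamma, rho_mat, a_mono_mean, n_con_mean, n_mono, phi), where
# each entry is an array with one value per simulated aggregate, and
# omega_disr_sim holds the disruption velocities measured from the simulations:
# (A, alpha), _ = curve_fit(omega_disr_model, data, omega_disr_sim, p0=(2.0, 0.1))
```

Repeating such a fit separately for each material and BAM class would then yield a set of best-fit values analogous to Table 3.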
In Fig. 8 we show a comparison of our best fit of \(\omega_{\rm disr}\) for co-S BAM2 grains with that calculated with different parameterizations of the tensile strength \(\mathcal{S}_{\rm max}\) given in the literature (see also Sect. 5). These parameterizations of \(\mathcal{S}_{\rm max}\) may not necessarily be evaluated with the parameters provided by our grain models. Hence, we re-scale the resulting curves to match our results of \(\omega_{\rm disr}\) at an effective radius of \(a_{\rm eff}=100\) nm. The comparison reveals that previous attempts of modelling \(\omega_{\rm disr}\) by a volume filling factor \(\phi\) dependent tensile strength \(\mathcal{S}_{\rm max}\) cannot reproduce the asymptotic behavior of our simulation results towards larger grain radii \(a_{\rm eff}\).

Finally, we emphasize that the presented results of rotational disruption are calculated for aggregates of carbonaceous and silicate monomers loosely connected by van der Waals forces. However, materials other than carbon or silicates may form much stronger bonds (Dominik & Tielens 1997). For instance, pure iron in the form of small pellets may be present in the interstellar dust, which would create metallic bonds between monomers (Dominik & Tielens 1997; Draine & Hensley 2021a). The same holds for ices covering the surface of the grain aggregate, where dipole-dipole interactions may increase the resistance against rotational disruption. Furthermore, high impact collisions of monomers or compressive stress acting on the aggregate may lead to sintering, an effect where the monomers fuse at the contact surface. Subsequently, sintering leads to a neck between monomers (Maeno & Ebinuma 1983; Blackford 2007; Sirono & Ueno 2017), strengthening the connection between monomers, but at the same time it makes the aggregate as a whole more brittle, because a neck would not allow for rolling motions between monomers. Moreover, an effect that may weaken monomer connections is dust heating. A higher dust temperature decreases the surface energy \(\gamma\) by up to one order of magnitude (Bogdan et al. 2020). Any luminous environment with a radiation field strong enough to drive grain rotation up to \(\approx 10^{9}\) rad s\({}^{-1}\) would inevitably heat the dust grains up to several hundred Kelvin. Thus \(\gamma\) and subsequently \(\omega_{\rm disr}\) would decrease. Altogether, such effects need to be taken into consideration in forthcoming studies in order to complement our numerical setup of the rotational disruption of porous dust aggregates.

Figure 7: The angular velocity of disruption \(\omega_{\rm disr}\) for the ensembles of co-S (top panel), q-S (middle panel), and a-C (bottom panel) grain materials as a function of the effective radius \(a_{\rm eff}\). Color coded are the BA (red), BAM1 (green), and BAM2 (blue) grains. Vertical bars are the range between minimal and maximal values of \(\omega_{\rm disr}\) resulting from our N-body disruption simulations while solid lines represent the best fit model.

Figure 8: The same as Fig. 7 but only for co-S BAM2 grains (blue) in comparison with the predictions of \(\omega_{\rm disr}\) based on the tensile strength models presented in Greenberg et al. (1995) (G95), Seizinger et al. (2013b) (S13), and Tatsuuma et al. (2019) (T19), respectively (gray). The latter are re-scaled to match our simulation results for \(a_{\rm eff}=100\) nm.
### Impact of critical rolling displacement and viscous damping

The critical rolling displacement \(\xi_{\rm crit}\) and the viscous damping time scale \(T_{\rm vis}\) are the least constrained parameters in our N-body simulations. Laboratory data indicate a huge variation in these parameters of about one order of magnitude for silicates (Dominik & Tielens 1995; Heim et al. 1999; Krijt et al. 2013). In order to quantify the impact of \(\xi_{\rm crit}\) and \(T_{\rm vis}\), respectively, on rotational disruption, we repeat our N-body simulations for a subset of the pre-calculated grain aggregates, varying \(\xi_{\rm crit}\) in the range 0.2 nm to 6.4 nm and \(T_{\rm vis}\) from 1 ps to 9 ps. In Fig. 9 we present the exemplary results of the parameter test for a-C BA and BAM2 grains. Variations in the rolling displacement \(\xi_{\rm crit}\) show little effect. The resulting angular velocity of rotational disruption \(\omega_{\rm disr}\) shows small fluctuations but no clear trend with an increase in \(\xi_{\rm crit}\). Hence, we attribute the fluctuations to numerical effects (see also appendix A). In contrast to \(\xi_{\rm crit}\), an increase in \(T_{\rm vis}\) leads to a clear increase of \(\omega_{\rm disr}\). This effect is most evident for BAM2 grains, with \(\omega_{\rm disr}\) being about 11 % higher for the largest applied viscous time of \(T_{\rm vis}=9\) ps. Similar trends can be reported for the q-S and co-S materials. Further improvements in the predictive accuracy of our rotational disruption simulations cannot be achieved without forthcoming laboratory data, especially for the viscoelastic damping of oscillations within carbonaceous aggregates.

Figure 9: The same as Fig. 7 but only for a-C BA (dashed lines) and BAM2 (dotted lines) grains. The left panel shows the impact of the critical rolling displacement \(\xi_{\rm crit}\in[0.2\) nm, \(6.4\) nm] on \(\omega_{\rm disr}\) while the viscous damping time \(T_{\rm vis}\) remains constant. The right panel is for a constant \(\xi_{\rm crit}\) but with \(T_{\rm vis}\in[1\) ps, \(9\) ps]. Red lines represent the default parameters of our N-body disruption simulations.

## 8 Summary

We aimed to study the rotational disruption of interstellar dust aggregate analogs. An ensemble of porous dust aggregates built by means of ballistic aggregation and migration (BAM) is pre-calculated, where for BA, BAM1, and BAM2 grains each monomer has at least one, two, or three connections, respectively, with its neighbors. Here, we modified the original BAM algorithm presented in Shen et al. (2008) to work with variable monomer sizes. We estimated the composition of the grain aggregates based on the abundance of elements in the ISM to approximate their mechanical properties. Numerical three-dimensional N-body simulations are performed to determine the characteristic angular velocity of rotational disruption \(\omega_{\rm disr}\) for each aggregate individually. The numerical setup is based on the work of Dominik & Tielens (1997), Wada et al. (2007), Seizinger et al. (2012), and RMK22, respectively. We modified their setup by introducing additional forces associated with the accelerated rotation of an aggregate. A subsequent analysis of the disruption events allows us to describe the average rotational disruption of porous grain ensembles. The findings of this study are summarized as follows:

* Compared to the original BAM algorithm considering only a single monomer size, we report that grain growth by BAM with polydisperse monomers results in aggregates that are more compact and less porous, as smaller monomers tend to migrate deeper into the aggregate. For the same reason the porosity \(\mathcal{P}\) stagnates with an increasing effective radius \(a_{\rm eff}\) with values up to \(\mathcal{P}=0.4-0.85\) for BA whereas BAM1
and BAM2 grains are less porous with \(\mathcal{P}=0.35-0.7\) and \(\mathcal{P}=0.35-0.5\), respectively.
* Our simulation results reveal that rotating porous dust aggregates are not disrupted by a single abrupt event characteristic of a brittle material but rather by a continuous process where aggregates experience a continuous mass loss. Subsequently, the initial aggregate ultimately breaks down into fragments not larger than a few monomers.
* The initial distribution of the pre-calculated polydisperse BAM grains contains oblate and prolate shapes in almost equal parts. However, under accelerated rotation the grain aggregates enter a phase of deformation and, subsequently, the grain shapes are finally preferentially redistributed towards oblate shapes.
* We introduce the total stress \(\sigma_{\Sigma}\) as a measure for the internal aggregate dynamics and time evolution. We report that \(\sigma_{\Sigma}\) increases up to a peak value characteristic for individual aggregates while the rotation accelerates and the aggregate subsequently starts to lose mass. This peak roughly coincides with a mass loss of 25 % with respect to the initial grain mass. For higher angular velocities \(\sigma_{\Sigma}\) drops sharply while the mass loss continues. We utilize this peak in \(\sigma_{\Sigma}\) to define the breaking point on a physical basis and finally to determine the characteristic angular velocity \(\omega_{\mathrm{disr}}\) of rotational disruption.
* In the deformation phase of BAM aggregates individual monomers move along the grain surface driven by the centrifugal force, and new connections may be established. Consequently, the additional connections stabilize the BAM aggregates against the disruptive centrifugal forces acting on the monomers.
* Our N-body simulations reveal that the angular velocity \(\omega_{\mathrm{disr}}\) reaches an asymptotic limit towards larger grain sizes of \(a_{\mathrm{eff}}\gtrsim 300\,\mathrm{nm}\). This finding is in contrast to previous attempts to describe the rotational disruption of porous aggregates analytically based on the maximal tensile strength, where \(\omega_{\mathrm{disr}}\) would continuously decrease for an increasing \(a_{\mathrm{eff}}\).

## Appendix A Connection breaking test

In this section we provide a test scenario for the accuracy of our code. Due to the complexity of N-body simulations it is not feasible to derive an analytical solution for an entire aggregate. However, the problem can be solved exactly for two connected monomers of equal size, i.e. \(a_{\rm mono,i}=a_{\rm mono,j}\), where the reduced radius simply becomes \(R=a_{\rm mono,i}/2\). In this case the centrifugal force (see Eq. 19) may be written as \[F_{\rm cent,i}=-m_{\rm i}\omega_{\rm agg}^{2}a_{\rm mono,i} \tag{1}\] By using the criterion of a broken contact \(F_{\rm N,ij}=-F_{\rm C}\), the corresponding critical contact radius becomes \(r_{\rm C}=(1/6)^{2/3}r_{0}\). Putting \(r_{\rm C}\) into Eq.
12 allows us to solve for the exact angular velocity \[\omega_{\rm ref}=\sqrt{\frac{5\pi}{6}\frac{\gamma}{m_{\rm mono,i}}} \tag{2}\] at which the two equally sized monomers become separated by centrifugal forces. We use this quantity as a reference to evaluate the accuracy of our code. A total of 80 benchmark runs are performed with randomly selected monomer radii \(a_{\rm mono,i}\in[10\ {\rm nm},100\ {\rm nm}]\). In Fig. 1 we compare the resulting angular velocity \(\omega_{\rm disr}\) of our numerical setup introduced in Sect. 6 with Eq. 2. The resulting error \((\omega_{\rm ref}-\omega_{\rm disr})/\omega_{\rm ref}\) is below 1.8 %, with a trend of underpredicting \(\omega_{\rm ref}\). An error of 1.8 % is comparable to the range we observe for the parameter test of the rolling displacement \(\xi_{\rm crit}\) as presented in Fig. 9. Hence, we assume a numerical accuracy of about 1.8 % for our numerical N-body simulations. Consequently, the uncertainties in our reported results of the angular velocity of rotational disruption \(\omega_{\rm disr}\) are most impacted by the lack of exact laboratory material parameters rather than by numerical errors.

###### Acknowledgements.

Special thanks go to Wilhelm Kley for numerous fruitful discussions about N-body simulations of aggregates. The authors thank Thiem Hoang, Cornelis P. Dullemond, and Bruce T. Draine for useful insights into the topic of dust composition and dynamics. S.R., P.N., and R.S.K. acknowledge financial support from the Heidelberg cluster of excellence (EXC 2181 - 390900948) "_STRUCTURES_: A unifying approach to emergent phenomena in the physical world, mathematics, and complex data", specifically via the exploratory project EP 4.4. S.R. and R.S.K. also acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) via the Collaborative Research Center (SFB 81, Project ID 138713538) "The Milky Way System" (subprojects A01, A06, B01, B02, and B08). We also thank the European Research Council for funding via the ERC synergy grant "_COGAL_ -- Understanding our Galactic ecosystem: From the disk of the Milky Way to the formation sites of stars and planets" (project ID 855130). The project made use of computing resources provided by _The LaInd_ through bwHEC and by DFG through grant INST 35/1134-1 FUGG. Data are in part stored at SDSS/Bd supported by the Ministry of Science, Research and the Arts and by DFG through grant INST 35/1314-1 FUGG.
2306.06683
To be a pro-vax or not, the COVID-19 vaccine conundrum on Twitter
The most surprising observation reported by the study in (arXiv:2208.13523), involving stance detection of COVID-19 vaccine related tweets during the first year of pandemic, is the presence of a significant number of users (~2 million) who posted tweets with both anti-vax and pro-vax stances. This is a sizable cohort even when the stance detection noise is considered. In this paper, we tried to get deeper understanding of this 'dual-stance' group. Out of this group, 60% of users have more pro-vax tweets than anti-vax tweets and 17% have the same number of tweets in both classes. The rest have more anti-vax tweets, and they were highly active in expressing concerns about mandate and safety of a fast-tracked vaccine, while also tweeted some updates about vaccine development. The leaning pro-vax group have opposite composition: more vaccine updates and some posts about concerns. It is important to note that vaccine concerns were not always genuine and had a large dose of misinformation. 43% of the balanced group have only tweeted one tweet of each type during our study period and are the less active participants in the vaccine discourse. Our temporal study also shows that the change-of-stance behaviour became really significant once the trial results of COVID-19 vaccine were announced to the public, and it appears as the change of stance towards pro-vax is a reaction to people changing their opinion towards anti-vax. Our study finished at Mar 23, 2021 when the conundrum was still going strong. The dilemma might be a reflection of the uncertain and stressful times, but it also highlights the importance of building public trust to combat prevalent misinformation.
Zainab Zaidi, Mengbin Ye, Shanika Karunasekera, Yoshihisa Kashima
2023-06-11T13:57:58Z
http://arxiv.org/abs/2306.06683v2
# To be a pro-vax or not, the COVID-19 vaccine conundrum on Twitter ###### Abstract The most surprising observation reported by the study in [1], involving stance detection of COVID-19 vaccine related tweets during the first year of pandemic, is the presence of a significant number of users (\(\sim\)2 million) who posted tweets with both anti-vax and pro-vax stances. This is a sizable cohort even when the stance detection noise is considered. In this paper, we tried to get deeper understanding of this _dual-stance_ group. Out of this group, 60% of users have more pro-vax tweets than anti-vax tweets and 17% have same number of tweets in both classes. The rest have more anti-vax tweets, and they were highly active in expressing concerns about mandate and safety of a fast-tracked vaccine, while also tweeted some updates about vaccine development. The leaning pro-vax group have opposite composition: more vaccine updates and some posts about concerns. It is important to note that vaccine concerns were not always genuine and had a large dose of misinformation. 43% of the balanced group have only tweeted one tweet of each type during our study period and are the less active participants in the vaccine discourse. Our temporal study also shows that the change-of-stance behaviour became really significant once the trial results of COVID-19 vaccine were announced to the public, and it appears as the change of stance towards pro-vax is a reaction to people changing their opinion towards anti-vax. Our study finished at Mar 23, 2021 when the conundrum was still going strong. The dilemma might be a reflection of the uncertain and stressful times, but it also highlights the importance of building public trust to combat prevalent misinformation. COVID-19 vaccinees, anti-vax, pro-vax, vaccine hesitancy ## I Introduction _work in pharma and I'm not getting it either. I know how to read journal articles and clinical trial data. The risk versus potential reward isn't worth it, especially with the unknowns of an RNA vaccine. But I'm still called a conspiracy theorist."_ _"As the virus gets worse, people get more nonchalant about it. It makes no sense. We have vaccines. There's a light at the end of the tunnel. Be safe for another few months and we'll be out of this. We can all be heroes just by being careful. Saving lives has never been easier!"_ The first message arguably expresses an anti-vaccination (anti-vax) stance, whereas the second message, a pro-vaccination (pro-vax) stance. One may think two people with opposing opinions sent out these tweets. In fact, they were posted by the same user, and it is not an isolated occurrence. There are numerous examples of a user tweeting pro- and anti-vax messages in the Twitter study [1]. This study used OpenAI's GPT based stance detection tool to classify \(75\) million COVID-19 vaccine related English tweets, from Mar 2020-Mar 2021, into anti-vax, pro-vax, or neutral stances [1]. It was found that a majority, close to 2/3rd, of Twitter users posting anti-vax content also tweeted in favour of vaccination [1]. These _dual-stance_ users are sizable and prolific. Dual-stance users constitute 22% of the unique users in our dataset, but contributed a majority of tweets in the study, 66% and 85% of the total pro-vax and anti-vax tweets, to be precise. Why do these active participants in the Twitter discussion send tweets that signal opposing stances? Were they in a state of _confusion_ about the new COVID-19 vaccine? 
The safety risks and mandate of fast-paced vaccine development and a lack of trust in authorities and pharmaceutical companies were among the major topics expressed in anti-vax tweets [1, 2, 3, 4]. Expressing concerns about an experimental drug is understandable. However, these users often shared misinformation as well. This mixture of genuine concerns and misinformation may be a reflection of the chaotic time, when the world was going through an unprecedented and uncertain period and people were more susceptible to misinformation and conspiracy theories. They may simply have engaged with any and all discussion, propagating any information they could get about the COVID-19 vaccine. We cannot fully understand the users' intentions with the information available in the Twitter data set under study, but we can infer some generalized collective trends. Specifically, we formulated the following research questions to understand the puzzling phenomenon of dual-stance tweeting:

* Are dual-stance users merely noise in stance detection? [1] concluded that, even when noise in stance detection is considered, there are simply too many dual-stance users who sent tweets of putatively opposing stances. Here, we provide a more principled treatment of the issue and conclude that dual-stance tweeting is not noise, but a robust phenomenon.
* What did dual-stance users tweet about in their pro- and anti-stances? Our analysis shows that _anti-leaning_ dual-stance users - those dual-stance users who sent more anti-vax than pro-vax tweets - were very active in expressing concerns about the vaccine mandate and the safety of the vaccine, but were also posting some vaccine updates. In contrast, _pro-leaning_ dual-stance users - those who sent more pro- than anti-vax tweets - have a mirror-image profile. They tweeted about vaccine updates, but also commented on the concerns. Many (43%) _balanced_ dual-stance users - those who sent the same number of pro- and anti-vax tweets - were not active, tweeting only one tweet of each stance during the year-long study period.
* Does the temporal dynamics of change-of-stance - whether users changed to a pro-stance or an anti-stance - yield some insight into dual-stance users? Because dual-stance users must have changed their stances at least once, if not more often, change-of-stance may provide some insight about why they did so. In particular, we explore the following questions: 1 - does their change of stance correlate with COVID-19 related events, and 2 - are there causal connections between changes to pro- and anti-vax stances? Our analysis suggests that stance changes occurred prominently when COVID-19 vaccine trial results were made public. Both time series of changes into pro-stance and into anti-stance show a high correlation with each other. Preliminary causal analysis indicates that changes into the pro-vax stance could possibly be a reaction to, or driven by, changes into the anti-vax stance.

A number of recent studies analysed tweets related to COVID-19 vaccines, mostly focusing on understanding the factors behind vaccine hesitancy. We discuss the related work in detail in Section II. None of the papers we know of discussed dual-stance users explicitly; however, a few have hinted at their presence [5, 6, 7, 8, 9]. In our previous paper [1], we provided some preliminary results of content analysis through topic modelling but were not able to go deeper into exploring this specific cohort of dual-stance users, which we address now in this paper.
The rest of the paper is organised as follows: Section II provides a summary of the relevant previous publications, Section III summarises the background information about the dataset and stance detection method used in this study and described in detail in [1]. It also summarises the preliminary results from [1]. Section IV discusses the user classification into anti-leaning, balanced, and pro-leaning groups. Section V deals with the topic modelling analysis for dual-stance users' tweets, Section VI presents the temporal dynamics of stance changes, and Section VII explores the role retweet and reply threads had in changing of stance. Finally, Section IX summarises our findings and concludes the paper. ## II Related Work There has been a substantial amount of work studying public behaviour towards COVID-19 vaccines since the beginning of the pandemic. These papers have looked at the issue from various angles, such as, discovering the leading factors behind vaccine hesitancy [2, 3, 4, 10, 11, 12, 13], correlation between political polarisation and vaccine stance [14], coordinated behaviour in propagation of misinformation [15], etc. They also have used a broad range of techniques to classify tweets as pro-vax or anti-vax, such as, unsupervised learning methods of sentiment analysis and topic modelling or supervised learning techniques where a subset of tweets, hashtags, or users are labelled by annotators and the labels are used to automatically detect stances for tweets or users. So far, we are unaware of any works which explicitly discuss the dual-stance cohort or highlight the phenomenon, but there are some publications which have touched briefly on their presence. [5] observed that the most retweeted users were only moderately polarized in the anti-vax vs pro-vax debate when they manually classified 7004 Italian tweets from the period Oct 2020-Jan 2021. Another study of Italian tweets [6] classified users as 'vaccine supporters', 'vaccine hesitant', and 'others' which have no clear stance. These users in 'others' were often observed to be communicating with the users of opposing stances and appear as bridge between the polarised communities [6]. The study in [7] of Italian, French, and German tweets for Nov 2020 to Nov 2021, separated hashtags into 'anti-vax' and 'pro-vax' and found a small group with hashtags of both and termed them as 'undecided'. [8] used a BERT-based stance detection tool, trained with a combination of 1,700 labelled tweets and publicly available labels, and detected stances of English tweets from 4 million distinct users over the duration Jan 2018-Mar 2021. They classified users as 'anti-vaxxer' or 'pro-vaxxer' if 70% of their tweets were anti-vax or pro-vax, respectively. [9] developed a framework of Vaccine Hesitancy Framings (VHF), which involved defining specific factors behind vaccine hesitancy, identification of tweets' stances towards these VHFs using linguistic features, and classifying users based on their stances into different profiles. 22% of the users were classified into the profile 'undecided' which have almost balanced stances (accept or reject) towards VHFs and 8% are 'concerned' who are supportive of vaccines but have minor concerns [9]. The study in [1] and some of the preliminary results are discussed in the next section. ## III Preliminaries This paper extends the study presented in [1], where publicly available tweet dataset collected and maintained by R. Lamsal [16] are used. 
The dataset tweets from Mar 20, 2020 to Mar 23, 2021 are further filtered with vaccine related keywords, and the stances of each tweet are predicted as 'favour', 'against', or 'none' with respect to the topic 'vaccine hesitancy' by using the stance detection tool created from OpenAI's GPT transformer model [17]. In order to make this yearlong study possible with reasonable confidence, [1] picked around \(46\)K+ tweets to label, sampling \(100+\) tweets each day. As the conversation was changing throughout the year, it was critical to have a relevant labelled set of tweets to fine tune the GPT model, which can then be used for detecting stances for the tweets of that specific period. This fine turning was sufficient to give reasonable composite F-scores of 0.6 or above for 20 test sets, picked and labelled from the yearlong dataset [1]. The composite F-scores for the 20 test sets are in the range of 0.67-0.87, precision for anti-vax stance classification was between 0.52-0.92 and that of pro-vax class was in the range of 0.68-0.95. For details, see [1]. ### _Overall Statistics_ As presented in [1], the stance detection tool predicted \(37,047,378\) pro-vax, \(10,567,955\) anti-vax, and \(28,322,526\) neutral or irrelevant tweets in this dataset. Moreover, of the total of \(8,637,015\) unique user IDs (for anti-vax and pro-vax tweets), \(5,571,946\) (\(64.5\%\)) tweeted only pro-vax messages, but \(1,171,837\) (\(13.6\%\)) tweeted only anti-vax messages and \(1,893,232\) (\(21.9\%\)) dual-stance users sent out both pro- and anti-vax tweets. The dual-stance users contributed \(85\%\) of the anti-vax tweets for the yearlong study period, and they accounted for \(62\%\) of the anti-vax users (1,893,232 out of total 3,053,626). On the other hand, they contributed more than half of the pro-vax tweets (\(66\%\)) during the study period while they accounted for only a quarter of pro-vax users (7,465,178 total pro-vax users) [1]. Almost \(60\%\) of the dual-stance users have more pro-vax tweets than anti-vax, whereas \(23\%\) of the total dual-stance users have contributed more anti-vax tweets than pro-vax tweets and close to \(17\%\) of dual-stance users are found with balanced number of posts in both classes during the study period. Among the dual-stance users with more pro-vax tweets, the percentage of pro-vax tweets is generally quite high, and more than 2/3rd of these users have \(70\%\) or more pro-vax tweets than anti-vax tweets. We also compared the possibility of the dual-stance users being bots with that of users with only anti-vax and pro-vax tweets. That analysis is presented in Appendix A. ### _Role of Stance Detection Noise_ [1] used OpenAI's GPT transformer-based stance detection tool [17, 18] and classified \(75\) million English language tweets related to COVID-19 vaccines from March 2020 to March 2021 into anti-vax, pro-vax, and neutral tweets. Although as many as \(22\%\) of the \(8,563,466\) unique user IDs were classified as dual-stance and close manual inspection revealed examples such as the one mentioned at the start of this paper, the stance detection tool has its limitations, including inadequacy in classifying satire and convoluted expressions. Furthermore, because some expressions are ambiguous, it is impossible to avoid variability in the stance classification. Given these potential sources of measurement error, we provide a more principled consideration of the issue of noise in stance classification. 
Let \(p_{i}\) be the probability that user \(i\), classified as a dual-stance user, is in fact dual-stance. Noting that each tweet is classified independently into any of the stances and assuming that the user sent \(n_{a}\) anti-vax and \(n_{p}\) pro-vax tweets, \(p_{i}\) can be estimated as \[p_{i}=1-\left(1-\alpha_{a}\right)^{n_{a}}-\left(1-\alpha_{p}\right)^{n_{p}}+ \left(1-\alpha_{a}\right)^{n_{a}}\left(1-\alpha_{p}\right)^{n_{p}}, \tag{1}\] where \(\alpha_{a}\) (\(\alpha_{p}\)) is the precision of the stance detection tool for the anti-vax (pro-vax) class. For our dataset \(\alpha_{a}\) is in the range of 0.52-0.92 and \(\alpha_{p}\) is in the range of 0.68-0.95, calculated using the test datasets described above. The effective size of the dual-stance cohort \(N_{e}\) can be calculated as \[N_{e}=\sum_{i=1}^{N}p_{i}, \tag{2}\] where \(N\) is the number of detected dual-stance users. For the precision of the stance detection method given above, \(N_{e}\) ranges from \(1,212,327\) to \(1,791,219\). Even with the stance detection noise, we can safely conclude that there is a large number of users who have tweeted both anti- and pro-vax tweets.

## IV Dual Users' Classification

As described above, \(60\%\) of the dual-stance users have more pro-vax tweets than anti-vax tweets, which shows a tendency towards favouring the COVID-19 vaccines rather than rejecting them. However, considering the imperfections in the stance detection process, it would be erroneous to classify users into pro-leaning, anti-leaning, or balanced classes based only on their number of pro-vax and anti-vax tweets. Instead, we estimate the probability of an individual user being in each of the above-mentioned classes. A user is subsequently placed in the class with the highest likelihood. More precisely, we defined a threshold-based assignment rule for when the probabilities are too close to each other. For tractable calculations, we assumed independent tweet classification into stances as in III-B, and also that each stance classification can result in either a _true_ or _false_ classification, which allows us to use the Binomial distribution for probability computations. The probability that a user favours the pro-vax stance more than the anti-vax one, or is _pro-leaning_, is the likelihood that there are more pro-vax tweets from the user than anti-vax in the presence of stance detection errors, i.e., \[\Pr(\text{pro})=\sum_{i=1}^{min(n_{a},n_{p}-1)}\binom{n_{a}}{i} \alpha_{a}^{i}(1-\alpha_{a})^{n_{a}-i}\times\\ \left[\sum_{j=i+1}^{min(n_{a}+1,n_{p})}\binom{n_{p}}{j}\alpha_{ p}^{j}(1-\alpha_{p})^{n_{p}-j}\right], \tag{3}\] where \(n_{a}\) (\(n_{p}\)) is the total number of anti-vax (pro-vax) tweets posted by the user. Similarly, the probability that a user is _anti-leaning_ is \[\Pr(\text{anti})=\sum_{i=1}^{min(n_{p},n_{a}-1)}\binom{n_{p}}{i} \alpha_{p}^{i}(1-\alpha_{p})^{n_{p}-i}\times\\ \left[\sum_{j=i+1}^{min(n_{p}+1,n_{a})}\binom{n_{a}}{j}\alpha_{ a}^{j}(1-\alpha_{a})^{n_{a}-j}\right]. \tag{4}\] Also, the probability of a user being _balanced_ is \[\Pr(\text{bal})=\sum_{i=1}^{min(n_{a},n_{p})}\binom{n_{a}}{i} \alpha_{a}^{i}(1-\alpha_{a})^{n_{a}-i}\times\\ \binom{n_{p}}{i}\alpha_{p}^{i}(1-\alpha_{p})^{n_{p}-i}. 
\tag{5}\] The classification strategy for each user to be pro-leaning, anti-leaning, or balanced is defined as \[\text{User is}\left\{\begin{array}{ll}\text{pro-leaning,}&\text{if}\Pr(\text{ pro})>\Pr(\text{anti})+\epsilon\\ &\&\Pr(\text{pro})>\Pr(\text{bal})+\epsilon\\ \text{anti-leaning,}&\text{if}\Pr(\text{anti})>\Pr(\text{pro})+\epsilon\\ &\&\Pr(\text{anti})>\Pr(\text{bal})+\epsilon\\ \text{balanced,}&\text{otherwise.}\end{array}\right. \tag{6}\] Here, \(\epsilon\) is a small tolerance value which is used to classify users with very close \(\Pr(\text{pro})\) and \(\Pr(\text{anti})\) to the balanced group. In our work, we select \(\epsilon=0.05\) (details are given in Appendix B). Figure 1 plots the number of anti-vax and pro-vax tweets for each class. A higher value of \(\epsilon\) will result in a wider purple band in the middle, i.e., more users classified as balanced. With \(\epsilon=0.05\), \(50\%\) of the dual-stance users are classified as pro-leaning, \(8\%\) as anti-leaning and \(42\%\) as balanced. These percentages will change to \(44\%\), \(8\%\), and \(48\%\) respectively with \(\epsilon=0.1\). Also note that each point in Fig. 1 may be related to many users. The balanced cohort looks smaller in Fig. 1, although it is significantly larger in terms of number of users. An important thing to note here is that the balanced band is wider near the origin, or when users have smaller number of anti- and pro-vax tweets and the impact of stance detection errors is relatively more significant. For example, a user with 60 pro-vax and 30 anti-vax tweets is more likely to be a pro-leaning user than another with 2 pro-vax and 1 anti-vax tweets, though both have the same \(n_{p}/(n_{a}+n_{p})\) ratio. Similarly, a user with 10 anti-vax and 8 pro-vax tweets is more likely to be pro-leaning, due to higher precision of pro-vax stance prediction, than a user with 3 anti-vax and 1 pro-vax tweets, though the difference between the anti-vax and pro-vax tweets are the same in both cases. We believe that this classification strategy is more suitable for our case with stance detection noise, comparing with a simple ratio based classification. Figure 1 also depicts some facts described in Section III-A, such as significant dominance of pro-vax tweets in the cohort and more specifically for the users with more pro-vax tweets. ## V Content Analysis ### _Discussion Topics_ In order to develop a comprehensive understanding of the content posted by the dual-stance users, we classified the tweets into different discussion topics. [1] employed a composite topic modelling method, using GS-DMM (Gibbs Sampling-Dirichlet Multinomial Mixture) topic modelling tool [19] and a manual search to compile a comprehensive list of topics and relevant keywords and phrases. GS-DMM, an LDA (Latent Dirichlet Allocation) based topic modelling tool optimised for Short Text Topic Modelling (STTM), was found to be the most suitable in finding relatively distinct and meaningful clusters of words for the study dataset. However, GS-DMM alone was insufficient for the topic classification tasks because the discussion topics turned out to be highly intertwined with one another and used similar words. Therefore, GS-DMM was used to identify keywords and implemented an additional phase in which these keywords were refined into _key phrases_ manually. 
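As a rough illustration of this keyword/key-phrase matching step, a tweet could be assigned to every topic whose curated phrases it contains. In the sketch below the phrase lists are invented examples used only for illustration; they are not the actual lexicon compiled in [1].

```python
# Illustrative phrase lists only; a tweet may match several topics.
TOPIC_KEY_PHRASES = {
    "vaccine mandate": ["mandatory vaccine", "forced vaccination", "vaccine mandate"],
    "vaccine development": ["phase 3 trial", "vaccine candidate", "trial results"],
    "side effects": ["side effect", "adverse reaction"],
}

def tag_topics(tweet_text):
    """Return the set of topics whose key phrases occur in the tweet."""
    text = tweet_text.lower()
    return {topic for topic, phrases in TOPIC_KEY_PHRASES.items()
            if any(phrase in text for phrase in phrases)}

# Example:
# tag_topics("Phase 3 trial results look promising, no side effect so far")
# -> {"vaccine development", "side effects"}
```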
A single topic - 'vaccine development' - dominated pro-vax tweets, while anti-vax tweets included more diverse topics, including 'vaccine mandates', 'vaccine side effects','masks and lockdowns', etc. [1]. Figure 2a shows the number of tweets from all dual-stance users classified into 25 most significant anti-vax and pro-vax topic. The light indigo bars represent the expected number of tweets for each anti-vax (pro-vax) topic, calculated by multiplying the total topic tweets by the fraction of anti-vax (pro-vax) tweets from dual-stance users to total anti-vax (pro-vax) tweets in the dataset [1]. Some topics commonly appeared in both anti- and pro-vax tweets (e.g.,'masks and lockdowns', 'Pfizer/Moderna vaccines') with opposing perspectives and are shown with indigo coloured labels. Dual-stance users, in general, contributed more tweets than expected for many anti-vax and pro-vax topics except 'jokes (side effects)' and 'jokes' as shown in Fig. 2a. The dual-stance users appear to be highly engaged contributors in the vaccine discourse, who do not send out jokes as much. In contrast, [1] found that pure anti-vax users were far more prominent in spreading anti-vax memes and jokes. Nonetheless, the characteristics of this cohort are still unclear, mainly because it is also a very diverse group, as discussed in Section III. To gain a more nuanced understanding, we divide the dual-stance cohort into three groups, as described in Section IV, in terms of their general orientation towards vaccination, i.e., pro-leaning, anti-leaning, or balanced. We only considered the users with 90% or more probability to be dual-stance, which allowed us to lower the impact of misclassified tweets. A majority of the users in this group were pro-leaning at 243,583 (66%), with 90,313 (25%) anti-leaning and 32,226 (9%) balanced. The tolerance value \(\epsilon\) is set at \(0.05\) (see Section IV). The topics for anti-leaning users are in Fig. 2b. This group was highly active in all anti-vax topics, except 'jokes (side effects'), 'jokes', and 'Trump'. This group contributed less than expected to the top pro-vax topic of 'vaccine development'. However, they showed higher activities in 'Masks and lockdown', 'viral infections', 'Fauci', 'big pharma', 'Bill Fig. 1: Pro-leaning, anti-leaning, and balanced users with respective anti-vax and pro-vax tweets. Gates', and'supporting vaccine mandate'. This group seemed to be not in favour of vaccines generally, but also shared some vaccine updates and seems to be aware of the positive impact of COVID-19 responses, such as, masks, lockdowns, and vaccines. They were highly involved in all discussions related to COVID-19, including pandemic control measures where they opposed and supported masks and lockdowns, posted tweets with updates about Bill Gates' efforts as well as criticized him, put forward arguments against vaccine mandate but also advocated for it. The results for topic classification for the balanced group appears in Fig. 2c. Unlike the rest of the dual-stance cohort, this group took a greater part in jokes (side effects) and jokes. 43% of the balanced group posted only one tweet of each stance during our study period. As expected, none of them reached the 90% probability of being accurately classified as dual-stance. The users in this selected group has 7-514 total tweets, almost similar for both stances or slightly more anti-vax than pro-vax tweets (as precision is lower for anti-vax stance compared to pro-vax as discussed in Section III). 
They posted higher than expected number of tweets about a number of topics, such as,'masks and lockdowns', 'Fauci', and 'Bill Gates' from both anti- and pro-vax perspectives, 'debunking anti-vax', 'pro-vax (general)', 'Trump', 'viral infections', 'elections', 'no need for vaccine', etc. They appear to be aware of the current affairs and occasionally sharing related posts on such topics. They seem to be supporting vaccines but also sharing a lot of jokes and memes about vaccines and their side effects, probably forwarding and responding to online content they found interesting without adhering to a particular opinion or taking it seriously. Some of these user could just be really undecided about the vaccine. Fig. 2: Top 25 anti-vax and pro-vax discussion topics for dual-stance users’ tweets. Lighter shade bars represent the expected number of tweets, and darker bars represent the actual observation. (a) For all dual-stance users, (b) for anti-leaning users with \(p_{i}>=0.9\), (c) for balanced users with \(p_{i}>=0.9\), (d) for pro-leaning users with \(p_{i}>=0.9\). Finally, the pro-leaning group presents a mirror image of the anti-leaning group. They showed lower than expected activity in most of the anti-vax topics, except for 'Pfizer/Moderna', 'Trump', and 'AstraZeneca (AZ)' and higher than expected contributions in most of the pro-vax topics. The AstraZeneca vaccine made news headlines in March 2021 when several European countries suspended its use following blood clot reports among vaccinated individuals [20]. In our dataset, people who were supportive of vaccinations also expressed concerns about the AZ vaccine, which is quite understandable. This pro-leaning group appears to have been looking forward to COVID-19 vaccines, but at the same time are concerned about the vaccines' safety. ### _Genuine Concerns vs. Falsehoods_ Figure 3 gives another perspective. We classified the anti-vax tweets into genuine issues versus falsehood by separating topic keywords and key-phrases into two sets, as in [1]. Genuine issues included concerns around mandates for a fast-tracked vaccine with unknown long-term side effects, historical issues with vaccines and clinical trials, pharmaceutical companies profiteering, general vaccine side effects, blood clots after receiving the AstraZeneca vaccine, waning immunity and virus variants, administrative mismanagement, and concerns about animal abuse during vaccine development. The details can be accessed from [1]. There were some tweets, which remained unclassified because of lack of relevant keywords, or they discussed neither genuine issues nor falsehoods. Note that one anti-vax tweet can be classified into both genuine issues and falsehoods. In fact, tweets identified as discussing a 'genuine concern' often used excessive exaggeration or presented information about true facts or events out of context, Here, we show the expected number of tweets for each class with wider light-coloured bars and overlay it with the observed number of tweets with narrower and darker bars in Fig. 3. The expected number of tweets is calculated by multiplying the total topic tweets with the fraction of anti-vax tweets by the group to the total anti-vax tweets. Figure 3 shows that the anti-leaning group has more than the expected number of tweets touching on genuine issues, but even more tweets with falsehoods and misinformation. Following from Fig. 
2b, where anti-leaning users show a more than expected contribution to the top anti-vax topics of vaccine mandate, side effects, etc., and a less than expected contribution to jokes, it seems that these people are typically not hardcore conspiracy-believing anti-vaxxers, but rather anxious and super-concerned about COVID-19 vaccines. Governments and policymakers should therefore put effort into having genuine dialogue on these issues, rather than waving them aside. On the other hand, the pro-leaning group has fewer than the expected number of tweets involving misinformation, but slightly more tweets discussing genuine issues. The balanced group's contributions did not deviate from expectations. The pro-leaning and anti-leaning groups show contrasting behaviours. However, the pro-leaning group's engagement with misinformation, no matter how limited, shows the prevalence and impact of misinformation and how important it is to develop solutions and counter strategies.

Fig. 3: Genuine issues versus falsehoods classification of anti-vax tweets.

## VI Temporal Dynamics and Causality

In this section, we present our observations about change of stance, i.e., when a user posts an anti-vax tweet following a pro-vax tweet or vice versa. Assume \(\delta_{i}^{+}(n)\) is the number of stance changes into pro-vax and \(\delta_{i}^{-}(n)\) is the number of stance changes into anti-vax (also out of pro-vax) for a user \(i\) on day \(n\) during our study period. That is, a pro-vax tweet from a user \(i\) on any day at or before \(n\) subsequently followed by an anti-vax tweet on day \(n\) adds 1 to \(\delta_{i}^{-}(n)\). Similarly, \(\delta_{i}^{+}(n)\) counts the pro-vax tweets from a user \(i\) on day \(n\) which came right after a past anti-vax tweet. We ignored the neutral stance for this analysis. We also define \[\delta_{i}^{+}=\sum_{n}\delta_{i}^{+}(n)\ \ \text{and}\ \ \delta_{i}^{-}=\sum_{n} \delta_{i}^{-}(n), \tag{7}\] where \(\delta_{i}^{+}\) and \(\delta_{i}^{-}\) add up all stance changes into and out of pro-vax for a user \(i\) over the whole study period. Note that, by construction, \[|\delta_{i}^{+}-\delta_{i}^{-}|\in\{0,1\}.\]

Fig. 4: Total stance changes versus total anti-vax and pro-vax tweets for anti-leaning, balanced, and pro-leaning user groups. Insets are zoomed-in plots for the respective groups.

Active dual-stance users who posted many tweets usually, though not always, tended to change their stances many times, as shown in Fig. 4. Figure 4 shows the number of stance changes versus the total number of anti-vax and pro-vax tweets for each individual. The number of stance changes is always less than the total number of tweets, but from the figure it seems that at most about half of a user's tweets cause changes in stance, as most of the scatter points lie under the red line. For users with fewer tweets, shown in the inset charts, the number of stance changes is close to the total number of tweets (the yellow line), which indicates an actual oscillating behaviour. Anti-leaning users appear to have fewer oscillations, but this is because users with only a few more anti-vax than pro-vax tweets are assigned to the balanced or pro-leaning groups due to the lower precision for the anti-vax stance. An example of an oscillating user is shown in Fig. 5. This user is classified as balanced, although there are more anti-vax tweets than pro-vax. Most of the oscillatory behaviour happens after Nov.
An example of an oscillating user is shown in Fig. 5. This user is classified as balanced, although there are more anti-vax tweets than pro-vax. Most of the oscillatory behaviour happens after Nov. 9, 2020, when Pfizer announced the interim trial results for its COVID-19 vaccine [21]. As we will see later, most of the stance changes happened in the post-vaccine period. Moreover, the anti-leaning and pro-leaning groups have some users with many tweets but only a few stance changes; however, stance changes in the balanced group stayed closer to the half-gradient red line. This reflects that we do not have users who posted anti-vax tweets for some time, then switched to a pro-vax stance and stayed pro-vax for some time. The oscillating behaviour, although not always for every tweet, appears to be a characteristic of the balanced group. In addition, we define \(\delta^{+}(n)\) and \(\delta^{-}(n)\), which count stance changes over all dual-stance users for each day, i.e., \[\delta^{+}(n)=\sum_{i}\delta_{i}^{+}(n)\ \ \text{and}\ \ \delta^{-}(n)=\sum_{i}\delta_{i}^{-}(n). \tag{8}\] Figure 6(a) shows \(\delta^{+}(n)\) and \(\delta^{-}(n)\) for each day \(n\) in our study period. The solid black line marks the announcement of Pfizer's preliminary trial results on Nov. 9, 2020 [21]. Interestingly, both trajectories largely overlap and appear to be correlated. Later, we will explore this correlation in detail, but for now, let us focus on some observations from these plots. The Pfizer announcement clearly marks the start of a higher activity period, which continued until the end of our study period. The bottom curve, Fig. 6(b), shows the difference \(\delta^{+}(n)-\delta^{-}(n)\) for each day, where we can see that right after the Pfizer announcement there is a peak towards the pro-vax stance followed by many dips and peaks. Although Fig. 6(b) shows large dips, i.e., more stance changes towards anti-vax, the cumulative effect of stance changes moved towards the pro-vax side by the end of the study, as shown by Fig. 6(c), where \(S(n)=\sum_{m=1}^{n}(\delta^{+}(m)-\delta^{-}(m))\) is plotted versus \(n\). The most prominent dip in the pre-vaccine period (before Pfizer's announcement) in Fig. 6(b) came on Aug 13, 2020, just after the public announcement about Russia's Sputnik vaccine phase 2 trial results [22], and the second dip came on Sep 8, 2020, when AstraZeneca's phase 3 trials were put on hold after one volunteer developed an unknown reaction [23]. Dual-stance tweeting on COVID-19 vaccination appears to have been triggered by the announcements of successful trial results of COVID-19 vaccines. Multiple dips in Fig. 6 coincide with the Pfizer-BioNTech and Moderna vaccines gaining various approvals in the United Kingdom (UK) and from the USA's Food and Drug Administration (FDA) [24, 25, 26, 27, 28], and the launch of the UK's public vaccination program on Dec. 8, 2020. Fig. 5: An example of the timeline of a user who is oscillating between anti-vax and pro-vax stances. Actual tweets are summarised for better visualisation and to avoid user identification. Fig. 6: (a) Number of changes in stance towards anti-vax and pro-vax, (b) their relative difference, and (c) \(S(n)=\sum_{m=1}^{n}(\delta^{+}(m)-\delta^{-}(m))\) for each day. The black solid line marks the Pfizer announcement on Nov 9, 2020. ### _Convergent Cross Mapping (CCM)_ The almost synchronous ups and downs of the time series plots in Fig. 6(a) suggest a strong correlation between \(\delta^{+}(n)\) and \(\delta^{-}(n)\). Indeed, the correlation coefficient was \(0.96\). This prompted a further exploration of potential causal relationships between the daily changes to pro- or anti-vax stances in the Twittersphere. 
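A minimal sketch (not the study's code; the array layout is an assumption) of how the daily aggregates in eq. (8), the running total \(S(n)\), and the correlation quoted above can be obtained from the per-user counters:

```python
import numpy as np

def daily_series(per_user_plus, per_user_minus, n_days):
    """Aggregate per-user daily counters into delta_plus(n) and delta_minus(n)
    (eq. 8), the cumulative difference S(n), and their Pearson correlation.
    Each input is a list with one Counter per dual-stance user, keyed by day
    index 0..n_days-1."""
    delta_plus = np.zeros(n_days)
    delta_minus = np.zeros(n_days)
    for dp, dm in zip(per_user_plus, per_user_minus):
        for day, count in dp.items():
            delta_plus[day] += count
        for day, count in dm.items():
            delta_minus[day] += count
    s = np.cumsum(delta_plus - delta_minus)            # S(n)
    r = np.corrcoef(delta_plus, delta_minus)[0, 1]     # reported as 0.96 above
    return delta_plus, delta_minus, s, r
```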
We used Convergent Cross Mapping (CCM) [29], a non-parametric method based on the theory of non-linear dynamical systems, to infer causality between two variables \(x\) and \(y\) based on their time-series data. CCM follows Takens' theorem in that, if indeed \(x\) influences \(y\), past values of \(x\) can be recovered from \(y\) [30]. [29] developed the technique of "cross mapping", where a time delay embedding from the time series of \(y\) is used to estimate the values of \(x\), and the causal effect of \(x\) on \(y\) is determined by how well \(y\) cross maps \(x\), i.e., how well \(x\) is forecasted by \(y\) [30]. The forecasting skill, or the capacity of the time series of one variable to predict the other, is used to infer causality. The variable with a higher forecasting skill can be interpreted as driven or caused by the other variable in a non-linear dynamical system. CCM requires the time series to have statistical stationarity. Since the publication of [29], CCM has been widely used in life sciences, social networking, and environmental research. First, we checked the statistical stationarity of \(\delta^{+}(n)\) and \(\delta^{-}(n)\) using the Augmented Dickey-Fuller (ADF) and KPSS tests available as Matlab functions. Both \(\delta^{+}(n)\) and \(\delta^{-}(n)\) were found to be stationary by the ADF test, but not by the KPSS test. For such cases, the standard approach is to take the first-order difference of the time series, which is then stationary. Instead of \(\delta^{+}(n)\) and \(\delta^{-}(n)\), we therefore applied the CCM1 algorithm to \(\delta^{+}(n+1)-\delta^{+}(n)\) and \(\delta^{-}(n+1)-\delta^{-}(n)\) and calculated their forecast skill or cross-map (xmap) skill to predict each other. The value for the lag is selected using the mutual information of the time series. The mutual information is shown in Fig. 16, and there is a minimum at lag = 3. The embedding dimension is selected as 32, which gives the best prediction skill. Footnote 1: [https://skccm.readthedocs.io/en/latest/quick-example.html](https://skccm.readthedocs.io/en/latest/quick-example.html) The results are given in Fig. 7 for lag = 3 and embedding dimension = 32. As shown in the figure, the change towards pro-vax stance (\(\delta^{+}(n+1)-\delta^{+}(n)\)) has a higher forecasting skill than the change to anti-vax stance (\(\delta^{-}(n+1)-\delta^{-}(n)\)). This implies that the change to pro-vax stance could be a possible reaction to, or be driven by, the change to anti-vax stance as a retaliatory or counter measure. Fig. 7: Forecast scores calculated from CCM for pre-vaccine and post-vaccine periods, with lag = 3 and embedding dimension = 32.
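The analysis itself used the skccm package (footnote 1); purely to illustrate the underlying idea, a self-contained cross-mapping skill can also be sketched directly with NumPy on the first-differenced series. The lag of 3 and embedding dimension of 32 mirror the values reported above, but everything else (function names, neighbour weighting details, the placeholder input data) is an assumption rather than the paper's implementation:

```python
import numpy as np

def embed(x, lag, dim):
    """Time-delay embedding of a 1D series into `dim`-dimensional vectors."""
    start = (dim - 1) * lag
    return np.column_stack([x[start - j * lag: len(x) - j * lag] for j in range(dim)])

def xmap_skill(source, target, lag=3, dim=32):
    """Skill of reconstructing `source` from the shadow manifold of `target`
    (Sugihara-style cross mapping with exponential nearest-neighbour weights)."""
    m = embed(target, lag, dim)
    src = source[(dim - 1) * lag:]                 # align source with manifold rows
    est = np.empty(len(m))
    for t in range(len(m)):
        d = np.linalg.norm(m - m[t], axis=1)
        d[t] = np.inf                              # exclude the point itself
        nn = np.argsort(d)[: dim + 1]              # E + 1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        est[t] = np.dot(w / w.sum(), src[nn])
    return np.corrcoef(est, src)[0, 1]             # forecast (xmap) skill

# placeholder daily series; in practice these are delta_plus(n) and delta_minus(n)
rng = np.random.default_rng(0)
delta_plus = rng.poisson(20, 400).astype(float)
delta_minus = rng.poisson(20, 400).astype(float)
d_plus, d_minus = np.diff(delta_plus), np.diff(delta_minus)   # first differences
skill_pro = xmap_skill(d_plus, d_minus)    # forecast skill for the pro-vax change series
skill_anti = xmap_skill(d_minus, d_plus)   # forecast skill for the anti-vax change series
```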
## VII Influence of Retweets and Replies In this dataset, there are 8241666 anti-vax and pro-vax tweets by dual-stance users where a change of stance is detected, i.e., these tweets were preceded by a tweet of the opposite stance by the same user. We denote these tweets as the 'change-tweets'. This set contains \(67\%\) retweets and \(17\%\) reply tweets, as shown in Fig. 8a. Many such retweets originate from the same user, and many such reply tweets belong to the same thread. We found that the change-tweets belonged to 900984 and 1141124 distinct retweet and reply threads, respectively. Dividing the \(84\%\) of change-tweets which are retweets or replies by the users who originated the thread, and clustering all users associated with 1, 2-10, 11-100, 101-1000, and more than 1000 change-tweets, we draw another pie chart shown in Fig. 8b. Note that we were not able to hydrate all retweet/in-reply-to IDs. The most interesting observation from Fig. 8b is that a few users who posted the original tweet or started the thread are responsible for numerous stance changes. More precisely, 6200 users are behind more than \(50\%\) of the stance changes detected in our dataset, and only 42 users are responsible for \(8\%\) of the stance changes. Looking further into these 42 users:

1. 7 are mainstream media and news accounts, such as CNN, Reuters, etc. These accounts are usually followed by many users, e.g. CNN has 66M followers.
2. 7 are low-credibility news companies or individual journalists, of which 4 are anti-vax and were found spreading misinformation. Interestingly, except for the NY Post, a low-credibility news company with 2.9M followers on Twitter, the anti-vax accounts have fewer followers, i.e., 109K-592K, compared with the pro-vax accounts with 659K-2.4M followers.
3. 5 are politicians, and except for US President Biden's and VP Harris's accounts, the rest are right-wing politicians. Two accounts belong to US Republican Party members and one belongs to a UKIP member. They were all posting misinformed tweets about the COVID-19 vaccine and against the vaccine mandate.
4. 8 accounts belong to anti-vax influencers including Robert Kennedy Jr., his organization Children's Health Defense, America's Frontline Doctors, etc. Interestingly, only 3 influencers are followed by over a million users. An author with under 500K followers was found to be behind the maximum (49040) stance changes.
5. 3 accounts belong to pro-vax individuals with under a million followers each, including an epidemiologist and a scientist.
6. 5 accounts are associated with viral jokes [1], 2 with pro-vax and 3 with anti-vax stances. These accounts mostly do not have a huge following, i.e., under a thousand followers, but have initiated some of the viral memes.
7. 7 user IDs are now offline.

These top influencers behind the COVID-19 vaccine discourse are also discussed in detail in [31], where they were found to polarise the vaccine debate. Here, an interesting question is why these tweets were retweeted so many times by users who were supposedly not fully aligned with the tweets' stances. Following the topic modelling strategy used in Section V, the top 31 anti-vax and pro-vax topics of the change-tweets which are retweets are presented in Fig. 9a. The expected number of tweets for each topic and its contrast with the actual observations are shown in a similar manner as in Section V. These retweets are dominated by pro-vax tweets, which are mostly about vaccine development updates, contemporary issues such as pandemic responses, COVID-19 vaccines, Trump, etc., and responses to anti-vax rhetoric. The anti-vax retweets among the change-tweets are mostly about side effects, the mandate, contemporary issues such as masks and lockdowns, Trump, Pfizer/Moderna, etc., and jokes and memes. The topics which are mentioned over-proportionately in these tweets are the contemporary issues and jokes. On the other hand, we do not see that jokes and memes are significant in reply tweets, although \(55\%\) of these tweets are classified as anti-vax. Discussion topics, with the expected and observed number of change-tweets which are replies, are presented in Fig. 9b. These anti-vax replies are dominated by discussion about side effects and the mandate. Contemporary issues and the conspiracy 'mRNA alters DNA' are found in proportionally more tweets. 
Pro-vax replies were dominated by vaccine development updates, but contemporary issues, pro-vax (general), other viral infections, and debunking anti-vax are relatively more significant in this set. In COVID-19 vaccine related tweets, reply threads are mostly active discussions between the two opposing camps. Figure 10 shows a connected component from the reply graph of the change-tweets posted on Nov 9, 2020, the first day of the post-vaccine period, when the preliminary results of the Pfizer vaccine were announced [21]. Edges in Fig. 10 represent replies, and they are directed links from the users who posted the original tweet (source nodes) to the replying users (target nodes). Fig. 10 has 186 edges and there are 53 root nodes. We consider a signed graph, whereby an edge can have a +1 (blue) or -1 (red) weighting. For the reply network, the weight of an edge between two nodes is +1 (-1) if the tweet stances of the two nodes are the same (opposite). In the red edges of the reply graph in Fig. 10, we can see some debates and exchanges of opposing views. Social influence played a significant role in driving the dual-stance tweeting. Retweets are used primarily as a form of agreement, whereas replies may be more prevalent for debate and discussion with differing views. Hundreds of replies in the change-tweet set (as many as 1173) are found to be part of the same thread. The majority of the threads were found to have both anti-vax and pro-vax tweets. Figure 11 shows the ratio of pro-vax tweets in a reply thread to the total tweets found in the thread within the change-tweets. We have selected reply threads with 10 or more tweets. There are a couple of hundred threads composed only of anti-vax tweets, though the majority of the threads have more anti-vax tweets. These reply threads appear to be genuine debates and engagements of opposing views around the COVID-19 vaccine, where changes of stance are also taking place. In Fig. 12 (upper plot), the number of retweets per day belonging to different threads is shown with different colours. We selected threads with 1000 or more retweets for this analysis. As shown in this figure, the retweet activity for each thread peaked within a short interval and their influence did not last very long. The lower chart in Fig. 12 plots the number of retweets in these threads versus the number of days between the first and last observation of the relevant retweet ID. Even the popular threads were short-lived, and there are very few retweet threads whose influence lasted over months and caused changes of stance much later than the posting of the original tweets. Fig. 8: (a) The composition of change-tweets: 67% are retweets and 17% are replies. (b) Many retweet and reply threads are originated by the same user. The pie chart shows the percentage of such retweets and replies which are originated by users associated with 1, 2-10, 11-100, 101-1000, and 1000+ change-retweets. 6200 users are behind more than 50% of the stance changes.
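As an illustrative sketch of the signed reply graph in Fig. 10 (the input format and names are assumptions, not the study's code), such a graph can be assembled with networkx, giving each directed edge a weight of +1 when the original tweet and the reply share a stance and -1 otherwise:

```python
import networkx as nx

def build_signed_reply_graph(replies):
    """`replies` is an iterable of (source_user, target_user, source_stance,
    reply_stance) tuples, one per reply; stances are 'pro' or 'anti'.
    Edges run from the originator of the thread to the replying user."""
    g = nx.DiGraph()
    for src, dst, s_stance, r_stance in replies:
        weight = 1 if s_stance == r_stance else -1   # +1 same stance, -1 opposite
        g.add_edge(src, dst, weight=weight)
    return g

g = build_signed_reply_graph([
    ("userA", "userB", "pro", "anti"),   # disagreement: weight -1 (red edge)
    ("userA", "userC", "pro", "pro"),    # agreement: weight +1 (blue edge)
])
print(g.number_of_edges(), g["userA"]["userB"]["weight"])   # -> 2 -1
```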
## VIII Two Years On The dataset analysed in the paper was hydrated in 2021. Twitter is a dynamic platform, and it is quite common for tweets to be deleted or taken offline by the author or by the platform. Authors may delete a post because of a change of opinion or for some other reason. The platform usually takes tweets or users offline if they are found to breach the code of conduct on Twitter. An interesting statistic is the number of tweets deleted since our hydration. The choice of selective deletion may reflect the stance sustained over time. For the overall dual-stance cohort, we observed that \(28\%\) of anti-vax tweets went offline, whereas for pro-vax tweets, \(20\%\) were deleted. If we look at these statistics in the anti-leaning, balanced, and pro-leaning groups, the anti-leaning group has the highest percentage of deleted tweets for both anti- and pro-vax tweets, as shown in Fig. 13a. However, the difference between deleted anti-vax and pro-vax tweets is greatest in the pro-leaning group. Figure 13b shows the change in user classification in relation to the retained tweets. Lighter circles show the original user classes in terms of the fraction of dual-stance users assigned to them, and darker circles show the classification using the retained tweets. All classes are shrunk in size, but the majority of the users retained their labels. Interestingly, among those who retained their original label, \(37\%\) of anti-leaning, and \(47\%\) each of balanced and pro-leaning users also deleted some of their tweets, but the deletion did not change the relative composition of their dual-stance tweets. Of the users who moved away from any class, the majority deleted all their anti-vax and pro-vax tweets, as shown by the grey circle in the middle. We have not checked, but some of these users in the grey circle might also be offline or banned from Twitter. The second largest migrating pro-leaning group became pure pro-vax by deleting all of their anti-vax tweets. The emerging pure pro-vax class got an even bigger contribution from the balanced group and a minuscule portion from the anti-leaning group. Similarly, a tiny class of pure anti-vax also emerged, mostly out of the balanced group and with a small contribution from the anti-leaning and, even smaller, from the pro-leaning groups. Users moving away from the balanced group by selectively deleting their tweets are almost evenly split between pro-vax/pro-leaning and anti-vax/anti-leaning, as shown in Fig. 13b. Interestingly, the majority of the users with selective deletion from the anti-leaning group moved to pro-leaning, whereas pro-leaning users became pro-vax and balanced, while a tiny fraction of them became anti-leaning or anti-vax. If the oscillation between the stances is an indication of the uncertainty of the time, then a higher deletion of anti-vax posts may indicate an act of rectification by some users. An overall trend in users who have selectively deleted their tweets is to favour the pro-vax stance. The balanced group, however, remains a puzzling phenomenon, with an almost even split between favouring anti-vax and pro-vax stances. Fig. 9: Topics discussed in the change-tweets which are (a) retweets and (b) replies, with expected and observed number of tweets. Fig. 10: A connected component from the 'Reply' graph for dual-stance users changing their stances on Nov 9, 2020. Directed links are from the originator of the thread to the replying user. The red colour represents that the original tweet and the reply tweet have opposite stances, and a blue colour represents the same stances. Fig. 11: Composition of reply threads. Fig. 12: Size and duration of retweet threads. ## IX Discussion and Conclusion The expression of an opinion and its understanding are noisy processes. To make the matter worse, in the Twittersphere, opinions are expressed through a message with a maximum of 280 characters. 
In the present study, the stances of tweets were automatically detected by state-of-the-art transformer models, which are still outperformed by humans in this task. Therefore, one would expect some noise in the stance detection. Nonetheless, even when the limitations of our methods are considered, we found a significant cohort of Twitter users who provided arguments both for and against COVID-19 vaccination. One might expect that there would be a 'grey zone' of this type - those who cannot be clearly classified as anti-vaxxers or pro-vaxxers. However, the size of this 'grey zone' - 22% of the users - along with their significant contributions to the overall number of tweets, is surprising and perhaps raises questions as to how polarised the vaccine debate truly is. In this paper, we attempted to gain an understanding of, and further insight into, this dual-stance cohort. Our content analysis shows that this cohort can be divided into three groups: (1) anti-leaning, (2) balanced, and (3) pro-leaning. The anti-leaning group is highly active in sharing concerns about COVID-19 vaccines, including a lot of falsehoods and misinformation. They shared some vaccine updates and positive news, but they appear not to be in favour of the COVID-19 vaccine. The balanced group is significant in size, and seems to be on the fence as far as the vaccine is concerned. They are undecided, sharing news, and worried about the safety and mandating of the COVID-19 vaccine. The pro-leaning group is the most numerous and appears to favour COVID-19 vaccines, though this group does present some concerns about them. A large number of tweets related to the change of stance are retweets and replies. Also, both anti-vax tweets and pro-vax tweets are found to be part of many reply threads, indicating that people engaged with other people with opposing views and that changes of opinion also took place. The temporal analysis of change of stance suggests a significant correlation between changes into and out of the anti-vax stance, which was triggered once the results of the clinical trials of various COVID-19 vaccines were made public. The change into the pro-vax stance also seemed to be a reaction to the change into the anti-vax stance, as suggested by our analysis using the Convergent Cross Mapping method. There are days when overwhelming changes of stance took place towards anti-vax, such as when the AstraZeneca trials were suspended due to a participant developing an unknown reaction. However, towards the end of our study period, the cumulative effect was positive, and stance changes were mostly towards the pro-vax stance. In this study, we looked into a surprising and perplexing phenomenon, got some insight into the behaviour patterns, and were able to create a comprehensible and plausible picture of the dual-stance cohort. Fig. 13: In the course of 2 years, early 2021 to early 2023, many of the anti-vax and pro-vax tweets from the dual-stance users went offline. (a) Number of deleted anti-vax (pro-vax) tweets as a percentage of original anti-vax (pro-vax) tweets from the user group. All groups have more deleted anti-vax tweets than pro-vax tweets; this difference is more significant in the pro-leaning group. (b) User classification with original data (lighter circles), with dehydrated data (darker circles), and the migration of users to different classes as a result of selective deletion of tweets. Two smaller classes of pure anti-vax and pure pro-vax also emerged. Percentages are calculated with respect to the total number of original dual-stance users. 
The binary division between anti-vax and pro-vax classes was perhaps too strict for many people, especially during the uncertain and unprecedented time of the COVID-19 pandemic. In fact, numerous binary classifications, such as being part of the political left or right, conservative or liberal, favouring one party over another, etc., could also be too strict for some people. Just as it is quite likely to have contrasting views about one topic within a population, for one reason or another, one person may only partially subscribe to one view, partially reject the other, or just remain undecided, which is also quite intrinsic to human nature. The positive point of this cohort, which exists at the boundary of polarising camps, is that it can act as a bridge and bring opposing communities together. The need, however, is to create opportunities for genuine dialogue where legitimate concerns can be acknowledged and addressed. ## Acknowledgement We would like to thank Dr. Marc Cheong for his insightful comments about this work.
2309.01769
Effects of Material Mapping Agnostic Partial Volume Correction for Subject Specific Finite Elements Simulations
Partial Volume effects are present at the boundary between any two types of material in a CT image due to the scanner's Point Spread Function, finite voxel resolution, and importantly, the discrepancy in radiodensity between the two materials. In this study a new algorithm is developed and validated that builds on previously published work to enable the correction of partial volume effects at cortical bone boundaries. Unlike past methods, this algorithm does not require pre-processing or user input to achieve the correction, and the correction is applied directly onto a set of CT images, which enables it to be used in existing computational modelling workflows. The algorithm was validated by performing experimental three point bending tests on porcine fibulae specimens and comparing the experimental results to finite element results for models created using either the original, uncorrected CT images or the partial volume corrected images. Results demonstrated that the models created using the partial volume corrected images did improve the accuracy of the surface strain predictions. Given this initial validation, this algorithm is a viable method for overcoming the challenge of partial volume effects in CT images. Thus, future work should be undertaken to further validate the algorithm with human tissues and through coupling it with a range of different finite element creation workflows to verify that it is robust and agnostic to the chosen material mapping strategy.
Aren Beagley, Hannah Richards, Joshua W. Giles
2023-09-04T19:13:33Z
http://arxiv.org/abs/2309.01769v1
Effects of Material Mapping Agnostic Partial Volume Correction for Subject Specific Finite Elements Simulations ###### Abstract Partial Volume effects are present at the boundary between any two types of material in a CT image due to the scanner's Point Spread Function, finite voxel resolution, and importantly, the discrepancy in radiodensity between the two materials. In this study a new algorithm is developed and validated that builds on previously published work to enable the correction of partial volume effects at cortical bone boundaries. Unlike past methods, this algorithm does not require pre-processing or user input to achieve the correction, and the correction is applied directly onto a set of CT images, which enables it to be used in existing computational modelling workflows. The algorithm was validated by performing experimental three point bending tests on porcine fibulae specimens and comparing the experimental results to finite element results for models created using either the original, uncorrected CT images or the partial volume corrected images. Results demonstrated that the models created using the partial volume corrected images did improve the accuracy of the surface strain predictions. Given this initial validation, this algorithm is a viable method for overcoming the challenge of partial volume effects in CT images. Thus, future work should be undertaken to further validate the algorithm with human tissues and through coupling it with a range of different finite element creation workflows to verify that it is robust and agnostic to the chosen material mapping strategy. Partial Volume Artifacts; Finite Element; Computed Tomography ## 1 Introduction To derive accurate subject-specific finite element (FE) models from patient data, two key considerations are accurately replicating skeletal geometry and material properties (Knowles et al., 2016; Eberle et al., 2013; Schileo et al., 2008; Yosibash et al., 2010; Keyak and Falkinstein, 2003; Szwedowski et al., 2011; Babazadeh Naseri et al., 2021; Vaananen et al., 2019; Taddei et al., 2007; Pauchard et al., 2016; Helgason et al., 2008, 2016; Pakdel et al., 2016). Geometric accuracy can be achieved using automatic meshing procedures (Falcinelli et al., 2016) after segmenting computed tomography (CT) images (Pauchard et al., 2016; Lu et al., 2016; Prellan et al., 2016). Heterogeneous mechanical properties can be obtained from the density calibrated Hounsfield units of the CT images using correlations with bone mineral density and empirically derived relationships between density and elastic modulus values (Babazadeh Naseri et al., 2021; Fung et al., 2017; Collins et al., 2021; Studders et al., 2020). These correlations and relationships depend on the anatomical site, density regime (cortical versus trabecular), and species (Li et al., 2020; Feng et al., 2012; Les et al., 1994; Linde et al., 1991; Snyder and Schneider, 1991). The development of these correlations and relationships is ongoing as multiple research groups have reported different results for the same anatomical site and species (Knowles et al., 2016; Emerson et al., 2013; Eberle et al., 2013b). Assigning these heterogeneous material properties to FE meshes is non-trivial due to the difference in geometry and topology between the CT voxels and mesh elements. Various material mapping methods (MMMs) have been proposed that can primarily be categorized as either element-based or node-based. 
Element-based methods assign property values to each FE mesh element (Taddei et al., 2007) while node-based methods assign property values to FE mesh nodes, allowing for material property variation across an element (Helgason et al., 2008; Babazadeh Naseri et al., 2021). Partial Volume (PV) effects are often observed in CT images and occur because each voxel represents the attenuation properties of the material(s) within the voxel's specific volume, meaning that if the volume contains multiple materials then the resulting Hounsfield Unit (HU) intensity represents some average of their properties (Bushberg et al., 2020). PV effects are most evident at boundaries between materials with markedly different radiodensities (e.g. air and cortical bone), causing sharp boundaries to appear blurred. Additionally, all material boundaries in a CT image are blurred due to scanner system resolution limitations (Pakdel et al., 2014; Bushberg et al., 2020). PV effects have been shown to cause multiple challenges:

* Incorrect diagnosis/assessment of cortical bone thickness (Treece and Gee, 2015; Museyko et al., 2017)
* Artificially low cortical bone mineral density (BMD) (Soucek et al., 2015)
* Increased difficulty in accurately segmenting structures (Rittweger et al., 2004; Falcinelli et al., 2016; Peleg et al., 2014).

In the context of deriving FE models from CT images, regardless of the MMM strategy used, surface nodes and elements may correspond to regions of the CT images that are affected by partial-volume (PV) effects. For skeletal FE models, PV effects typically result in underestimation of cortical bone density and therefore reduce the estimated elastic modulus (Soucek et al., 2015; Peleg et al., 2014), which can result in inaccurate FE simulation results. Although various reconstruction kernels exist to minimize the effects of blurring caused by tomographic projection, they are not able to remove all blurring and PV effects in CT images. As a result, post-hoc methods have been developed to try to overcome the continued challenge of PV effects. Helgason et al. (2008) introduced a modified MMM for FE meshes derived from CT scans that integrated a partial volume correction (PVC) method. After assigning material properties to mesh nodes, nearest neighbour interpolation was used to correct surface nodes that had a lower density than adjacent interior nodes (Helgason et al., 2008). Pakdel et al. (2016) built upon the previous method and introduced the Node-based elastic Modulus Assignment with Partial-volume correction (NMAP) method for material mapping of FE meshes derived from CT scans. This method was intended to be used after deblurring techniques are applied to reduce the effects of the CT scanner point spread function (Pakdel et al., 2012, 2014, 2016). NMAP performs the partial volume correction prior to mapping material properties onto the FE mesh by using an inverse distance weighted interpolation method to correct surface voxels in the segmentation of the CT scan itself. Both methods demonstrated higher fidelity compared to experimental data than the widely used material mapping methods available as part of the Bonemat software (Taddei et al., 2007) and assigned material properties to mesh nodes instead of elements. However, neither method has an open-source implementation available for use. 
As well, the PVC methods are integrated with the material mapping strategies and have not been validated on their own, meaning they cannot be integrated into existing workflows to improve research results. For NMAP, Pakdel et al. note that it has only been validated on CT scans that have undergone deblurring first, which is nontrivial to perform, making the method even more difficult to integrate into existing and varied workflows. Therefore, the purpose of this study is to develop, validate, and release an open-source implementation of a PVC method that works on the CT scan itself, thus enabling either type of material mapping strategy to be used when deriving FE meshes from the CT scan. The specific goals in support of this purpose were to:

* create a PVC method that does not depend on deblurring or other pre-processing techniques
* ensure the method can generalize to long bones, flat bones, etc.
* be simple to use (i.e. not require user expertise)
* make as few assumptions about the bone and partial volume effects as possible
* be deterministic given the same inputs
* have minimal inter-operator differences in results (e.g. not require thresholding to set the thickness of the PV layer as was done by Pakdel et al.)

## 2 Materials and Methods ### Experimental Model Nine fresh frozen porcine hind limbs were obtained from a local abattoir. Ethics approval was not required for this study as the animal specimens were not sacrificed specifically for this study. The specimens were dissected to remove and denude the fibula and tibia. Three-point bend tests were performed for each fibula using a procedure adapted from the American Society of Agricultural and Biological Engineers (ASABE) Shear and Three-Point Bending Test of Animal Bone standard (Asabe, 2005). A custom bending jig (see fig. 1) was designed and manufactured in accordance with (Asabe, 2005) for use with an ElectroForce 3510 (TA Instruments, New Castle, Delaware) mechanical testing system to apply the bending load. For each fibula the total length and cross-section were measured (Asabe, 2005). The cross-section was determined by taking both the medial-lateral and anterior-posterior diameters at 5 locations along the elliptical cross-section and averaging the results. The diameter of an equivalent area circle was then computed and used to determine the support distance required to obtain a support length to bone diameter ratio greater than 10 as required by the ASABE standard (Asabe, 2005). Prior to loading, the fibula was placed on the supports and the location of the site under the center driver relative to the proximal end was marked and recorded. The center location was then prepared (Zdero et al., 2017b) and a rectangular strain rosette with a resistance of 350 \(\Omega\) (CEA-13-125UR-350, Intertechnology, Toronto, Ontario) was affixed with an M-Bond 200 Adhesive Kit (Intertechnology, Toronto, Ontario) and sealed with M-Coat A (Intertechnology, Toronto, Ontario), as shown in fig. 2a. A flat section of the tibia was similarly prepared and affixed with a strain rosette (CEA-13-125UR-350, Intertechnology, Toronto, Ontario) for use as a temperature compensation gauge, see fig. 2b (Micro-Measurements, 2014, 2015). Figure 1: Experimental testing jig mounted to the MTS with a prepared fibula resting on the supports of the three-point bending jig. The affixed strain rosette can be seen on the fibula surface opposite the center driver, which is mounted to the load cell attached to the MTS mover. 
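For clarity, and treating the cross-section as an ellipse with the averaged medial-lateral and anterior-posterior diameters \(\bar{d}_{ML}\) and \(\bar{d}_{AP}\) (an assumption consistent with the description above), the equivalent-area diameter used to size the support span follows from equating areas: \[\frac{\pi\,\bar{d}_{ML}\,\bar{d}_{AP}}{4}=\frac{\pi\,d_{eq}^{2}}{4}\;\Rightarrow\;d_{eq}=\sqrt{\bar{d}_{ML}\,\bar{d}_{AP}},\qquad\frac{L_{support}}{d_{eq}}>10.\]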
Each strain gauge within the rosette was connected to a National Instruments multi-channel strain data acquisition module using a two-wire quarter-bridge configuration with a temperature compensation gauge (Micro-Measurements, 2015; Zdero et al., 2017, 2017). The two-wire configuration was chosen due to the short lead length and the low lead resistance of 0.1 \(\Omega\) relative to the strain gauge resistance of 350 \(\Omega\). The fibula was then placed back onto the supports and the strain rosette was aligned with the driver, facing downward opposite the driver's point of contact. An initial preload of 5 N was used and the system was paused so the distance from the proximal and distal ends of the fibula to the supports could be measured safely. A compressive load was then applied in displacement control (Asabe, 2005) until the defined displacement limit or half of the expected failure load was reached, to ensure that loading could be repeated and plastic deformation did not occur. The expected failure load and displacement were calculated as per the ASABE standard using the measured cross-section for each fibula (Asabe, 2005). An estimated elastic modulus of 6 GPa was used for all specimens, which was determined by testing a single pilot specimen to failure (Asabe, 2005). From pilot testing it was also determined that an approximate tensile strength of 93 MPa gave reasonable predictions (Morgan et al., 2018). Once the load or displacement limit was reached, a constant displacement was held for 5 seconds before unloading in displacement control. Load and displacement were recorded throughout the loading cycle by the ElectroForce 3510 (TA Instruments, New Castle, Delaware) while strain data were recorded simultaneously. The loading procedure was performed a total of three times for each fibula and results were averaged across trials. ### Imaging Prior to thawing and dissection, clinical-quality qCT scans were acquired for each of the fresh-frozen porcine hind-limbs, using the acquisition and reconstruction parameters detailed in table 1. All images were taken with a Model 3 QCT Phantom (Mindways, Austin, Texas) present within the field of view. CT images were then segmented using thresholding, region growing, and manual editing with Mimics (Materialise, Leuven, Belgium). Figure 2: Strain rosettes for the fibula (a) and tibia (b) after preparing the bone surface, affixing the rosettes, soldering the terminal wires, and sealing the rosette. ### Partial Volume Correction Algorithm The CT images and associated binary segmentation masks were loaded into Python using a custom library for DICOM processing based on pydicom (Mason et al., 2022) and VTK (Schroeder et al., 2006). The set of interior voxels \(\mathbf{Q}\) was defined by performing binary morphological erosion using a 3 \(\times\) 3 \(\times\) 3 binary kernel with a square connectivity of 1 on the segmentation mask. The set of surface voxels \(\mathbf{S}\) was then defined as the set difference of the segmentation mask and \(\mathbf{Q}\). For each voxel x, a 26-connected neighbourhood \(\mathbf{P}(x)\) was defined, which contains all voxels connected by a face, edge, or corner. For each voxel x in \(\mathbf{S}\) a new HU value was calculated according to eqs. (1) to (5) with p = 2. 
\[HU(x)=\begin{cases}max(h(x),u(x))&\text{if }u(x)\neq 0,\\ h(x)&\text{if }u(x)=0\end{cases} \tag{1}\] where \[h(x)=\text{current HU value of voxel x} \tag{2}\] \[u(x)=\begin{cases}\frac{\sum w_{i}(x)\cdot HU_{i}}{\sum w_{i}(x)}&\text{if }d(x,x_{i})\neq 0\wedge\sum w_{i}(x)\neq 0,\\ \overline{HU_{i}}&\text{if }d(x,x_{i})=0\vee\sum w_{i}(x)=0\end{cases} \tag{3}\] \[w_{i}(x)=\begin{cases}\frac{1}{d(x,x_{i})^{p}}&\text{if }x_{i}\in\mathbf{Q}\wedge x_{i}\in\mathbf{P}(x),\\ 0&\text{if }x_{i}\notin\mathbf{Q}\lor x_{i}\notin\mathbf{P}(x)\end{cases} \tag{4}\] \[d(x,x_{i})=\text{distance between $x$ and $x_{i}$} \tag{5}\] The interpolation kernel was defined using a neighbourhood of voxels instead of a fixed radius due to the possibility of anisotropic voxel sizes in CT images with variable slice thickness. The connectivity of 26 for **P(x)** was chosen to balance minimizing discontinuities in the HU field in all 3 dimensions while still limiting the interpolation to a localized area due to the highly heterogeneous nature of bone. \begin{table} \begin{tabular}{l c c} \hline \hline **Parameter** & **Specimens** & **Value** \\ \hline CT Scanner & All & Toshiba Aquilion 64 \\ Institution & All & A64S WAVES VETERINARY HOSPITAL \\ Acquisition Mode & All & Helical CT \\ Beam Energy & All & 120 KVp \\ Slice Thickness & All & 1.0 mm \\ Exposure Time & All & 500 \\ Pixel Spacing (Resolution) & 5-11 & 0.488 \(\times\) 0.488 mm \\ & 3 & 0.461 \(\times\) 0.461 mm \\ & 4 & 0.406 \(\times\) 0.406 mm \\ Total Pixels (Matrix Size) & All & 512 \(\times\) 512 \\ Convolution Kernel & All & FC30 (Bone) \\ Tube Current & All & Auto-adjusted to specimen \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of CT Scan Acquisition metadata. Similarly, all voxels in **S** are assigned a weight of 0, due to the likelihood of being subject to PV effects. As a consequence of the weighting, the neighbourhood size, and eq. (1), any surface voxel x with no interior voxels in its neighbourhood **P(x)** will have no correction applied, as u(x) will equal 0. While this does mean that voxels in thin regions, such as the blade of a scapula, will not be corrected, it also means the method makes no assumption that bone material properties are smooth or continuous over volumes larger than **P(x)**, as adjacent thicker regions could be anatomically different or a considerable distance away. Finally, eq. (1) was formulated to prevent artificially lowering the HU value of surface voxels that are accurately segmented and not subject to PV effects by only ever increasing the HU value, as bone has the highest HU value amongst natural tissues and thus PV effects on bone surface voxels always decrease the voxel's intensity. After applying the PV correction method, the CT images were saved as DICOM images for use in established FE modelling workflows.
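To make the procedure concrete, a compact (and unoptimized) sketch of eqs. (1)-(5) using NumPy and SciPy is given below; it is not the released implementation, and the function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, generate_binary_structure

def partial_volume_correct(hu, mask, spacing, p=2):
    """Raise the HU value of surface voxels using inverse-distance-weighted
    interpolation from interior neighbours (sketch of eqs. 1-5).

    hu      : 3D array of Hounsfield units
    mask    : 3D boolean bone segmentation mask
    spacing : voxel size per array axis in mm; slices may be anisotropic
    """
    interior = binary_erosion(mask, generate_binary_structure(3, 1))  # Q (connectivity 1)
    surface = mask & ~interior                                        # S = mask \ Q
    spacing = np.asarray(spacing, dtype=float)
    corrected = hu.astype(float)
    shape = np.array(hu.shape)

    for idx in np.argwhere(surface):
        weights, values = [], []
        # 26-connected neighbourhood P(x): offsets in {-1, 0, 1}^3
        for off in np.ndindex(3, 3, 3):
            nb = idx + np.array(off) - 1
            if np.any(nb < 0) or np.any(nb >= shape) or np.all(nb == idx):
                continue
            nb = tuple(nb)
            if not interior[nb]:
                continue                       # eq. (4): only interior voxels weighted
            d = np.linalg.norm((np.array(nb) - idx) * spacing)
            weights.append(1.0 / d ** p)
            values.append(hu[nb])
        if weights:                            # eq. (3): IDW estimate u(x)
            u = np.dot(weights, values) / np.sum(weights)
            corrected[tuple(idx)] = max(hu[tuple(idx)], u)   # eq. (1): never lower HU
        # no interior neighbours (thin region): leave the voxel unchanged
    return corrected
```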
### Finite Element Modelling For each source CT image, a triangulated surface mesh was generated from the segmentation mask using Mimics (Materialise, Leuven, Belgium). The triangular surface meshes were then imported into SolidWorks (Dassault Systemes, Velizy-Villacoublay, France) along with the models of the rollers used as the supports and driver of the three-point bending test jig. A rigid body motion simulation was then performed to improve the fidelity of the computational model regarding how each fibula lay on the support rollers. The roller to roller distance was set to the experimentally measured support span length. The fibula surface mesh was positioned with the medial or lateral surface facing the driver, in accordance with the experimental set up for each specimen. The measurements to the proximal and distal ends of the fibula were scaled to account for the difference between the mesh geometry and the measured experimental geometry (bone length) and used to constrain the position of the fibula on the rollers. Finally, the preload step was simulated by applying a constant displacement of 10 mm/min to the driver until contact with the fibula was achieved. The resulting configuration of the fibula, support rollers, and driver for each specimen was then exported for use in creating the finite element meshes. An initial n-points registration and a subsequent iterative closest point registration were used to align the assembly containing the fibula, driver, and support rollers for each specimen with the source CT image. As the fibula surface mesh was unchanged, a registration error of 0 was achieved for each specimen. The triangulated surface mesh of the fibula was then imported into 3-Matic (Materialise, Leuven, Belgium), where it was remeshed to achieve a uniform edge length before generating a quadratic tetrahedral mesh. As the PV corrected and the source CT images share the same voxel architecture and segmentation masks, the quadratic tetrahedral mesh generated for each source CT image was also used for the corresponding PV corrected images. Material properties from both the source and PV corrected CT images were then assigned to the mesh using Mimics (Materialise, Leuven, Belgium), which implements an element-based MMM. HU values were first calibrated and converted to equivalent density according to the MindWays phantom-derived CT calibration curve (Mindways Software Inc., 2011). As the specimens were of varying skeletal maturity and there are no well-validated density-modulus relationships for porcine fibulae in the literature, we computationally derived specimen-specific density-to-modulus relationships according to the response surface methodology published by Eberle et al. (2013a). The response surface aimed to minimize the root-mean-square-error (RMSE) between the FE-computed and experimentally measured maximum principal strain at the strain gauge location and the whole bone stiffness. Furthermore, we adapted Eberle et al.'s method by sampling a greater number of points (i.e. xx points) in order to calculate our response surface. As well, we incorporated an inequality constraint to ensure the maximum elastic modulus for each specimen was not greater than 20 GPa, which was the maximum value reported in the literature for porcine bone (Feng et al., 2012). Finally, we reduced the lower bound for the power law coefficient A (from \(A\rho^{B}\)) to 300 MPa instead of 5000 MPa based on literature for tibia relationships (Grant et al., 2014; Snyder and Schneider, 1991). Response surface design and optimization were done using R and NLopt (Lenth, 2009; Johnson, 2007; Jones et al., 1993). As discussed in the introduction, work by Pakdel and Helgason has previously shown that PVC improves results and more accurately reflects reality. Thus, in order to isolate the effect of the PVC method we have developed and avoid other confounding factors, only one density-to-modulus relationship was derived for each specimen, and it was based exclusively on the PV corrected images. Specifically, the response surface method was only applied to simulations derived from PV corrected images, but the resulting 'optimal' density-to-modulus relationships were used for material property assignment for FE simulations derived from both the PV corrected and the uncorrected (i.e. raw) CT images.
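As an illustrative sketch only (the density values are placeholders, and the study imposed the 20 GPa limit as an optimization constraint rather than as a hard cap at evaluation time), the power-law mapping from calibrated density to elastic modulus can be written as:

```python
import numpy as np

def density_to_modulus(rho, a, b, e_max=20000.0):
    """Power-law density-to-modulus relationship E = A * rho^B (MPa), limited
    here to the 20 GPa (20000 MPa) upper bound used as a constraint in the
    response surface optimization."""
    return np.minimum(a * np.power(rho, b), e_max)

# hypothetical densities (g/cm^3) with the Table 2 coefficients for specimen 3
e_mpa = density_to_modulus(np.array([0.2, 0.8, 1.6]), a=12277.42, b=0.994193)
```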
After determination of the specimen-specific density-to-modulus relationship for each specimen, material properties were assigned to each mesh. To minimize averaging effects during the element-based material assignments that could reduce the effectiveness of the PVC algorithm, elastic moduli were grouped into 100 bins for trabecular bone and 100 bins for cortical bone. Finally, a Poisson's ratio of 0.3 was set for all material groups (Studders et al., 2020). Once material properties were assigned, FE model assemblies were imported into Abaqus (Dassault Systemes, Velizy-Villacoublay, France) to apply boundary conditions and solve. All of the rollers were modelled as discrete rigid elements as the stiffness of 304 stainless steel greatly exceeded that of the fibula. Each supporting roller was constrained to have zero displacement in all degrees of freedom except for rotation about the long axis of the roller, while the driver was constrained to have zero displacement in all degrees of freedom except along the line of action of the MTS mover, and the fibula was unconstrained. Contact pairs between the fibula and each of the supports as well as the fibula and the driver were created and assigned the coefficient of friction \(\mu=0.37\) (Lopez-Campos et al., 2018) using a tangential behavior. A load with magnitude equal to the average peak experimental load was then applied along the driver's line of action. The average peak load was used because, during the experimental dwell phase (i.e. after the peak was reached), the displacement and strain held relatively constant but the load showed a gradual decline due to bone stress relaxation. Although the bending tests were performed in displacement control, applying a load in the FE simulations was chosen because the dependent experimental variable was principal strain, which would be the same regardless of whether the model was derived from the raw or PV corrected images if a constant displacement were applied. Automatic stabilization in Abaqus was enabled to apply damping during the initial solving step to improve stability when resolving contact. The damping was gradually reduced over the first solver step such that it was no longer applied in subsequent solver steps. It was determined that the models could be solved without damping but converged to a solution much more slowly, while still reaching the same result as with damping. After solving, strain data were extracted from mesh elements in the location and shape of the strain rosettes, and the same elements were selected for the original and PV corrected simulations. The maximum principal strain was calculated as the mean maximum principal strain of the elements in the selected area. Mesh convergence was investigated using an FE simulation derived from the original CT images for specimen 7 and plotting the mean maximum principal strain against the mean element edge length, see fig. 3. The minor oscillation in fig. 3 can be attributed to the variable topology of each mesh making it impossible to select the same elements to compute the mean maximum principal strain for each mesh. 
Similarly, selecting elements that had an area equal to that of the strain rosette was not always possible for meshes with a larger average element edge length. Given the trend in fig. 3, a target maximum edge length of 1.0 mm was chosen to balance computational cost and accuracy. This result also agreed with previous findings in the literature that FE meshes achieve good convergence when the average element edge length is approximately equal to the CT image slice thickness (Perillo-Marcone et al., 2003). Figure 3: Maximum principal strain from different mesh edge lengths. ### Statistical Comparison Between Experimental Testing and Finite Element Analysis The experimental mean maximum principal strain was calculated over 100 data points from the time the peak load for each specimen was reached in each of the three loading tests. An average of the experimental mean maximum principal strain over the three loading tests was then calculated and used for comparison with the computational strain results. The relative error between the experimental and computational strain was then calculated. After checking normality using the Shapiro-Wilk test and confirming the assumption of equal variance with Levene's test, a single-tailed paired t-test with \(h_{0}:d=0\) and \(h_{1}:d>0\), where \(d=\epsilon_{Source}-\epsilon_{PVC}\) and \(\alpha=0.05\), was performed to determine if FE simulations derived from PV corrected CT images had lower relative error than FE simulations derived from the raw CT images. Analysis was performed using the scipy.stats and researchpy packages in Python (Virtanen et al., 2020; Bryant, 2018). ## 3 Results ### Specimen-Specific Density-to-Modulus Relationships It was found that across the eight specimens the optimized specimen-specific density-to-modulus relationships calculated using the response surface method had highly variable constants, which agreed with qualitative observations indicating that the skeletal maturity of the specimens was highly variable (Table 2). ### Strain The maximum principal strains from the experimental testing, the original image derived simulations, and the PVC image derived simulations are tabulated in Table 3, along with the relative differences between the simulated and experimental data. The relative differences between the two methods are graphically shown in Figure 4. It can be seen that the strain error compared to the experimental data is lower for all specimens when using the PVC image derived versus the original image derived models, with the difference between the models averaging 6% (range: 3 - 12%). This difference in relative strain error was found to be statistically significant (p\(<\)0.05, see Table 4). Descriptive statistics regarding data normality and equality of variance for all data can be seen in Table 5. Figure 5 illustrates that the strain from the PVC models produces a slightly more linear relationship to the experimental strain (r=0.675) compared to the strain from the original models (r=0.616). However, the linear fit for both types of model has a slope of \(\approx\)0.4 when it would be expected to be 1. 
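A minimal sketch of the paired comparison described in the statistical methods above and of the linear fits just mentioned (illustrative only; the error and strain arrays are placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

def compare_models(err_raw, err_pvc, strain_exp, strain_fe, alpha=0.05):
    """Paired, one-tailed t-test of H0: d = 0 vs H1: d > 0 with d = err_raw - err_pvc,
    preceded by Shapiro-Wilk normality and Levene equal-variance checks, plus the
    linear fit of computational against experimental strain (slope, r)."""
    normality = stats.shapiro(err_raw), stats.shapiro(err_pvc)
    equal_var = stats.levene(err_raw, err_pvc)
    t, p_two_sided = stats.ttest_rel(err_raw, err_pvc)        # two-sided by default
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    fit = stats.linregress(strain_exp, strain_fe)              # fit.slope, fit.rvalue
    return normality, equal_var, (t, p_one_sided, p_one_sided < alpha), fit
```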
\begin{table} \begin{tabular}{c c c} \hline \hline **Specimen** & **A (MPa)** & **B** \\ \hline 3 & 12277.42 & 0.994193 \\ 4 & 13684.27 & 0.88775 \\ 5 & 11114.34 & 1.295186 \\ 6 & 10306.96 & 1.441808 \\ 8 & 12756.7 & 1.080887 \\ 9 & 12761.89 & 1.091541 \\ 10 & 10975.18 & 1.461723 \\ 11 & 9010.101 & 1.748485 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of computationally derived specimen-specific density-modulus relationship coefficients. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Specimen** & \multicolumn{3}{c}{**Max principal strain**} & \multicolumn{3}{c}{**Relative Difference**} \\ & **Experimental** & **Raw** & **PVC** & **Raw** & **PVC** \\ \hline 3 & 0.002303 & 0.00264308 & 0.002559104 & 15\% & 11\% \\ 4 & 0.002225 & 0.002892817 & 0.002816715 & 30\% & 27\% \\ 5 & 0.003236 & 0.003446681 & 0.003261663 & 6\% & 1\% \\ 6 & 0.002425 & 0.002853342 & 0.002705169 & 18\% & 12\% \\ 8 & 0.00207 & 0.003031263 & 0.002884193 & 46\% & 39\% \\ 9 & 0.001986 & 0.002421083 & 0.002341355 & 22\% & 18\% \\ 10 & 0.00189 & 0.002766969 & 0.002595284 & 46\% & 37\% \\ 11 & 0.001876 & 0.003101739 & 0.002877716 & 65\% & 53\% \\ \hline & & & **Mean:** & 31.13\% & 24.75\% \\ \hline \hline \end{tabular} \end{table} Table 3: Maximum principal strain from experimental testing, original simulation, and PVC simulation, and the relative differences between the simulated and experimental data. Figure 4: Relative difference between simulated and experimental maximum principal strain. \begin{table} \begin{tabular}{c c c c c c c} & **N** & **Mean** & **Variance** & **SD** & **SE** & **95\% Conf. Interval** \\ \hline **PVC** & 8 & 0.247475 & 0.031085 & 0.176309 & 0.062334 & [0.1001,0.3949] \\ **Raw** & 8 & 0.311274 & 0.039712 & 0.199278 & 0.070456 & [0.1447,0.4779] \\ **PVC - Raw** & - & -0.063798 & & 0.029546 & 0.010446 & [-0.0885,-0.0391] \\ \hline \end{tabular} \end{table} Table 4.: Paired T-Test Results for Strain Error Figure 5.: Linear correlation of experimental strain compared to computational determined strain from the raw and partial volume corrected simulations. \begin{table} \begin{tabular}{c c c c c c} & _Strain Error_ & _Modulus Error_ & _RMSE_ \\ \cline{2-6} & **Raw** & **PVC** & **Raw** & **PVC** & **Raw** & **PVC** \\ \hline **Shapiro-Wilks** & 0.6075 & 0.8324 & 0.8976 & 0.8652 & 0.5273 & 0.8265 \\ **Levene’s Test** & 0.7044 & & 0.757 & & 0.9456 \\ \hline \end{tabular} \end{table} Table 5.: Descriptive statistics of the sets of relative differences. ### Modulus In addition to comparing the models' ability to replicate the maximum principal strain, the whole bone moduli of each specimen were compared between the experimental data and the PVC-derived and original image-derived models. In this case, the modulus calculated from models using the original images more accurately matched the experimental data with an average percent error of 15.31% while the PVC-derived models had 6% higher error (Figure 6). This difference in relative modulus error was found to be statistically significant (p\(<\)0.05, see Table 6). Figure 7 illustrates that the modulus from the original image-derived models produces a slightly more linear relationship to the experimental modulus (r=0.728) compared to the modulus from the PVC models (r=0.703). However, the linear fit for both types of model has a slope of \(\approx\)0.7 when it would be expected to be 1. \begin{table} \begin{tabular}{c c c c c c} & **N** & **Mean** & **Variance** & **SD** & **SE** & **95\% Conf. 
Interval** \\ \hline **PVC** & 8 & 0.212662 & 0.023474 & 0.153211 & 0.054168 & [0.0846, 0.3407] \\ **Raw** & 8 & 0.153135 & 0.019412 & 0.139328 & 0.04926 & [0.0367, 0.2696] \\ **PVC - Raw** & - & 0.059526 & & 0.02235 & 0.007902 & [0.0408, 0.0782] \\ \hline **Difference** & **PVC - Raw \(\neq\) 0** & **PVC - Raw \(<\) 0** & **PVC - Raw \(>\) 0** & \\ **P-Value** & 0.000134 & 0.999933 & 0.000067 & \\ \hline \end{tabular} \end{table} Table 6: Paired T-Test Results for Modulus Error Figure 6: Relative difference between simulated and experimental whole bone elastic modulus. ### RMSE Assessing the RMSE percent relative error that combines the relative strain and modulus errors (Figure 8), it can be seen that the PVC-derived models produce slightly lower error compared to the experimental results, averaging 23.20%. The models derived from the original images produced an RMSE that was 2% higher than that found with the PVC-derived models. This difference in RMSE was found to be statistically significant (p\(<\)0.05, see Table 7). \begin{table} \begin{tabular}{c c c c c c} & **N** & **Mean** & **Variance** & **SD** & **SE** & **95\% Conf. Interval** \\ \hline **PVC** & 8 & 0.232011 & 0.0266 & 0.163094 & 0.057662 & [0.0957, 0.3684] \\ **Raw** & 8 & 0.251944 & 0.025785 & 0.160577 & 0.056773 & [0.1177, 0.3862] \\ **PVC - Raw** & - & -0.019933 & - & 0.01732 & 0.006124 & [-0.0344, -0.0055] \\ \hline **Difference** & **PVC - Raw \(\neq\) 0** & **PVC - Raw \(<\) 0** & **PVC - Raw \(>\) 0** & & \\ **P-Value** & 0.013957 & 0.006979 & 0.993021 & & \\ \hline \end{tabular} \end{table} Table 7: Paired T-Test Results for RMSE Figure 7: Linear correlation of experimental elastic modulus compared to computationally determined modulus from the raw and partial volume corrected simulations. ## 4 Discussion This work developed a flexible and easy-to-use partial volume correction algorithm to overcome blurring and PV effects at cortical bone boundaries. The algorithm works directly on the CT images themselves and yields corrected CT images, rather than making the correction during 3D mesh material assignment. This image-based approach means that the algorithm can be applied to a range of applications beyond just FE modeling, including deep learning and medical image recognition. In the context of its use in FE modeling, which was our main focus, yielding corrected CT images means that this algorithm can be used with all existing CT-derived FE creation workflows and material mapping strategies. Furthermore, because the algorithm does not make assumptions about morphology, it has the potential for use across a range of bones, which was a limitation of previously published works. Specifically, the algorithm will only ever increase the density of surface voxels (i.e. suspected cortical bone) and thus it will never make poorly segmented cortical bone even worse. For instance, in some cases a poorly segmented image will identify soft tissue as surface cortical bone, and in previous methods (e.g. Pakdel et al.) the presence of the soft tissue would cause the interpolated value for neighbouring true cortical bone voxels to decrease, which exacerbates the issue of partial volume effects. Finally, a tangential result of this work was the further validation of previously described methods for generating computationally derived specimen-specific density-modulus relationships and a demonstration that additional constraints, such as a maximum elastic modulus, may be incorporated to yield more realistic results.
Qualitative review of the corrected images clearly demonstrates that the developed PVC algorithm was successful at correcting the PV effects and in fact had a consistent corrective effect across all specimens. For both the strain and whole bone modulus results, the level of error produced by either type of model was highly variable, ranging from as little as 1% up to 65%. This large range can be attributed to the difficulty in extracting model data that exactly matches the experimental setup and to the highly variable skeletal maturity that was qualitatively observed and confirmed by the large range in density-to-modulus relationships calculated (see Table 2). Figure 8: Relative difference between simulated and experimental root-mean-square-error as a combined measure of strain and modulus error. Partial volume correction resulted in improved surface strain values that more closely matched the experimental results, with a 6.4% improvement compared to the results produced with the uncorrected, original models. This level of improvement in strain is similar to the mean improvement of 8% reported by Helgason et al. (2008), although they reported a mean increase in modulus error of 10%, which was greater than the mean increase in modulus error of 6% observed with our method. Despite this improvement, the strain error calculated using the PVC models still averaged 25%, which is attributable to the difficulty in extracting strain values from the FE models that exactly match the experimental strain gauges' geometry and location, as well as to the need to create a custom specimen-specific density-to-modulus relationship rather than being able to rely on previously validated relationships. Despite these complications, the improvement in strain error from the PVC models was found to be statistically significant. In addition to the discrete strain error values for each specimen, it is also useful to assess how the computational and experimental strains correlate across all specimens. In this analysis we found that neither the original nor the PV-corrected model strains achieved the desired 1-to-1 relationship with experimental strain, and both produced only a moderately strong correlation. These findings can be attributed to the difficulty in extracting strain values from the FE models that precisely match the experimental strain gauge location, which can cause the extracted value to be higher or lower than it would be if precisely matched; this increases the randomness of the value and thus affects the linear fit and correlation. As well, the highly variable skeletal maturity of the specimens, resulting in very different density-to-modulus relationships (see Table 2), would also affect these results. Application of partial volume correction also resulted in increases in the calculated whole bone modulus of each specimen, as one would expect when the surface cortical bone layer HU values are increased. However, the specimen-specific density-to-modulus relationships resulted in consistent over-estimates of the whole bone modulus for all specimens, even when using the uncorrected, original images. As a result, the PVC-derived FE models actually had higher error compared to the uncorrected models, by an average of 6%. This unexpected negative effect of the PV correction is most likely caused by the response surface method used to derive the specimen-specific density-to-modulus relationships, which tried to minimize both the strain error and the whole bone modulus error, two quantities that change in opposing ways.
For instance, as surface modulus increases whole bone modulus increases but surface strain decreases. Considering the linear fit and correlational findings for whole bone modulus, it can be seen that both types of models produce results that are closer to the expected pattern as compared to the strain results but there was minimal difference between the two types of models. Specifically, the linear relationship had a slope of approximately 0.7, which more closely matched our expectation of a 1-to-1 relationship than we observed for strain results. As well, we observed correlations of approximately 0.7 which is nearing a strong relationship. It is believed that these results would be even better if it wasn't for the same complicating factors noted above for the strain related findings. Considering both the strain and modulus errors together through the RMSE, it can be seen that the PV corrected models did produce a 2% improvement over the results for the original models compared to the experimental results. This overall improvement is minimized by the overestimate of whole bone modulus that the specimen-specific density-to-modulus relationships produced. **Impact of the Work** The developed algorithm is expected to produce a number of impacts due to its improvements and differences in approach compared to previously published methods. First, this code is standalone and works on the images themselves, which allows for the method to easily be integrated into existing workflows regardless of the preferred software for segmentation, meshing, and material assignment. This will provide greater user freedom to conduct their research compared to previously published PVC methods (Helgason, Pakdel) that were integrated with specific material mapping strategies or packaged as part of dedicated software (MITK-GEM), thus requiring significant changes to existing workflows. Second, the described method requires less user input than in previously published methods such as Pakdel et al.'s that required deconvolution prior to using the method, which in turn required the user to have a significant degree of expertise/knowledge in order to optimize the result. The present method demonstrated that it is possible to perform a partial volume correction directly on the CT data while improving the accuracy of the results with clinical grade CT scans that have not undergone deblurring or deconvolution as recommended by Pakdel prior to applying their PVC method. Third, whereas previous methods have either not been open-source or difficult to modify due to their integration with other packages, this method will be released in a GitHub repo that provides the code for performing the PVC on any CT images formatted as a DICOM stack ([https://github.com/adbeagley/pvcpy](https://github.com/adbeagley/pvcpy)). The code will be released under an open-source license that allows other researchers to adopt it and/or iterate on the methods. **Limitations** As with any computational method, the algorithm developed here does have limitations. First, in the case of a poor segmentation which has poorly identified the bone surface such that the surface voxels and their adjacent interior voxels are actually located in regions of soft tissue, the method will be unable to accurately correct the HU intensity as the interior voxels do not represent cortical bone that has been less affected by partial volume artifacts. 
Second, this work has only characterized the effect of the partial volume correction algorithm with a single type of mesh (i.e. 10-node tetrahedral volume elements) and material mapping strategy (i.e. the element-based method used by Mimics). Previous literature has shown that the element type and material mapping strategy do influence the effectiveness of PVC; however, those methods applied the correction to the meshes, and thus it is expected that meshing choices would affect their results. Conversely, this PVC method works on the images, so although later meshing choices may affect the final results, the correction itself should not be affected by them. Third, there is currently no well-validated density-to-modulus relationship for porcine bone, especially in circumstances of variable skeletal maturity as observed in this study. Therefore, computationally derived specimen-specific density-to-modulus relationships were determined based on the PVC images rather than using an empirical relationship. Use of an empirical relationship would have been preferable, as it would have allowed us to characterize the PVC algorithm's ability to match true bone density rather than being limited to characterizing the algorithm's relative improvement compared to the uncorrected images. **Future Work** To further validate this algorithm, future work will focus on assessing the effect of pairing the PV correction with multiple material assignment strategies, including both element- and node-based methods. This will provide greater confidence that the new method is in fact agnostic to the material assignment method used. As well, in these future works the method will be applied to human bones with well-validated empirical density-to-modulus relationships, which will enable a direct assessment of the method's ability to correct surface voxel density to the true values. **Conclusion** This work has developed and preliminarily validated a partial volume correction algorithm that works directly on CT images, is easy to use, and can be integrated with any existing FE workflow. As well, the method can be used in other applications that work with CT images and would benefit from the removal of PV effects, including deep learning-based semantic segmentation.
2308.00986
Ranking species in complex ecosystems through nestedness maximization
Identifying the rank of species in a social or ecological network is a difficult task, since the rank of each species is invariably determined by complex interactions established with other species. Simply put, the rank of a species is a function of the ranks of all other species through the adjacency matrix of the network. A common system of ranking is to order species in such a way that their neighbours form maximally nested sets, a problem called the nested maximization problem (NMP). Here we show that the NMP can be formulated as an instance of the Quadratic Assignment Problem, one of the most important combinatorial optimization problems, widely studied in computer science, economics, and operations research. We tackle the problem with Statistical Physics techniques: we derive a set of self-consistent nonlinear equations whose fixed point represents the optimal rankings of species in an arbitrary bipartite mutualistic network, and which generalize the Fitness-Complexity equations widely used in the field of economic complexity. Furthermore, we present an efficient algorithm to solve the NMP that outperforms state-of-the-art network-based metrics and genetic algorithms. Finally, our theoretical framework may be easily generalized to study the relationship between ranking and network structure beyond pairwise interactions, e.g. in higher-order networks.
Manuel Sebastian Mariani, Dario Mazzilli, Aurelio Patelli, Flaviano Morone
2023-08-02T07:40:39Z
http://arxiv.org/abs/2308.00986v1
# Ranking species in complex ecosystems through nestedness maximization ###### Abstract Identifying the rank of species in a social or ecological network is a difficult task, since the rank of each species is invariably determined by complex interactions stipulated with other species. Simply put, the rank of a species is a function of the ranks of all other species through the adjacency matrix of the network. A common system of ranking is to order species in such a way that their neighbours form maximally nested sets, a problem called nested maximization problem (NMP). Here we show that the NMP can be formulated as an instance of the Quadratic Assignment Problem, one of the most important combinatorial optimization problem widely studied in computer science, economics, and operations research. We tackle the problem by Statistical Physics techniques: we derive a set of self-consistent nonlinear equations whose fixed point represents the optimal rankings of species in an arbitrary bipartite mutualistic network, which generalize the Fitness-Complexity equations widely used in the field of economic complexity. Furthermore, we present an efficient algorithm to solve the NMP that outperforms state-of-the-art network-based metrics and genetic algorithms. Eventually, our theoretical framework may be easily generalized to study the relationship between ranking and network structure beyond pairwise interactions, e.g. in higher-order networks. ## I Introduction Experience reveals that species forming complex ecosystems are organized in hierarchies. The ranks of such species, namely their position in the hierarchy, are functions of the interactions encoded in the adjacency matrix of the ecological network. Under this assumption, the task of ranking species can be cast in the problem of finding a suitable permutation of the rows and columns of the adjacency matrix, and this problem is, fundamentally, a combinatorial one. Ranking rows and columns of the adjacency matrix has revealed the existence of nested structures: neighbors of low rank nodes are subsets of the neighbors of high rank nodes [1; 2; 3]. For example, nested patterns are found in the world trade, in which products exported by low-fitness countries constitute subset of those exported by high-fitness countries [4]. In fragmented habitats, species found in the least hospitable islands are a subset of species in the most hospitable islands [1]. Nestedness in real world interaction networks has captured cross-disciplinary interest for three main reasons. First, nested patterns are ubiquitous among complex systems, ranging from ecological networks [1; 2] and the human gut microbiome [5] to socioeconomic systems [4; 6] and online social media and collaboration networks [7; 8]. Second, the ubiquity of nested patterns have triggered intensive debates about the reasons behind the emergence of nestedness in mutualistic systems [9; 10; 11; 12] and socioeconomic networks [6; 8]. Third, nestedness may have profound implications for the stability and dynamics of ecological and economic communities: highly-nested rankings of the nodes have revealed vulnerable species in mutualistic networks [13] and competitive actors in the world trade [14; 15]. The ubiquity of nestedness and its implications in shaping the structure of biotas have motivated the formulation of the nestedness maximization problem. This problem can be stated in the following way: find the permutation (i.e. 
ranking) of the rows and columns of the adjacency matrix of the network resulting in a maximally nested layout of the matrix elements. Originally introduced by Atmar and Patterson [1], the problem has been widely studied in ecology, leading to several algorithms for measuring the nestedness of a matrix, e.g. the popular nestedness temperature calculator and its variants [1; 16; 17; 18]. Yet many of these methods do not attempt to optimize the actual cost of a nested solution, but exploit some simple heuristic that is deemed to be correlated with nestedness. Other methods, e.g. BINMATNEST [16], do optimize a nestedness cost following a genetic algorithm, but lack the theoretical insight contained in an analytic solution to the problem. More generally, we lack a formal theory to derive the degree of nestedness of a network from the structure of the adjacency matrix and the ranking of the nodes. Here, we map the nestedness maximization problem onto the Quadratic Assignment Problem [19], thereby tackling directly the problem of finding the optimal permutation of rows and columns that maximizes the nestedness of the adjacency matrix. In our formulation, the degree of nestedness is measured by a cost function over the space of all possible row and column permutations, whose global minimum corresponds to a matrix layout having maximum nestedness. Roughly speaking, the cost function is designed to reward permutations that move the maximum number of non-zero elements of the matrix into the upper left corner and to penalize those that move non-zero elements into the bottom right corner. Next, we set up a theoretical framework which allows us to obtain the mean field solution to the NMP as a leading order approximation and, in principle, to also calculate next-to-leading order corrections. ## II Problem formulation We consider bipartite networks where nodes of one kind, representing for example plants indexed by a variable \(i=1,...,N\), can only be connected with nodes of another kind, e.g. pollinators indexed by another variable \(a=1,...,M\), as seen in Fig. 1a. We denote by \(A_{ia}\) the element of the network's \(N\times M\) adjacency matrix: \(A_{ia}\neq 0\) if \(i\) and \(a\) are connected, and \(A_{ia}=0\) otherwise. Besides connectivity, the adjacency matrix encodes the interaction strength between nodes such that whenever \(i\) and \(a\) are connected, the strength of their interaction is \(A_{ia}=w_{ia}>0\). A ranking of the rows is represented by a permutation of the integers \(\{1,2,...,N\}\), denoted \(r\equiv\{r_{1},r_{2},...,r_{N}\}\); a ranking of the columns is represented by a (different) permutation of the integers \(\{1,2,...,M\}\), denoted \(c\equiv\{c_{1},c_{2},...,c_{M}\}\). More precisely, the \(r\) sequence arranges rows in ascending order of their ordinal rankings \(r_{i}\) such that row \(i\) is ranked higher than row \(j\) if \(r_{i}<r_{j}\). Similarly, the \(c\) sequence arranges columns such that column \(a\) ranks higher than column \(b\) if \(c_{a}<c_{b}\). To model the problem, one more concept is needed: network nestedness. Nestedness is the property whereby if \(j\) ranks lower than \(i\), then the neighbors of \(j\) form a subset of the neighbors of \(i\), as illustrated in Fig. 1b. Different rankings, i.e. different sequences \(r\) and \(c\), produce different nested patterns, that is, nestedness is a function of the rankings. Therefore, any cost (energy) function that seeks to quantify matrix nestedness must be a function of the rankings \(r\) and \(c\).
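For a given pair of rankings, the subset property defined above can be checked directly. The following minimal sketch (Python; the function name and the toy matrix are our own illustrative choices, not part of the original analysis) tests whether a ranking pair leaves a binary matrix perfectly nested:

```python
import numpy as np

def is_perfectly_nested(A, r, c):
    """Check the subset property for a binary N x M matrix A under rankings r, c
    (permutations of 1..N and 1..M; rank 1 = top). Returns True if every
    lower-ranked row (column) has neighbours that are a subset of those of
    every higher-ranked row (column)."""
    rows = np.argsort(r)                      # row indices from rank 1 downwards
    cols = np.argsort(c)
    # Checking consecutive pairs is enough, since subset containment is transitive.
    for hi, lo in zip(rows[:-1], rows[1:]):
        if np.any(A[lo] > A[hi]):             # lo has a link that hi lacks
            return False
    for hi, lo in zip(cols[:-1], cols[1:]):
        if np.any(A[:, lo] > A[:, hi]):
            return False
    return True

# a 3 x 3 perfectly nested toy matrix and the trivial rankings
A_toy = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0]])
print(is_perfectly_nested(A_toy, r=np.array([1, 2, 3]), c=np.array([1, 2, 3])))  # True
```

Real networks typically fail this strict all-or-nothing check, which is precisely why a graded cost function over the rankings is needed.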
The simplest energy function that does the job, aside from trivial cases (see Supplementary Information Sec. VI), is \[E(r,c)=\sum_{i=1}^{N}\sum_{a=1}^{M}A_{ia}r_{i}c_{a}. \tag{1}\] The product \(A_{ia}r_{i}c_{a}\) penalizes strong interactions between low-rank nodes, since they contribute a large amount to the cost function; thus, low rank nodes typically interact weakly. Strong interactions are only allowed between high rank nodes, because when \(A_{ia}\) is large the product \(A_{ia}r_{i}c_{a}\) can be made small by choosing \(r_{i}\) and \(c_{a}\) to be small. Furthermore, high rank nodes can have moderate interactions with low rank nodes, because the product \(r_{i}A_{ia}c_{a}\) can still be relatively small when \(r_{i}\) is large and \(c_{a}\) is small (or vice versa) provided \(A_{ia}\) is not too large (hence the name 'moderate' interaction). The assumptions of our model are relevant to diverse scenarios where nestedness has been observed. Figure 1: **Modeling of the Nested Maximization Problem.** **a**, A bipartite network models the interactions between, e.g., plants \(i\), represented by purple circles, and pollinators \(a\), represented by cyan squares, through the adjacency matrix \(A\). The interaction is mutualistic, i.e. \(A_{ia}=1>0\) if \(i\) interacts with \(a\) and \(A_{ia}=0\) otherwise. **b**, A nested network has a hierarchical structure wherein the neighbors of low rank nodes (the specialist species at the bottom) are a subset of the neighbors of high rank nodes (the generalists at the top). The rank of a node is encoded in the variables \(r_{i}\) (for plants) and \(c_{a}\) (for pollinators). Top rank nodes have \(r=c=1\), while bottom ones have \(r=c=4\). The adjacency matrix of a nested network shows a peculiar pattern with all non-zero entries clustered in the upper left corner. **c**, Maximizing network nestedness amounts to minimizing the cost function \(E(r,c)\) over the ranking vectors \(r\) and \(c\), which, in turn, is equivalent to optimizing the cost \(E(P,Q)\) with respect to the permutation matrices \(P\) and \(Q\). The optimal permutation matrices bring the adjacency matrix to its maximally nested form \(P^{t}AQ=A_{\text{nested}}\), which is complementary to the layout of matrix \(B\). In bipartite networks of countries connected to their exported products, we could interpret \(r_{i}\) as the fitness of country \(i\) and \(c_{a}\) as the inverse of the complexity of product \(a\). In this scenario, high-energy links \(r_{i}A_{ia}c_{a}\) represent the higher barriers faced by underdeveloped countries to produce and export sophisticated products [4], whereas low-energy links represent competitive countries exporting ubiquitous products. In mutualistic ecological networks, high-energy links represent the higher extinction risk for specialist pollinators to be connected with specialist plants, whereas low-energy links represent connections within the core of generalist nodes [2], as depicted in Fig. 1b. With these definitions in hand, it should be clear that to maximize nestedness, we have to minimize the energy function in Eq. (1). More precisely, nestedness maximization is the mathematical optimization problem in which we seek to find the optimal sequences \(r^{*}\) and \(c^{*}\) that minimize the energy function, i.e. \(\min_{r,c}E(r,c)=E(r^{*},c^{*})\). Since the sequence \(r\) is a permutation of the ordered sequence \(\{1,2,...,N\}\), we can always write \(r_{i}=\sum_{n=1}^{N}P_{in}n\), where \(P\) is a \(N\times N\) permutation matrix.
Similarly, we can write \(c_{a}=\sum_{m=1}^{M}Q_{am}m\) where \(Q\) is a \(M\times M\) permutation matrix. Therefore, the energy function, considered as a function of the permutation matrices \(P\) and \(Q\), can be rewritten in the form \[E(r,c)=E(P,Q)=\text{Tr}\big{(}P^{t}AQB^{t}\big{)}\, \tag{2}\] where \(B\) is a \(N\times M\) matrix with entries \(B_{ia}=ia\), as shown in Fig. 1c. In this language, the NMP is simply the problem of finding the permutations \(P^{*}\) and \(Q^{*}\) that minimize the energy function given by Eq. (2), which mathematically reads \[(P^{*},Q^{*})=\underset{P,\ Q}{\arg\min}\,E(P,Q). \tag{3}\] The geometric meaning of the optimal permutations \(P^{*}\) and \(Q^{*}\) is clear if we apply them to the adjacency matrix as \(P^{t}AQ=A_{\text{nest}}\) in that the nested structure in \(A_{\text{nest}}\) is visually manifest, as schematized in Fig. 1c. The optimization problem defined by Eqs. (2) and (3) can be recognized as an instance of the Quadratic Assignment Problem (QAP) in the Koopmans-Beckmann form [19], one of the most important problem in combinatorial optimization, that is known to be NP-hard. The formal mathematical mapping of the NMP onto an instance of the QAP represents our first most important result. Having formulated the NMP in the language of permutation matrices, we move next to solve it using a Statistical Physics approach. ## III Solving the NMP with statistical physics Our basic tool to study the NMP is the partition function \(Z(\beta)\) defined by \[Z(\beta)=\sum_{P,\ Q}e^{-\beta E(P,Q)}\, \tag{4}\] where \(\beta\) is an external control parameter, akin to the 'inverse temperature' in the statistical physics language. The partition function \(Z(\beta)\) provides a tool to determine the global minimum of the energy function via the limit \[E(P^{*},Q^{*})=-\lim_{\beta\rightarrow\infty}\frac{1}{\beta}\ln Z(\beta) \tag{5}\] Calculating the partition function may seem hopeless, since it requires to evaluate and sum up \(N!M!\) terms. Nonetheless, the calculation is greatly simplified in the limit of large \(\beta\), since we can evaluate \(Z(\beta)\) via the steepest descent method. The strategy consists of two main steps. The first step is to work out an integral representation of \(Z(\beta)\) of the form \[Z(\beta)=\int DXDY\ e^{-\beta F(X,Y)}\, \tag{6}\] where the integral is over the space of \(N\times N\) doubly-stochastic (DS) matrices \(X\) and \(M\times M\) DS matrices \(Y\), that converge onto permutation matrices \(P\) and \(Q\) when \(\beta\rightarrow\infty\); and \(F(X,Y)\) is an "effective cost function" that coincides with \(E(P,Q)\) for \(\beta\rightarrow\infty\). The second step is to find the stationary points of \(F(X,Y)\) by zeroing the derivatives \(\partial F/\partial X=\partial F/\partial Y=0\), resulting in a set of self-consistent equations for \(X\) and \(Y\), called saddle point equations. All steps of the calculation are explained in great detail in Supplementary Information VII. The resulting saddle point equations are given by \[\begin{split} X_{ij}&=u_{i}\exp\Big{[}-\beta\big{(} AYB^{t})_{ij}\Big{]}v_{j},\\ Y_{ab}&=\mu_{a}\exp\Big{[}-\beta\big{(}A^{t}XB)_{ ab}\Big{]}\nu_{b}\,\end{split} \tag{7}\] where \(u,v\) are \(N\)-dimensional vectors and \(\mu,\nu\) are \(M\)-dimensional vectors determined by imposing that all row and column sums of \(X\) and \(Y\) are equal to 1. At this point we can exploit the specific form of matrix \(B\), i.e. \(B_{ia}=ia\), to further simplify Eqs. (7). 
Specifically, we define the "stochastic" rankings \(\rho_{i}\) and \(\sigma_{a}\) as \[\rho_{i}=\sum_{k=1}^{N}X_{ik}\ k\,\quad\sigma_{a}=\sum_{b=1}^{M}Y_{ab}\ b\, \tag{8}\] whereby we can cast Eqs. (7) in the following vectorial form (details in Supplementary Information VII) \[\begin{split}\rho_{i}&=\frac{\sum_{k}k\ v_{k}\ e^{- \beta k\sum_{a}A_{ia}\sigma_{a}}}{\sum_{k}v_{k}\ e^{-\beta k\sum_{a}A_{ia}\sigma_ {a}}}\,\\ \sigma_{a}&=\frac{\sum_{c}c\ v_{c}\ e^{-\beta c\sum_{ i}A_{ia}\rho_{i}}}{\sum_{c}\nu_{c}\ e^{-\beta c\sum_{i}A_{ia}\rho_{i}}}\, \end{split} \tag{9}\] where the normalizing vectors \(v\) and \(\nu\) satisfy \[\begin{split}\frac{1}{v_{j}}&=\sum_{i}\Big{[}\sum_{ k}\ v_{k}\ e^{-\beta(k-j)\sum_{a}A_{ia}\sigma_{a}}\Big{]}^{-1}\,\\ \frac{1}{\nu_{b}}&=\sum_{a}\Big{[}\sum_{c}\ \nu_{c}\ e^{- \beta(c-b)\sum_{i}A_{ia}\rho_{i}}\Big{]}^{-1}\.\end{split} \tag{10}\] Equations (9) and (10) represent our second most important result and, when interpreted as iterative equations, provide a simple algorithm to solve the NMP, whose implementation is discussed in detail in Supplementary Information VIII. Note that \(\rho\) and \(\sigma\) converge to the the actual ranking \(r\) and \(s\) for \(\beta\rightarrow\infty\). However, in practice, we solve Eqs. (9) and (10) iteratively at finite \(\beta\). Once we reach convergence, we estimate \(r\) and \(s\) by simply sorting the entries of \(\rho\) and \(\sigma\). We observe that larger values of \(\beta\) give better results, i.e., lower values of the cost \(E(r,s)\), as seen in Fig. 2a. A full discussion of convergence and bounds of our algorithm will be published elsewhere. Here, we test its performance by applying it to many real mutualistic networks and show that we obtain better results than state-of-the-art network metrics and genetic algorithms, as discussed next. ## IV Numerical Results We apply our algorithm on 47 real mutualistic networks freely downloadable at [https://www.web-of-life.es/](https://www.web-of-life.es/), whose filenames can be found in the first column of Table 1. To standardize the comparison with existing methods, we binarize the adjacency matrices of the networks setting \(A_{ij}=1\) if nodes \(i\) and \(j\) are connected and zero otherwise, thus ignoring the weights. Despite this simplification, we like to emphasize that our algorithm can be applied, as is, to any mutualistic weighted network of the most general form. Then we run four different algorithms comprising: naive degree [20], fitness-complexity (FC) [4], minimal extremal metric (MEM) [21], and BINMATNEST [16]. While BINMATNEST is the state-of-the-art algorithm in ecology for nestedness maximization [22], the effectiveness of the FC [23; 24] and MEM [21] has been proved in recent works in economic complexity, which also connected the FC to the Sinkhorn algorithm from optimal transport [24; 25; 26]. We compare the value of the cost function \(E(r,c)\) returned by each of the analyzed algorithms to the value returned by our algorithm (see Supplementary Information Sec. VI for implementation details). As shown in Fig. 2b, our algorithm finds a better (i.e. lower) cost than degree, FC, and MEM on 100% of the networks. When compared to BINMATNEST, we find a better (or equal) minimum cost in 80% of the instances, as seen in Fig. 2b and Table 1. We conclude this section by showing an application of the similarity transformation that brings the adjacency matrix to its maximally nested form. We call \(P\) and \(Q\) the optimal permutations that solve the QAP in Eq. 
(3) (details in Supplementary Information Sec. VIII) and we perform the similarity transformation \[A\to P^{t}AQ\, \tag{11}\] which reveals the nested structure of the adjacency matrix shown in Fig. 2c (a minimal numerical sketch of this reordering is given below). ## V Conclusions In this work we introduced a cost function for the NMP in bipartite mutualistic networks. This formulation allowed us to recast the problem as an instance of the QAP, which we tackled with Statistical Physics techniques. In particular, we obtained a mean field solution by using the steepest-descent approximation of the partition function. The corresponding saddle-point equations depend on a single hyper-parameter (the inverse temperature \(\beta\)) and can be solved by iteration to find the optimal rankings of the rows and columns of the adjacency matrix that result in a maximally nested layout. We benchmarked our algorithm against other methods on several real ecological networks and showed that our algorithm outperforms the best existing algorithm in 80% of the instances. We note that by changing the definition of the matrix \(B\), i.e. using measures other than a sequence of ordinal numbers, one can repurpose our algorithm to rank rows and columns of a matrix according to other geometric patterns [27; 28]. Therefore, the proposed framework holds promise for the effective detection of a wide range of network structural patterns beyond the nestedness considered here. Finally, the present framework can be easily extended and applied to solve the ranking problem in networks with higher order interactions. For example, given the adjacency tensor \(A_{ia\gamma}\) for a system with 3-body interactions, we can define the energy function \(E(P,Q,R)\) to be optimized over three permutation matrices \(P\), \(Q\), and \(R\) following exactly the same steps outlined in this paper for the case of pairwise interactions. This may be especially relevant in the world trade for ranking countries according to both exported and imported goods.
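To make Eqs. (1)-(3) and the reordering of Eq. (11) concrete, the following minimal sketch (Python; the toy matrix and variable names are ours, not the authors' code) builds the permutation matrices from a pair of ranking vectors, verifies that the trace form of Eq. (2) reproduces the cost of Eq. (1), and applies \(P^{t}AQ\):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 8
A = (rng.random((N, M)) < 0.4).astype(float)      # toy bipartite adjacency matrix

r = rng.permutation(np.arange(1, N + 1))           # row ranking (a permutation of 1..N)
c = rng.permutation(np.arange(1, M + 1))           # column ranking (a permutation of 1..M)

# Cost of Eq. (1): E(r, c) = sum_{i,a} A_ia * r_i * c_a
E_rank = np.einsum("ia,i,a->", A, r, c)

# Same cost in the trace form of Eq. (2), with P_in = 1 iff r_i = n and B_ia = i*a
P = np.zeros((N, N))
P[np.arange(N), r - 1] = 1.0
Q = np.zeros((M, M))
Q[np.arange(M), c - 1] = 1.0
B = np.outer(np.arange(1, N + 1), np.arange(1, M + 1))
E_trace = np.trace(P.T @ A @ Q @ B.T)
assert np.isclose(E_rank, E_trace)

# Similarity transformation of Eq. (11): rows and columns reordered by increasing rank,
# so a good ranking concentrates the non-zero entries in the upper-left corner.
A_nested = P.T @ A @ Q
```

Sorting the rows and columns by the optimal rankings in exactly this way is the operation that produces the packed matrix of Fig. 2c.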
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \hline \multicolumn{1}{|c|}{Net} & N & M & \(||A||/NM\) & FC & DEG & MEM & BIT & OUR \\ \hline \hline M-PL-001 & 84 & 101 & 0.042551 & 137348 & 165930 & 155841 & 125048 & 125042 \\ \hline M-PL-002 & 43 & 64 & 0.071221 & 37556 & 232827 & 38823 & 33850 & 33858 \\ \hline M-PL-003 & 36 & 25 & 0.09000 & 3927 & 55335 & 4220 & 3866 & 3862 \\ \hline M-PL-004 & 12 & 102 & 0.136438 & 12082 & 176999 & 12274 & 11672 & 11672 \\ \hline M-PL-005 & 96 & 275 & 0.034962 & 885890 & 9040760 & 939937 & 767320 & 767393 \\ \hline M-PL-006 & 17 & 61 & 0.140791 & 6579 & 293503 & 6653 & 6379 & 6379 \\ \hline M-PL-007 & 16 & 36 & 0.147569 & 3109 & 98372 & 3210 & 3038 & 3036 \\ \hline M-PL-008 & 11 & 38 & 0.253589 & 5654 & 148325 & 6153 & 5428 & 5422 \\ \hline M-PL-009 & 24 & 118 & 0.085452 & 48398 & 2535535 & 50418 & 44559 & 44556 \\ \hline M-PL-010 & 31 & 76 & 0.193548 & 103649 & 6714987 & 120773 & 97454 & 97472 \\ \hline M-PL-011 & 14 & 13 & 0.285714 & 970 & 46815 & 968 & 943 & 943 \\ \hline M-PL-012 & 29 & 55 & 0.090909 & 9948 & 1861449 & 10871 & 9460 & 9449 \\ \hline M-PL-013 & 9 & 56 & 0.204365 & 4863 & 383760 & 4910 & 4644 & 4644 \\ \hline M-PL-014 & 29 & 81 & 0.076203 & 20106 & 4179783 & 20387 & 18830 & 18827 \\ \hline M-PL-016 & 26 & 179 & 0.088526 & 122835 & 15019420 & 127784 & 111800 & 111725 \\ \hline M-PL-017 & 25 & 79 & 0.151392 & 35393 & 10925775 & 37814 & 32533 & 32534 \\ \hline M-PL-018 & 39 & 105 & 0.093529 & 121642 & 19872497 & 124677 & 107023 & 107022 \\ \hline M-PL-019 & 40 & 85 & 0.077647 & 56643 & 16872116 & 56890 & 48888 & 48879 \\ \hline M-PL-020 & 20 & 91 & 0.104396 & 17037 & 6545141 & 17540 & 16022 & 16022 \\ \hline M-PL-022 & 21 & 45 & 0.087831 & 4339 & 1833172 & 4655 & 4156 & 4158 \\ \hline M-PL-023 & 23 & 72 & 0.075483 & 9513 & 6341662 & 9890 & 9098 & 9011 \\ \hline M-PL-024 & 11 & 18 & 0.191919 & 803 & 103022 & 862 & 755 & 755 \\ \hline M-PL-025 & 13 & 44 & 0.250000 & 8148 & 1921580 & 8233 & 7243 & 7243 \\ \hline M-PL-026 & 105 & 54 & 0.035979 & 17998 & 16395570 & 56197 & 17847 & 17855 \\ \hline M-PL-027 & 18 & 60 & 0.111111 & 14188 & 5208823 & 14803 & 12644 & 12633 \\ \hline M-PL-028 & 41 & 139 & 0.065626 & 126748 & 46897882 & 129783 & 113503 & 113490 \\ \hline M-PL-029 & 49 & 118 & 0.059841 & 105634 & 46529364 & 114448 & 88825 & 88805 \\ \hline M-PL-030 & 28 & 53 & 0.073450 & 15658 & 7451270 & 16284 & 13918 & 13915 \\ \hline \hline \end{tabular} **Data availability** Data that support the findings of this study are publicly available at the Web of Life database at [https://www.web-of-life.es/](https://www.web-of-life.es/) **Acknowledgments** This work was partially supported by AFOSR: Grant FA9550-21-1-0236. MSM acknowledges financial support from the URPP Social Networks at the University of Zurich, and the Swiss National Science Foundation, Grant 100013-207888. **Author contributions** All authors contributed equally to this work. **Additional information** Supplementary Information accompanies this paper. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Net} & N & M & \(||A||/NM\) & FC & DEG & MEM & BIT & OUR \\ \hline \hline M-PL-031 & 48 & 49 & 0.066327 & 24134 & 14712154 & 28025 & 22418 & 22409 \\ \hline M-PL-032 & 7 & 33 & 0.281385 & 1379 & 322338 & 1413 & 1363 & 1363 \\ \hline M-PL-033 & 13 & 34 & 0.319005 & 9718 & 2086383 & 10128 & 8648 & 8648 \\ \hline M-PL-034 & 26 & 128 & 0.093750 & 48523 & 37671897 & 49907 & 44993 & 44938 \\ \hline M-PL-035 & 61 & 36 & 0.081056 & 19907 & 11775325 & 28663 & 18565 & 18567 \\ \hline M-PL-036 & 10 & 12 & 0.250000 & 465 & 64621 & 483 & 452 & 452 \\ \hline M-PL-037 & 10 & 40 & 0.180000 & 3543 & 1061073 & 3763 & 3346 & 3342 \\ \hline M-PL-038 & 8 & 42 & 0.235119 & 3616 & 860044 & 3631 & 3399 & 3399 \\ \hline M-PL-039 & 17 & 51 & 0.148789 & 8400 & 6259559 & 8956 & 8065 & 8050 \\ \hline M-PL-040 & 29 & 43 & 0.091419 & 8126 & 8906049 & 9676 & 7739 & 7739 \\ \hline M-PL-041 & 31 & 43 & 0.108777 & 12445 & 12353208 & 13463 & 11771 & 11761 \\ \hline M-PL-042 & 12 & 6 & 0.347222 & 221 & 29225 & 298 & 212 & 212 \\ \hline M-PL-043 & 28 & 82 & 0.108885 & 46324 & 36103187 & 47058 & 42156 & 42156 \\ \hline M-PL-045 & 17 & 26 & 0.142534 & 1833 & 1291777 & 1941 & 1795 & 1783 \\ \hline M-PL-046 & 16 & 44 & 0.394886 & 23365 & 12810171 & 25494 & 22591 & 22592 \\ \hline M-PL-047 & 19 & 186 & 0.120260 & 82943 & 46841210 & 84968 & 77126 & 77126 \\ \hline M-PL-048 & 30 & 236 & 0.094774 & 273971 & 144577341 & 284223 & 243852 & 243771 \\ \hline M-PL-049 & 37 & 225 & 0.070871 & 255534 & 175524328 & 267224 & 226068 & 226039 \\ \hline M-PL-050 & 14 & 35 & 0.175510 & 3467 & 2586805 & 3581 & 3317 & 3317 \\ \hline \end{tabular} \end{table} Table 1: Numerical results on real mutualistic networks from the Web of Life database. First tab is the filename of the network as it appears in the database. Second and third tabs are the number of rows and columns, respectively. Fourth tab is the norm of the (binarized) adjacency matrix (sum of non zero entries) divided by \(NM\). Last five tabs represent the minimum cost returned by, in order, Fitness-Complexity, Degree, Minimal Extremal Metric, BINMATNEST and our method. We highlight in blue the best result among these five methods. **Competing interests** The authors declare no competing interests. **Correspondence** should be addressed to F. M. at: [email protected] ## 4 Energy minimization Figure 2: **Numerical solution and comparison with other methods**. **a**, Optimal cost \(E(r,c)\) returned by our algorithm on the mutualistic network named _ML-PL-OO1_ in the Web-of-Life database, for several choices of the parameter \(\beta\). Larger values of \(\beta\) give lower costs. In particular, for sufficiently large \(\beta\) our algorithm returns a lower cost than the best off-the-shelf algorithm for nestedness maximization (BINMATNEST, red line). **b**, Comparison of our algorithm with state-of-the-art methods in the literature: Degree (upper-left), Fitness-Complexity (upper-right), Minimal-Extremal-Metric (bottom-left), and BINMATNEST (bottom-right). In each panel we plot the cost returned by each algorithm divided by the cost returned by our algorithm, denoted \(E/E_{\text{our}}\), for each network considered in this work. A value \(E/E_{\text{our}}>1\) means that our algorithm returns a better, i.e. lower, cost. We find that our algorithm returns a better cost in 100% of the networks when compared to degree, FC, and MEM, and in 80% of the networks when compared to BINMATNEST (see also Table I). 
**c**, Similarity transformation applied to the adjacency matrix \(A\) of network _ML-PL-OO1_ that brings \(A\) into its maximally nested form \(P^{t}AQ\), where \(P\) and \(Q\) are the optimal permutation matrices constructed from the optimal ranking vectors \(r^{*}\) and \(s^{*}\). ## References * [1] Wirt Atmar and Bruce D Patterson. The measure of order and disorder in the distribution of species in fragmented habitat. _Oecologia_, 96(3):373-382, 1993. * [2] Jordi Bascompte, Pedro Jordano, Carlos J Melian, and Jens M Olesen. The nested assembly of plant-animal mutualistic networks. _Proceedings of the National Academy of Sciences_, 100(16):9383-9387, 2003. * [3] Manuel Sebastian Mariani, Zhuo-Ming Ren, Jordi Bascompte, and Claudio Juan Tessone. Nestedness in complex networks: observation, emergence, and implications. _Physics Reports_, 813:1-90, 2019. * [4] Andrea Tacchella, Matthieu Cristelli, Guido Caldarelli, Andrea Gabrielli, and Luciano Pietronero. A new metrics for countries' fitness and products' complexity. _Scientific Reports_, 2(1):1-7, 2012. * [5] Sergio Cobo-Lopez, Vinod K Gupta, Jaeyun Sung, Roger Guimera, and Marta Sales-Pardo. Stochastic block models reveal a robust nested pattern in healthy human gut microbiomes. _PNAS Nexus_, 2022. * [6] Michael D Konig, Claudio J Tessone, and Yves Zenou. Nestedness in networks: A theoretical model and some applications. _Theoretical Economics_, 9(3):695-752, 2014. * [7] Maria J Palazzi, Jordi Cabot, Javier Luis Canovas Izquierdo, Albert Sole-Ribalta, and Javier Borge-Holthoefer. Online division of labour: emergent structures in open source software. _Scientific Reports_, 9(1):1-11, 2019. * [8] Maria J Palazzi, Albert Sole-Ribalta, Violeta Calleja-Solanas, Sandro Meloni, Carlos A Plata, Samir Suweis, and Javier Borge-Holthoefer. An ecological approach to structural flexibility in online communication systems. _Nature Communications_, 12(1):1-11, 2021. * [9] Samir Suweis, Filippo Simini, Jayanth R Banavar, and Amos Maritan. Emergence of structural and dynamical properties of ecological mutualistic networks. _Nature_, 500(7463):449-452, 2013. * [10] Sergi Valverde, Jordi Pinero, Bernat Corominas-Murtra, Jose Montoya, Lucas Joppa, and Ricard Sole. The architecture of mutualistic networks as an evolutionary spandrel. _Nature Ecology & Evolution_, 2 (1):94-99, 2018. * [11] Daniel S Maynard, Carlos A Servan, and Stefano Allesina. Network spandrels reflect ecological assembly. _Ecology Letters_, 21(3):324-334, 2018. * [12] Weiran Cai, Jordan Snyder, Alan Hastings, and Raissa M D'Souza. Mutualistic networks emerging from adaptive niche-based interactions. _Nature Communications_, 11(1):1-10, 2020. * [13] Virginia Dominguez-Garcia and Miguel A Munoz. Ranking species in mutualistic networks. _Scientific Reports_, 5(1):1-7, 2015. * [14] Andrea Tacchella, Dario Mazzilli, and Luciano Pietronero. A dynamical systems approach to gross domestic product forecasting. _Nature Physics_, 14(8):861-865, 2018. * [15] Carla Sciarra, Guido Chiarotti, Luca Ridolfi, and Francesco Laio. Reconciling contrasting views on economic complexity. _Nature Communications_, 11(1):1-10, 2020. * [16] Miguel A Rodriguez-Girones and Luis Santamaria. A new algorithm to calculate the nestedness temperature of presence-absence matrices. _Journal of biogeography_, 33(5):924-935, 2006. * [17] Mario Almeida-Neto, Paulo R. Guimaraes Jr, and Thomas M. Lewinsohn. On nestedness analyses: Rethinking matrix temperature and anti-nestedness. _Oikos_, 116(4):716-722, 2007. 
* [18] Claudia Payrato-Borras, Laura Hernandez, and Yamir Moreno. Measuring nestedness: A comparative study of the performance of different metrics. _Ecology and Evolution_, 10(21):11906-11921, 2020. * [19] Tjalling C Koopmans and Martin Beckmann. Assignment problems and the location of economic activities. _Econometrica: journal of the Econometric Society_, pages 53-76, 1957. * [20] Aderaldo IL Araujo, Gilberto Corso, Adriana M Almeida, and Thomas M Lewinsohn. An analytic approach to the measurement of nestedness in bipartite networks. _Physica A: Statistical Mechanics and Its Applications_, 389(7):1405-1411, 2010. * [21] Rui-Jie Wu, Gui-Yuan Shi, Yi-Cheng Zhang, and Manuel Sebastian Mariani. The mathematics of nonlinear metrics for nested networks. _Physica A: Statistical Mechanics and its Applications_, 460:254-269, 2016. * [22] Carsten F Dormann. Using bipartite to describe and plot two-mode networks in r. _R Package Version_, 4:1-28, 2020. * [23] Jian-Hong Lin, Claudio Juan Tessone, and Manuel Sebastian Mariani. Nestedness maximization in complex networks through the fitness-complexity algorithm. _Entropy_, 20(10):768, 2018. * [24] Dario Mazzilli, Manuel Sebastian Mariani, Flaviano Morone, and Aurelio Patelli. Fitness in the light of sinkhorn. _arXiv preprint arXiv:2212.12356_, 2022. * [25] Richard Sinkhorn and Paul Knopp. Concerning nonnegative matrices and doubly stochastic matrices. _Pacific Journal of Mathematics_, 21(2):343-348, 1967. * [26] Albert W Marshall and Ingram Olkin. Scaling of matrices to achieve specified row and column sums. _Numerische Mathematik_, 12(1):83-90, 1968. * [27] Flaviano Morone. Clustering matrices through optimal permutations. _Journal of Physics: Complexity_, 3(3):035007, 2022. * [28] Caterina De Bacco, Daniel B Larremore, and Cristopher Moore. A physical model for efficient ranking in networks. _Science Advances_, 4(7):eaar8260, 2018. **Supplementary Information for:** **Ranking species in complex ecosystems through nestedness maximization** Manuel Sebastian Mariani, Dario Mazzilli, Aurelio Patelli & Flaviano Morone ###### Contents * I Introduction * II Problem formulation * III Solving the NMP with Statistical Physics * IV Numerical results * V Conclusions * VI Related Works * A. Ranking by degree * B. SpringRank * C. BINMATNEST * D. Fitness-complexity * E. Minimal extremal metric * VII Derivation of the saddle point equations * Integral representation of \(Z(\beta)\) * Steepest descent evaluation of the partition function * VIII Algorithm Related Works In this section we briefly review existing methods, models, and algorithms tackling the ranking and nestedness maximization problems. ### Ranking by degree The degree of a node is simply defined as its number of connections. It can be connected to a nestedness maximization problem as follows. In Ref. [20] the authors consider the following energy function \[E(\vec{r},\vec{s})=\sum_{i,a}A_{ia}(r_{i}+s_{a}). \tag{12}\] The meaning of this energy function can be easily understood when \(A_{ia}\in\{0,1\}\). In this case the sum can be rewritten as: \(\sum_{ia}A_{ia}(r_{i}+s_{a})=\sum_{i}k_{i}r_{i}+\sum_{a}k_{a}s_{a}\), where \(k_{i}\) and \(k_{a}\) are the degrees (number of connections) of nodes \(i\) and \(a\), respectively. In the language of statistical physics the term \(k_{i}r_{i}\) represents an interaction between the degrees of freedom \(r_{i}\) and a local magnetic field \(k_{i}\), whose intensity equals the node's degree. 
The stronger the magnetic field \(k_{i}\) is, the lower the value of \(r_{i}\) ought to be in order to minimize the product \(r_{i}k_{i}\). This reasoning can be generalized to the case \(A_{ia}\in\{0,w_{ia}\}\) upon changing the definition of the magnetic field from the node degree to the weighted node degree, the weights being the interaction strengths \(w_{ia}\). In both cases, the effect of this term is to assign high rank to nodes with high values of \(k_{i}\) (or \(k_{a}\) of course). The non-interacting energy function defined in (12) is minimized by ranking the nodes according to their degree, and can be seen as an instance of the Linear Assignment Problem, whose solution can be found in polynomial time (in this case by simply sorting the degrees, so in \(O(N\log N)\) operations). Authors of Ref. [20] only considered the rankings of nodes by degree, and they were interested in comparing the energy observed in empirical networks against that of idealized nested structures. In our framework, we model the nestedness maximization problem by an energy function that couples the rows and columns' ranking positions, which can be seen as an instance of the Quadratic Assignment Problem [19], which is known to be NP-hard, and thus there is no known algorithm that can find the optimal solution in polynomial time. ### SpringRank Reference [28] considered an energy-based approach to rank nodes in directed weighted unipartite networks. They defined \(A_{ij}\) as the number of interactions suggesting that \(i\) is ranked above \(j\), and they defined the SpringRank centrality as the vector \(\vec{\eta}^{*}\) of real-valued scores that minimize the energy function \[E(\vec{\eta})=\sum_{i,j}A_{ij}(\eta_{i}-\eta_{j}+1)^{2}. \tag{13}\] The model reflects the assumption that if many directed interactions suggesting that \(i\) is ranked above \(j\) are observed, then the centrality of \(i\) should be much larger than that of \(j\). Subsequently, the authors develop statistical inference techniques to infer the node-level SpringRank scores in empirical networks. Broadly speaking, their approach is conceptually related to ours as it defines the rankings of the nodes in terms of the minimum of an energy function that depends on the nodes scores and the network's adjacency matrix. However, their ranking method focuses on directed weighted unipartite networks and it does not aim at maximizing the network nestedness, and therefore it won't be compared to the method presented in this work. ### Binnmatnest BINMATNEST [16] can be considered as the state-of-the-art algorithm to maximize nestedness in ecology [22]. In fact, the algorithm minimizes the nestedness temperature [1], a variable that is conceptually related to the nestedness energy defined in the main text. The nestedness temperature \(T\) quantifies the average distance of the adjacency matrix's elements from the so-called isocline of perfect nestedness, which represents the separatrix between the empty and filled regions of a perfectly-nested matrix with the same density as the original matrix. We refer to [16] for details of the isocline determination and temperature calculation. Of course \(T\) depends on the adjacency matrix \(A\) as well as the permutation of its rows and columns. The dependence of \(T\) on the ranking vectors is more complex than the nestedness energy function introduced here, and therefore, its optimization less amenable to analytic treatment. 
The genetic algorithm BINMATNEST bypasses the problem by relying on an iterative algorithm. In BINMATNEST [16], a candidate solution is represented by the rankings' vectors \(r=\{r_{1},r_{2},\ldots,r_{N}\}\) and \(c=\{c_{1},c_{2},\ldots,c_{M}\}\). One starts from a population of initial solutions, composed of the original matrix, solutions found with a similar algorithm as the original one by Atmar and Patterson [1], and their mutations. From a well-performing candidate solution, an offspring of solution is created by selecting a second "parent" from the remaining solutions in the population, suitably combining the information from the two solutions, and eventually performing random mutations in the resulting child solution. Specifically, denote as \(w\) the row ranking vector of a well-performing solution and \(p\) the row ranking vector of its selected partner (the procedure is analogous for the column ranking vectors). The row ranking vector of the offspring solution, \(o\), is set to \(w\) with probability \(0.5\), otherwise it is determined by a combination of \(w\) and \(p\) determined by the following algorithm [16]: * An integer \(k\in\{1,\ldots,N\}\) is selected uniformly at random. * We set \(o_{i}=w_{i}\) for all \(i\in\{1,\ldots,k\}\). * For \(i\in\{k+1,\ldots,N\}\), if \(p_{i}\notin\{w_{i},\ldots,w_{k}\}\), then we set \(o_{i}=p_{i}\). * For \(i\in\{k+1,\ldots,N\}\), if \(p_{i}\in\{w_{i},\ldots,w_{k}\}\), then the value of \(o_{i}\) is chosen at random from all the unused positions. As final step, a random mutation of ranking vector \(o\) is performed by selecting at random \(k_{1},k_{2}\in\{1,\ldots,N\}\) and performing a cyclical permutation of the elements \(r_{k_{1}},\ldots,r_{k_{2}}\). For both rows and columns, the procedure is repeated for a prefixed number of iterations, and the lowest-temperature candidate solution \((r^{*},c^{*})\) is then chosen as the final solution. In our study, we run the BINMATNEST algorithm through the nestedrank function of the bipartite R package [2]. ### Fitness-complexity The fitness-complexity algorithm has been introduced to simultaneously measure the economic competitivenss of countries (\(f_{i}\in[0,\infty)\)) and the sophistication of products (\(q_{\alpha}\in[0,\infty)\) from the bipartite network connecting the countries with the products they export in world trade [4]. The original fitness-complexity equations read [4] \[\begin{split} f_{i}^{-1}&=x_{i}=\frac{1}{\sum_{a} A_{ia}\,q_{a}}\\ q_{a}&=y_{a}&=\frac{1}{\sum_{i}A_{ia }\,x_{i}},\end{split} \tag{14}\] which implies that high-fitness countries export many products - both high- and low-complexity ones - and high-complexity products are rarely exported by low-fitness countries. We observe that the fitness-complexity equations are formally equivalent to the Sinkhorn-Knopp equations used in optimal transport [24; 25]. As such, they can be derived by solving a quadratic optimization problem with logarithmic barriers, defined by the energy function [26] \[E=\sum_{i,a}A_{ia}\,x_{i}\,y_{a}-\sum_{i}\log x_{i}-\sum_{\alpha}\log y_{a}. \tag{15}\] By taking the partial derivatives of \(E(\mathbf{x},\mathbf{y})\) with respect to \(x_{i}\) and \(y_{\alpha}\), respectively, we obtain indeed the fitness-complexity equations in Eq. (14). 
This remark provides an optimization-based interpretation of the fitness-complexity equations, while it does not provide a principled interpretation for the logarithmic barriers and the relation between the fitness-complexity scores and the degree of nestedness of a network. The algorithm has been shown to effectively pack bipartite adjacency matrices into nested configurations through both qualitative and quantitative arguments [23; 4], which motivates its inclusion in our paper. ### Minimal extremal metric The minimal extremal metric (MEM) is a variant of the fitness-complexity algorithm that penalizes more heavily products exported by low-fitness countries. The MEM equations read [21] \[\begin{split} f_{i}^{-1}&=x_{i}=\frac{1}{\sum_{a}A _{ia}\,q_{a}}\\ q_{a}&=y_{a}=\min_{i:A_{ia}=1}\{F_{i}\},\end{split} \tag{16}\] which implies high-complexity products are never exported by low-fitness countries. The metric has been shown to visually pack bipartite adjacency matrices better than the original FC algorithm [21], which motivates its inclusion in our paper. ## VII Derivation of the saddle point equations In this section we discuss in detail how to derive the saddle point Eqs. (7) given in the main text. We consider the minimization problem defined by \[(r^{*},s^{*})=\operatorname*{arg\,min}_{r\in\mathcal{R}_{N},s\in\mathcal{R}_{M })}E(r,s)\, \tag{17}\] where the cost (energy) function is given by \[E=\sum_{i=1}^{N}\sum_{a=1}^{M}A_{ia}\,r_{i}\,s_{a}\, \tag{18}\] and \(\mathcal{R}_{N}\) and \(\mathcal{R}_{M}\) are the sets of all vectors \(r\) and \(s\) obtained by permuting the entries of the representative vectors \(r^{0}\) and \(s^{0}\) defined as \[\begin{split}& r^{0}\equiv(1,2,3,...,N)\,\\ & s^{0}\equiv(1,2,3,...,M)\.\end{split} \tag{19}\] Therefore, we can write any two vectors \(r\) and \(s\) as \[\begin{split}& r_{i}=\sum_{j=1}^{N}P_{ij}r_{j}^{0},\\ & s_{a}=\sum_{a=1}^{M}Q_{ab}s_{b}^{0}\,\end{split} \tag{20}\] where \(P\) and \(Q\) are arbitrary permutation matrices of size \(N\times N\) and \(M\times M\), respectively. Furthermore, we introduce the \(N\times M\) matrix \(B\) defined as the tensor product of \(r^{0}\) and \(s^{0}\), whose components are explicitly given by \[B_{ia}=(r^{0}\otimes s^{0})_{ia}=ia. \tag{21}\] With these definitions we can rewrite the energy function as the trace of a product of matrices in the following way: \[E\equiv E(P,Q)=\text{Tr}(P^{t}AQB^{t}). \tag{22}\] The minimization problem in Eq. (17) can be reformulated as a minimization problem in the space of permutation matrices as follows \[(P^{*},Q^{*})=\operatorname*{arg\,min}_{(P\in\mathcal{S}_{N},\ Q\in\mathcal{S }_{M})}E(P,Q)\, \tag{23}\] where \(\mathcal{S}_{N}\) and \(\mathcal{S}_{M}\) denote the symmetric groups on \(N\) and \(M\) elements, respectively. Next we discuss a relaxation of the problem in Eq. (23) that amounts to extend the spaces \(\mathcal{S}_{N}\) and \(\mathcal{S}_{M}\) of permutation matrices onto the spaces of doubly-stochastic (DS) matrices \(\mathcal{D}_{N}\) and \(\mathcal{D}_{M}\). The space \(\mathcal{D}_{N}\) (\(\mathcal{D}_{M}\)) is a superset of the original space \(\mathcal{S}_{N}\) (\(\mathcal{S}_{M}\)). Solving the problem on the \(\mathcal{D}\)-space means to find two doubly-stochastic matrices \(X^{*}\) and \(Y^{*}\) that minimize an 'effective' cost function \(F\), i.e. 
\[F(X^{*},Y^{*})=\min_{(X\in\mathcal{D}_{N},\ Y\in\mathcal{D}_{M})}F(X,Y)\, \tag{24}\] and are only'slightly different' from the permutation matrices \(P^{*}\) and \(Q^{*}\) (we will specify later what'slightly different' means in mathematical terms and what \(F\) actually is). The quantity which plays the fundamental role in the relaxation procedure of the original problem is the partition function, \(Z(\beta)\), defined by \[Z(\beta)=\sum_{P\in\mathcal{S}_{N}}\sum_{Q\in\mathcal{S}_{M}}e^{- \beta E(P,Q)}. \tag{25}\] The connection between \(Z(\beta)\) and the original problem in Eq. (23) is established by the following limit: \[\lim_{\beta\rightarrow\infty}-\frac{1}{\beta}\log Z(\beta)=\min_{ (P\in\mathcal{S}_{N},\ Q\in\mathcal{S}_{M})}E(P,Q). \tag{26}\] The optimization problem in Eq. (23) is thus equivalent to the problem of calculating the partition function in Eq. (25). Ideally, we would like to compute exactly \(Z(\beta)\) for arbitrary \(\beta\) and then take the limit \(\beta\rightarrow\infty\). Although an exact calculation of the partition function is, in general, out of reach, in practice we may well expect that the better we estimate \(Z(\beta)\), the closer the limit in Eq. (26) will be to the true optimal solution. In fact, the procedure of relaxation is basically a procedure to assess the partition function for large but finite \(\beta\). Mathematically, this procedure is called method of steepest descent [1]. By estimating the partition function via the steepest descent method we will obtain a system of non-linear equations, called saddle-point equations, whose solution is a pair of doubly-stochastic matrices \(X^{*},Y^{*}\) that solve the relaxed problem given by Eq. (24). Eventually, the solution to the original problem in Eq. (23) can be obtained formally by projecting \(X^{*},Y^{*}\) onto the subspaces \(\mathcal{S}_{N},\mathcal{S}_{M}\subset\mathcal{D}_{N},\mathcal{D}_{M}\) via the limit \[\begin{split}\lim_{\beta\rightarrow\infty}X^{*}( \beta)&=P^{*}\,\\ \lim_{\beta\rightarrow\infty}Y^{*}(\beta)&=Q^{*}\.\end{split} \tag{27}\] Having explained the rationale for the introduction of the partition function, we move next to discuss the details of the calculation leading to the saddle point equations. In order to cast the partition function in a form suitable for the steepest-descent evaluation, we need the following preliminary result. **Definition: Semi-permutation matrix:** a \(N\times N\) square matrix \(\not{P}\) is called a semi-permutation matrix if \(\not{P}_{ij}\in\{0,1\}\) and each row sums to one, i.e. \(\sum_{j=1}^{N}\not{P}_{ij}=1\) for \(i=1,...,N\), but no further constraint on the column sums is imposed. We denote \(\not{S}_{N}\) the space of semi-permutation matrices: \[\not{S}_{N}=\left\{\not{P}\ |\not{P}_{ij}\in\{0,1\}\ \text{AND}\ \sum_{j=1}^{N}\not{P}_{ij}=1\ \forall i\right\} \tag{28}\] **Lemma** Consider an arbitary \(N\times N\) square matrix \(G\) and the function \(W(G)\) defined by \[e^{W(G)}=\sum_{\not{P}\in\not{S}_{N}}e^{{\rm Tr}(\not{P}G^{t})}. \tag{29}\] Then, \(W(G)\) is explicitly given by the following formula \[\boxed{\ W(G)=\sum_{i=1}^{N}\log\sum_{j=1}^{N}e^{G_{ij}}\ }. \tag{30}\] **Proof** Let us write the right hand side of Eq. 
(29) as \[\sum_{\not{P}\in\not{S}_{N}}e^{\sum_{ij}\not{P}_{ij}G_{ij}}=\sum_{\not{P}_{1}} e^{\sum_{j}(\not{P}_{1})_{j}G_{1j}}\sum_{\not{P}_{2}}e^{\sum_{j}(\not{P}_{2})_{j}G_ {2j}}\ \ldots\, \tag{31}\] where \(\not{P}_{i}\) is the i\({}^{\rm th}\) row of \(\not{P}\) (and thus is a vector) having one component equal to 1 and the remaining \(N-1\) components equal to 0. The sum \(\sum_{\not{P}_{i}}\) denotes a summation over all possible choices of the vector \(\not{P}_{i}\): there are \(N\) possible such choices, namely \(\not{P}_{i}=(1,0,...,0)\), \(\not{P}_{i}=(0,1,...,0),...,\not{P}_{i}=(0,0,...,1)\). Hence, each sum in the right hand side of Eq. (31) evaluates \[\sum_{\not{P}_{i}}e^{\sum_{j}(\not{P}_{i})_{j}G_{ij}}=e^{G_{i1}}+e^{G_{i2}}+... =\sum_{j=1}^{N}e^{G_{ij}}. \tag{32}\] Thus, the left hand side of Eq. (31) is equal to \[\sum_{\not{P}\in\not{S}_{N}}e^{\sum_{ij}\not{P}_{ij}G_{ij}}=\prod_{i=1}^{N} \sum_{j=1}^{N}e^{G_{ij}}. \tag{33}\] Eventually, by taking the logarithm of both sides of Eq. (33), we prove Eq. (30). With these tools at hand we move to derive the integral representation of \(Z(\beta)\). ### Integral representation of \(Z(\beta)\) We use the definition of the Dirac \(\delta\)-function to write the partition function in Eq. (25) as follows \[Z(\beta)=\sum_{P\in S_{N}}\sum_{Q\in S_{M}}\int DX\int DYe^{-\beta E(X,Y)}\prod _{i,j=1}^{N}\delta(X_{ij}-P_{ij})\prod_{a,b=1}^{N}\delta(Y_{ab}-Q_{ab})\, \tag{34}\] where the integration measures are defined by \(DX\equiv\prod_{i,j}dX_{ij}\) and \(DY\equiv\prod_{a,b}dY_{ab}\). The next step is to transform the sum over permutation matrices \(P,Q\) into a sum over semi-permutations matrices \(\not{P},\not{Q}\) and then performing explicitly this sum using the Lemma in Eq. (30). In order to achieve this goal, we insert into Eq. (34) \(N\) delta functions \(\prod_{j=1}^{N}\delta\Big{(}\sum_{i}X_{ij}-1\Big{)}\) and \(M\) delta functions \(\prod_{b=1}^{M}\delta\Big{(}\sum_{a}Y_{ab}-1\Big{)}\) to enforce the conditions that the columns of \(X\) and \(Y\) do sum up to one. By inserting these delta functions, we can then replace the sum over \(P,Q\) by a sum over \(\not{P},\not{Q}\), thus obtaining \[Z(\beta)=\sum_{\not{P}}\sum_{\not{Q}}\int DXDYe^{-\beta E(X,Y)}\prod_{i,j=1}^{ N}\delta(X_{ij}-\not{P}_{ij})\prod_{a,b=1}^{N}\delta(Y_{ab}-\not{Q}_{ab}) \prod_{j=1}^{N}\delta\Big{(}\sum_{i}X_{ij}-1\Big{)}\prod_{b=1}^{M}\delta\Big{(} \sum_{a}Y_{ab}-1\Big{)}. \tag{35}\] To proceed further in the calculation, we use the following integral representations of the delta-functions: \[\begin{split}\delta(X_{ij}-\not{P}_{ij})&=\frac{1} {2\pi i}\int_{-i\infty}^{i\infty}d\hat{X}_{ij}\ e^{-\hat{X}_{ij}(X_{ij}-\not{P}_{ ij})}\,\\ \delta(Y_{ab}-\not{Q}_{ab})&=\frac{1}{2\pi i}\int_{ -i\infty}^{i\infty}d\hat{Y}_{ab}\ e^{-\hat{Y}_{ab}(Y_{ab}-\not{Q}_{ab})}\,\\ \delta\Big{(}\sum_{i}X_{ij}-1\Big{)}&=\frac{1}{2\pi i }\int_{-i\infty}^{i\infty}dz_{j}\ e^{-z_{j}\big{(}\sum_{i}X_{ij}-1\big{)}}\,\\ \delta\Big{(}\sum_{a}Y_{ab}-1\Big{)}&=\frac{1}{2\pi i }\int_{-i\infty}^{i\infty}dw_{b}\ e^{-w_{b}\big{(}\sum_{a}Y_{ab}-1\big{)}}\,\end{split} \tag{36}\] into Eq. 
(35) and we get \[\begin{split} Z(\beta)=\sum_{\not{P}}\sum_{\not{Q}}\int DXDYD \hat{X}D\hat{Y}DzDw&\ e^{-\beta E(X,Y)}e^{-\mathrm{Tr}(\hat{X}X^{t})+ \mathrm{Tr}(\hat{X}\not{P}^{t})-\mathrm{Tr}(\hat{Y}Y^{t})+\mathrm{Tr}(\hat{Y} \not{Q}^{t})}\times\\ &\times e^{-\sum_{j}z_{j}\big{(}\sum_{i}X_{ij}-1\big{)}}e^{- \sum_{b}w_{b}\big{(}\sum_{a}Y_{ab}-1\big{)}}\,\end{split} \tag{37}\] where we defined the integration measures \(D\hat{X}\equiv\prod_{i,j}d\hat{X}_{ij}/2\pi i\), \(D\hat{Y}\equiv\prod_{a,b}d\hat{Y}_{ab}/2\pi i\), \(Dz\equiv\prod_{j}dz_{j}/2\pi i\), and \(Dw\equiv\prod_{b}dw_{b}/2\pi i\). Performing the sums over \(\not{P}\) and \(\not{Q}\) using Eq. (30) we obtain \[\begin{split} Z(\beta)=\int DXDYD\hat{X}D\hat{Y}DzDw& \ e^{-\beta E(X,Y)}e^{-\mathrm{Tr}(\hat{X}X^{t})+W(\hat{X})-\mathrm{Tr}(\hat{ Y}Y^{t})+W(\hat{Y})}\times\\ &\times e^{-\sum_{j}z_{j}\big{(}\sum_{i}X_{ij}-1\big{)}}e^{- \sum_{b}w_{b}\big{(}\sum_{a}Y_{ab}-1\big{)}}\.\end{split} \tag{38}\] Next we introduce the **effective cost function**\(F(X,\hat{X},Y,\hat{Y},z,w)\) defined as \[\begin{split} F(X,\hat{X},Y,\hat{Y},z,w)&=E(X,Y)+ \frac{1}{\beta}\mathrm{Tr}(\hat{X}X^{t})+\frac{1}{\beta}\mathrm{Tr}(\hat{Y}Y^{ t})-\frac{1}{\beta}W(\hat{X})-\frac{1}{\beta}W(\hat{Y})+\\ &+\frac{1}{\beta}\sum_{j}z_{j}\big{(}\sum_{i}X_{ij}-1\big{)}+ \frac{1}{\beta}\sum_{b}w_{b}\big{(}\sum_{a}Y_{ab}-1\big{)}\equiv\\ &\equiv E(X,Y)-\frac{1}{\beta}S(X,\hat{X},Y,\hat{Y},z,w)\end{split} \tag{39}\] whereby we can write the partition function as \[Z(\beta)=\int DXDYD\hat{X}D\hat{Y}DzDw\ e^{-\beta F(X,\hat{X},Y,\hat{Y},z,w)}\, \tag{40}\] which can be evaluated by the steepest descent method when \(\beta\rightarrow\infty\), as we explain next. ### Steepest descent evaluation of the partition function In the limit of large \(\beta\) the integral in Eq. (40) is dominated by the saddle point where \(E(X,Y)\) is minimized and \(S(X,\hat{X},Y,\hat{Y},z,w)\) is stationary (in order for the oscillating contributions to not cancel out). In order to find the saddle point, we have to set the derivatives of \(F(X,\hat{X},Y,\hat{Y},z,w)\) to zero, thus obtaining the following **saddle point equations** \[\begin{split}\frac{\partial F}{\partial X_{ij}}&= \frac{\partial E}{\partial X_{ij}}+\frac{1}{\beta}\big{(}\hat{X}_{ij}+z_{j} \big{)}=0\,\\ \frac{\partial F}{\partial\hat{X}_{ij}}&=\frac{1}{ \beta}X_{ij}-\frac{1}{\beta}\frac{\partial W}{\partial\hat{X}_{ij}}\,\\ \frac{\partial F}{\partial z_{j}}&=\sum_{i}X_{ij}-1 =0\,\end{split} \tag{41}\] and similar equations for the triplet \((Y,\hat{Y},w)\). The derivative of \(E\) with respect to \(X_{ij}\) gives \[\frac{\partial E}{\partial X_{ij}}=(AYB^{t})_{ij}\, \tag{42}\] and the derivative of \(W\) with respect to \(\hat{X}_{ij}\) gives \[\frac{\partial W}{\partial\hat{X}_{ij}}=\frac{e^{\hat{X}_{ij}}}{\sum_{k}e^{ \hat{X}_{ik}}}. \tag{43}\] Solving Eq. (41) with respect to \(X_{ij}\) we get \[X_{ij}=\frac{e^{-\beta(AYB^{t})_{ij}-z_{j}}}{\sum_{k}e^{-\beta(AYB^{t})_{ik}-z _{k}}}. \tag{44}\] Analogously, solving with respect to \(Y_{ab}\) we get \[Y_{ab}=\frac{e^{-\beta(A^{t}XB)_{ab}-w_{b}}}{\sum_{c}e^{-\beta(A^{t}XB)_{ac}-w _{c}}}. \tag{45}\] It is worth noticing that Eqs. (44) and (45) are invariant under the tranformations \[\begin{split} z_{j}&\rightarrow\ z_{j}+\zeta\,\\ w_{b}&\rightarrow\ w_{b}+\xi\,\end{split} \tag{46}\] for arbitrary values of \(\zeta\) and \(\xi\). 
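The invariance in Eq. (46) is easy to confirm numerically. The following minimal sketch (illustrative parameter values; \(Y\) is just a placeholder for the doubly-stochastic matrix) builds \(X\) from Eq. (44) and checks that a uniform shift of the multipliers \(z_{j}\) leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, beta = 4, 3, 0.7                                      # arbitrary small sizes
A = rng.integers(0, 2, size=(N, M)).astype(float)           # bipartite adjacency matrix
Y = rng.random((M, M)); Y /= Y.sum(axis=1, keepdims=True)   # placeholder for the DS matrix Y
B = np.outer(np.arange(1, N + 1), np.arange(1, M + 1))      # B_ia = i*a, Eq. (21)
z = rng.random(N)

def X_of(z):
    """Row-stochastic X of Eq. (44): a softmax over j of -beta*(A Y B^t)_ij - z_j."""
    logits = -beta * (A @ Y @ B.T) - z                      # subtracts z_j from column j
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability only
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Shifting every z_j by the same constant zeta leaves X unchanged, Eq. (46)
print(np.allclose(X_of(z), X_of(z + 3.21)))                 # True
```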
This translational symmetry is due to the fact that the \(2N\) constraints on the row and column sums of \(P\) are not linearly independent, since the sum of all entries of \(P\) must be equal to \(N\), i.e. \(\sum_{ij}P_{ij}=N\). The same reasoning applies to the \(2M\) constraints on the row and column sums of \(Q\), of which only \(2M-1\) are linearly independent, since \(\sum_{ab}Q_{ab}=M\). Furthermore, we notice that the solution matrices \(X\) and \(Y\) in Eqs. (44), (45) automatically satisfy the condition of having row sums equal to one. Next, we derive the equations to determine the Lagrange multipliers \(z_{j}\) and \(w_{b}\). To this end we first introduce the vectors \(v\) and \(\nu\) with components \[\begin{split}& v_{j}=e^{-z_{j}}\,\\ &\nu_{b}=e^{-w_{b}}\.\end{split} \tag{47}\] Then, we define the vectors \(u\) and \(\mu\) as \[\begin{split} u_{i}&=\Big{(}\sum_{k}e^{-\beta(AYB^{t})_{ik}}\ v_{k}\Big{)}^{-1}\,\\ \mu_{a}&=\Big{(}\sum_{c}e^{-\beta(A^{t}XB)_{ac}}\ \nu_{c}\Big{)}^{-1}\,\end{split} \tag{48}\] so that we can write the solution matrices \(X\) and \(Y\) in Eqs. (44), (45) as \[\begin{split} X_{ij}&=u_{i}\ e^{-\beta(AYB^{t})_{ij}}\ v_{j},\\ Y_{ab}&=\mu_{a}\ e^{-\beta(A^{t}XB)_{ab}}\ \nu_{b}. \end{split} \tag{49}\] Finally, imposing the conditions on \(X\) and \(Y\) to have column sums equal to one, we find the equations to be satisfied by \(v\) and \(\nu\) \[\begin{split} v_{j}&=\Big{(}\sum_{i}u_{i}\ e^{-\beta(AYB^{t})_{ij}}\Big{)}^{-1}\,\\ \nu_{b}&=\Big{(}\sum_{a}\mu_{a}e^{-\beta(A^{t}XB)_{ab}}\Big{)}^{-1}\,\end{split} \tag{50}\] Equations (48), (49), and (50) are the constitutive equations for the relaxed nestedness-maximization problem corresponding to Eqs. (7) given in the main text. We conclude this section by deriving the self-consistent equations for the "stochastic rankings" corresponding to Eqs. (9) and (10) given in the main text. We define the stochastic rankings as the two vectors \[\begin{split}\rho_{i}&=\sum_{k=1}^{N}X_{ik}\ k\,\\ \sigma_{a}&=\sum_{b=1}^{M}Y_{ab}\ b\,\end{split} \tag{51}\] where the term "stochastic" emphasizes their implied dependence on the doubly stochastic matrices \(X\) and \(Y\). Clearly we have \[\begin{split}\lim_{\beta\rightarrow\infty}\rho_{i}&=r_{i}\,\\ \lim_{\beta\rightarrow\infty}\sigma_{a}&=s_{a}\.\end{split} \tag{52}\] Next, let us consider the argument of the exponentials in Eq. (49), which we can rewrite as \[\begin{split}(AYB^{t})_{ij}&=\sum_{a}A_{ia}\Big{(}\sum_{b}Y_{ab}\ b\Big{)}j=j\sum_{a}A_{ia}\sigma_{a},\\ (A^{t}XB)_{ab}&=\sum_{i}A_{ia}\Big{(}\sum_{j}X_{ij}\ j\Big{)}b=b\sum_{i}A_{ia}\rho_{i}\.\end{split} \tag{53}\] At this point it is sufficient to multiply both sides of Eq. (49) by \(j\) and \(b\), and sum over \(j\) and \(b\), respectively, to obtain \[\begin{split}\sum_{j}X_{ij}\ j&=\rho_{i}=u_{i}\sum_{j}e^{-\beta(AYB^{t})_{ij}}\ v_{j}\ j=u_{i}\sum_{j}e^{-\beta j\sum_{a}A_{ia}\sigma_{a}}\ v_{j}\ j\,\\ \sum_{b}Y_{ab}\ b&=\sigma_{a}=\mu_{a}\sum_{b}e^{-\beta(A^{t}XB)_{ab}}\ \nu_{b}\ b=\mu_{a}\sum_{b}e^{-\beta b\sum_{i}A_{ia}\rho_{i}}\ \nu_{b}\ b\.\end{split} \tag{54}\] Using the definition of \(u_{i}\) and \(\mu_{a}\) in Eqs. (48) we obtain \[\begin{split}\rho_{i}&=\frac{\sum_{j}e^{-\beta j\sum_{a}A_{ia}\sigma_{a}}\ v_{j}\ j}{\sum_{j}e^{-\beta j\sum_{a}A_{ia}\sigma_{a}}\ v_{j}}\,\\ \sigma_{a}&=\frac{\sum_{b}e^{-\beta b\sum_{i}A_{ia}\rho_{i}}\ \nu_{b}\ b}{\sum_{b}e^{-\beta b\sum_{i}A_{ia}\rho_{i}}\ \nu_{b}}\,\end{split} \tag{55}\] which are the self-consistent Eqs. 
(9) for \(\rho\) and \(\sigma\) given in the main text. There are still two unknown vectors in the previous equations: the vectors \(v\) and \(\nu\). In order to determine them we consider Eqs. (50) and eliminate \(u_{i}\) and \(\mu_{a}\) using Eqs. (48), thus obtaining \[\begin{split} v_{j}&=\Bigg{(}\sum_{i}\Big{[}\sum_{k}\ v_{k}\ e^{-\beta(k-j)\sum_{a}A_{ia}\sigma_{a}}\Big{]}^{-1}\Bigg{)}^{-1},\\ \nu_{b}&=\Bigg{(}\sum_{a}\Big{[}\sum_{c}\ \nu_{c}\ e^{-\beta(c-b)\sum_{i}A_{ia}\rho_{i}}\Big{]}^{-1}\Bigg{)}^{-1}\,\end{split} \tag{56}\] which are the self-consistent Eqs. (10) for \(v\) and \(\nu\) given in the main text. In the next section we describe a simple iterative algorithm to solve Eqs. (55) and (56). ## VIII Algorithm The algorithm to solve Eqs. (55) and (56) consists of four basic steps, explained below.

1. Initialize \(\rho_{i}\) uniformly at random in \([1,N]\); similarly, initialize \(\sigma_{a}\) uniformly at random in \([1,M]\). Also, initialize \(v_{j}\) and \(\nu_{b}\) uniformly at random in \((0,1]\).
2. Choose an initial value for \(\beta\). To start, initialize \(\beta\) using the following formula: \[\beta=\beta_{\text{init}}=\frac{1}{\max\left[N\max_{i}\{k_{i}\},M\max_{a}\{k_{a}\}\right]}\,\] (57) where \(k_{i}=\sum_{a}A_{ia}\) and \(k_{a}=\sum_{i}A_{ia}\).
3. Set \(\tau=1\) and a tolerance \(\text{TOL}=10^{-3}\). Then run the following subroutine.
   (a) Iterate Eqs. (56) according to the following updating rules \[\begin{split} v_{j}(t+1)&=\Bigg{(}\sum_{i}\Big{[}\sum_{k}\ v_{k}(t)\ e^{-\beta(k-j)\sum_{a}A_{ia}\sigma_{a}}\Big{]}^{-1}\Bigg{)}^{-1},\\ \nu_{b}(t+1)&=\Bigg{(}\sum_{a}\Big{[}\sum_{c}\ \nu_{c}(t)\ e^{-\beta(c-b)\sum_{i}A_{ia}\rho_{i}}\Big{]}^{-1}\Bigg{)}^{-1}\,\end{split}\] (58) until convergence.
   (b) Iterate Eqs. (55) according to the following updating rules \[\begin{split}\rho_{i}(t+1)&=\frac{\sum_{j}e^{-\beta j\sum_{a}A_{ia}\sigma_{a}(t)}\ v_{j}\ j}{\sum_{j}e^{-\beta j\sum_{a}A_{ia}\sigma_{a}(t)}\ v_{j}}\,\\ \sigma_{a}(t+1)&=\frac{\sum_{b}e^{-\beta b\sum_{i}A_{ia}\rho_{i}(t)}\ \nu_{b}\ b}{\sum_{b}e^{-\beta b\sum_{i}A_{ia}\rho_{i}(t)}\ \nu_{b}}\,\end{split}\] (59) until convergence. Call \(\rho_{i}^{(\tau)}\) and \(\sigma_{a}^{(\tau)}\) the converged vectors and compute \[\text{MAXDIFF}\equiv\max\Bigg{\{}\max_{i}\Big{[}\rho_{i}^{(\tau)}-\rho_{i}^{(\tau-1)}\Big{]},\max_{a}\Big{[}\sigma_{a}^{(\tau)}-\sigma_{a}^{(\tau-1)}\Big{]}\Bigg{\}}.\] (60)
   (c) If \(\text{MAXDIFF}<\text{TOL}\), then RETURN \(\rho_{i}^{(\tau)}\) and \(\sigma_{a}^{(\tau)}\); otherwise increase \(\tau\) by \(1\) and repeat from (a).
4. Increase \(\beta\rightarrow\beta+d\beta\) and repeat from step 3, or terminate if the returned vectors did not change from the iteration at \(\beta-d\beta\).

Having found the solution vectors \(\rho\) and \(\sigma\), we convert them into integer rankings as follows. The smallest value of \(\rho_{i}\) is assigned rank \(1\), the second smallest is assigned rank \(2\), and so on. This procedure generates a mapping from \(1,2,...,N\) to \(i_{1},i_{2},...,i_{N}\) that can be represented by an \(N\times N\) permutation matrix \(P_{ij}\). The same procedure, applied to \(\sigma_{a}\), generates an \(M\times M\) permutation matrix \(Q_{ab}\). Matrices \(P\) and \(Q\) represent the optimal permutations that solve the nestedness maximization problem. 
Eventually, application of the similarity transformation \[A\to P^{t}AQ\, \tag{61}\] brings the adjacency matrix into its maximally nested form having all nonzero entries clustered in the upper left corner, as seen in Fig. 2c.
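For completeness, a minimal Python sketch of the annealing scheme of Section VIII is given below. It is only illustrative: for brevity it updates \(v,\nu,\rho,\sigma\) in a single sweep per \(\beta\) value rather than in the nested loops of steps 3(a)-(c), and the annealing increment and stopping rule are arbitrary choices.

```python
import numpy as np

def stochastic_rankings(A, n_anneal=30, n_sweeps=200, tol=1e-3, seed=0):
    """Illustrative solver for Eqs. (55)-(56) with the beta-annealing of Section VIII.
    A is the N x M binary adjacency matrix; returns the stochastic rankings rho, sigma."""
    N, M = A.shape
    rng = np.random.default_rng(seed)
    rho = rng.uniform(1, N, N)
    sigma = rng.uniform(1, M, M)
    v = rng.uniform(1e-3, 1.0, N)                      # multipliers v_j,  j = 1..N
    nu = rng.uniform(1e-3, 1.0, M)                     # multipliers nu_b, b = 1..M
    j_idx, b_idx = np.arange(1, N + 1), np.arange(1, M + 1)
    beta = 1.0 / max(N * A.sum(axis=1).max(), M * A.sum(axis=0).max())   # Eq. (57)
    d_beta = beta                                      # arbitrary annealing step

    for _ in range(n_anneal):
        for _ in range(n_sweeps):
            c = A @ sigma                              # c_i = sum_a A_ia sigma_a
            d = A.T @ rho                              # d_a = sum_i A_ia rho_i
            Ev = np.exp(-beta * np.outer(c, j_idx))    # Ev[i, j-1] = exp(-beta j c_i)
            En = np.exp(-beta * np.outer(d, b_idx))
            v = 1.0 / ((1.0 / (Ev @ v)) @ Ev)          # Eq. (58), in factorized form
            nu = 1.0 / ((1.0 / (En @ nu)) @ En)
            rho_new = (Ev @ (v * j_idx)) / (Ev @ v)    # Eq. (59)
            sigma_new = (En @ (nu * b_idx)) / (En @ nu)
            diff = max(np.abs(rho_new - rho).max(), np.abs(sigma_new - sigma).max())
            rho, sigma = rho_new, sigma_new
            if diff < tol:                             # analogue of the MAXDIFF test, Eq. (60)
                break
        beta += d_beta
    return rho, sigma

# Integer rankings: rank 1 goes to the smallest rho_i, rank 2 to the second smallest, etc.
# ranks_rows = np.argsort(np.argsort(rho)) + 1
```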
2308.11653
A theorem for the normalization of continuous spectrum stationary states
We present analytic formulae that simplify the evaluation of the normalization of continuous spectrum stationary states in the one-dimensional Schr\"odinger equation.
G. Kälbermann
2023-08-20T12:06:40Z
http://arxiv.org/abs/2308.11653v2
# A theorem for the normalization of continuous spectrum stationary states ###### Abstract We present analytic formulae that simplify the evaluation of the normalization of continuous spectrum stationary states in the one-dimensional Schrodinger equation. PACS: 03.65-w Subjects: orthonormality, scattering states \({}^{*}\)e-mail address: [email protected] ## 1 Introduction The normalization of continuous spectrum stationary state wave functions of the one dimensional Schrodinger equation in the presence of a real and finite range potential often becomes involved and cumbersome. This normalization is essential for the use of these waves in a perturbative solution of more complicated problems in which extra interactions are present. It is also essential, for the expression of completeness of the spectrum, to have properly normalized wave functions [1, 2]. We here show that the calculation gets simplified by resorting to hitherto unknown mathematical equalities. In section 2 we derive these relations. Section 3 applies one of the equalities to the case of a square well. Section 4 evaluates the normalization of continuous spectrum wave functions. ## 2 Derivation of the equalities The Schrodinger equation for the one dimensional system labeled by a mass \(m\) with arbitrary potential V(x) is \[i\frac{\partial\Psi}{\partial t}=\frac{-1}{2\;m}\frac{\partial^{2}\Psi}{\partial x^{2}}+V(x)\Psi \tag{1}\] For stationary states \[i\frac{\partial\Psi}{\partial t}=E\psi=\frac{-1}{2\;m}\frac{\partial^{2}\psi}{\partial x^{2}}+V(x)\psi \tag{2}\] with \(E\), the energy eigenvalue of the state, positive for the continuous case and negative for discrete bound states. Using eq.(2) we can readily see that stationary state waves can be taken as real except for the trivial time evolution factor \(\Psi(x,t)=e^{-iEt}\psi(x)\). The full spectrum is spanned by even and odd spatial symmetry functions. For the continuous spectrum labeled by a wavenumber \(k\) with \(E=\frac{k^{2}}{2m}\), the spectrum is exhausted by taking even and odd waves with \(k\geq 0\). Consider the following integral \[I=\int_{x_{1}}^{x_{2}}\psi_{k}(x)\;\psi_{k^{\prime}}(x)dx \tag{3}\] Using eq.(2) we obtain \[I=\int_{x_{1}}^{x_{2}}\left(\frac{-1}{k^{2}}\frac{\partial^{2}\psi_{k}}{\partial x^{2}}+\frac{V(x)}{E}\psi_{k}(x)\right)\psi_{k^{\prime}}(x)\,dx \tag{4}\] Integration by parts and repeated use of eq.(2) yield \[I=\frac{1}{(k^{2}-k^{\prime 2})}\left(\frac{\partial\psi_{k^{\prime}}}{\partial x}\;\psi_{k}(x)-\frac{\partial\psi_{k}}{\partial x}\;\psi_{k^{\prime}}(x)\right)\bigg{|}_{x_{1}}^{x_{2}} \tag{5}\] We will apply eq.(5) to the normalization of continuous spectrum wave functions. The expression is well defined for \(k\neq k^{\prime}\). The case of \(k=k^{\prime}\) will yield a \(\delta\) function normalization naturally. Consider eq.(3) along the whole real line, the normalization integral \[I =I_{1}+I_{2}+I_{3}\] \[I =\int_{-\infty}^{\infty}\psi_{k}(x)\ \psi_{k^{\prime}}(x)dx\] \[I_{1} =\int_{-\infty}^{x_{a}}\psi_{k}(x)\ \psi_{k^{\prime}}(x)dx\] \[I_{2} =\int_{x_{a}}^{x_{b}}\psi_{k}(x)\ \psi_{k^{\prime}}(x)dx\] \[I_{3} =\int_{x_{b}}^{\infty}\psi_{k}(x)\ \psi_{k^{\prime}}(x)dx \tag{6}\] where \(x_{a}\), \(x_{b}\) are the left and right boundaries of the finite range potential V(x), or alternatively any chosen points outside the potential range at which the potential is negligible. For the first and third pieces of the integral of eq.(6) we can take the asymptotic solutions for the potential free region. 
For \(x\geq 0\) \[\psi_{k}^{out}(x)=A(k)\ e^{ikx}+A^{*}(k)e^{-ikx} \tag{7}\] and similar expressions for \(x\leq 0\), appropriate for the even-odd character of the wave. For the interior region we need \[I_{2}=\frac{1}{(k^{2}-k^{\prime 2})}\left(\frac{\partial\psi_{k^{\prime}}^{int}}{\partial x}\ \psi_{k}^{int}(x)\ -\frac{\partial\psi_{k}^{int}}{\partial x}\ \psi_{k^{\prime}}^{int}(x)\right)\bigg{|}_{x_{a}}^{x_{b}} \tag{8}\] where \(\psi^{int}\) is the solution of the Schrodinger equation for the region where the potential is nonvanishing, referred to as the interior region. In eq.(8) both the numerator and the denominator vanish when \(k=k^{\prime}\). As shown below, the final expression is well defined, leading to the standard normalization of continuum states in terms of the Dirac \(\delta\) function. Continuity of the wave function and its derivative at \(x_{a}\) and \(x_{b}\) demands \[\psi_{k}^{int}(x_{a})= \psi_{k}^{out}(x_{a})\] \[\psi_{k}^{int}(x_{b})= \psi_{k}^{out}(x_{b})\] \[\frac{\partial\psi_{k}^{int}}{\partial x}(x_{a}) = \frac{\partial\psi_{k}^{out}}{\partial x}(x_{a})\] \[\frac{\partial\psi_{k}^{int}}{\partial x}(x_{b}) = \frac{\partial\psi_{k}^{out}}{\partial x}(x_{b}) \tag{9}\] where \(\psi^{out}\) is the solution of the Schrodinger equation for the region where the potential effectively vanishes, the outer region. This comes as no surprise, due to the property of the Wronskian appearing in eq.(8). Therefore, we can replace \(\psi^{int}\) by \(\psi^{out}\) in eq.(8). The normalization proceeds solely through the knowledge of \(\psi^{out}\). The outer function \(\psi^{out}\) obeys the Schrodinger equation without a potential, namely \[i\frac{\partial\Psi}{\partial t}=E\psi=\frac{-1}{2\;m}\frac{\partial^{2}\psi}{\partial x^{2}} \tag{10}\] It does not solve the equation in the inner region bearing a potential. However, it obeys the boundary conditions of eq.(9). Both properties imply that, although it is a continuous function even when extended to the inner region, its derivative is not. There are points \(x_{i}\), at least one, at which the derivative is discontinuous. In the next section we exemplify this characteristic for the case of a square well. The solution of the outer region applied to the inner region reads \[\psi^{out}(x)=\sum_{i}\left(\Theta(x_{i}-x)\psi_{1}(x)+\Theta(x-x_{i})\psi_{2}(x)\right) \tag{11}\] with \(\Theta\) the step function. Continuity of the function at \(x_{i}\) implies \[\psi^{\prime\ out}(x)=\sum_{i}\left(\Theta(x_{i}-x)\psi_{1}^{\prime}(x)+\Theta(x-x_{i})\psi_{2}^{\prime}(x)\right) \tag{12}\] where primes denote derivatives with respect to \(x\); below, \(\delta\) is the Dirac \(\delta\) function. 
The second derivative reads \[\psi^{\prime\prime\ out}(x)=\sum_{i}\left(\Theta(x_{i}-x)\psi_{1}^{\prime\prime}(x)+\Theta(x-x_{i})\psi_{2}^{\prime\prime}(x)+\delta(x-x_{i})\big{(}\psi_{2}^{\prime}(x)-\psi_{1}^{\prime}(x)\big{)}\right) \tag{13}\] Applying the boundary conditions of eq.(9) to eq.(8) we obtain \[I_{2}=\frac{1}{(k^{2}-k^{\prime 2})}\left(\frac{\partial\psi_{k^{\prime}}^{out}}{\partial x}\ \psi_{k}^{out}(x)\ -\frac{\partial\psi_{k}^{out}}{\partial x}\ \psi_{k^{\prime}}^{out}(x)\right)\bigg{|}_{x_{a}}^{x_{b}} \tag{14}\] Writing the boundary term as the integral of its derivative and using eq.(10), eq.(14) becomes \[I_{2}=\frac{1}{(k^{2}-k^{\prime 2})}\int_{x_{a}}^{x_{b}}\left(\frac{\partial^{2}\psi_{k^{\prime}}^{out}}{\partial x^{2}}\ \psi_{k}^{out}(x)\ -\frac{\partial^{2}\psi_{k}^{out}}{\partial x^{2}}\ \psi_{k^{\prime}}^{out}(x)\right)dx \tag{15}\] Inserting eq.(13) into eq.(15) yields \[I_{2}=\int_{x_{a}}^{x_{b}}\psi_{k}^{int}(x)\ \psi_{k^{\prime}}^{int}(x)dx\] \[=\int_{x_{a}}^{x_{b}}\psi_{k}^{out}(x)\ \psi_{k^{\prime}}^{out}(x)dx+\frac{1}{(k^{2}-k^{\prime 2})}\sum_{i}\left(\big{(}\psi_{2,k^{\prime}}^{\prime}(x_{i})-\psi_{1,k^{\prime}}^{\prime}(x_{i})\big{)}\psi_{k}^{out}(x_{i})-\big{(}\psi_{2,k}^{\prime}(x_{i})-\psi_{1,k}^{\prime}(x_{i})\big{)}\psi_{k^{\prime}}^{out}(x_{i})\right) \tag{16}\] Eq.(16) is a new identity that connects the inner and outer region normalization integrals. In the next section we show the validity of eq.(16) for the special case of a square well. ## 3 The case of a square well We here evaluate eq.(16) explicitly. Consider a particle of mass \(m\) in a region of space having a square well potential of strength \(-V_{0}\), placed between \(x=-d\) and \(x=d\). A stationary continuous spectrum even wave function for \(-d\leq x\leq d\) reads \[\psi^{int}=\cos(qx) \tag{17}\] whereas for \(|x|\geq d\) \[\psi^{out}=a\cos(k|x|+\phi) \tag{18}\] The derivative of the wave function of eq.(18) is discontinuous at the origin. Here \(q\) and \(k\) are related by \(\frac{k^{2}}{2m}=\frac{q^{2}}{2m}-V_{0}\). The overall normalization factor is irrelevant for the calculation. The boundary conditions for the function and the derivative at \(|x|=d\) are \[\cos(qd)= a\cos(kd+\phi)\] \[q\sin(qd)= a\,k\sin(kd+\phi) \tag{19}\] Inserting eq.(17) into eq.(16) we find \[I_{2}=\int_{-d}^{d}\psi_{k}^{int}(x)\ \psi_{k^{\prime}}^{int}(x)dx\] \[=\frac{2}{q^{2}-q^{\prime 2}}\bigg{(}q\sin(qd)\cos(q^{\prime}d)-q^{\prime}\sin(q^{\prime}d)\cos(qd)\bigg{)} \tag{20}\] where \(q^{\prime}\) corresponds to \(k^{\prime}\). For the integral of eq.(16) in terms of the wave function of eq.(18) we use \[\psi^{out}=a\cos(kx+\phi)\Theta(x)+a\cos(-kx+\phi)\Theta(-x) \tag{21}\] The derivative of the function is discontinuous at \(x=0\). The expression of eq.(16) in terms of \(\psi^{out}\), including the contributions at the discontinuity at \(x=0\), becomes \[I_{2}=\frac{2\,a\,a^{\prime}}{k^{2}-k^{\prime 2}}\bigg{(}k\sin(kd+\phi)\cos(k^{\prime}d+\phi^{\prime})-k^{\prime}\sin(k^{\prime}d+\phi^{\prime})\cos(kd+\phi)\bigg{)} \tag{22}\] where \(a^{\prime}\) and \(\phi^{\prime}\) are the amplitude and phase corresponding to \(k^{\prime}\). Inserting eq.(19) and \(k^{2}-k^{\prime 2}=q^{2}-q^{\prime 2}\) into eq.(22) we find that eq.(22) is identical to eq.(20). Hence, eq.(16) is correct. The case of odd solutions is straightforward also. ## 4 Normalization of continuous spectrum wave functions The normalization integral of eq.(6) can now be evaluated resorting to eq.(5) or eq.(16). In both cases only \(\psi^{out}\) is needed for the calculation. 
Using eq.(7) in eq.(5) we obtain \[I=4\pi\ |A|^{2}\delta(k-k^{\prime}) \tag{23}\] Knowing the continuous spectrum wave function, we can therefore normalize it to yield the properly normalized continuous spectrum stationary state wave function \(\psi^{norm}\), \[\psi^{norm}=\frac{\psi}{2\sqrt{\pi}|A|} \tag{24}\]
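The square-well check of Section 3 is also easy to reproduce numerically. The sketch below (arbitrary parameter values; variable names are ours) compares the direct quadrature of \(I_{2}\) with the boundary formula of eq.(8) and with the outer-function form of eq.(16), including the derivative-jump contribution at \(x=0\).

```python
import numpy as np
from scipy.integrate import quad

m, V0, d = 1.0, 2.0, 1.5          # arbitrary mass, well depth, half-width
k, kp = 0.7, 1.3                  # two wavenumbers k and k'
q, qp = np.sqrt(k**2 + 2*m*V0), np.sqrt(kp**2 + 2*m*V0)   # k^2/2m = q^2/2m - V0

# Direct quadrature of I2 with the interior even solutions psi^int = cos(q x)
I2_direct, _ = quad(lambda x: np.cos(q*x)*np.cos(qp*x), -d, d)

# Boundary (Wronskian) formula, eq.(8): only psi and psi' at x = +-d are needed
W = lambda x: (-qp*np.sin(qp*x))*np.cos(q*x) - (-q*np.sin(q*x))*np.cos(qp*x)
I2_wronskian = (W(d) - W(-d)) / (q**2 - qp**2)     # note k^2 - k'^2 = q^2 - q'^2

# Outer solution psi^out = a cos(k|x| + phi), matched at x = d as in eq.(19)
def match(q_, k_):
    phi = np.arctan2(q_*np.sin(q_*d), k_*np.cos(q_*d)) - k_*d
    a = np.hypot(q_*np.sin(q_*d), k_*np.cos(q_*d)) / k_
    return a, phi
a1, ph1 = match(q, k)
a2, ph2 = match(qp, kp)

# eq.(16): outer-function integral plus the correction from the derivative jump at x = 0
I_out, _ = quad(lambda x: a1*np.cos(k*abs(x)+ph1) * a2*np.cos(kp*abs(x)+ph2), -d, d)
jump1 = -2*a1*k*np.sin(ph1)       # psi_2' - psi_1' at x = 0 for wavenumber k
jump2 = -2*a2*kp*np.sin(ph2)      # same for k'
I2_outer = I_out + (jump2*a1*np.cos(ph1) - jump1*a2*np.cos(ph2)) / (k**2 - kp**2)

print(I2_direct, I2_wronskian, I2_outer)   # the three values agree to quadrature accuracy
```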
2305.11877
Development of a Metaverse Platform for Tourism Promotion in Apulia
Metaverse is an engaging way to recreate in a digital environment the real world. It allows people to connect not by just browsing a website, but by using headsets and virtual reality techniques. The metaverse is actually in a rapid development phase, thanks to the advances in different topics. This paper proposes a smart tourism platform in which tourists can interact with guides and different kinds of suppliers, without the need to physically visit the city they are in. We propose some techniques to scan the real world and transpose it in a metaverse platform, using the recreation of an Italian city, Bari, as a real life scenario.
Enrico Carmine Ciliberti, Marco Fiore, Marina Mongiello
2023-05-05T14:02:33Z
http://arxiv.org/abs/2305.11877v1
# Development of a Metaverse Platform for Tourism Promotion in Apulia ###### Abstract Metaverse is an engaging way to recreate the real world in a digital environment. It allows people to connect not by just browsing a website, but by using headsets and virtual reality techniques. The metaverse is currently in a rapid development phase, thanks to advances in different fields. This paper proposes a smart tourism platform in which tourists can interact with guides and different kinds of suppliers, without the need to physically visit the city they are in. We propose some techniques to scan the real world and transpose it into a metaverse platform, using the recreation of an Italian city, Bari, as a real life scenario. metaverse, tourism, Apulia, proposal ## I Introduction The metaverse is frequently characterized as the internet's successor, in which users can interact with each other and with digital objects in three dimensions rather than simply browsing websites or using social media platforms. It provides a variety of new opportunities for entertainment, social interaction, education, and commerce, thanks to its direct effect on user satisfaction [1]. Among the different applications of the metaverse, we want to focus on smart tourism. The authors of [2] raise three main questions with respect to the application of the metaverse to tourism: what the staging experiences in the metaverse could be, how consumer behavior will change, and what business strategies can be developed in this approach. A metaverse tourism ecosystem is defined in [3]: travelers and suppliers are connected in both the digital and the physical world. The metaverse can provide mirror worlds to virtualize real life experiences. The technology most closely linked to the tourism and metaverse topic is Mixed Reality (MR) [4], in particular for visiting cultural heritage. MR helps Generation Z people feel more involved in tourism, as industries create more engaging adventures. Our proposal aims to develop a metaverse platform to support tourism in Apulia, Italy. In particular, we take advantage of MR technology to let tourists visit Bari, an Apulian city, discovering cultural places and activities. Suppliers can also join this platform to recreate their activities and promote them to the public, creating more engagement among visitors. Finally, tourist guides can use their avatars to easily connect with tourists and let them discover the city in a new and entertaining way. ## II Our scenario Our scenario is explained below. The back-end of the architecture is a metaverse platform that allows sharing three-dimensional spaces, usable by different users in real time via devices such as computers, smartphones and virtual reality headsets. The front-end is an immersive virtual space, built with 3D graphics programs and then programmed and loaded into the metaverse. The environment is programmed to possess precise details of real environments, with the addition of extra information elements such as texts, graphics or completely additional 3D objects, available to users under certain conditions to enhance their experience. Users accessing the platform are either consumers or providers. Consumers are visitors of the platform: they enter the space, attracted by an event inside the platform, and then become intrigued by the reproduction of the real space. 
Furthermore, they have the opportunity to learn and increase their knowledge of the place in an organic way, thanks to extra information elements present in the space for advertising, decorative and informative purposes. Providers create the three-dimensional reproduction of the space and focus on populating it by organizing cultural-themed events, promoting the place of interest with exhibitions, historical anecdotes and details about interesting geographical locations, thus ensuring the loyalty of passing visitors. Examples of providers are:

* **Tourist guides**, who could get in touch with potential tourists and visitors directly from the virtual space, thus following them in detail on any potential question or curiosity.
* **Commercial activities** present in the virtual reproduction of the real world, such as a bar or restaurant, which could offer special agreements to those who complete certain challenges inside the space [5, 6] or use the advertising space in the classic way.
* **Cultural promotion associations**, which could use virtual spaces to promote a municipality or locality far beyond its territorial borders, easily reaching the international scale without sacrificing real events held in person. This approach guarantees a hybridization of real activities with virtual ones via VR headsets [7].

## III Implementation design Various steps of the implementation are shown in Fig. 1, from the real photo of Piazza del Ferrarese in Bari (Fig. 1(a)) to an intermediate representation of the location (Fig. 1(b)), to the final implementation (Fig. 1(c)). The manufacturing process consists of taking a series of photographs so that the geometry of the object is centered and as straight as possible. We import the image as a plane in Blender1 and, with the edge looping technique, we underline the salient reliefs of the photograph, which are then extruded. To optimize structures with a repeated pattern, an array is used as shown in Fig. 1(d). Preliminary results show a complete platform in the metaverse to explore with or without a headset, thanks to the characteristics of the chosen back-end platform, Spatial2. Footnote 1: [https://www.blender.org](https://www.blender.org) Footnote 2: [https://www.spatial.io](https://www.spatial.io) ## IV Conclusion The proposed project will have a great impact on the tourism market, which has always been looking for new innovative ways to attract the attention of millions of potential tourists every day via web-based social platforms. The main issue with such approaches was the lack of social and human aspects, such as the sociability between people and the sense of discovery. Users who discover a place through a virtual experience, whether it actually exists or not, can reach a level of immersion that generates true memories equal to those of a visit to a real place. Therefore it is of vital importance to exploit this very high level of immersion to create innovative experiences that can amplify the possibilities of tourism based on web3. Furthermore, this experience has the additional element of sociality, which allows people in the same space to meet and make friends, chat and therefore encourage networking. Future steps concern the development of additional locations in the platform and an improvement of the quality of textures: baking of the textures is necessary in order to lighten the overall scene in the best possible way. The overall weight of a Spatial scene is 100 MB. 
Constant optimization of the manufacturing process is therefore necessary throughout the creation of the model.
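As an illustration of the array-based optimization mentioned in Section III, a minimal Blender Python sketch is given below; the object name, repetition count and offset are hypothetical values, not taken from the actual Bari scene.

```python
import bpy

# Illustrative only: repeat a modelled facade element along the X axis with an
# Array modifier instead of duplicating geometry, keeping the scene lightweight.
obj = bpy.data.objects["FacadeElement"]          # hypothetical object name
mod = obj.modifiers.new(name="RepeatPattern", type='ARRAY')
mod.count = 6                                    # number of repetitions (arbitrary)
mod.use_relative_offset = True
mod.relative_offset_displace[0] = 1.0            # offset by one object width along X
```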
2301.12651
Complex Critical Points of Deep Linear Neural Networks
We extend the work of Mehta, Chen, Tang, and Hauenstein on computing the complex critical points of the loss function of deep linear neural networks when the activation function is the identity function. For networks with a single hidden layer trained on a single data point we give an improved bound on the number of complex critical points of the loss function. We show that for any number of hidden layers complex critical points with zero coordinates arise in certain patterns which we completely classify for networks with one hidden layer. We report our results of computational experiments with varying network architectures defining small deep linear networks using HomotopyContinuation.jl.
Ayush Bharadwaj, Serkan Hoşten
2023-01-30T04:16:49Z
http://arxiv.org/abs/2301.12651v1
# Complex critical points of deep linear neural networks ###### Abstract. We extend the work of Mehta, Chen, Tang, and Hauenstein on computing the complex critical points of the loss function of deep linear neural networks when the activation function is the identity function. For networks with a single hidden layer trained on a single data point we give an improved bound on the number of complex critical points of the loss function. We show that for any number of hidden layers complex critical points with zero coordinates arise in certain patterns which we completely classify for networks with one hidden layer. We report our results of computational experiments with varying network architectures defining small deep linear networks using _HomotopyContinuation.jl_. ## 1. Introduction Machine learning applications through deep learning techniques have been enormously successful in areas ranging from natural language processing and object recognition to drug discovery [4, 11]. These techniques involve optimizing a non-convex loss (cost) function such as the mean squared error between the observed and predicted data from the underlying neural network. Typically, this is an NP-hard problem [6] where the loss function has numerous local minima. Though methods such as stochastic gradient descent are empirically observed to close in to _good enough_ local minima (see [1, 9, 17]), a theoretical understanding of the success of deep learning is incomplete. Deep _linear_ neural networks are simpler models on which detailed analysis is possible [1]. They differ from more general deep neural networks in the activation function used: linear neural networks employ linear activation functions instead of typical nonlinear ones such as ReLU. Nevertheless, the loss function of deep linear networks is non-convex, and these networks exhibit many characteristics found in deep nonlinear networks. They have been used as a testing ground for ideas in general artificial neural networks [2]. 
Our starting point is the work of Mehta, Chen, Tang, and Hauenstein [14] in which the authors employ techniques from (numerical) algebraic geometry to compute the complex critical points of the loss function of deep linear neural networks. We briefly recall the setup. A deep linear neural network with \(H\) hidden layers is given by weight matrices \(W_{1},\ldots,W_{H+1}\), where \(W_{1}\) has size \(d_{1}\times d_{x}\), \(W_{i}\) has size \(d_{i}\times d_{i-1}\) for \(i=2,\ldots,H\), and \(W_{H+1}\) has size \(d_{y}\times d_{H}\); we write \(N\) for the total number of weights. The network is trained on \(m\) data points \((x_{1},y_{1}),\ldots,(x_{m},y_{m})\) with \(x_{i}\in\mathbb{R}^{d_{x}}\) and \(y_{i}\in\mathbb{R}^{d_{y}}\), and training amounts to choosing the weights that minimize the loss function: \[\mathcal{L}(W)=\frac{1}{2}\sum_{i=1}^{m}\lVert W_{H+1}W_{H}\cdots W_{1}x_{i}-y_{i}\rVert_{2}^{2}.\] We will follow [14] to regularize the loss function by using regularization matrices \(\Lambda_{1},\ldots,\Lambda_{H+1}\) where each \(\Lambda_{i}\) is the same size as \(W_{i}\). We arrive at the regularized loss function \[\mathcal{L}^{\Lambda}(W)=\mathcal{L}(W)+\frac{1}{2}\left(\lVert\Lambda_{1}\circ W_{1}\rVert_{F}^{2}+\cdots+\lVert\Lambda_{H+1}\circ W_{H+1}\rVert_{F}^{2}\right) \tag{1}\] where \(\Lambda_{i}\circ W_{i}\) denotes the entrywise (Hadamard) product of \(\Lambda_{i}\) and \(W_{i}\). 
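As a quick illustration of the objects involved, the following numpy sketch evaluates \(\mathcal{L}^{\Lambda}\) as written in (1) for a small single-hidden-layer network; all dimensions and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_1, d_y, m = 3, 2, 2, 4                    # arbitrary small architecture, H = 1
W1, W2 = rng.standard_normal((d_1, d_x)), rng.standard_normal((d_y, d_1))
L1, L2 = rng.standard_normal((d_1, d_x)), rng.standard_normal((d_y, d_1))  # Lambda_1, Lambda_2
X = rng.standard_normal((d_x, m))                # columns are the inputs x_i
Y = rng.standard_normal((d_y, m))                # columns are the outputs y_i

def regularized_loss(W1, W2):
    residual = W2 @ W1 @ X - Y                   # W_{H+1}...W_1 x_i - y_i for all i at once
    data_term = 0.5 * np.sum(residual**2)        # the loss L(W)
    reg_term = 0.5 * (np.sum((L1 * W1)**2) + np.sum((L2 * W2)**2))  # Frobenius norms in (1)
    return data_term + reg_term

print(regularized_loss(W1, W2))
```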
This choice of regularization is justified by the results (see Theorem 1 and Theorem 2 in [14]) that for almost all choices of the regularization matrices the solutions to the gradient system \(\nabla\mathcal{L}^{\Lambda}=0\) are isolated and nondegenerate. ### Our contributions This paper is about the complex critical points of \(\mathcal{L}^{\Lambda}\), namely, the solutions to \(\nabla\mathcal{L}^{\Lambda}=0\). This gradient system is a system of \(N\) polynomial equations in \(N\) variables (weights) where each polynomial has degree \(2H+1\). This set up opens the door to using (numerical) algebraic geometry to bound the number of critical points as well as to compute them. In Section 2 we will summarize the gradient equations for the regularized loss function of deep linear neural networks. In Section 3 we will prove a bound on the number of complex critical points of the loss function of a linear neural network with one hidden layer trained on a single data point. We achieve this bound by rewriting the gradient equations. Our new bound is much better than the classic Bezout bound as well as the BKK bound previously observed [14, Table I] and confirmed and expanded by us (see Table 1 in the Appendix). Section 4 is devoted to understanding critical points where certain weights are zero. We show that, for any deep linear network trained on a single data point, if a weight is zero, then the entire row or column containing that weight must be zero. We also prove that row \(k\) of the weight matrix \(W_{i}\) is entirely zero if and only if the column \(k\) of the weight matrix \(W_{i+1}\) is entirely zero. As a corollary, we completely classify the zero patterns of complex critical points for linear networks with one hidden layer trained on a single data point. In Section 5 we turn to our computational experiments. Using _HomotopyContinuation.jl_[7] we solve the gradient equations for different parameter values of \(d_{x},d_{y}\) and \(d_{i}\) when \(H,m\in\{1,2\}\). These extend the computational horizon beyond the one set in [14]. We believe the extended data we provide will inspire further results. ## 2. Gradient Equations for the Loss Function The critical points of the regularized loss function \(\mathcal{L}^{\Lambda}(W)\) are the solutions to \(\nabla\mathcal{L}^{\Lambda}=0\), the gradient equations. We derive the gradient equations in a compact form. Let \(W=W_{H+1}W_{H}\cdots W_{1}\), and let \(U_{i}^{T}=\prod_{j=i+1}^{H+1}W_{j}^{T}\) and \(V_{i}^{T}=\prod_{j=1}^{i-1}W_{j}^{T}\). Then \[\frac{\partial\mathcal{L}^{\Lambda}}{\partial W_{i}}=U_{i}^{T}\left(W\left( \sum_{k=1}^{m}x_{k}x_{k}^{T}\right)-\left(\sum_{k=1}^{m}y_{k}x_{k}^{T}\right) \right)V_{i}^{T}+\Lambda_{i}\circ W_{i} \tag{2}\] where the \((j,k)\) entry of the matrix \(\frac{\partial\mathcal{L}^{\Lambda}}{\partial W_{i}}\) is the partial derivative of \(\mathcal{L}^{\Lambda}\) with respect to the \((j,k)\) entry of \(W_{i}\). It is easy to observe that the resulting system consists of \(N\) polynomials in \(N\) variables. The constant term in each polynomial is zero, therefore \(0\in\mathbb{C}^{N}\) is always a critical point. The terms of highest degree in every polynomial have degree \(2H+1\). This leads to an explicit classical Bezout bound (CBB). 
**Proposition 1**.: _[_14_, Proposition 3]_ _The regularized loss function \(\mathcal{L}^{\Lambda}\) has at most \((2H+1)^{N}\) complex critical points._ **Example 2**.: Let us consider a 2-layer network with \(W_{1}=[\alpha_{1}\ \alpha_{2}]\), \(W_{2}=\left[\begin{smallmatrix}\beta_{1}\\ \beta_{2}\end{smallmatrix}\right]\), \(X=\left[\begin{smallmatrix}1&2\\ 3&4\end{smallmatrix}\right]\), \(Y=\left[\begin{smallmatrix}1&3\\ 2&4\end{smallmatrix}\right]\), \(\Lambda_{1}=\left[\begin{smallmatrix}4&-3\end{smallmatrix}\right]\), and \(\Lambda_{2}=\left[\begin{smallmatrix}-2\\ 5\end{smallmatrix}\right]\). Using these in (2), we get the gradient system \[5\alpha_{1}\beta_{1}^{2}+5\alpha_{1}\beta_{2}^{2}+11\alpha_{2} \beta_{1}^{2}+11\alpha_{2}\beta_{2}^{2}-7\beta_{1}-10\beta_{2}+4\alpha_{1} =0\] \[11\alpha_{1}\beta_{1}^{2}+11\alpha_{1}\beta_{2}^{2}+25\alpha_{2} \beta_{1}^{2}+25\alpha_{2}\beta_{2}^{2}-15\beta_{1}-22\beta_{2}-3\alpha_{2} =0\] \[5\alpha_{1}^{2}\beta_{1}+22\alpha_{1}\alpha_{2}\beta_{1}+25 \alpha_{2}^{2}\beta_{1}-7\alpha_{1}-15\alpha_{2}-2\beta_{1} =0\] \[5\alpha_{1}^{2}\beta_{2}+22\alpha_{1}\alpha_{2}\beta_{2}+25 \alpha_{2}^{2}\beta_{2}-10\alpha_{1}-22\alpha_{2}+5\beta_{2} =0\] As the above example indicates the gradient equations are sparse. This allows the use of the BKK bound for the number of critical points in \((\mathbb{C}^{*})^{N}\) as well as a modified BKK bound for the number of critical points in \(\mathbb{C}^{N}\)[12, 15]. For instance, the BKK bound for the solutions in \(\mathbb{C}^{4}\) for the gradient equations in Example 2 is 33. The actual number of solutions in \((\mathbb{C}^{*})^{4}\) and \(\mathbb{C}^{4}\) are 16 and 17, respectively. The Bezout bound gives \(3^{4}=81\). Our computational experiments in Section 5 (also see [14]) will show that generally all of these bounds are far from the actual number of critical points. Starting from the next section we take a closer look into improving the BKK bound. We close this section with the following result. **Proposition 3**.: _Each polynomial in the gradient equations (2) has the same monomial support for generic input and output vectors as \(m\), the number of these vectors, varies while the rest of the parameters, namely, \(H\), \(d_{x}\), \(d_{y}\), and \(d_{i}\), \(i=1,\ldots,H\) stay constant. In particular, BKK bounds on the number of complex critical points for these systems in \((\mathbb{C}^{*})^{N}\) and \((\mathbb{C})^{N}\) are the same bounds for any \(m\)._ Proof.: First, we replace in (2) the matrix \(\sum_{k=1}^{m}x_{k}x_{k}^{T}\) with the \(d_{x}\times d_{x}\) symmetric matrix of indeterminates \(Z=(z_{ij})\), and the matrix \(\sum_{k=1}^{m}y_{k}x_{k}^{T}\) with the \(d_{y}\times d_{x}\) matrix of indeterminates \(T=(t_{ij})\). Multiplying out the matrices in (2) given polynomials in the entries of \(W_{1},\ldots,W_{H+1}\) with coefficients in \(z_{ij}\) and \(t_{ij}\). These polynomials have terms only in degrees \(2H+1\), \(H\), and \(1\). The terms of degree \(2H+1\) that actually appear in a polynomial have coefficients that are linear in the entries of \(Z\), and the terms of degree \(H\) that appear have coefficients that are linear in the entries of \(T\). Finally, the single monomial of degree one that appears in a polynomial has an entry of some \(\Lambda_{i}\) as a coefficient. We claim that for generic \(x_{1},\ldots,x_{m}\) and \(y_{1},\ldots,y_{m}\) the linear coefficient polynomials in \(Z\) or \(T\) do not vanish. 
This follows from the fact that determinantal varieties of \(d_{x}\times d_{x}\) symmetric matrices and determinantal varieties of \(d_{y}\times d_{x}\) matrices of rank \(\leq r\) are not contained in linear subspaces. Genericity of \(\Lambda_{i}\) guarantees that the required monomial of degree one is also present. ## 3. Networks with one hidden layer trained on one data point This section provides evidence that, by carefully looking at the gradient equations, one can gain insight into the number of critical points of the regularized loss function. We will treat linear neural networks with one hidden layer trained on a single data point (\(H=1\) and \(m=1\)) with no restrictions on the dimensions of the input and output vectors (\(d_{x}=n\) and \(d_{y}=p\)) as well as on the number of neurons in the hidden layer (\(d_{1}=d\)). Although this may be viewed as unrealistic for real-world applications, it is meant to be a point of departure for more work from symbolic and numerical algebraic geometry. To simplify the exposition we let \(A=W_{1}\) and \(B=W_{2}\). We also use \(\Lambda=\Lambda_{1}\) and \(\Sigma=\Lambda_{2}\). If we declare \(S_{i}=\sum_{j=1}^{n}a_{ij}x_{j}\) for \(i=1,\ldots,d\) and \(R_{k}=\sum_{i=1}^{d}b_{ki}S_{i}-y_{k}\) for \(k=1,\ldots,p\), then \[Wxx^{T}-yx^{T}=\begin{pmatrix}R_{1}x_{1}&\cdots&R_{1}x_{n}\\ \vdots&&\vdots\\ R_{p}x_{1}&\cdots&R_{p}x_{n}\end{pmatrix}.\] Now \(B^{T}(Wxx^{T}-yx^{T})+\Lambda\circ A=0\) yields \[\left(\sum_{k=1}^{p}b_{k1}R_{k}\right)x_{j} =-\lambda_{1j}a_{1j}\qquad j=1,\ldots,n\] \[\vdots\] \[\left(\sum_{k=1}^{p}b_{kd}R_{k}\right)x_{j} =-\lambda_{dj}a_{dj}\qquad j=1,\ldots,n \tag{3}\] Similarly, \((Wxx^{T}-yx^{T})A^{T}+\Sigma\circ B=0\) gives rise to \[R_{k}S_{i}=-\sigma_{ki}b_{ki}\quad\ i=1,\ldots,d\quad\ k=1,\ldots,p. \tag{4}\] **Lemma 4**.: _For generic input and output vectors \(x\) and \(y\) and generic regularization matrices \(\Lambda\) and \(\Sigma\) we obtain_ \[a_{1j} =\left(\frac{x_{j}}{\lambda_{1j}}\frac{\lambda_{11}}{x_{1}}\right)a_{11}\qquad j=1,\ldots,n\] \[\vdots\] \[a_{dj} =\left(\frac{x_{j}}{\lambda_{dj}}\frac{\lambda_{d1}}{x_{1}}\right)a_{d1}\qquad j=1,\ldots,n \tag{5}\] _and_ \[S_{i}=\left(\sum_{j=1}^{n}\frac{x_{j}^{2}}{\lambda_{ij}}\right)\frac{\lambda_{i1}}{x_{1}}a_{i1}\quad\ i=1,\ldots,d \tag{6}\] Proof.: For generic data, the equation \(B^{T}(Wxx^{T}-yx^{T})+\Lambda\circ A=0\) implies that \(\Lambda\circ A\) is a matrix of rank at most one since \(xx^{T}\) and \(yx^{T}\) are matrices of rank one. The identities in (5) are an explicit version of this observation. They follow from the fact that \((\sum_{k=1}^{p}b_{k1}R_{k})\), i.e., the coefficient of \(x_{j}\) in the first row of (3), is a constant complex number that does not depend on \(j\). Therefore \(\frac{-\lambda_{1j}a_{1j}}{x_{j}}\) is a constant. Hence, we have \(\frac{-\lambda_{11}a_{11}}{x_{1}}=\frac{-\lambda_{12}a_{12}}{x_{2}}=\cdots=\frac{-\lambda_{1n}a_{1n}}{x_{n}}\). Equivalently, \(a_{1j}=\left(\frac{x_{j}}{\lambda_{1j}}\frac{\lambda_{11}}{x_{1}}\right)a_{11}\) for all \(j=1,\ldots,n\). Repeating the same for all equations in (3) gives (5). Substituting these into \(S_{i}=\sum_{j=1}^{n}a_{ij}x_{j}\) results in (6). The next result is the main theorem of this section. **Theorem 5**.: _Consider a linear network where \(H=1\), \(m=1\), \(d_{x}=n\), \(d_{y}=p\) and \(d_{1}=d\) with generic input/output vectors and generic regularization matrices. 
Then, there are at most_ \[\mathcal{B}_{\mathbb{C}^{*}}=(4p)^{d}\] solutions of (2) for which \(a_{11},\dots,a_{dn}\in\mathbb{C}^{*}\)._ Proof.: The overall strategy for this proof is to eliminate \(b_{ki}\) using (4), and then substitute the expressions for the \(b_{ki}\) variables so obtained into (3). That will give us a polynomial system in just the \(a_{ij}\) variables. We will then compute the BKK bound for this "reduced" system. That will give us an upper bound on the number of solutions in \((\mathbb{C}^{*})^{d\times n}\). We first express \(b_{k2}\),..., \(b_{kd}\) in terms of \(b_{k1}\). By the equations in (6) each \(S_{i}\) is equal to \(a_{i1}\) scaled by some constant. Since each \(a_{ij}\) is non-zero by assumption, each \(S_{i}\) is non-zero. Then (4) implies \[\frac{b_{k1}\sigma_{k1}}{S_{1}}=\frac{b_{k2}\sigma_{k2}}{S_{2}}=\dots=\frac{b_ {kd}\sigma_{kd}}{S_{d}}.\] Hence, \(b_{k2}=b_{k1}\left(\frac{\sigma_{k1}}{\sigma_{k2}}\frac{S_{2}}{S_{1}}\right)\),..., \(b_{kd}=b_{k1}\left(\frac{\sigma_{k1}}{\sigma_{k,d}}\frac{S_{d}}{S_{1}}\right)\). Substituting the preceding expressions for \(b_{k2}\) through \(b_{kd}\) into \(R_{k}\), we get: \[R_{k}=b_{k1}\left[S_{1}+\left(\frac{\sigma_{k1}}{\sigma_{k2}}\frac{S_{2}^{2}}{ S_{1}}\right)+\dots+\left(\frac{\sigma_{k1}}{\sigma_{kd}}\frac{S_{d}^{2}}{S_{1}} \right)\right]-y_{k}.\] Using this expression in (4) and with a bit of algebra we obtain \[b_{ki}=\frac{y_{k}S_{i}}{\sigma_{ki}}\frac{1}{(1+T_{k})}\] where \(T_{k}=\frac{S_{1}^{2}}{\sigma_{k1}}+\frac{S_{2}^{2}}{\sigma_{k2}}+\dots+\frac {S_{d}^{2}}{\sigma_{kd}}\). Note that the right hand side of this identity is a rational function of \(a_{ij}\) only. We substitute these expressions for \(b_{ki}\) into the first equation of (3) to result in \[-\frac{\lambda_{1j}a_{1j}}{x_{j}}=-\sum_{k=1}^{p}\frac{y_{k}^{2}S_{1}}{\sigma _{k1}(1+T_{k})^{2}}.\] Now, (5) implies \(-\frac{\lambda_{1j}a_{1j}}{x_{j}}=-\frac{\lambda_{11}a_{11}}{x_{1}}\) and using (6) for \(S_{1}\), the above equation gives \[\frac{1}{\left(\sum_{j=1}^{n}\frac{x_{j}^{2}}{\lambda_{1j}}\right)}-\left[ \frac{y_{1}^{2}}{\sigma_{11}(1+T_{1})^{2}}+\frac{y_{2}^{2}}{\sigma_{21}(1+T_{ 2})^{2}}+\dots+\frac{y_{p}^{2}}{\sigma_{p1}(1+T_{p})^{2}}\right]=0.\] Clearing denominators we arrive at \[\frac{(1+T_{1})^{2}(1+T_{2})^{2}\cdot\cdot\cdot(1+T_{p})^{2}}{\left(\sum_{j=1 }^{n}\frac{x_{j}^{2}}{\lambda_{1j}}\right)}-\sum_{k=1}^{p}\frac{y_{k}^{2}}{ \sigma_{k1}}\prod_{i\neq k}^{p}(1+T_{i})^{2}=0. \tag{7}\] Each \(T_{k}\) is a polynomial in the variables \(a_{11},\dots,a_{d1}\). Hence, (7) is a polynomial equation in these variables only. A total of \(d\) such polynomials comprises a system of \(d\) polynomial equations in \(d\) variables. The solution set contains the solutions to (2) when \(a_{ij}\in\mathbb{C}^{*}\), but it also contains components with positive dimension, for instance, the codimension two algebraic sets given by \(1+T_{i}=1+T_{j}=0\) for \(i\neq j\). We proceed to introduce regularization for this system as well. To equation \(i=1,\ldots,d\) we add the regularization term \(\mu_{i}a_{di}\) where \(\mu_{i}\) is a regularization parameter. 
A standard argument using the generalized Sard's theorem and the implicit function theorem guarantees that, for almost all choices of \(\mu_{i}\), \(i=1,\ldots,d\), all solutions to the regularized system are isolated and nondegenerate, and for sufficiently small \(\mu_{i}\), as they shrink to \(0\) uniformly, the solutions to the regularized system either converge to components of the solution set of the non-regularized system (including the solutions we want to count) or diverge to infinity. For an excellent discussion of regularization in the context of solving systems of polynomials we refer to Section 3.2 in [14]. The Newton polytope of (7) and its regularization is \((4p)\Delta_{d}=\operatorname{conv}\{0,4p\cdot e_{1},...,4p\cdot e_{d}\}\) where \(e_{i}\) is the \(i\)-th standard basis vector of \(\mathbb{R}^{d}\). Thus, we obtain a zero-dimensional polynomial system with \(d\) equations in \(d\) variables where the Newton polytope of each equation is \((4p)\Delta_{d}\). Therefore the BKK bound for this system is given by the normalized volume of \((4p)\Delta_{d}\) which is equal to \((4p)^{d}\). This completes the proof. **Remark 6**.: Theorem 5 gives a bound on the number of critical points of the regularized loss function when the weights in \(A=W_{1}\) are all nonzero. In the following section we will relax this requirement after treating patterns of zeros for the \(H=1\) and \(m=1\) case. We also note that in this theorem we have not made any assumptions about the weights in \(B=W_{2}\). **Remark 7**.: It is an open question to find a reduced system for \(m>1\) even when \(H=1\) that will yield a better bound on the number of critical points. In particular, is there a systematic way to eliminate the weights in \(B=W_{2}\)? ## 4. Patterns of Zeros This section is about patterns we observe in the critical points of the regularized loss function of a deep linear neural network. All of our results are about networks trained on a single data point (\(m=1\)). However, we believe the phenomena we observe are more general and persist for arbitrary \(m\). To support this claim we provide computational evidence. We start with the observation that if in a critical point an entry of a weight matrix is zero then the entire row or column of that entry must also consist of zeros. **Proposition 8**.: _Suppose an arbitrary deep linear neural network is trained on a single generic data point \(x\) with the corresponding generic output vector \(y\). Then, if in a critical point of \(\mathcal{L}^{\Lambda}\) the \((i,j)\) entry of a weight matrix is zero, then either the \(i\)th row or the \(j\)th column of the same weight matrix is zero._ Proof.: We let \(d_{x}=n\) and \(d_{y}=p\). Suppose the \((i,j)\) entry of \(Z=W_{k}\in\mathbb{C}^{r\times s}\) is zero. Then \[\frac{\partial\mathcal{L}^{\Lambda}}{\partial Z}=U_{Z}^{T}\left(Wxx^{T}-yx^{T} \right)V_{Z}^{T}+\Lambda\circ Z=0\] where \(U_{Z}=U_{k}\), \(V_{Z}=V_{k}\) and \(\Lambda=\Lambda_{k}\) as in (2). It follows that \[\begin{pmatrix}D_{1}E_{1}&\cdots&D_{1}E_{s}\\ \vdots&&\vdots\\ D_{r}E_{1}&\cdots&D_{r}E_{s}\end{pmatrix}=-\begin{pmatrix}\lambda_{11}z_{11}& \cdots&\lambda_{1s}z_{1s}\\ \vdots&&\vdots\\ \lambda_{r1}z_{r1}&\cdots&\lambda_{rs}z_{rs}\end{pmatrix} \tag{8}\] where \[D_{i}=(U_{Z}^{T})_{i1}R_{1}+\cdots+(U_{Z}^{T})_{ip}R_{p}\] \[E_{j}=x_{1}(V_{Z}^{T})_{1j}+\cdots+x_{n}(V_{Z}^{T})_{nj}\] and \(R_{k}=(Wx-y)_{k}\). If \(z_{ij}=0\) then \(D_{i}=0\) or \(E_{j}=0\) and the result follows. 
It is worth noting that if row \(i\) of the weight matrix \(W_{k}\) is \(0\), it means that the \(i\)th neuron on layer \(k\) can be removed, and the column \(i\) of \(W_{k+1}\) can be deleted. Conversely, if column \(j\) of the weight matrix \(W_{k}\) is \(0\), then this column can be deleted and the \(j\)th neuron on layer \(k-1\) can be removed. These statements have the following stronger counterpart. **Theorem 9**.: _Suppose an arbitrary deep linear neural network is trained on a single generic data point \(x\) with the corresponding generic output vector \(y\). Then in a critical point of \(\mathcal{L}^{\Lambda}\) the row \(i\) of \(W_{k-1}\) is zero if and only if the \(i\)th column of \(W_{k}\) is zero._ Proof.: We retain our notation from the previous proof, and let \(P=W_{k-1}\) and \(Z=W_{k}\). Then \(V_{Z}=P\cdots W_{2}W_{1}\), and if row \(i\) of \(P\) is zero, then row \(i\) of \(V_{Z}\) is zero. It follows that \(E_{i}=x_{1}(V_{Z}^{T})_{1i}+\cdots+x_{n}(V_{Z}^{T})_{ni}=0\) and from (8) we conclude that the \(i\)th column of \(Z\) is zero. Conversely, suppose the \(i\)th column of \(Z\) is zero. Since \((U_{P})^{T}=Z^{T}\cdots W_{H}^{T}W_{H+1}^{T}\), then row \(i\) of \((U_{P})^{T}\) is zero. It follows that \(D_{i}=(U_{P}^{T})_{i1}R_{1}+\cdots+(U_{P}^{T})_{ip}R_{p}=0\) and the result follows again from (8). For the weight matrices \(W_{1}\) and \(W_{H+1}\), Proposition 8 has stronger corollaries. **Corollary 10**.: _Suppose an arbitrary deep linear neural network is trained on a single generic data point \(x\) with the corresponding generic output vector \(y\). Then, if in a critical point of \(\mathcal{L}^{\Lambda}\) the \((i,j)\) entry of \(W_{1}\) is zero, then the \(i\)th row of \(W_{1}\) must be zero._ Proof.: This follows from equations (5). **Corollary 11**.: _Suppose an arbitrary deep linear neural network is trained on a single generic data point \(x\) with the corresponding generic output vector \(y\). Then, if in a critical point of \(\mathcal{L}^{\Lambda}\) the \((i,j)\) entry of \(W_{H+1}\) is zero then the \(j\)th column of \(W_{H+1}\) must be zero._ Proof.: Let \(Z=W_{H+1}\) and suppose \(z_{ij}=0\). Then \(D_{i}E_{j}=0\) as in the proof of Proposition 8. We observe that \(U_{Z}\) is the identity matrix and therefore \(D_{i}=R_{i}=(Wx-y)_{i}\). If column \(j\) of \(Z\) is not zero, then \(E_{j}\neq 0\) and therefore \(D_{i}=0\). Hence \(R_{i}=0\) and this means that \((Wx)_{i}=y_{i}\). But this also means that \(i\)th row of \(Z\) is zero, and therefore the \(i\)th row of \(W\) is zero. But then \(y_{i}=0\) which contradicts the genericity of \(y\). We conclude that the \(j\)th column of \(Z=W_{H+1}\) must be zero. For a network with one hidden layer trained on a single data point we gave a bound on the number of critical points in which the entries of the first weight matrix are in \(\mathbb{C}^{*}\); see Theorem 5. Now we expand this count to all complex critical points. **Corollary 12**.: _Consider a linear network with \(H=1\), \(m=1\), \(d_{x}=n\), \(d_{y}=p\) and \(d_{1}=d\) with generic input/output vectors and generic regularization matrices. Then, there are at most_ \[\mathcal{B}_{\mathbb{C}}=(1+4p)^{d}\] _complex solutions of (2)._ Proof.: By the above results the possible patterns of zeros in critical points are completely determined by the rows of \(W_{1}\) that are zero. Since the corresponding columns of \(W_{2}\) must be zero we can reduce the neural network by removing the neurons matching the zero rows of \(W_{1}\). 
If \(r\) rows of \(W_{1}\) are zero, Theorem 5 implies that there are at most \((4p)^{d-r}\) critical points where the remaining rows of \(W_{1}\) are in \(\mathbb{C}^{*}\). Therefore the number of all complex critical points for this network is \[\mathcal{B}_{\mathbb{C}}=\sum_{r=0}^{d}\binom{d}{r}(4p)^{d-r}=(1+4p)^{d}.\] **Example 13**.: Consider a linear neural network with one hidden layer trained on a single input vector where \(d_{x}=d_{y}=d_{1}=2\). The classic Bezout bound on the number of complex critical points is \(3^{8}=6561\). The BKK bound yields \(1089\). According to Theorem 5 the number of critical points in \((\mathbb{C}^{*})^{8}\) is bounded by \(\mathcal{B}_{\mathbb{C}^{*}}=64\), and by the above corollary the number of critical points in \(\mathbb{C}^{8}\) is bounded by \(\mathcal{B}_{\mathbb{C}}=81\). These solutions are expected to come in four flavors: \[\begin{array}{cccc}W_{2}&W_{1}&W_{2}&W_{1}\\ \begin{pmatrix}*&*\\ *&*\end{pmatrix}&\begin{pmatrix}*&*\\ *&*\end{pmatrix}&\begin{pmatrix}0&*\\ 0&*\end{pmatrix}&\begin{pmatrix}0&0\\ *&*\end{pmatrix}\\ \begin{pmatrix}*&0\\ *&0\end{pmatrix}&\begin{pmatrix}*&*\\ 0&0\end{pmatrix}&\begin{pmatrix}0&0\\ 0&0\end{pmatrix}&\begin{pmatrix}0&0\\ 0&0\end{pmatrix}\end{array}\] The first type with full support corresponds to critical points in \((\mathbb{C}^{*})^{8}\), and there are at most \(64\) of them. The next two types have zeros in the first or second row of \(W_{1}\), and there are at most \(8\) such solutions for each type. Finally, there is the last type which corresponds to the origin. Using _HomotopyContinuation.jl_ we see that there are actually \(33\) critical points in \(\mathbb{C}^{8}\) of which \(16\) are in \((\mathbb{C}^{*})^{8}\). \(\triangle\) **Example 14**.: Not every possible pattern of zeros is realized. A simple example comes from the case with \(d_{x}=d_{y}=1\) and \(d_{1}=2\) when \(H=m=1\). Again one expects four types of critical points, but there are no complex critical points with only nonzero coordinates. The total number of critical points is \(9\). A more interesting example arises from the case when \(H=d_{x}=d_{y}=d_{1}=2\) with \(m=1\). The total number of possible types of critical points is ten. However, the following two types of solutions do not appear: \[\begin{array}{ccc}W_{3}&W_{2}&W_{1}\\ \begin{pmatrix}0&*\\ 0&*\end{pmatrix}&\begin{pmatrix}0&0\\ *&*\end{pmatrix}&\begin{pmatrix}*&*\\ *&*\end{pmatrix}\\ \begin{pmatrix}*&0\\ *&0\end{pmatrix}&\begin{pmatrix}*&*\\ 0&0\end{pmatrix}&\begin{pmatrix}*&*\\ *&*\end{pmatrix}\end{array}\] \(\triangle\) **Conjecture 15**.: All our results in this section are for linear neural networks trained on a single data point. However, in our computations we observe that Proposition 8, Theorem 9, Corollary 10, and Corollary 11 hold for \(m=2\). We conjecture these results to be true for any \(m\). ## 5. Computations and Conclusions In this section we present results showing the number of critical points for different network architectures, i.e., different values of \(H,m,d_{i},d_{x}\) and \(d_{y}\). When \(H=2\) we use \(d_{1}=d_{2}\) and denote it by \(d_{i}\). The results are summarized in Tables 1-4, which can be found in the Appendix. Each case yields a polynomial system whose coefficients are determined by the entries of the data matrices \(X\in\mathbb{R}^{d_{x}\times m}\) and \(Y\in\mathbb{R}^{d_{y}\times m}\), and the regularization constants from \(\Lambda_{i}\). The entries of \(X\) and \(Y\) are drawn i.i.d. from a Gaussian distribution with mean 0 and variance 1. The entries of \(\Lambda_{i}\) are drawn i.i.d. from the uniform distribution between 0 and 1. 
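To make the setup of these experiments concrete, the following is a minimal SymPy sketch (our illustration; the computations reported here use _HomotopyContinuation.jl_ in Julia). It builds the gradient system (2) for \(H=1\), reproduces the polynomials of Example 2, and counts the complex critical points of the smallest case \(d_{x}=d_{y}=d_{1}=m=1\), to be compared with the bounds \(\mathcal{B}_{\mathbb{C}^{*}}=(4p)^{d}\) and \(\mathcal{B}_{\mathbb{C}}=(1+4p)^{d}\); the data values in part (b) are arbitrary choices.

```python
import sympy as sp
from sympy.solvers.polysys import solve_poly_system

def gradient_equations(weights, lambdas, X, Y):
    """Gradient system (2): dL/dW_i = U_i^T (W S - T) V_i^T + Lambda_i o W_i,
    with S = sum_k x_k x_k^T and T = sum_k y_k x_k^T (columns of X, Y are data)."""
    W = sp.eye(X.rows)
    for Wi in weights:
        W = Wi * W                        # end-to-end map W_{H+1} ... W_1
    S, T = X * X.T, Y * X.T
    eqs = []
    for i, (Wi, Li) in enumerate(zip(weights, lambdas)):
        U = sp.eye(Wi.rows)
        for Wj in weights[i + 1:]:
            U = Wj * U                    # U_i = W_{H+1} ... W_{i+1}
        V = sp.eye(X.rows)
        for Wj in weights[:i]:
            V = Wj * V                    # V_i = W_{i-1} ... W_1
        eqs += list(U.T * (W * S - T) * V.T + Li.multiply_elementwise(Wi))
    return [sp.expand(e) for e in eqs]

# (a) reproduce the gradient system of Example 2
a1, a2, b1, b2 = sp.symbols('alpha_1 alpha_2 beta_1 beta_2')
eqs = gradient_equations([sp.Matrix([[a1, a2]]), sp.Matrix([[b1], [b2]])],
                         [sp.Matrix([[4, -3]]), sp.Matrix([[-2], [5]])],
                         sp.Matrix([[1, 2], [3, 4]]), sp.Matrix([[1, 3], [2, 4]]))
for e in eqs:
    print(e)

# (b) smallest case H = m = d_x = d_y = d_1 = 1 (so d = p = 1): Theorem 5 and
# Corollary 12 give at most 4 critical points in (C*)^2 and 5 in C^2.
a, b = sp.symbols('a b')
eqs = gradient_equations([sp.Matrix([[a]]), sp.Matrix([[b]])],
                         [sp.Matrix([[sp.Rational(1, 3)]]), sp.Matrix([[sp.Rational(1, 2)]])],
                         sp.Matrix([[1]]), sp.Matrix([[2]]))
print(len(solve_poly_system(eqs, a, b)))  # 5 for this (generic) choice of data
```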
For each case, all isolated solutions to each of the 100 samples are computed using _HomotopyContinuation.jl_ on a single Amazon EC2 instance. Instances used were of type _c5.4xlarge_ (16 vCPUs, 8 cores, 32 GB memory) or _c5.9xlarge_ (36 vCPUs, 18 cores, 72 GB memory), depending on the size of the polynomial system. Most systems took less than an hour of wall-clock time to solve. Table 1 through Table 4 expand the results for \(H,m\in\{1,2\}\) reported in [14] where the authors used _Bertini_. We report the classical Bezout bound, the BKK bound, the number of complex critical points (\(N_{\mathbb{C}}\)), the number of complex critical points in the algebraic torus (\(N_{\mathbb{C}^{*}}\)), and the maximum number of real critical points observed (\(\max\{N_{\mathbb{R}}\}\)). Table 1 contains two additional columns reporting \(\mathcal{B}_{\mathbb{C}^{*}}\) and \(\mathcal{B}_{\mathbb{C}}\) of Theorem 5 and Corollary 12, respectively. We observe that while the BKK bound is much better than the Bezout bound, even the BKK bound is far from the actual number of complex critical points. The number of real critical points is even lower. The gradient equations (2) yield very structured and very sparse polynomials. Though the computed BKK bounds are not tight enough, it would be interesting to study the Newton polytopes of these equations in the hope of finding formulas for their mixed volumes. Theorem 5 provides much tighter bounds for the case of \(H=1\) and \(m=1\). We believe it is worthwhile to attempt to generalize this theorem. However, the real benefit of this theorem or its possible generalizations is in creating custom-made homotopies that will track only \(\mathcal{B}_{\mathbb{C}}\) paths. This will require constructing appropriate initial systems. We would also like to note that while some of our results explain some of the numbers in the tables, there are other patterns waiting to be explained. Proposition 3 tells us that the BKK bounds in Tables 1 and 2, and those in Tables 3 and 4, for the same \(d_{i},d_{x},d_{y}\) must match. From the limited computations we have, it appears that the BKK bounds for identical parameters with the \(d_{x}\) and \(d_{y}\) values switched (for instance \(d_{x}=2\) and \(d_{y}=3\) versus \(d_{x}=3\) and \(d_{y}=2\)) coincide. We invite the reader to unearth more patterns.
2301.03984
Probing Pole Skipping through Scalar-Gauss-Bonnet coupling
The holographic phenomena of pole skipping have been studied in the presence of scalar-Gauss-Bonnet interaction in the four-dimensional Anti-de Sitter-Schwarzschild black hole background. Pole skipping points are special points in phase space where the bulk linearised differential equations have multiple ingoing solutions. Those special points are claimed to be connected to chaos. In this paper, we initiated a novel study on understanding the response of those special points under the application of external sources. The source is identified with the holographic dual operator of the bulk scalar field with its non-normalizable solutions. We analyze in detail the dynamics of pole skipping points in both sound and shear channels, considering linear perturbation in bulk. In the perturbative regime, characteristic parameters for chaos, namely Lyapunov exponent and butterfly velocity, remain unchanged. However, the diffusion coefficient has evolved non-trivially under the external source.
Banashree Baishya, Kuntal Nayek
2023-01-10T14:27:09Z
http://arxiv.org/abs/2301.03984v3
# Probing Pole Skipping through Scalar-Gauss-Bonnet coupling ###### Abstract The holographic phenomena of pole skipping have been studied in the presence of scalar-Gauss-Bonnet interaction in the four-dimensional Anti-de Sitter-Schwarzschild black hole background. Pole skipping points are special points in phase space where the bulk linearised differential equations have multiple ingoing solutions. Those special points are claimed to be connected to chaos. In this paper, we initiated a novel study on understanding the response of those special points under the application of external sources. The source is identified with the holographic dual operator of the bulk scalar field with its non-normalizable solutions. We analyze in detail the dynamics of pole skipping points in both sound and shear channels, considering linear perturbation in bulk. In the perturbative regime, characteristic parameters for chaos, namely Lyapunov exponent and butterfly velocity, remain unchanged. However, the diffusion coefficient has evolved non-trivially under the external source. ## 1 Introduction Chaos, at the classical level, explains various macroscopic phenomena of hydrodynamics from a microscopic viewpoint. These phenomena are local criticality, zero temperature entropy, diffusion transport, Lyapunov exponent, and butterfly velocity. At the quantum level, chaos is similarly essential to studying those phenomena [1; 2; 3]. Recently, chaos in many-body systems has drawn tremendous interest. It can be observed from the energy density two-point function. Since holographic tools have an extensive advantage in studying those two-point functions from gravity theory, the AdS/CFT correspondence [4; 5] is nowadays being used to describe chaotic behavior in many-body quantum systems [6; 7; 8; 9]. However, using the holographic description, the microscopic behavior of quantum chaos was first established in [10]. The two-point energy density function can be described with the four-point out-of-time-ordered correlator (OTOC). \[\langle V(t,\vec{x})W(0)V(t,\vec{x})W(0)\rangle_{\beta_{0}}\approx e^{\lambda_{L}(t-|\vec{x}|/v_{B})} \tag{1}\] where \(\lambda_{L}\) is the Lyapunov exponent and \(v_{B}\) is the butterfly velocity related to chaos. For a chaotic system, the two-point energy density function shows non-uniqueness around some special points in momentum space \((\omega,\,k)\). Holographically, the OTOC is non-uniquely defined at those points. These points are where the poles and zeros of the energy density function overlap. They are marked as the Pole-skipping (PS) points. For example, the boundary two-point (Green's) function is the ratio of the normalizable mode to the non-normalizable mode of the bulk field \(\Phi\), which generally takes the form \(G_{R}\propto\frac{\Phi_{b}(\omega,k)}{\Phi_{a}(\omega,k)}\). At the pole-skipping point, \(\Phi_{b}(\omega_{*},k_{*})=\Phi_{a}(\omega_{*},k_{*})=0\), which makes the Green's function ill-defined. The line of poles is defined by \(\Phi_{a}(\omega_{*},k_{*})=0\) whereas the line of zeros is given by \(\Phi_{b}(\omega_{*},k_{*})=0\). Thus the pole-skipping points are some special locations in the \(\omega-k\) plane. By analyzing the shock waves in an eternal black-hole background, chaos parameters are related to OTOC [11]. 
At the above special points \((\omega_{*},k_{*})\) of energy-density two-point function, one can relate the parameters of chaos as, \[\omega_{*}=i\lambda_{L},\hskip 28.452756ptk_{*}=\frac{i\lambda_{L}}{v_{B}} \tag{2}\] where \(\lambda_{L}\) and \(v_{B}\) are the Lyapunov exponent and butterfly velocity associated with the considered chaotic system. However, the behavior of the energy density function is universal for maximally chaotic systems. The microscopic dynamics of various hydrodynamic quantities are deeply related to the near-horizon analysis of holographic gravity. Indeed, the pole-skipping points can be identified from the in-going bulk field near the horizon. At those special points, the bulk field leads to the multi-valued Green's function at the boundary[12]. In simple words, there is no unique in-going solution at the horizon for those pole-skipping points. This holographic study has been performed for various bulk theories [13; 14; 15; 16; 17; 18; 19; 20; 21]. In [12; 22], the pole-skipping points have been found for the BTZ background. They have shown the intersection of the lines of poles and zeros and the existence of two regular in-going solutions near the horizon. The pole-skipping has been also studied with finite coupling correction [23], with higher curvature correction [24] and also in the case of zero temperature [25]. Hydrodynamics transport phenomena have been studied with the pole-skipping [26; 27; 28; 29; 30]. Similar pole-skipping points have been also evaluated for the fermionic models [22; 31]. In the above articles, we have seen some of the pole-skipping points in the \(\omega-k\) plane located at \(\text{Im}(\omega)\) are related to chaos. However, they follow the chaos bound [32]. We have also seen that these special points describe various hydrodynamic mechanisms apart from chaos, e.g., the momentum density two-point function gives shear viscosity, diffusion modes, etc. Higher curvature corrections and stringy correction to the pole-skipping have been explicitly studied [23; 24]. Due to the effect of these corrections, the Lyapunov exponent and butterfly velocity have been modified. In this article, we discuss the effect of the higher order Gauss-Bonnet curvature term coupled with a scalar functional \(\zeta(\phi)\sim\phi^{p}\) of a scalar field \(\phi\), where \(p\) is an integer. However, the effect of this coupling is considered to be so trivial that no back-reaction is included in the bulk solution. In the bulk theory, we take the standard four-dimensional Schwarzchild-Anti de-Sitter metric which asymptotically reduces to pure AdS. So, on the boundary, we have a Conformal Field Theory at a finite temperature which is maximally chaotic in nature. Therefore without modification (due to back-reaction) the chaos profile remains unaffected. In this background, we have studied the pole-skipping points for scalar and metric perturbations. We expect the effect of interaction on the pole-skipping points. We show this effect with respect to the variation of the source of the scalar field located on the boundary. We plot those effects for different powers \(p\). In the sound channel, the flow and decay of energy density are expected to be affected by this interaction. Unlike the interaction-free background, we find decay in momentum density in the shear channel at a higher value of \(p\). Here we have pointed out the variation of the diffusion coefficient with the scalar source. It also shows consistent behavior with the effect of interaction. 
We briefly mention the result of this work as follows. We have noticed the effect of the interaction on the solution of the scalar field. To show this, we have plotted the values of the scalar \(\phi\) at the horizon against its value on the boundary, i.e., scalar source \(\mathcal{O}_{s}\). The relation between these two quantities has shown non-linearity for higher power \(p\) of scalar. However, for a low regime of source value, it remains linear. Similarly in the pole-skipping points of the scalar field, we find an additional correction term in \(k\) due to the interaction. Because of this correction, the imaginary value of \(k\) decreases. As we are interested in the perturbative regime, we will not allow the scalar source to increase much. In all of the plots, we will take the maximum value of the scalar source in \(\mathcal{O}(1)\). In the shear channel, we find a similar effect on \(k\). However, for \(p>3\), we find imaginary \(k\) which implies the exponential decay or growth of the corresponding density function. Here we calculate the diffusion coefficient from the lowest point. It shows that the rate of diffusion decreases with the increase of scalar source and it is always below \(1/4\pi T\) for \(p>3\). On the other hand, in the sound channel, we find the effect of interaction for all \(p>1\) are similar. In this channel, without interaction, \(k^{4}\) has pure real (\(<0\)) values. Due to interaction, it encounters an imaginary part which increases with the effect of the scalar source. As the real \(k^{4}<0\) gives \(k\) with equal real and imaginary parts indicating the energy transport and decay/growth of energy density respectively. With the effect of interaction, the real and imaginary parts of \(k\) become unequal. Thus one can conclude this is a result of the variation of thermal transport due to interaction. We have organized the paper as follows. In section 2, we briefly describe our model, showing Einstein's equation and background metric. We have also talked about the behaviour of the background scalar field and calculated the source and condensation values. In section 3, we have studied pole-skipping for scalar field perturbation. The metric perturbations - shear and sound modes - have been discussed in section 4. In the following section, we have calculated the chaos-related parameters, first, from the perturbed \(vv\) component and then from the master equation. Finally, we concluded our results with a brief overview of the paper in section 6. ## 2 Holographic Gravity Background Now, in the holographic model, as we want to study pole-skipping at finite temperatures, we need to use a black hole solution in bulk. We consider a four-dimensional Anti-de Sitter Schwarzchild black hole. Holographically, the boundary theory is three-dimensional gauge theory. The bulk metric asymptotically gives \((3+1)\) dimensional AdS space. So, the corresponding boundary theory is a finite temperature field theory. Initially, we consider pure black hole solution and associated Einstein's action in the bulk theory as, \[\mathcal{S}_{EH}=\int d^{4}x\sqrt{-g}\left(\kappa\mathcal{R}+\Lambda\right) \tag{1}\] where \(\kappa=(16\pi G_{N})^{-1}\) is a constant related to the four-dimensional Newton's constant with mass dimensions 2 (here we set it to unity.). 
The associated field equation \[\mathcal{G}_{\mu\nu}\equiv\mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}g_{\mu\nu }=\frac{1}{2\kappa}\Lambda g_{\mu\nu} \tag{2}\] gives the \(3+1\) dimensional AdS-Schwarzchild black hole solution \[ds^{2}=L^{2}\left[-r^{2}f(r)dt^{2}+\frac{dr^{2}}{r^{2}f(r)}+h(r) \left(dx^{2}+dy^{2}\right)\right] \tag{3}\] \[f(r)=1-\left(\frac{r_{0}}{r}\right)^{3},\qquad h(r)=r^{2}\] Where \(L\) is the AdS radius. In the Einstein action, \(R\) is the Ricci scalar of the background (3) and \(\Lambda\) is related to the cosmological constant in four dimensions. In our case, \(\Lambda=6\kappa/L^{2}\) and \(r\) is the radial coordinate of the black hole with the horizon radius \(r_{0}\). The horizon radius is related to the temperature \(T\) of the black hole as \(4\pi T=r_{0}^{2}f^{\prime}(r_{0})=3r_{0}\), where prime denotes derivative w.r.t. \(r\). Now in the action (1), we have added a perturbative term \(\frac{1}{2}\alpha^{\prime}\zeta(\phi)\mathcal{R}_{GB}\), where \(\alpha^{\prime}\) is arbitrary coupling constant which is very small (\(\ll 1\)) real number. It acts as the perturbation parameter. \(\zeta(\phi)\) is a dimensionless real scalar functional of the minimally coupled scalar field \(\phi\) of mass \(m\). In this present study, we have considered \(\zeta(\phi)=L^{p}\phi^{p}\), \(p\in\mathbb{Z}^{+}\). In this present discussion, we will consider \(L=1\). The term \(\mathcal{R}_{GB}\) is the higher-ordered Gauss-Bonnet curvature term (in \(4d\)), which is coupled to the scalar \(\phi(r)\) through \(\zeta\). Gauss-Bonnet term can be written as, \[\mathcal{R}_{GB}=\mathcal{R}_{\mu\nu\rho\sigma}\mathcal{R}^{\mu\nu\rho\sigma} -4\mathcal{R}_{\mu\nu}\mathcal{R}^{\mu\nu}+\mathcal{R}^{2}.\] With this scalar-Gauss-Bonnet interaction term, the background action takes the following form as \[\mathcal{S}=\int d^{4}x\sqrt{-g}\left[\kappa\mathcal{R}+\Lambda+\frac{\alpha^{ \prime}}{2}\zeta(\phi)\mathcal{R}_{GB}\right]. \tag{4}\] For \(p=0\), pole-skipping has been exclusively studied previously in the five dimensions [24] and it has considered the back-reaction of the higher curvature on the background. In our study, we are interested in \(p\neq 0\) cases and treating \(\alpha^{\prime}\) as a perturbative parameter, our background will remain unaffected by the back-reaction of the scalar field. Now taking the variation of the metric tensor in (4), we get the Einstein equation as follows \[(\kappa-2\alpha^{\prime}\nabla_{\rho}\nabla^{\rho}\zeta(\phi)) \mathcal{G}_{\mu\nu}-\tfrac{1}{2}g_{\mu\nu}(\Lambda+\tfrac{1}{2}\alpha^{ \prime}\zeta(\phi)\mathcal{R}_{GB})+\alpha^{\prime}\zeta(\phi)\left(\mathcal{ R}_{\mu}^{\ \rho\sigma\tau}\mathcal{R}_{\nu\rho\sigma\tau}-4\mathcal{R}_{\rho\mu}\mathcal{R}_{\nu}^{ \rho}+\mathcal{R}\mathcal{R}_{\mu\nu}\right)\] \[-\alpha^{\prime}\left(\mathcal{R}\nabla_{(\mu}\nabla_{\nu)}\zeta (\phi)-4\mathcal{R}_{\rho(\mu}\nabla_{\nu)}\nabla^{\rho}\zeta(\phi)+2\left(g _{\mu\nu}\mathcal{R}_{\rho\sigma}+\mathcal{R}_{\mu(\rho\sigma)\nu}\right) \nabla^{\rho}\nabla^{\sigma}\zeta(\phi)\right)=0 \tag{5}\] where \(\mathcal{G}_{\mu\nu}\) is the Einstein tensor. The aforementioned scalar field \(\phi\) is a minimally coupled scalar in the black hole background (1). In the interaction term, the scalar couples with the second-order curvature terms. 
Taking this curvature coupling into account, the Klein-Gordon equation of \(\phi\) becomes \[\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\phi\right)-m^{2}\phi+\frac{\alpha^{\prime}}{2}\mathcal{R}_{GB}\frac{\partial}{\partial\phi}\zeta(\phi)=0 \tag{6}\] Our aim is to compute the near-horizon ingoing modes and their properties. Therefore, it is fruitful to perform our calculations in ingoing Eddington-Finkelstein coordinates. So, we consider \(v=t+r_{*}\), where \(v\) is the null coordinate and \(r_{*}\) is the tortoise coordinate. The metric (3) transforms into \[ds^{2}=-r^{2}f(r)dv^{2}+2dvdr+r^{2}\left(dx^{2}+dy^{2}\right). \tag{7}\] The metric (3) is singular at \(r=r_{0}\). In this new coordinate, the apparent singularity is removed. The metric has rotational symmetry in the \((x,y)\) plane. In the background (7), \[\mathcal{R}=-12,\qquad\mathcal{R}_{GB}(r)=12\left(2+\frac{r_{0}^{6}}{r^{6}}\right).\] At the horizon, \(\mathcal{R}_{GB}(r_{0})=36\) and at the boundary \(\mathcal{R}_{GB}(r\rightarrow\infty)\approx 24\). So, in the action (4), the scalar-Gauss-Bonnet interaction term can be treated as a perturbation if \(\alpha^{\prime}\ll 1\), where the scalar is assumed to be of \(\mathcal{O}(1)\) at both ends. In this background, the Klein-Gordon equation turns out to be \[r^{2}f(r)\phi^{\prime\prime}(r)+\left(r^{2}f^{\prime}(r)+4rf(r)\right)\phi^{\prime}(r)-m^{2}\phi(r)+\frac{\alpha^{\prime}}{2}\mathcal{R}_{GB}\frac{\partial}{\partial\phi}\zeta(\phi)=0. \tag{8}\] The asymptotic (\(r\rightarrow\infty\)) behavior of equation (8) gives the following \[\lim_{r\rightarrow\infty}\phi(r)=\mathcal{O}_{s}r^{\Delta-3}+\mathcal{O}_{c}r^{-\Delta}. \tag{9}\] where, at infinity (where our boundary is located), the leading coefficient \(\mathcal{O}_{s}\) is the source and the subleading coefficient \(\mathcal{O}_{c}\) is the condensation of the dual boundary operator. The scaling dimension of the dual operator is \(\Delta=3/2+\sqrt{9/4+m^{2}}\). There is a lower bound on the scalar mass, the Breitenlohner-Freedman (BF) bound, which states that \(m^{2}\geq-d^{2}/4\) for a \((d+1)\)-dimensional gravitational background. Otherwise, the background solution will be unstable. In our case, this bound will be \(m^{2}>-9/4\). From equation (9), we can write \[\lim_{r\rightarrow\infty}r\phi^{\prime}(r)=(\Delta-3)\mathcal{O}_{s}r^{\Delta-3}-\Delta\mathcal{O}_{c}r^{-\Delta}. \tag{10}\] Now, we can easily get the source and condensation from equations (9) and (10) with some algebra, as shown in [33], as \[\mathcal{O}_{s} =\lim_{r\rightarrow\infty}\frac{r^{3-\Delta}\left(\Delta\phi(r)+r\phi^{\prime}(r)\right)}{2\Delta-3} \tag{11}\] \[\mathcal{O}_{c} =\lim_{r\rightarrow\infty}\frac{r^{\Delta}\left((\Delta-3)\phi(r)-r\phi^{\prime}(r)\right)}{2\Delta-3}. \tag{12}\] Since our background is neutral, the scalar field will not form any condensation. Rather, in the next sections, we will mainly see the effect of the source in the channels. ## 3 Scalar field perturbation In this section, we study the dispersion relation associated with the scalar field \(\phi\), which is a minimally coupled scalar with mass \(m\). This scalar field \(\phi\) is regular at the horizon and decays in the asymptotic limit. With these conditions, the solution of the scalar can be found from equation (8). We now assume the scalar field is a function of the radial coordinate \(r\) only, so that \(\zeta(\phi)=\phi(r)^{p}\). 
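As an aside (our illustration, not taken from the paper), the background scalar can be obtained numerically along exactly these lines: choose \(\phi(r_{0})\), use regularity of (8) at the horizon to fix \(\phi^{\prime}(r_{0})\) (the same expression appears in the near-horizon expansion below), integrate outward, and read off the source via (11). All parameter values below are illustrative choices, not the ones used in the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters: m^2 = -2, p = 3, alpha' = 0.01, r_0 = 1
r0, m2, p, alpha = 1.0, -2.0, 3, 0.01
Delta = 1.5 + np.sqrt(2.25 + m2)                  # scaling dimension of the dual operator

f    = lambda r: 1.0 - (r0 / r) ** 3
fp   = lambda r: 3.0 * r0 ** 3 / r ** 4
R_GB = lambda r: 12.0 * (2.0 + r0 ** 6 / r ** 6)  # Gauss-Bonnet invariant quoted above

def rhs(r, y):
    """Equation (8) in first-order form, y = (phi, phi')."""
    phi, dphi = y
    ddphi = (m2 * phi
             - 0.5 * alpha * R_GB(r) * p * phi ** (p - 1)
             - (r ** 2 * fp(r) + 4.0 * r * f(r)) * dphi) / (r ** 2 * f(r))
    return [dphi, ddphi]

def source(phi_h, rmax=1.0e4, eps=1.0e-5):
    """Integrate (8) out of the horizon and extract O_s via eq. (11)."""
    # regularity of (8) at r = r_0 fixes phi'(r_0) in terms of phi(r_0)
    dphi_h = (m2 * phi_h - 18.0 * alpha * p * phi_h ** (p - 1)) / (3.0 * r0)
    sol = solve_ivp(rhs, [r0 * (1.0 + eps), rmax], [phi_h, dphi_h],
                    rtol=1e-10, atol=1e-12)
    r, phi, dphi = sol.t[-1], sol.y[0, -1], sol.y[1, -1]
    return r ** (3.0 - Delta) * (Delta * phi + r * dphi) / (2.0 * Delta - 3.0)

for phi_h in (0.1, 0.5, 1.0):
    print(phi_h, source(phi_h))   # O_s is roughly linear in phi(r_0) at small values
```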
We take the near-horizon expansion of the field as \[\phi(r)=\sum_{n=0}^{\infty}\phi^{(n)}(r_{0})\times(r-r_{0})^{n}=\phi(r_{0})+\phi^{\prime}(r_{0})(r-r_{0})+\phi^{\prime\prime}(r_{0})(r-r_{0})^{2}+\cdots\] where \(\phi^{(n)}\equiv\frac{d^{n}\phi(r)}{dr^{n}}|_{r=r_{0}}\). From this series, the first three derivatives of \(\phi\) at \(r=r_{0}\) can be found as \[\phi^{\prime}\left(r_{0}\right)=\frac{m^{2}\phi\left(r_{0}\right)-18\alpha^{\prime}p\phi\left(r_{0}\right)^{p-1}}{3r_{0}}\] \[\phi^{\prime\prime}\left(r_{0}\right)=\frac{-18\alpha^{\prime}p\left(pm^{2}-12\right)\phi\left(r_{0}\right)^{p-1}+m^{2}\left(m^{2}-6\right)\phi\left(r_{0}\right)}{18r_{0}^{2}}\] \[\phi^{\prime\prime\prime}\left(r_{0}\right)=\frac{1}{162r_{0}^{3}}\left[-18\alpha^{\prime}p\left((2(p-2)p+3)m^{4}+6(3-7p)m^{2}+432\right)\phi\left(r_{0}\right)^{p-1}+m^{2}\left((m^{2}-6)(m^{2}-9)-3(m^{2}-18)\right)\phi\left(r_{0}\right)\right]\] Similarly, we can also find the higher-order derivatives in terms of \(\phi(r_{0})\). We can solve the scalar field from (8) numerically by providing some horizon value to the scalar field. From this solution, we can evaluate \(\mathcal{O}_{s}\) and \(\mathcal{O}_{c}\) as shown in (11). For the near-horizon study, the regularity condition of the scalar field on the horizon is very important. So, for the numerical evaluation of the source \(\mathcal{O}_{s}\), or to get a consistent solution of \(\phi(r)\), \(\phi(r_{0})\) should be finite and small enough so that the near-horizon expansion remains convergent. From the plot of \(\phi(r_{0})\) vs \(\mathcal{O}_{s}\), we can say that at lower values of the source \(\mathcal{O}_{s}\), the relation between these two quantities is almost linear. But at higher values it becomes non-linear, and the degree of non-linearity strongly depends on the power (\(p\)) of the interaction. Due to this fact, in the present work we will confine all our numerical calculations to the low-value regime of \({\cal O}_{s}\) or \(\phi(r_{0})\). Now, to study the dispersion relation of the scalar field, we take the perturbation \(\phi(r)\to\phi(r)+e^{-i\omega v+ikx}\varphi(r)\). The linearized equation from (8) is \[r^{2}f(r)\varphi^{\prime\prime}(r)+\left(r^{2}f^{\prime}(r)+4rf(r)-2i\omega\right)\varphi^{\prime}(r)+\left(6\alpha^{\prime}(p-1)p\left(f(r)^{2}-2f(r)+3\right)\phi(r)^{p-2}-\frac{k^{2}+m^{2}r^{2}+2ir\omega}{r^{2}}\right)\varphi(r)=0 \tag{11}\] Expanding the solution near the horizon \(r=r_{0}\) and using the matrix method as given in [22], we get the pole-skipping points \((\omega,k)\). We find the lowest order point is \(\omega_{1}=-\frac{3}{2}ir_{0}=-2i\pi T\) and \[k_{1}^{2}+r_{0}^{2}\left(m^{2}-18\alpha^{\prime}p(p-1)\phi\left(r_{0}\right)^{p-2}+3\right)=0\] Without any perturbation (\(\alpha^{\prime}=0\)), we get the result for the pure Schwarzschild black hole, \(k_{1}^{2}=-(3+m^{2})r_{0}^{2}\), i.e., \(k_{1}\) is completely imaginary. But, due to the effect of the interaction, \(k_{1}\) can become real above a particular value of \({\cal O}_{s}\). Similar behaviour is also found for the higher-order pole-skipping points. Even though we keep \(\alpha^{\prime}\) small enough to stay in the perturbative regime, \(k_{1}^{2}\) becomes positive as the scalar source increases, so \(k_{1}\) becomes real. From the equation of the perturbed scalar (11), it is clear that for \(p=0\) and \(1\) there is no effect on \((\omega,k)\), i.e., we get the values of the black hole background without any perturbation. 
For \(p\geq 2\), \(p\) affects \(k_{1}\) in a similar way as \(\alpha^{\prime}\) does. For small enough \({\cal O}_{s}\), we find purely imaginary \(k\), as plotted in the left panel of Figure 2. Here we have plotted the first three poles \((\omega,k)\) in the complex plane for \(p=1,\,2\,\&\,3\). For \(p=1\,\&\,2\), we find \(2n\) points for \(k_{n}\), i.e., \(n\) complex roots of \(k_{n}\). However, for \(p=3\), we find one real and \(n-1\) complex roots for each \(k_{n}\). Because of these real roots, we have three points on the \(\text{Im}(k)\) axis. For \(p=2\), the interaction imposes a constant shift in \(k\). But for \(p\geq 3\) the shift due to the interaction is proportional to the source. So, as the source goes to zero, \(k_{n}\) becomes the same as for the pure AdS black hole. These have been shown in the right panel of the same figure. Here, we have presented the variation of \(k_{1}^{2}\) with the scalar source \({\cal O}_{s}\) for \(p=2\), \(3\,\&\,4\). It is found that \(k_{1}\) becomes real-valued above a certain value of \({\cal O}_{s}\). Now, if we allow only the imaginary values of \(k_{1}\), we need to put a cutoff on \({\cal O}_{s}\). The same behaviour can be found for the higher-order \(k\). However, as we go to higher order in the poles or in the interaction, we need to impose a smaller cutoff on the source value to get purely imaginary roots of \(k\). Figure 2: Left: The plot of \(\frac{\text{Im}[\omega]}{2\pi T}\) vs \(\text{Im}\left[k\right]\) at \(\alpha^{\prime}=0.01\) for \(p=1\) (orange circle), \(p=2\) (green rectangle) and \(p=3\) (red triangle). Right: The plot of \(k_{1}^{2}\) vs \({\cal O}_{s}\) for \(p=2\) (green dot-dashed line), \(p=3\) (red dashed line) and \(p=4\) (blue dotted line). Here we have taken scalar mass \(m^{2}=-2\), \(\alpha^{\prime}=0.01\) and \(r_{0}=1\). ## 4 Metric perturbations In the pole-skipping phenomenon, we study the properties of the stress-energy tensor of the boundary field theory. Now, with the AdS/CFT duality, the bulk fields are mapped to boundary operators. Therefore, the boundary stress-energy tensors are associated with the metric perturbation of the bulk. In our bulk, we consider the metric perturbation \[g_{\mu\nu}\to g_{\mu\nu}+e^{-i\omega v+ikx}\delta g_{\mu\nu}(r), \tag{10}\] where \(\omega\) and \(k\) are energy and momentum parameters of the fluctuation and the fluctuation propagates radially. So, in the boundary field theory, we have the two-point correlators \(\langle T_{vv},T_{vv}\rangle\), \(\langle T_{vv},T_{vx}\rangle\), \(\langle T_{vv},T_{xx}\rangle\), \(\langle T_{vv},T_{yy}\rangle\) in the longitudinal mode and \(\langle T_{vy},T_{vy}\rangle\), \(\langle T_{vy},T_{xy}\rangle\), \(\langle T_{xy},T_{xy}\rangle\) in the transverse mode, where \(T_{\mu\nu}\) is the stress-energy tensor on the boundary. The metric perturbations \(\delta g_{vv}\), \(\delta g_{vx}\), \(\delta g_{xx}\), \(\delta g_{yy}\) and \(\delta g_{vy}\), \(\delta g_{xy}\) are associated with the above two modes, respectively. We impose the radial gauge condition \(\delta g_{r\mu}=0\) for all \(\mu\). We also use the traceless perturbation for simplicity, i.e., \(g^{\mu\nu}\delta g_{\mu\nu}=0\), which gives \(\delta g_{yy}=-\delta g_{xx}\). Although the longitudinal modes are actually scalar modes, they do not couple to the minimally coupled scalar. Therefore we can perturb only \(g_{\mu\nu}\) without affecting \(\phi\). Finally, we have three independent perturbations in the longitudinal mode and two in the transverse mode. 
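Before turning to the individual channels, we sketch (as our own illustration, not part of the original derivation) the near-horizon "matrix method" at lowest order for the scalar equation (11) of the previous section; the same logic is applied to the master equations below. At order \((r-r_{0})^{0}\) the \(\varphi^{\prime\prime}\) term drops out, leaving a linear relation between the first two Taylor coefficients, and the first pole-skipping point is where both of its coefficients vanish. The symbol names below are our own choices.

```python
import sympy as sp

r, w, k, al, phi0, m2 = sp.symbols('r omega k alpha phi_0 m2')
r0 = sp.symbols('r_0', positive=True)
p = sp.symbols('p', integer=True, positive=True)

f = 1 - (r0 / r) ** 3

# linearised scalar equation (11):  A(r) phi'' + B(r) phi' + C(r) phi = 0
A = r ** 2 * f
B = r ** 2 * sp.diff(f, r) + 4 * r * f - 2 * sp.I * w
C = (6 * al * p * (p - 1) * (f ** 2 - 2 * f + 3) * phi0 ** (p - 2)
     - (k ** 2 + m2 * r ** 2 + 2 * sp.I * r * w) / r ** 2)

# order (r - r_0)^0 of the near-horizon expansion: A(r_0) = 0, so the equation
# reduces to  B(r_0) * phi_1 + C(r_0) * phi_0 = 0
B0 = sp.simplify(B.subs(r, r0))
C0 = sp.simplify(C.subs(r, r0))

omega1 = sp.solve(B0, w)[0]                        # coefficient of phi_1 vanishes
k1_sq = sp.solve(C0.subs(w, omega1), k ** 2)[0]    # coefficient of phi_0 vanishes too
print(omega1)               # expected: -3*I*r_0/2, i.e. omega_1 = -2*pi*i*T
print(sp.simplify(k1_sq))   # expected: r_0**2*(18*alpha*p*(p - 1)*phi_0**(p - 2) - m2 - 3)
```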
### Shear Channel As the momentum vector \((\omega,k)\) of the metric fluctuation is taken in the \((v,x)\)-plane, for the shear mode we consider the components coupled to the \(y\)-direction. Here we take \(\delta g_{xy}\) and \(\delta g_{vy}\) as the only non-vanishing perturbations, and these are completely decoupled from the longitudinal perturbations. These are associated with \(T_{vy}\), \(T_{xy}\) on the boundary. The linearised Einstein equations give the dynamics of these fluctuations. At some special values of \((\omega,\,k)\), the solution of those equations near the horizon becomes non-unique and gives more than one independent solution. Those special points \((\omega,\,k)\) in this holographic gravity background are connected to the coincidence of poles and zeros of the boundary Green's function \(G_{\mu y,\nu y}\), where \(\mu,\,\nu=v,\,x\). Now we put these perturbations in the metric (7) and find the linearised form of the field equation (5) with only the non-vanishing perturbations \(\delta g_{xy}\) and \(\delta g_{vy}\). We find that only the \(vy\), \(ry\) and \(xy\) components of the linearised equations are non-trivial, whereas the other equations are automatically satisfied. Out of these three equations, we find two coupled second-order differential equations, \(\delta g^{\prime\prime}_{vy}(r)=f_{1}\left(\delta g^{\prime}_{vy},\,\delta g_{vy},\,\delta g_{xy}\right)\) and \(\delta g^{\prime\prime}_{xy}(r)=f_{2}\left(\delta g^{\prime}_{vy},\,\delta g_{vy},\,\delta g_{xy}\right)\). Again, under a diffeomorphism transformation with the vector field \(e^{-i\omega v+ikx}\xi^{\mu}\), one can show that \(\delta g_{vy}\) and \(\delta g_{xy}\) form a gauge-invariant combination \({\cal Z}_{sh}\), \[{\cal Z}_{sh}=\frac{1}{r^{2}}\left(\omega\delta g_{xy}+k\delta g_{vy}\right).\] So, the two second-order differential equations (DEs) for \(\delta g_{vy}\) and \(\delta g_{xy}\) combine into a single second-order DE for \({\cal Z}_{sh}\). The final master equation is \[{\cal M}_{sh}{\cal Z}_{sh}^{\prime\prime}(r)+{\cal P}_{sh}{\cal Z}_{sh}^{\prime}(r)+{\cal Q}_{sh}{\cal Z}_{sh}(r)=0. \tag{10}\] Here the coefficients \({\cal M}_{sh},\,{\cal P}_{sh}\) and \({\cal Q}_{sh}\) are functions of \(\omega,k\) and \(\phi(r)\). The detailed expressions are given in Appendix A. There we have considered the coefficients up to \(\alpha^{\prime}\) order. For \(\alpha^{\prime}=0\), the master equation reduces to that of the pure AdS black hole. The near-horizon structure of the master variable is taken as follows. \[{\cal Z}_{sh}=\sum_{n=0}Z_{n}\times(r-r_{0})^{n}.\] Now we expand the master equation (10) around \(r=r_{0}\). At zeroth order \({\cal O}((r-r_{0})^{0})\), it gives a linear algebraic equation in \(Z_{0}\) and \(Z_{1}\). The coefficients of \(Z_{0}\) and \(Z_{1}\) are functions of the two primary variables \(\omega\) and \(k\). So, at a particular point \(\omega=\omega_{1}\), the vanishing of the coefficient of \(Z_{1}\) indicates that \(Z_{1}\) is arbitrary. Again, at the same \(\omega\) value, we find a special value \(k=k_{1}\) where the coefficient of \(Z_{0}\) vanishes. Therefore at the point (\(\omega_{1}\), \(k_{1}\)) the near-horizon solution of \({\cal Z}_{sh}\) is defined with two arbitrary parameters \(Z_{0},\,Z_{1}\), and the solution is a combination of two independent solutions \(C_{1}(r-r_{0})Z_{0}+C_{2}(r-r_{0})Z_{1}\). So we find a non-unique solution at the point (\(\omega_{1},\,k_{1}\)), which is the first-order pole-skipping point. 
Here we find \(\omega_{1}=-\frac{3}{2}ir_{0}\) and \[k_{1}^{2}=3r_{0}^{2}\left[1-3\alpha^{\prime}\phi(r_{0})^{p}\frac{\xi(2\xi^{2}-\xi-17)}{2\xi+1}\right] \tag{11}\] where \(\xi=m^{2}p/3\). We find that \(\omega_{1}\) is the same as the previous result [8] for the AdS\({}_{4}\) black hole. But \(k_{1}\) contains a non-trivial shift due to the interaction. At \(\alpha^{\prime}=0\), it gives the same \(k_{1}^{2}\) as given in [8]. With nonzero \(\alpha^{\prime}\), the shift in momentum depends on the detailed properties of the scalar field and its interaction, namely the power \(p\) of the interacting field \(\phi\), the value of the field at the horizon \(\phi(r_{0})\), and its mass \(m\). Note that the scalar mass \(m\) cannot be zero if the shift is to be nonzero. Also, we need to maintain the value of \(\alpha^{\prime}\) in such a way that the shift remains small enough, i.e., the absolute value of the correction term inside the square bracket in (11) is always less than unity. The next few higher-order pole-skipping points are \(\omega_{n}=-\frac{3}{2}inr_{0}\) for \(n=2,\,3,\dots\) and \[k_{2}^{2}=3\sqrt{2}r_{0}^{2}\left[1-\frac{3\alpha^{\prime}\xi\phi(r_{0})^{p}}{4(2\xi+1-\sqrt{2})^{2}}\left(12\xi^{4}+4(21-\sqrt{2})\xi^{3}+(209-74\sqrt{2})\xi^{2}+(134-238\sqrt{2})\xi+136+20\sqrt{2}\right)\right] \tag{12}\] \[k_{3}^{2}=3\sqrt{3}r_{0}^{2}\left[1+\frac{\xi(5-\sqrt{3})}{66(6\xi-3+2\sqrt{3})^{3}}\left(-3888\xi^{6}+54432\xi^{5}-(32400-21528\sqrt{3})\xi^{4}+(976140-224964\sqrt{3})\xi^{3}-(1108017-786374\sqrt{3})\xi^{2}+(1134059-427507\sqrt{3})\xi+295381\sqrt{3}-222201\right)\right] \tag{24}\]
It has already been observed that for pure Schwarzchild-AdS\({}_{4}\) background, the first order pole-skipping point obeying the dispersion relation \(\omega=-i{\cal D}_{s}k^{2}\) emerges from the boundary Greens function [8]. However, the first pole-skipping point gives an upper bound on the diffusion constant [34; 35]. The diffusion constant is related to the first order pole-skipping as \({\cal D}_{s}=\frac{i\omega_{1}}{k_{1}^{2}}\). Here, in our case, we find the diffusion constant \({\cal D}_{s}\) as \[{\cal D}_{s}=\frac{i\omega_{1}}{k_{1}^{2}}=\frac{1}{2r_{0}}\left[1+3\alpha^{ \prime}\phi(r_{0})^{p}\frac{\xi(2\xi^{2}-\xi-17)}{2\xi+1}\right] \tag{25}\] Its upper bound is \({\cal D}_{s}(\alpha^{\prime}\neq 0)<{\cal D}_{s}(\alpha^{\prime}=0)\). For \(d+2\) dimensional pure AdS-Schwarzchild black hole, the diffusion constant is bounded1 as \(1\leq 4\pi{\cal D}_{s}T\leq\frac{d+1}{d}\). If the scalar field follows the BF bound and unitarity condition, the scalar mass follows the bound \(-2.25<m^{2}<-1.25\). \({\cal D}_{s}T\) in (4.6) can be found in \(1\leq 4\pi{\cal D}_{s}T\leq\frac{3}{2}\) for all \(p\leq 6\) for the mass ranges given in Table 1. Footnote 1: For pure Schwarzchild-AdS\({}_{d+2}\), the shear mode diffusion rate is \(\frac{1}{4\pi T}\), where \(T=\frac{d+1}{4\pi}r_{0}\) and \(r_{0}\) is the horizon [36]. So \({\cal D}_{s}T\) is independent of the dimensions of the black hole. The first order pole-skipping point of the shear mode is dimension dependent, \(\omega=-\frac{d+1}{2}ir_{0}\) and \(k_{1}^{2}=\frac{d(d+1)}{2}r_{0}^{2}\). Therefore \(\frac{i\omega_{1}}{k_{1}^{2}}=\frac{1}{d}r_{0}=\frac{d+1}{d}\frac{1}{4\pi T}\) In Figure 4, the left panel have shown the plot of the pole-skipping points in the \(\omega-k\) plane. Here we have plotted the standard dispersion relation of the boundary theory in a low-frequency regime, \(\omega(k)=-i{\cal D}_{s}k^{2}\) where \({\cal D}_{s}=\frac{1}{4\pi T}\) given in [8]. When \(\alpha^{\prime}=0\) or the perturbative correction is very small, the first pole-skipping point falls on the dispersion curve. As the effect of interaction increases the first pole-skipping point skips the dispersion curve. However, the other pole-skipping points always stay away from the dispersion curve. At the right panel of Figure 4, we have plotted the diffusion constant obtained in (4.6). Here the \(4\pi{\cal D}_{s}T\) have been varied with the scalar source for three different \(p\) values. As the source is zero the diffusion constants for all \(2\leq p\leq 6\) become equal to the upper bound \(\frac{3}{8\pi T}\). In the plot, as the source increases from zero, the diffusion constants start falling from the highest bound. For the coupling function \(\zeta(\phi)\sim\phi^{2}\) and \(\phi^{3}\), the diffusion \begin{table} \begin{tabular}{c c} Interaction order \(p\) (\(\phi^{p}\)) & mass range \\ \hline \(p=1\) & \(-2.25<m^{2}<-1.5\) \\ \(p=2\) & \(-2.25<m^{2}<-1.25\) \\ \(p=3\) & \(-2.25<m^{2}<-1.25\) \\ \(p=4\) & \(-2.007<m^{2}<-1.25\) \\ \(p=5\) & \(-1.605<m^{2}<-1.25\) \\ \(p=6\) & \(-1.338<m^{2}<-1.25\) \\ \hline \end{tabular} \end{table} Table 1: The mass range associated to \(p\) to follow the allowed bound of the diffusion coefficient Figure 4: _Left_: The plot of PS points in \(\omega-k\) plane for \(\alpha^{\prime}=0\) (blue color) and \(\alpha^{\prime}=0.001\) (red color), \(\phi(r_{0})=1.1\), \(p=3\) and \(m^{2}=-2\). Three different shapes have been used for three different modes. The solid curve (gray color) is \(\omega=\frac{-ik^{2}}{4\pi T}\). 
For the coupling function \(\zeta(\phi)\sim\phi^{2}\) and \(\phi^{3}\), the diffusion constant decreases monotonically. At a particular value of the source, \(4\pi{\cal D}_{s}T\) becomes equal to unity, and for a further increase of the source value it falls below its lower bound. However, for \(p=4\), the diffusion constant remains very close to its upper bound for a comparatively long range of \({\cal O}_{s}\); after that, it starts decreasing very rapidly and reaches below 1. At these higher values of the source, the diffusion constant for \(p=4\) takes two different values for a single value of the scalar source \({\cal O}_{s}\), which seems unusual, so we should keep the source from exceeding \(\sim 3\). Moreover, although we know that the diffusion constant should not violate its lower bound, our results are not unphysical: since our whole calculation is assumed to be in a perturbative regime, we are free to choose any tiny value of \(\alpha^{\prime}\) and any small range of the scalar source for the numerical evaluation. Thus, with a better choice of parameters, our case always satisfies \(1\leq 4\pi{\cal D}_{s}T\leq\frac{3}{2}\) for \(1<p\leq 6\).

### Sound Channel

The longitudinal components of the metric perturbation are called the scalar or sound modes of the perturbation. These are associated with the energy density correlation on the boundary. The corresponding stress-energy tensor components in this mode are \(T_{vv},\,T_{vx},\,T_{xx}\) and \(T_{yy}\) on the boundary field theory. These give the two-point correlation functions \(G_{vv,vv}\), \(G_{vv,vx}\), \(G_{vv,xx}\) and \(G_{vv,yy}\), which are induced by the metric perturbations. In the holographic gravity theory the required perturbations are \(\delta g_{vv},\,\delta g_{vx}\) and \(\delta g_{xx}\), together with the traceless condition \(\delta g_{yy}=-\delta g_{xx}\). Like the shear mode, the metric perturbations also combine into a diffeomorphism-invariant master variable \({\cal Z}_{so}\). \[{\cal Z}_{so}=\frac{1}{r^{2}}\left[k^{2}\delta g_{vv}+2\omega k\delta g_{vx}-\frac{k^{2}}{2}\left(2f^{\prime}(r)+rf(r)-\frac{4\omega^{2}}{k^{2}}\right)\delta g_{xx}\right] \tag{4.7}\] The second-order differential equations of \(\delta g_{vv}(r),\,\delta g_{vx}(r)\) and \(\delta g_{xx}(r)\) combine into the master equation \[{\cal M}_{so}{\cal Z}_{so}^{\prime\prime}(r)+{\cal P}_{so}{\cal Z}_{so}^{\prime}(r)+{\cal Q}_{so}{\cal Z}_{so}(r)=0 \tag{4.8}\] The coefficients of (4.8) are linear in \(\alpha^{\prime}\) and are given in appendix B. At \(\alpha^{\prime}=0\), the master equation reduces to that of the pure Schwarzschild-AdS\({}_{4}\) background. Considering a near-horizon structure of \({\cal Z}_{so}\) similar to that of \({\cal Z}_{sh}\), we find the pole-skipping points at various orders. Here we find two types of pole-skipping points from this master equation (4.8). The denominators of all the coefficients of the equation contain a common term \(3k^{2}-4\omega^{2}+k^{2}f(r)\). In the near-horizon regime, it introduces a pole at \(3k^{2}-4\omega^{2}=0\). Now, if we consider \(3k^{2}\neq 4\omega^{2}\), we get only \(\omega_{n}=-\frac{3}{2}inr_{0}\) for \(n=1,\,2\,\cdots\) in the lower half of the complex \(\omega\) plane. But when we impose the condition \(3k^{2}=4\omega^{2}\), we can also find \(\omega\) in the upper half-plane, \(\omega_{n}=\frac{3}{2}inr_{0}\). This will be discussed later.
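Although the equal-frequency branch is only analysed in Section 5, a one-line check already shows where it must land. The sketch below is our own illustration (not code from the paper): it combines \(3k^{2}=4\omega^{2}\) with the upper-half-plane frequency \(\omega=\frac{3}{2}ir_{0}\) quoted above, and recovers the special point \((\omega_{*},k_{*})\) that is derived later from the \(vv\) component of the Einstein equation.

```python
# Our own consistency sketch (not code from the paper): impose 3*k^2 = 4*omega^2
# together with omega = +(3/2)*i*r_0 and recover the special point of Section 5.
import sympy as sp

r0, T = sp.symbols("r_0 T", positive=True)
omega = sp.Rational(3, 2) * sp.I * r0      # upper-half-plane frequency quoted above
k_squared = sp.Rational(4, 3) * omega**2   # from the pole condition 3*k^2 = 4*omega^2

print(sp.simplify(k_squared))                      # -> -3*r_0**2, i.e. k_*^2
print(sp.simplify(omega.subs(r0, 4*sp.pi*T/3)))    # -> 2*I*pi*T, i.e. omega_* = 2*pi*i*T
```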
Now we focus on the unequal condition. For \(3k^{2}\neq 4\omega^{2}\), the pole-skipping frequencies are found at \(\omega_{n}=-\frac{3}{2}inr_{0}=-2\pi inT\) and the first few \(k_{n}^{4}\) are given as \[k_{1}^{4}+9r_{0}^{4}-\alpha^{\prime}(3+3i)m^{2}p\,r_{0}^{4}\left(m^{2}p+(6+6i)\right)\phi\left(r_{0}\right)^{p}=0 \tag{4.9}\] \[k_{2}^{4}+18r_{0}^{4}+\alpha^{\prime}\frac{3m^{2}p\,r_{0}^{4}\left(m^{2}p\left(\left(5\sqrt{2}-2i\right)m^{2}p+40i-64\sqrt{2}\right)+126\left(3\sqrt{2}-4i\right)\right)\phi\left(r_{0}\right)^{p}}{5\sqrt{2}-2i}=0 \tag{4.10}\] \[k_{3}^{4}+27r_{0}^{4}-\alpha^{\prime}\frac{2m^{2}p\,r_{0}^{4}\phi\left(r_{0}\right)^{p}}{91\sqrt{3}+63i}\left[\left(37\sqrt{3}+3i\right)m^{6}p^{3}-21\left(61\sqrt{3}+11i\right)m^{4}p^{2}+63\left(306\sqrt{3}+31i\right)m^{2}p-27\left(5369\sqrt{3}+69i\right)\right]=0 \tag{4.11}\] Higher order \(k\) can also be found in the same way. At \(\alpha^{\prime}=0\), we recover the Schwarzschild-AdS\({}_{4}\) values \(k_{1}^{4}=-9r_{0}^{4}\), \(k_{2}^{4}=-18r_{0}^{4}\), \(k_{3}^{4}=-27r_{0}^{4}\) and so on. In (4.9), the imaginary part \(3\alpha^{\prime}\left(12+pm^{2}\right)m^{2}pr_{0}^{4}\phi(r_{0})^{p}\) is zero for \(m^{2}=-\frac{12}{p}\), which is beyond the BF bound \(-\frac{9}{4}<m^{2}\) for \(p\leq 5\). But for \(p\geq 6\) we can make \(k_{1}^{4}\) real at that specific value of \(m^{2}\). A similar behaviour is also expected from the higher order \(k\). We have compared the position of the pole-skipping points of the \(\phi^{2}\) interaction with the case without interaction (\(\alpha^{\prime}=0\)) in Figure 5. The real and imaginary parts of \(k\) have been plotted separately against \(\omega/2i\pi T\). In both cases, the real and imaginary parts are almost equal to each other in each mode. For each part, the values have mirror symmetry with respect to the Re\([k]=\) Im\([k]=0\) axes. The shift due to the interaction is very hard to identify in \(k_{1}\); for \(k_{2}\) and \(k_{3}\), on the other hand, one observes a measurable amount of shift, as depicted in Figure 5. Without interaction, in each of these three modes, only four real numbers make four different complex \(k\) (having equal absolute values of the real and imaginary parts). With interaction, the same happens for \(k_{1}\), but for \(k_{2}\) and \(k_{3}\) eight real numbers make four complex values of \(k\). We have also numerically shown the variation of \(k\) with the source \(\mathcal{O}_{s}\) of the scalar in Figure 6. In the left plot of this figure, we have plotted the real and imaginary parts of \(k^{4}/(9r_{0}^{4})\) against the scalar source. Here we have evaluated the ratio of our result to the result of pure AdS-Schwarzschild. This ratio has no explicit \(r_{0}\) dependence; it depends on the scalar mass, the interaction order, and the scalar value on the horizon. The ratio has been evaluated for \(\alpha^{\prime}=0.001\), \(m^{2}=-2\), and different values of \(p\). When the source is off, the imaginary part of this ratio is zero whereas the real part is \(-1\), which is consistent with the case without interaction. The imaginary part of \(k^{4}\) comes only from the interaction. As we have seen, at small values the source is linearly proportional to \(\phi(r_{0})\); so \(\phi(r_{0})\) also goes to zero as the source becomes zero, and the correction term in (4.9) vanishes.
Figure 5: The plot of the real part (right panel) and imaginary part (left panel) of \(k\) vs Im\([\frac{\omega}{2\pi T}]\) for \(p=2\), \(m^{2}=-2\), \(\phi(r_{0})=1.1\), \(\alpha^{\prime}=0\) (solid rectangle) and \(\alpha^{\prime}=0.01\) (open circle).

For \(p=2\), \(3\), \(4\,\&\,5\) we have found the same behaviour as \({\cal O}_{s}\to 0\). Now if the source is turned on and increased gradually, as long as the source is small enough, both the imaginary and real parts change slowly. However, as \(p\) increases, the rate of change also increases. The reason is clear from the presence of the \(\phi(r_{0})^{p}\) factor in the correction terms. During this change, the imaginary part of the ratio \(k_{1}^{4}/(9r_{0}^{4})\) shifts from \(0\) towards \(-1\) and the real part changes in the exact opposite direction. Therefore the absolute value of the real (imaginary) part decreases (increases). Thus at some point on the \({\cal O}_{s}\) axis the real and imaginary curves cross each other, where their values are exactly equal, with magnitude between \(0\) and \(1\). After a further increase in the source, the real part crosses the horizontal axis; at that value of \({\cal O}_{s}\), \(k_{1}^{4}\) becomes a completely imaginary number. These two cross-over points depend strongly on \(p\): in the given plot, the \(p=3\) curve makes the first cross-over whereas the \(p=1\) curve makes the last one. As the source value increases further, the real (imaginary) values become more and more positive (negative). Since we are interested in the perturbative effect, we will not consider those high values of \(k_{1}^{4}\). In the right panel of the same figure, we have plotted the ratio \(k_{n}^{4}/(9nr_{0}^{4})\) where \(n=1\), \(2\,\&\,3\). Here the interaction order is fixed at \(\phi^{3}\). We notice that the behaviour of the real and imaginary parts of the ratio is almost identical to the left panel. We find the two cross-overs for each of the three modes of \(k\). At these cross-over points, the behaviour of \(k_{n}^{2}\) is completely identical to before. For the lowest order pole-skipping, \(k_{1}\), the cross-over happens at the highest \({\cal O}_{s}\) value, and the cross-over points come closer to \({\cal O}_{s}=0\) as the order of pole-skipping increases. Therefore the order of the interaction and the order of the pole-skipping affect \(k\) in the same way; mainly, the location of the cross-over points is almost identically affected by these two parameters. The cross-over points can also be found analytically from (4.9)-(4.11). For example, the real and imaginary parts of \(k_{1}^{4}\) are \[{\rm Re}[k_{1}^{4}]=-9r_{0}^{4}\left(1-\frac{1}{3}\alpha^{\prime}p^{2}m^{4}\phi(r_{0})^{p}\right),\qquad{\rm Im}[k_{1}^{4}]=3\alpha^{\prime}pm^{2}r_{0}^{4}\left(12+pm^{2}\right)\phi(r_{0})^{p}\] The first cross-over happens at the value of \({\cal O}_{s}\) corresponding to \(\phi(r_{0})=\left(-4\alpha^{\prime}m^{2}p\right)^{-1/p}\), where the real and imaginary parts of \(k_{1}^{4}\) are equal to each other. The second cross-over on the \({\cal O}_{s}\) axis occurs for \(\phi(r_{0})=\left(\frac{3}{\alpha^{\prime}p^{2}m^{4}}\right)^{1/p}\); here \(k_{1}^{4}\) is completely imaginary, \(9ir_{0}^{4}\left(\frac{12}{m^{2}p}+1\right)\). The first cross-over occurs only if \(m^{2}<0\). If for a moment we assume that \(m^{2}>0\), then there is only the second cross-over, where \(k_{1}^{4}\) becomes completely imaginary.
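The two analytic cross-over conditions quoted above can be verified symbolically. The following sketch is our own check (not the authors' code); it treats \(X=\phi(r_{0})^{p}\) as a single variable and solves the conditions \({\rm Re}[k_{1}^{4}]={\rm Im}[k_{1}^{4}]\) and \({\rm Re}[k_{1}^{4}]=0\) using the expressions displayed just above.

```python
# Hedged symbolic check (our own sketch, not the authors' code): the two cross-over
# values of phi(r_0) quoted above follow directly from Re[k_1^4] and Im[k_1^4].
import sympy as sp

alpha_p, p, r0 = sp.symbols("alpha_prime p r_0", positive=True)
m2 = sp.symbols("m2", real=True)   # scalar mass squared (negative in the BF window)
X = sp.symbols("X")                # shorthand for phi(r_0)**p

Re_k14 = -9*r0**4*(1 - sp.Rational(1, 3)*alpha_p*p**2*m2**2*X)
Im_k14 = 3*alpha_p*p*m2*r0**4*(12 + p*m2)*X

# first cross-over, Re = Im:  X = -1/(4*alpha'*m2*p), i.e. phi(r_0) = (-4*alpha'*m2*p)**(-1/p)
print(sp.solve(sp.Eq(Re_k14, Im_k14), X))

# second cross-over, Re = 0:  X = 3/(alpha'*p^2*m2^2), i.e. phi(r_0) = (3/(alpha'*p^2*m2^2))**(1/p)
sol2 = sp.solve(sp.Eq(Re_k14, 0), X)
print(sol2)

# at the second cross-over, the coefficient of i in k_1^4 is 9*r_0^4*(12/(m2*p) + 1)
print(sp.simplify(Im_k14.subs(X, sol2[0])))
```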
## 5 Analysis of chaos

### From the \(vv\) component of the linearised Einstein equation

From the shock wave analysis, it is found that the exponential factor of the OTOC can be directly observed from the \(\delta E_{00}\) component of the linearised Einstein equation in the ingoing Eddington-Finkelstein co-ordinates. In the discussed background (7), the information about the OTOC can be obtained from the \(vv\) component of the equation (5). Considering the metric perturbations coupled with the \(vv\) component of the metric (which are actually the perturbations associated with the sound mode), one can write \(\delta E_{vv}\) at \(r=r_{0}\) as follows. \[\delta g_{vv}(r_{0})\left(k^{2}-2ir_{0}\omega\right)+k\delta g_{vx}(r_{0})\left(2\omega-3ir_{0}\right)=0 \tag{5.1}\] It is well known that at the special point \((\omega_{*},\,k_{*})\) we have no constraint on the perturbed metric components at \(r=r_{0}\) [7]. Therefore, in the above equation the coefficients of \(\delta g_{vv}(r_{0})\) and \(\delta g_{vx}(r_{0})\) have to vanish. Thus we have \[\omega_{*}=\frac{3ir_{0}}{2}=2\pi iT,\hskip 28.452756ptk_{*}^{2}=-3r_{0}^{2} \tag{5.2}\] This \((\omega_{*},k_{*})\) is the zeroth order pole-skipping point, which is connected to the Lyapunov exponent and butterfly velocity as shown in (2). In our model we get \(\lambda_{L}=2\pi T\) and \(v_{B}=\frac{\sqrt{3}}{2}\), which is exactly the result [8] for the background where the coupling term is not present in the action.

### From the master equation

In the last section, where we discussed the pole-skipping of the sound mode perturbation, we imposed the condition \(3k^{2}\neq 4\omega^{2}\), because the differential equation (4.8) otherwise encounters a singularity at the horizon. Here we will discuss that issue. From past works [8, 24], we have seen that \(3k^{2}=4\omega^{2}\) comes with a new set of points \((\omega,\,k)\) in the \(\text{Im}[\omega]>0\) plane which are actually related to the chaos parameters. In our case, we can re-arrange the master equation (4.8) as \[{\cal Z}_{so}^{\prime\prime}(r)+P(r){\cal Z}_{so}^{\prime}(r)+Q(r){\cal Z}_{so}(r)=0 \tag{5.3}\] In this equation, the denominators of both \(P(r)\) and \(Q(r)\) have a multiplicative factor of \((3+f(r))\,k^{2}-4\omega^{2}\), which reduces to \(3k^{2}-4\omega^{2}\) at \(r=r_{0}\). So to get a regular solution of (5.3) at \(r=r_{0}\), we must impose an extra condition on \(\omega\) or \(k\). Here we will find it. First we put \(k=\frac{2}{\sqrt{3}}\omega\) in (5.3) and expand it around \(r=r_{0}\). We find that \(P(r)\) and \(Q(r)\) possess a first and a second order pole at \(r=r_{0}\), respectively. \[P(r) = \frac{P_{-1}}{(r-r_{0})}+\mathcal{O}\left((r-r_{0})^{0}\right), \qquad P_{-1}=-1-\frac{2i\omega}{3r_{0}}-\frac{144\alpha^{\prime}ir_{0}\omega\zeta^{\prime}(r_{0})}{3r_{0}-2i\omega}\] \[Q(r) = \frac{Q_{-2}}{(r-r_{0})^{2}}+\mathcal{O}\left((r-r_{0})^{-1}\right),\qquad Q_{-2}=1+\frac{2i\omega}{3r_{0}}+\frac{4i\alpha^{\prime}\omega(27r_{0}^{2}+12ir_{0}\omega+4\omega^{2})\zeta^{\prime}(r_{0})}{r_{0}(3r_{0}-2i\omega)}\] Therefore \(r=r_{0}\) is a regular singular point of the differential equation (5.3). Now, suppose \(\mathcal{Z}_{so}\) has a series solution near the singular point given as \[\mathcal{Z}_{so}=(r-r_{0})^{l}\sum_{n\in[0,\mathbb{Z}^{+})}\mathcal{Z}_{n}(r-r_{0})^{n} \tag{5.4}\] The only condition which makes this solution regular at the horizon is \(l=0,1,2,\cdots\). Therefore the first recursion relation coming from (5.3) is \[l^{2}+l\left(P_{-1}-1\right)+Q_{-2}=0.
\tag{5.5}\] This gives two roots (say, \(l_{1}\) and \(l_{2}\)) of the following form. \[l_{1} = 1-6\alpha^{\prime}(3r_{0}-2i\omega)\zeta^{\prime}(r_{0})\] \[l_{2} = 1+\frac{2i\omega}{3r_{0}}+6\alpha^{\prime}\frac{(3r_{0}+2i\omega)^{2}}{3r_{0}-2i\omega}\zeta^{\prime}(r_{0})\] So for an arbitrary interaction, the only possible integer roots are \(l_{1}=1\) and \(l_{2}=0\). This gives only two values of \(\omega\), namely \(\pm\frac{3}{2}ir_{0}\). Therefore we get the same values of the chaos parameters as we have already found in the last subsection.

## 6 Discussions

In this article, we have studied the pole-skipping phenomena in a non-extremal gravity theory in the presence of the Gauss-Bonnet-scalar interaction. We have considered a four-dimensional Schwarzschild-AdS black hole solution as the holographic bulk theory. On the boundary, we have a finite-temperature conformal theory. The interaction is sourced by an operator of dimension \(\Delta\) of the boundary theory, which is dual to the scalar field \(\phi\) in the bulk. In the Einstein action, the interaction term is added perturbatively (2.1). In the perturbative approximation, this external scalar source has no effect on the original bulk solution but makes a nontrivial contribution to the linearised field equations (2.5). We have found that the \(k\) of the pole-skipping points (\(\omega\), \(k\)) corresponding to the scalar field and metric perturbations is affected by the external scalar source \(\mathcal{O}_{s}\), whereas \(\omega\) remains unchanged. Unlike in the unperturbed model, the pole-skipping points of the minimally coupled scalar \(\phi\) contain both real and imaginary \(k\). As the source is increased, the points in the imaginary \(k\) plane move into the real \(k\) plane. We have presented these facts pictorially in Figure 2. In Schwarzschild-AdS\({}_{4}\) without the external effect [8], \(k\) is always real in the shear mode. Here we have found that the shear-mode \(k\) can take both real and imaginary values depending on the effect of the scalar source. We have analytically found the effect of the interaction on the first three poles located at \(\omega_{n}=-2in\pi T\) and the corresponding \(k\sim T\), which are given in (4.3), (4.4) & (4.5). The first order pole-skipping point \(k_{1}^{2}\) is always greater than \(3r_{0}^{2}\) for \(\zeta=\phi\), \(\phi^{2}\), \(\phi^{3}\,\&\,\phi^{4}\) and decreases for other higher powers of \(\phi\). However, for the second and other higher orders of pole-skipping, \(k^{2}\) always decreases with the increasing source for all positive integer powers of \(\phi\) in \(\zeta(\phi)\). This is shown in Figure 3. Here, an increase (or decrease) of real \(k\) implies a slow (or fast) rate of momentum transport in the shear mode, and an imaginary \(k\) means an exponential decay of the momentum density. As a result, when positive \(k_{1}^{2}\) increases with the increasing source \(\mathcal{O}_{s}\), the mobility of the corresponding modes decreases. The decreasing mobility in turn lowers the value of the diffusion coefficient \(\mathcal{D}_{s}\). In Figure 4, we have presented this consistent behaviour of the diffusion coefficient. As \(\mathcal{O}_{s}\to 0\), \(k_{1}^{2}\) is at its minimum value and therefore the momentum flow is maximal, which gives the maximum value of \(\mathcal{D}_{s}\). So, due to the effect of the external source, the flow of momentum in the shear mode decreases for \(p\leq 4\) and increases otherwise.
In the sound mode, the first three pole-skipping points have been derived from the master equation as \(\omega_{n}=-2\pi inT\), with the corresponding \(k_{n}\sim T\) given in (4.9), (4.10) & (4.11). In the case without the perturbative correction, where either \(\alpha^{\prime}\to 0\) or \(\mathcal{O}_{s}\to 0\), our results reduce to the pole-skipping points of the pure Schwarzschild-AdS\({}_{4}\) background [8], i.e., \(k_{n}^{4}=-9nr_{0}^{4}\). This gives complex values of \(k\) with equal real and imaginary parts. As the source is turned on, we have found that an imaginary part is added to the negative real part of \(k^{4}\), which means the real and imaginary parts of \(k\) are no longer equal. We have shown all of this in Figure 6. However, from the OTOC calculation in the last section, we have found the Lyapunov exponent \(\lambda=-i\omega=2\pi T\) and the butterfly velocity \(v_{b}=\frac{\sqrt{3}}{2}\), where \(\omega_{*}=2i\pi T\) and \(k_{*}=\pm\frac{4}{\sqrt{3}}i\pi T\). These results have been further verified with a different approach, by analysing the power series solution of the sound mode master equation near the horizon. Therefore (\(\omega_{*}\), \(k_{*}\)) is considered as the lowest order pole-skipping point in the sound mode instead of (\(\omega_{1}\), \(k_{1}\)). So the pole-skipping points of the sound mode are (\(\omega_{*}\), \(k_{*}\)), (\(\omega_{1}\), \(k_{1}\)), (\(\omega_{2}\), \(k_{2}\)), (\(\omega_{3}\), \(k_{3}\)) and so on. The pole-skipping points (\(\omega\), \(k\)) describe the flow of the energy density. Here \(k\) has both real and imaginary parts: the real part is associated with the flow of the energy density in the longitudinal mode, whereas the imaginary part of \(k\) is related to the exponential decay of the energy density. Therefore, under the effect of the interaction, when the energy-density diffusion increases the exponential decay decreases, and vice-versa. It would be interesting to study these flows and decays quantitatively. While we have found some non-trivial effects of the interaction on the sound and shear modes, we have not found any effect on the chaotic behaviour. The reason is mainly the perturbative approach to the interaction term. If one considers the backreaction of the interaction, the Lyapunov exponent and the butterfly velocity are expected to be affected by the interaction. With backreaction, one can also expect \(k_{*}\) and \(k_{1}\) to be equal in the sound mode.

## Appendix A Coefficient of Master Equation: Shear Channel

Three coefficients of the master equation can be written to linear order in the perturbation parameter \(\alpha^{\prime}\) \[\mathcal{M}_{sh}(r) = \mathcal{M}_{sh}^{(0)}+\alpha^{\prime}\mathcal{M}_{sh}^{(1)}+\mathcal{O}(\alpha^{\prime 2})\] \[\mathcal{P}_{sh}(r) = \mathcal{P}_{sh}^{(0)}+\alpha^{\prime}\mathcal{P}_{sh}^{(1)}+\mathcal{O}(\alpha^{\prime 2})\] \[\mathcal{Q}_{sh}(r) = \mathcal{Q}_{sh}^{(0)}+\alpha^{\prime}\mathcal{Q}_{sh}^{(1)}+\mathcal{O}(\alpha^{\prime 2})\] We have found the above functions as follows.
\[\mathcal{M}_{sh}^{(0)} = r^{2}f(r) \tag{101}\] \[\mathcal{P}_{sh}^{(0)} = \frac{\omega f(r)\left(5r\omega+2ik^{2}\right)-8k^{2}rf(r)^{2}+ \omega^{2}(3r-2i\omega)}{\omega^{2}-k^{2}f(r)}\] (102) \[\mathcal{Q}_{sh}^{(0)} = \frac{-10k^{2}r^{2}f(r)^{2}+f(r)\left(k^{4}+9ik^{2}r\omega+4r^{2 }\omega^{2}\right)+\omega\left(k^{2}(-\omega-3ir)+6r\omega(r-i\omega)\right)}{ r^{2}\left(\omega^{2}-k^{2}f(r)\right)}\] and \[\mathcal{M}_{sh}^{(1)} = 0 \tag{104}\] \[\mathcal{P}_{sh}^{(1)} = \frac{r^{2}f(r)}{\left(\omega^{2}-k^{2}f(r)\right)^{2}}\left[r \zeta^{\prime\prime}(r)\left(\omega^{2}-k^{2}f(r)\right)\left(f(r)\left(2k^{2 }f(r)+\omega^{2}\right)-3\omega^{2}\right)+\zeta^{\prime}(r)\left(f(r)\right.\] (105) \[\left.\left.\left.\left(k^{2}f(r)\left(4k^{2}f(r)-6k^{2}-11\omega^ {2}\right)+24k^{2}\omega^{2}-2\omega^{4}\right)-9k^{2}\omega^{2}\right)- \omega\mathcal{F}\right]\] \[\mathcal{Q}_{sh}^{(1)} = \frac{1}{r\left(\omega^{2}-k^{2}f(r)\right)^{2}}\left[r\zeta^{ \prime\prime}(r)\left(\omega^{2}-k^{2}f(r)\right)\left(f(r)\left(\omega^{2} \left(4k^{2}-ir\omega\right)-2k^{2}f(r)\left(-3r^{2}f(r)\right.\right.\right.\] (106) \[\left.\left.+k^{2}+3r^{2}+ir\omega+\omega^{3}(-2\omega+3ir)\right) +\zeta^{\prime}(r)\left(f(r)\left(f(r)\left(-k^{2}f(r)\left(-14k^{2}r^{2}f(r) +3k^{4}\right.\right.\right.\right.\right.\] \[\left.\left.\left.+4k^{2}r(6r+i\omega)+34r^{2}\omega^{2}\right)+ 3k^{6}+6k^{4}\left(3r^{2}+ir\omega+\omega^{2}\right)+k^{2}r\omega^{2}(72r+11 i\omega)\right.\] \[\left.\left.+2r^{2}\omega^{4}\right)+\omega^{2}\left(-6k^{4}-3k^{ 2}\left(18r^{2}+8ir\omega+\omega^{2}\right)+2r\omega(-6r+i\omega)\right) \right)+3\omega^{3}\left(6r^{2}\omega\right.\] \[\left.\left.+k^{2}(\omega+3ir)\right)\right)+ir\left(2\omega^{2}- f(r)\left(k^{2}-2ir\omega\right)\right)\mathcal{F}\right]\] where, \[\mathcal{F} = 6k^{2}r^{2}\omega(f(r)-1)\left[r(f(r)-3)f(r)\zeta^{\prime\prime} (r)-3((f(r)-2)f(r)+3)\zeta^{\prime}(r)\right]^{2}/\left[r\zeta^{\prime\prime} (r)\left(f(r)\left(f(r)\right.(f(r)\right.(f(r)\] \[\left.\left.\left.\left(k^{2}-3ir\omega\right)-3k^{2}+3ir\omega \right)-18ir\omega\right)-3\zeta^{\prime}(r)\left((f(r)-2)f(r)\left(k^{2}-ir \omega\right)+3\left(k^{2}+3ir\omega\right)\right)\right.\right.\] \[\left.\left.+ir^{3}\omega(f(r)-3)f(r)\zeta^{\prime\prime\prime }(r)\right]\] Here the \(\zeta\) function takes its appropriate form. 
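As a side note for readers reproducing these expressions, the linear-in-\(\alpha^{\prime}\) split used in this appendix (and in appendix B below) can be extracted mechanically with a computer algebra system. The sketch below is a toy illustration of ours with a made-up first-order piece, not the actual \(\mathcal{P}_{sh}^{(1)}\) or \(\mathcal{Q}_{sh}^{(1)}\) above; it only shows the pattern of picking off the \(\mathcal{O}(\alpha^{\prime 0})\) and \(\mathcal{O}(\alpha^{\prime})\) coefficients.

```python
# Hedged illustration with a toy coefficient (not the actual expressions above): how the
# O(alpha'^0) and O(alpha'^1) pieces of a coefficient linear in alpha' can be extracted.
import sympy as sp

r, r0, omega, alpha_p = sp.symbols("r r_0 omega alpha_prime")
f = sp.Function("f")       # blackening function, kept abstract
zeta = sp.Function("zeta")

# toy stand-in with the same structure as the appendix coefficients
M = r**2*f(r) + alpha_p * r**2 * zeta(r).diff(r) * (3*r0 - 2*sp.I*omega)

M0 = M.coeff(alpha_p, 0)   # zeroth-order piece, here r**2*f(r)
M1 = M.coeff(alpha_p, 1)   # first-order piece
print(sp.simplify(M - (M0 + alpha_p*M1)))   # -> 0, the split is exact for a linear expression
```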
## Appendix B Coefficient of Master Equation: Sound Channel Three coefficients of the master equation can be written in the linear order of the perturbation parameter \(\alpha^{\prime}\) \[\mathcal{M}_{so}(r) = \mathcal{M}_{so}^{(0)}+\alpha^{\prime}\mathcal{M}_{so}^{(1)}+ \mathcal{O}(\alpha^{\prime 2})\] \[{\cal P}_{so}(r) = {\cal P}^{(0)}_{so}+\alpha^{\prime}{\cal P}^{(1)}_{so}+{\cal O}( \alpha^{\prime 2})\] \[{\cal Q}_{so}(r) = {\cal Q}^{(0)}_{so}+\alpha^{\prime}{\cal Q}^{(1)}_{so}+{\cal O}( \alpha^{\prime 2})\] where, \[{\cal M}^{(0)}_{so} = r^{4}f(r) \tag{122}\] \[{\cal P}^{(0)}_{so} = \frac{r^{2}\left(f(r)\left(11k^{2}rf(r)+2k^{2}(6r-i\omega)-20r \omega^{2}\right)+\left(3k^{2}-4\omega^{2}\right)\left(3r-2i\omega\right) \right)}{k^{2}f(r)+3k^{2}-4\omega^{2}}\] (123) \[{\cal Q}^{(0)}_{so} = \frac{1}{k^{2}f(r)+3k^{2}-4\omega^{2}}\left(-f(r)\left(-25k^{2}r^ {2}f(r)+k^{4}+12k^{2}r(r+i\omega)+16r^{2}\omega^{2}\right)-3k^{4}\right.\] (124) \[\left.\qquad+k^{2}(9r+2i\omega)(3r-2i\omega)-24r\omega^{2}(r-i \omega)\right)\] \[{\cal M}^{(1)}_{so} = 0 \tag{125}\] \[{\cal P}^{(1)}_{so} = \frac{r^{2}f(r)}{\left(2ir\omega+k^{2}\right)\left(k^{2}f(r)+3k^{ 2}-4\omega^{2}\right)^{3}}\left[r^{3}\zeta^{\prime\prime}(r)\left\{f(r)\left( k^{2}f(r)\left(k^{2}f(r)^{2}\left(9k^{4}\right.\right.\right.\] \[\left.\left.+2k^{2}r(-24r+25i\omega)+64r^{2}\omega^{2}\right)+2f(r )\left(-27k^{6}+6k^{4}\left(24r^{2}-9ir\omega+11\omega^{2}\right)\right.\right.\] \[\left.\left.+4k^{2}r\omega^{2}(-72r+17i\omega)+128r^{2}\omega^{4} \right)+12\left(3k^{2}-4\omega^{2}\right)\left(2k^{4}+k^{2}\left(-12r^{2}-4ir \omega\right.\right.\right.\] \[\left.\left.\left.+3\omega^{2}\right)+2r\omega^{2}(8r+3i\omega) )\right)+2\left(3k^{2}-2\omega^{2}\right)\left(3k^{2}-4\omega^{2}\right)^{2} \left(k^{2}+2ir\omega\right)\right)-3\left(3k^{2}\] \[-4\omega^{2}\right)^{3}\left(k^{2}+2ir\omega\right)\right\}+4 \zeta^{\prime}(r)\left\{f(r)\left(k^{2}f(r)\left(-k^{2}f(r)\left(5k^{4}+3k^{ 2}r(-12r\right.\right.\right.\] \[\left.\left.+7i\omega\right)+48r^{2}\omega^{2}\right)+9k^{6}+k^{4 }\left(-180r^{2}+33ir\omega-26\omega^{2}\right)+4k^{2}r\omega^{2}(96r+5i\omega)\] \[-192r^{2}\omega^{4})+3k^{8}+4k^{6}\left(36r^{2}+6ir\omega-\omega^ {2}\right)+k^{4}r\left(324r^{3}+243ir^{2}\omega-324r\omega^{2}\right.\] \[\left.\left.-32i\omega^{3}\right)-8k^{2}r^{2}\omega^{2}\left(90r^ {2}+81ir\omega-34\omega^{2}\right)+48r^{3}\omega^{4}(8r+9i\omega)\right)+\left( 3k^{2}-4\omega^{2}\right)\left(3k^{8}\right.\] \[\left.-k^{6}\left(63r^{2}+4\omega^{2}\right)-k^{4}r\left(108r^{3} +171ir^{2}\omega-126r\omega^{2}+8i\omega^{3}\right)+8k^{2}r^{2}\omega^{2}\left( 18r^{2}\right.\right.\] \[\left.\left.+27ir\omega-\omega^{2}\right)-16ir^{3}\omega^{5}) \right)+9k^{2}r^{2}\left(3k^{2}-4\omega^{2}\right)^{2}\left(k^{2}+2ir\omega \right)\right\}\right]\] \[{\cal Q}^{(1)}_{so} =\] (126) \[+f(r)\left(f(r)\left(f(r)\left(-3k^{8}+18\left(-19r^{2}-2i\omega r +\omega^{2}\right)k^{6}+4r(3r-i\omega)\left(108r^{2}+99i\omega r\right.\right.\right.\] \[\left.\left.\left.-26\omega^{2}\right)k^{4}-8r^{2}\omega^{2}\left( 360r^{2}+234i\omega r-85\omega^{2}\right)k^{2}+32r^{3}\omega^{4}(48r+35i\omega) +f(r)\left(3k^{8}\right.\right.\] \[\left.\left.+r(180r+23i\omega)k^{6}-2r^{2}\left(576r^{2}-108i \omega r+257\omega^{2}\right)k^{4}+64r^{3}\omega^{2}(30r-i\omega)k^{2}\right.\] \[\left.\left.-r^{2}\left(43k^{4}+6r(41i\omega-40r)k^{2}+320r^{2} \omega^{2}\right)f(r)k^{2}-512r^{4}\omega^{4})\right)-\left(3k^{2}-4\omega^{2 }\right)\left(9k^{6}\right.\right.\] 
\[\left.\left.+6\left(18r^{2}+3i\omega r-7\omega^{2}\right)k^{4}+8 ir\omega\left(45r^{2}+42i\omega r-11\omega^{2}\right)k^{2}+8r^{2}\omega^{3}( \omega-60ir)\right)\right)k^{2}\] \[+\left(k^{2}+2ir\omega\right)\left(3k^{2}-4\omega^{2}\right)^{2} \left(3k^{4}+\left(9r^{2}-18i\omega r+14\omega^{2}\right)k^{2}-4ir\omega^{3} \right)\right)\right\}\zeta^{\prime\prime}(r)r^{3}\] \[+72r^{4}\omega^{2}\right)\left(3k^{2}-4\omega^{2}\right)^{2}+f(r) \left(2\left(3k^{2}-4\omega^{2}\right)^{2}\left(3k^{10}+\left(63r^{2}+18i \omega r-4\omega^{2}\right)k^{8}\right.\] \[\left.\left.-r\left(189r^{3}+18i\omega r^{2}+66\omega^{2}r+28i \omega^{3}\right)k^{6}+2r^{2}\omega\left(-297ir^{3}+315\omega r^{2}+6i\omega^{2 }r+28\omega^{3}\right)k^{4}\right.\] \[\left.+8ir^{3}\omega^{3}\left(63r^{2}+18i\omega r+4\omega^{2} \right)k^{2}+32r^{4}\omega^{5}(6ir+\omega)\right)+f(r)\left(3k^{12}+\left(-90r^ {2}+36i\omega r\right.\right.\] \[\left.\left.-4\omega^{2}\right)k^{10}+12r\left(171r^{3}+63i\omega r ^{2}+24\omega^{2}r-4i\omega^{3}\right)k^{8}+4r^{2}\left(972r^{4}+1971i\omega r ^{3}\right.\right.\] \[-1863\omega^{2}r^{2}-186i\omega^{3}r-32\omega^{4}\right)k^{6}-32r^{3 }\omega^{2}\left(270r^{3}+531i\omega r^{2}-243\omega^{2}r-i\omega^{3}\right)k^{4}\] \[+64r^{4}\omega^{4}\left(72r^{2}+132i\omega r-13\omega^{2}\right)k^ {2}+2r^{2}f(r)\left(3\left(-15k^{8}-2\left(186r^{2}+37i\omega r-9\omega^{2} \right)k^{6}\right.\right.\] \[\left.\left.+2r\left(-396r^{3}-417i\omega r^{2}+339\omega^{2}r+50 i\omega^{3}\right)k^{4}+8r^{2}\omega^{2}\left(180r^{2}+232i\omega r-73\omega^{2} \right)k^{2}\right.\right.\] \[\left.\left.-64r^{3}(8r+13i\omega)\omega^{4}\right)+f(r)\left(-3 k^{8}+r(21r-20i\omega)k^{6}+2r^{2}\left(684r^{2}-24i\omega r+43\omega^{2} \right)k^{4}\right.\right.\] \[\left.\left.-8r^{3}(300r+91i\omega)\omega^{2}k^{2}+r^{2}\left(47 k^{4}+12r(17i\omega-30r)k^{2}+480r^{2}\omega^{2}\right)f(r)k^{2}\right.\right.\] \[\left.\left.\left.+768r^{4}\omega^{4}\right)\right)k^{2}+256ir^{ 5}\omega^{7}))\right)\right\}\zeta^{\prime}(r)\big{]}\] ## Acknowledgments We would like to acknowledge Debaprasad Maity for his useful suggestions.
2302.05669
Treat societally impactful scientific insights as open-source software artifacts
So far, the relationship between open science and software engineering expertise has largely focused on the open release of software engineering research insights and reproducible artifacts, in the form of open-access papers, open data, and open-source tools and libraries. In this position paper, we draw attention to another perspective: scientific insight itself is a complex and collaborative artifact under continuous development and in need of continuous quality assurance, and as such, has many parallels to software artifacts. Considering current calls for more open, collaborative and reproducible science; increasing demands for public accountability on matters of scientific integrity and credibility; methodological challenges coming with transdisciplinary science; political and communication tensions when scientific insight on societally relevant topics is to be translated to policy; and struggles to incentivize and reward academics who truly want to move into these directions beyond traditional publishing habits and cultures, we make the parallels between the emerging open science requirements and concepts already well-known in (open-source) software engineering research more explicit. We argue that the societal impact of software engineering expertise can reach far beyond the software engineering research community, and call upon the community members to pro-actively help driving the necessary systems and cultural changes towards more open and accountable research.
Cynthia C. S. Liem, Andrew M. Demetriou
2023-02-11T12:03:43Z
http://arxiv.org/abs/2302.05669v1
# Treat societally impactful scientific insights as open-source software artifacts ###### Abstract So far, the relationship between open science and software engineering expertise has largely focused on the open release of software engineering research insights and reproducible artifacts, in the form of open-access papers, open data, and open-source tools and libraries. In this position paper, we draw attention to another perspective: scientific insight itself is a complex and collaborative artifact under continuous development and in need of continuous quality assurance, and as such, has many parallels to software artifacts. Considering current calls for more open, collaborative and reproducible science; increasing demands for public accountability on matters of scientific integrity and credibility; methodological challenges coming with transdisciplinary science; political and communication tensions when scientific insight on societally relevant topics is to be translated to policy; and struggles to incentitize and reward academics who truly want to move into these directions beyond traditional publishing habits and cultures, we make the parallels between the emerging open science requirements and concepts already well-known in (open-source) software engineering research more explicit. We argue that the societal impact of software engineering expertise can reach far beyond the software engineering research community, and call upon the community members to proactively help driving the necessary systems and cultural changes towards more open and accountable research. open science, software engineering, open source, transdisciplinary research, responsible research practice ## I Introduction This article is a 'paper'1. At the moment it will reach broader readership with a formal citation attached, it will have passed peer review, and be part of a referenceable collection of proceedings of the ICSE 2023 Software Engineering in Society Track. This form and workflow have been the traditional template for communicating scientific outcomes, where getting papers accepted at prestigious venues has traditionally been treated as the major indicator of academic achievement. Footnote 1: Most likely, it will not reach the reader on paper, but as a digital PDF. Academic research has been operating under scarcity, both regarding job and research funding security. As a consequence, (not) getting major publications accepted and sufficiently cited thus has great career consequences. Still, for a long time, research communities have been acknowledging that contributions of scientific insight extend much beyond a paper, and proposals for open science have emerged, including ventures into open access, open and FAIR (Findable, Accessibile, Interoperable, Reusable) data, and open-source software. The software engineering research community has been acting upon this [1], with open science policies now being explicit parts of well-respected venues like ICSE and the Empirical Software Engineering Journal, open-source tools with artifact badging being explicitly encouraged, and the option to submit registered reports entering several sub-communities such as the Conference on Mining Software Repositories. Software engineering researchers also have actively contributed to discussions on applying FAIR principles to research software [2]. With this position paper, we wish to inspire the software community to look even beyond this. 
More specifically, considering empirical scientific insights in the broad sense (i.e., insights requiring empirical observation of phenomena, often expressed in the form of data measurements), we will argue that making these insights more open will require infrastructure and quality assurance mechanisms similar to those needed in developing complex open-source software artifacts. ## II Arguments for open science beyond the paper Already in 1942, Robert K. Merton noted that anti-intellectualism was rising and the integrity of science was under attack. In response, four 'institutional imperatives' were formulated as comprising the ethos of modern science: _universalism_ (the acceptance or rejection of claims entering the lists of science should not depend on personal or social attributes of the person bringing in these claims), _"communism"_ [sic] (common ownership of scientific findings, with the imperative to communicate findings, as opposed to secrecy), _disinterestedness_ (upholding scientific integrity by not having self-interested motivations), and _organized skepticism_ (judgment on the scientific contribution should be suspended until detached scrutiny is performed, according to institutionally accepted criteria) [3]. Many scientists still subscribe to these norms today [4]. These imperatives also implicitly echo in today's calls, manifestos and proposals for open science and open access [5, 6], which push for better science, which more people can access--but with which more people also can actively interact. Below, we further elaborate on several arguments and initiatives that argue that open science should not stop at a paper that more people can read. ### _Insufficient quality control on papers_ Open-access publishing may stimulate academic and societal uptake, transform the business models of publishers, and allow for publicly funded knowledge to be publicly available. Still, open access is only an aspect of open science, and insights and methods reported in a paper may not trivially be reproducible or replicable2, either because common specifications are not sufficiently detailed [9], or because claims may be outright false [10]. While researchers have been divided on which domains suffer from reproducibility crises [11], generally, many well-published works have failed to replicate in psychology [12] and cancer biology [13], and many concerns are arising about the replicability of machine learning outcomes [14, 15]. This leads to credibility crises, in which it is unclear whether results can actually be trusted and built upon. When policy-makers seek to base decisions on scientific insights, this can have severe consequences to human health and public trust [16, 17]. Footnote 2: Definitions of ‘reproducibility’ and ‘replicability’ have not always been used in crisp ways; e.g., compare the former [7] and current [8] ACM definitions, in which definitions are swapped. Generally spoken, in the current discussion, we do not need a sharp distinction, and rather want to refer to the overall concept that repeating an experiment should give consistent results. Officially, science should be self-correcting; through peer review and active continuous scrutiny processes, illegitimate claims should be detected and corrected. However, in practice, self-correction turns out painfully slow and reluctant [18, 19]. 
This may have to do with 'publish or perish' cultures being too strong in institutions, leading to unhealthy working environments [20, 21, 22], incentivizing Questionable Research Practices [23], and de-incentivizing investment in Responsible Research Practices [24]. ### _Joint resource investments for collaborative momentum_ With machine learning research, growing power and resource imbalances are observed between large industrial labs, and small labs in public institutions. A researcher at a university will likely not have sufficient computational resources and comprehensive data access to easily be able to replicate results as reported by big tech industry. Thus, joint investments in shared computation infrastructure are needed [25]. In psychological science, joint efforts have been coordinated into massive replication projects, where multiple teams tried to replicate canonically reported outcomes in parallel. Good examples of this are the five 'Many Labs' large-scale replication projects [26, 27, 28, 29, 30]. For such efforts, the joint investments need to focus on technical and intellectual infrastructure: i.e., the efforts required to reach a joint insight or paper, in such a way that many can indeed participate, without the transaction costs of getting started growing too large on an individual party. In other words, the focus needs to be on facilitating a shared process, rather than claiming limited-ownership output, which our present-day incentive systems still appear to push for. ### _Challenges when crossing disciplines_ When research becomes interdisciplinary or even transdisciplinary [31], methodology and consequent quality assurance mechanisms become more ambiguous than in the case of monodisciplinary work. While in the software engineering community, the SIGSOFT empirical standards [32] help articulating and standardizing what a reviewer should expect for different types of methodological contributions, when multiple disciplines are represented at the same time, a discipline-specific reviewer may only be capable of doing a thorough quality assessment for the parts of the contribution within their expertise, but not of the full intellectual work. In case of transdisciplinary work, a broader spectrum of stakeholders (that may not be academics) will be involved. This again causes ambiguity on how work should be reviewed and evaluated. At the same time, for societally relevant application domains, it has been argued that broader participation of stakeholders can help getting out of credibility crises with regard to modeling choices [17]. Furthermore, if academic insights are to be implemented in society, it is not unreasonable to not only push the view of academics, but also actively involve the perspectives and experiences of non-academic societal stakeholders who will be experiencing the impact of this implementation. ### _Societal relevance causes vulnerability_ Research on urgent, societally relevant challenges (e.g., climate change, public health) tend to be situated in dynamic, complex, socio-technical contexts, and require transdisciplinary approaches [31]. Problems of relevance may be wicked [33] or even super wicked [34], meaning that there is ambiguity on how the problem should be framed (while the solution depends on the framing), and one can assess whether a solution is 'better' or 'worse', but there are no hard binary outcomes of whether a result is absolutely 'true' or 'false'. 
In case of super wicked problem, there is high urgency and time is running out, while there is a lack of central authority. Acting under such dynamic uncertainty comes with challenges. While fast open publishing and knowledge-sharing can be further enabled through open science, too-hasty conclusions that have not been deeply reviewed may cause hazards to human safety [16, 19]. Furthermore, while the general public will demand high accountability on societally impactful outcomes, at the same time, ambiguity, uncertainty, and dynamically changing insights make it impossible to end up with static, firm insights. Potentially contradicting readings on topics requiring deeper expertise can cause feelings of uncertainty in people, harming credibility of scientific work and leading to distrust [35]. Distrust in science causes vulnerability to credibility attacks. Indeed, in Big Tobacco, health, climate change, and AI, concerted delegitizing efforts have been taking place as part of lobbying processes towards non-public interests [36, 37, 38]. Here again, more public transparency on how insights were obtained may help in sustaining trust and facilitating broader public scrutiny. ## III More holistic open science: from tools to conceptual parallels to open-source software In response to movements towards more open science, in recent years, a plethora of process improvements with supporting platforms and tools have emerged, that support releasing a more holistic scientific artifact than a paper alone. These include pre-registration (e.g., The Center for Open Science (COS)3, AsPredicted4), pre-print publication (e.g., arXiv5, COS), storage of additional materials beyond the PDF (e.g., COS, data repositories such as Zenodo6 and ResearchHub7), the co-publication of research code or software artifacts (e.g., Papers with Code8)), decomposed publication (e.g., Octopus9, ResearchEquals10, Desci Foundation11), open peer review (e.g., F1000Research12) and pre-print / post publication peer review (e.g., PubPeer13, PREreview14, Sciety15). Organizations like the COS and Psychological Science Accelerator16 have coordinated big-team data collection efforts. In parallel, traditional publication venues have started accepting more modern publication formats, such as registered reports [39]. This tooling space is presently fragmented, capturing different aspects that should improve openness in science. At a higher level, as discussed below, we however see clusters of intended functionality, that are very close to well-researched topics in software engineering research. 
Footnote 3: [https://www.cos.io/](https://www.cos.io/) Footnote 4: [https://saprodirected.org](https://saprodirected.org) Footnote 5: [https://arxiv.org/](https://arxiv.org/) Footnote 6: [https://zenodo.org/](https://zenodo.org/) Footnote 7: [https://www.researchhub.com/](https://www.researchhub.com/) Footnote 8: [https://paperswithcode.com/](https://paperswithcode.com/) Footnote 9: [https://www.octopus.ac](https://www.octopus.ac) Footnote 10: [https://www.researchquals.com](https://www.researchquals.com) Footnote 11: [https://descifoundation.org/](https://descifoundation.org/) Footnote 12: [https://f100research.com/](https://f100research.com/) Footnote 13: [https://pubpeer.com](https://pubpeer.com) Footnote 14: [https://prereview.org/](https://prereview.org/) Footnote 15: [https://sciety.org](https://sciety.org) Footnote 16: [https://psyciacac.org/](https://psyciacac.org/) ### _Inclusive contributorship with credit_ As opposed to the traditional authorship model of publication (where author names in a list denote some undisclosed contribution to the work, the list of authors is final, and author order may imply local hierarchies that are specific to a research sub-community), there is a need to be more specific and transparent about collaborators' contributions to the intellectual work. In the publication world, the Contributor Roles Taxonomy (CRediT) has been proposed and increasingly adopted as a possible taxonomy for this, with an explicit change from authorship to contributorship [40]. Models of contributorship have naturally been implemented, facilitated and acknowledged in open-source software. In case multiple contributors work on the same artifact, version control systems (typically, Git) will be employed that help tracking the degree and provenance of changes (i.e., who contributed what at what time on the development timeline). Contributors may work in parallel, both working on main features needing priority, but also on more experimental features. Through branches, this can be done while there still is a consensus of what currently is a working non-breaking artifact on the main branch. While parallel work may be done, version control systems have protocols for resolving potential conflicts arising from parallel contributions and changes. Regardless of the status of the branch, the history of contributions will always be transparent. In addition, they allow for 'orphan' components of unfinished projects to also be gathered and transparently disclosed. In psychology, attention has for long been drawn to the 'file drawer' problem [41]. Here, many studies with non-significant results may never have been reported, but still provide useful insights, and can help meta-scientific understanding of whether results reported as significant are indeed significant, or may have resulted from sampling bias. We can see a similar parallel to the building of scientific knowledge: a main branch can represent current stable insights, where other branches may represent work in progress, that down the road can make the overall artifact better. Where in software engineering, code review practices ensure quality control whenever a change is to be committed (regardless of whether this is on a main or experimental branch), the same can hold for peer review, where elevated reviewing safeguards can be implemented for merging into the main branch and 'pushing to production'. 
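To make the contributorship parallel concrete, the sketch below illustrates what a machine-readable, CRediT-style contributor record attached to individual units of a research artifact might look like, and how it could be aggregated, much as commit metadata is aggregated in a version-controlled repository. The record structure is hypothetical (it is not an existing standard schema such as CITATION.cff); only the role names are taken from CRediT.

```python
# Hypothetical, illustrative record structure (not an existing standard schema):
# role-level contributorship attached to individual units of a research artifact,
# aggregated the way commit metadata is aggregated in version control.
from collections import defaultdict

contributions = [
    {"unit": "data-collection/survey-wave-1", "contributor": "A. Researcher",
     "roles": ["Investigation", "Data curation"]},
    {"unit": "analysis/preregistered-model",  "contributor": "B. Analyst",
     "roles": ["Formal analysis", "Software"]},
    {"unit": "analysis/preregistered-model",  "contributor": "A. Researcher",
     "roles": ["Methodology", "Writing - review & editing"]},
]

roles_per_person = defaultdict(set)
for record in contributions:
    roles_per_person[record["contributor"]].update(record["roles"])

for person, roles in sorted(roles_per_person.items()):
    print(f"{person}: {', '.join(sorted(roles))}")
```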
As we will discuss further down, the concept of the'main branch' and versioned releases has parallels to scientific consensus of current state-of-the-art. Where in terms of ownership, public open-source repositories may have an active team of maintainers and owners of an artifact, other people not in these groups are explicitly welcome to raise issues or feature requests if they see points for improvement, and implement and suggest contributions themselves, that the maintainers and owners may choose to incorporate. Similarly, in scientific insight, a core team may work on a particular project, but other researchers and interested parties may suggest changes or improvements that could be incorporated with visible provenance. Where open-source projects that actively seek public contributors will have clear documentation and guidelines on how to get started and contribute if one is an outsider, similar inclusion-facilitating practices can transfer to scientific research projects, as already have been demonstrated in e.g. the Many Labs large-scale replication projects. ### _Decomposition into maintainable units_ As discussed, potential reviewers to scientific work may not naturally be equipped to thoroughly review every aspect of a complete paper, especially if this paper reflects interdisciplinary work. Generally spoken, it seems unnatural to only review a complex intellectual contribution only at release time. With pre-registration and registered reports, publishing cultures already tried to solicit such feedback earlier, with positive effects on research quality and integrity [42]; however, this still involves the review of complete experimental setups. In the software engineering world, it has generally been seen as an example of good practice to organize a complex software artifact into smaller, clearly scoped modules and functions. When committing code contributions to this overall artifact, commits also would be organized in smaller, logical contributions with a clear focus, and code review would iteratively be solicited on these small contributions. This reviewing model resembles the tools facilitating decomposed publication. In software engineering, we have already seen that decomposition will help in fostering maintainability of the overall artifact, and making it easier on new contributors to quickly get onboarded on the parts of the artifact where they wish to contribute. We explicitly want to note that this model could work at the level of scientific artifacts (effectively, a digitally enriched form of work that currently only manifests as a paper), but also one step up, at the level of scientific insight that may source from different papers and other intellectual contributions. In scientific insight, we wish to stand on the shoulders of giants, and build upon earlier work. As such, we may source from other insights, similarly to how open-source software may make use of existing other libraries. Furthermore, again looking at functionalities offered by Git, if serious new contributions to an existing repository possibly warrant branching-off into a new strand of independent development, forking functionality allows for this, while again still keeping a living reference to the original repository. Where in science, the insights we build upon may still be under active research, and there are chances they still may change and update, the same holds for open-source software libraries. 
This may create a dependency hell, for which software engineering research is actively researching best practices to still make a complex artifact building upon other artifacts as maintainable as possible. We argue that a translation of these best practices will be beneficial in navigating how scientific insight building upon other insights can best be organized and updated, in case all insights dynamically will keep evolving. As for how to decompose and (re)organize complex code, the software engineering research community has consolidated a rich body of best practices or practices to avoid, consolidated in the form of software engineering methods, design patterns and smells. Equivalents can be formulated for the organization of scientific insight: what sub-experiments or analyses can be modularized or refactored for better reuse? Here, we would like to point out that software engineering methods tend to be taught as advanced-level programming knowledge, and as such may not as actively be part of the skillset of non-computer scientists who took an introductory programming courses--while we believe they are essential in thinking strategically about overall information organization. _Intermediate releases with consensus, and organizational safety to find weaknesses, iterate and improve_ When developing a software artifact, pushing code to production, and having formal versioned releases, we naturally agree we have not yet reached The Ultimate and Optimal Final Product--rather, what currently is running may be a Minimum Viable Product that is iterated upon, but that will likely still have many imperfections in need of improvement. In scientific publishing, we may acknowledge this in text, but there is less incentive to demonstrate progressive improvement over subsequent contributions. Furthermore, as we will argue below, it may culturally be unsafe to admit weaknesses and visibly correct them, as this could lead to retractions and consequences on citation track records. In software development, this however is no problem, as changes and releases that can be referenced by others are more clearly separated. To us, a scientific paper could be seen as a versioned release--a larger, but coherent collection of changes and reviewer consensus that can be frozen and referred to. Similarly, through containerization, we can freeze, save and share the entirety of a computational environment associated to a contribution. If available on a cloud-based platform, this allows for reproducibility, as well as immediate, rapid progress on both the review of material and its reuse and further development, since installation overhead will be reduced. At the same time, these freezes do not signal the end of development, and development can still actively continue. In our argument to not only organize scientific artifacts as open-source artifacts, but even group them at a higher level of scientific insight, the concept of currently agreed-on consensus can also be taken one level up, similar to how knowledge is established in Wikipedia: for a research problem that many people work on in parallel, a meta-scientific overview of what the collective insight and consensus currently is can be consolidated. Consensus-focused publications aim to condense the overall state of a thread of research, reporting first any consensus, while also indicating ambiguities or research opportunities. One might further conceive 'living' consensus-focused articles, i.e. 
systematic reviews, in a model similar to Wikipedia articles, where the review stays current, as authors continuously update it. This especially will be relevant for topics with increasingly unwieldy numbers of associated publications, where there is a clear benefit to finding a means to condense scientific information; even more so, when the consolidated insights may be looked-at in informing policy (e.g., with regard to climate change or public health). As software is developed under pressure and with short development timelines, compromises and simplifications will be made. This may lead to technical debt, in which issues needing deeper attention may pile up without being prioritized--up to the point that major and expensive fixes may be needed. Similarly, we would argue that current problems with self-correction in science and questionable insights may be a consequence of intellectual debt, as also suggested in [43]. Again, creating cultures in which this is as actively mitigated as possible will be beneficial. Here, we already mentioned problems with organizational and cultural safety in admitting weakness and implementing corrections in scientific contributions. With software artifacts, we up-front acknowledge that programmers are competent [44], but bugs, errors and issues may still have occurred. Through testing (and ideally, test-driven development) we include safeguards that help reducing the amount of problems that will need fixing--or otherwise will help us signaling and fixing them as early as possible. Still, we will never know whether a program will be fully bug-free. However, this does not prevent us from having justifiable perspectives on when code artifacts can be published and released. We feel that the culture of encouraging and appreciating testing and quality assurance in software engineering may be very inspirational to discussions on fostering Responsible Research Practices and academic integrity, without this being seen as a reputational threat or attack on character. Finally, the software engineering community has increasingly been acknowledging the social and contextual organizational surroundings of a software artifact, with emerging strands of research studying how team interactions and organizational policies will affect the quality of a software artifact, as well as the efficiency and effectiveness of the process leading to its development. Similarly, these social and organizational insights will be beneficial in efforts to address the culture of scientific research itself (as well as constructive directions for systems changes, if we indeed would decide to move beyond our current ways of sharing insights through static papers). ## IV Conclusion In this work, we have outlined how the development of scientific insights parallels the development of software. The shift from traditional publishing to open science involves challenging culture and systems changes. As the software engineering community has noticed in its own open science endeavors, investing in this is a serious, expensive and so far under-rewarded investment [1, 45, 46]. Yet, as we argued, the strong expertise of software engineering experts in acknowledging contributorship on complex larger collaborations, designing for robust maintainability, and developing based on iterative improvements, can more broadly benefit the development of scientific insight subscribing to the Mertonian norms of science--and be beneficial to society at large when complex societal challenges are addressed. 
We therefore call upon our software engineering colleagues committed to open science to both think more boldly about how academic incentives can be improved beyond the focus on output, and even look beyond the software engineering research field alone. As for the first, beyond current (commendable) efforts to integrate open science principles in the publication process of software engineering venues, it will be worthwhile to think of what a 'Many Labs' equivalent in software engineering may look like. As one thought, could it take aspects of current tool competitions and benchmarks, which also focus on collective understanding, but be framed from the start as a joint collaborative and iterative effort? As for the second, we invite our colleagues to join existing scientific reform movements, help develop and increase the interoperability of current tools, and critically reflect on what software engineering skills can best be taught outside of their own curriculum. As one possible thought experiment, what would a re-framing of the state-of-the-art in climate science as a complex software-like artifact look like? Which insights would need to be decomposed? Who reviews what, and what would a review discussion look like if multiple disciplines get involved? How can we allow for public scrutiny, while not feeding into public distrust? As one example, we as authors of this article have been actively attending meta-scientific and science improvement events (such as the meetings of the Society for the Improvement of Psychological Science17), and have started prototyping the idea of turning scientific publication processes into Git-supported software artifacts [47]. A first prototypical development towards the latter mission was performed in the form of a software development project, which several bachelor students in Computer Science and Engineering at our institute took up as part of their software engineering coursework. The resulting work was presented as a non-archival contribution at a Scientific Progress Seminar [48]. While this was not yet a formal publication, it was an excellent way to get bachelor-level software engineering students interested in research processes, and many of them enthusiastically attended the seminar, which was highly interdisciplinary, also involving, for example, epistemological philosophical work. Currently, we are working with our local Open Science community and advertising new student projects to further develop this project. Footnote 17: [https://improvingpsych.org/](https://improvingpsych.org/) However, we ourselves are not software engineering researchers, and we are certain our colleagues with deeper expertise in the subject matter can push such developments much further. In doing this, we would argue that software engineering expertise can have even broader societal and scientific impact than it already does today. ## CRediT author statement **Cynthia C. S. Liem**: Conceptualization, Investigation, Methodology, Supervision, Writing - original draft, Writing - review & editing; **Andrew M. Demetriou**: Conceptualization, Investigation, Resources, Writing - original draft, Writing - review & editing.
2308.10559
Metaverse: A Vision, Architectural Elements, and Future Directions for Scalable and Realtime Virtual Worlds
With the emergence of Cloud computing, Internet of Things-enabled Human-Computer Interfaces, Generative Artificial Intelligence, and highly accurate Machine and Deep-learning recognition and predictive models, along with the post-Covid-19 proliferation of social networking and remote communications, the Metaverse gained a lot of popularity. The Metaverse has the potential to extend the physical world using virtual and augmented reality so that users can interact seamlessly with the real and virtual worlds using avatars and holograms. It has the potential to impact people in the way they interact on social media, collaborate in their work, perform marketing and business, teach, learn, and even access personalized healthcare. Several works in the literature examine the Metaverse in terms of hardware wearable devices and virtual reality gaming applications. However, the requirements of realizing the Metaverse in realtime and at a large scale have yet to be examined for the technology to be usable. To address this limitation, this paper presents the temporal evolution of Metaverse definitions and captures its evolving requirements. Consequently, we provide insights into Metaverse requirements. In addition to enabling technologies, we lay out architectural elements for scalable, reliable, and efficient Metaverse systems, and a classification of existing Metaverse applications along with proposing required future research directions.
Leila Ismail, Rajkumar Buyya
2023-08-21T08:23:10Z
http://arxiv.org/abs/2308.10559v2
Metaverse: A Vision, Architectural Elements, and Future Directions for Scalable and Realtime Virtual Worlds ###### Abstract With the emergence of Cloud computing, Internet of Things-enabled Human-Computer Interfaces, Generative Artificial Intelligence, and highly accurate Machine and Deep-learning recognition and predictive models, along with the post-Covid-19 proliferation of social networking and remote communications, the Metaverse gained a lot of popularity. The Metaverse has the potential to extend the physical world using virtual and augmented reality so that users can interact seamlessly with the real and virtual worlds using avatars and holograms. It has the potential to impact people in the way they interact on social media, collaborate in their work, perform marketing and business, teach, learn, and even access personalized healthcare. Several works in the literature examine the Metaverse in terms of hardware wearable devices and virtual reality gaming applications. However, the requirements of realizing the Metaverse in realtime and at a large scale have yet to be examined for the technology to be usable. To address this limitation, this paper presents the temporal evolution of Metaverse definitions and captures its evolving requirements. Consequently, we provide insights into Metaverse requirements. In addition to enabling technologies, we lay out architectural elements for scalable, reliable, and efficient Metaverse systems, and a classification of existing Metaverse applications along with proposing required future research directions. Artificial Intelligence, Augmented Reality (AR), Cloud Computing, Distributed Computing, Edge Computing, Energy Efficiency, Extended Reality (XR), Internet of Things (IoT), Machine Learning, Metaverse, Mixed Reality (MR), Quality of Service (QoS), Realtime, Scalability, Service Level Agreement (SLA), Smart City, Sustainability, Virtual Reality (VR), Virtual Worlds ## 1 Introduction Advances in information and computing technologies, such as the continuous availability of high-speed 5G and 6G networks to Internet users, Internet of Things-enabled Human-Computer Interfaces, the high precision of data-driven Machine and Deep Learning models, and the emergence of Generative Artificial Intelligence with ChatGPT, have enabled the rise of Metaverse systems and applications. The term Metaverse, first coined by Neal Stephenson in his science fiction novel "Snow Crash" in 1992 to describe a future virtual reality [1], has recently emerged as the _next Internet revolution_, providing information and communication facilities to a networked virtual world, with the possibility to create and interact with a virtual space where disparate Internet of Things (IoT) devices and the Internet of Humans communicate in a spatial environment. The Metaverse is different from Virtual Reality (VR) [2] and Augmented Reality (AR) [3] in the sense that the Metaverse offers services that enable people to interact with each other and with the virtual environment via their avatars, who can shop, socialize, play interactive games, work, learn, do tourism, or consult a medical doctor virtually while having the sensation of being physically present. While VR and AR aim to create a continuous flow of 3D sensory images that represent a physical world, the Metaverse offers services that have sustainable content.
Although the definition of "Metaverse" has changed as its integrating technologies evolved, the main goal is to have a digital twin of the physical world, while providing a realtime, fully immersive virtual 3D space where the physical and the virtual world can interact, providing efficient, safe, and pleasant experiences in all domains, such as smart education, smart healthcare, smart transportation, and geospatial localizations. This is a radical evolution of the computing era, in which the Internet of Everything is transformed into a User-Centric Intelligent Internet of Everything. The Metaverse has stringent requirements in terms of high data rate, high reliability, low latency, and connected intelligence with Machine Learning (ML) and Deep Learning (DL). The upcoming Sixth Generation (6G) of wireless network technology is designed to support efficient, dependable, and secure Metaverse applications in the smart city while considering the critical requirements of privacy, energy efficiency, high data rates, and ultra-low latencies of those applications [4]. The Metaverse presents a massive research opportunity. It is expected that the global market for Metaverse technology will reach almost $679 billion in 2030, from over $47 billion in 2022. This is an unprecedented annual growth of more than 39% [5]. Gartner considers the Metaverse one of the top 10 strategic technology trends for 2023 [6] and predicted that by 2026, 25% of people will spend at least one hour a day in the Metaverse for work, shopping, education, social interaction, and/or entertainment [7]. Consequently, Facebook has rebranded into Meta Platforms [8] and has been developing Metaverse applications, such as Horizon Workrooms [9] and Horizon Worlds [10], and Microsoft showed increased interest in pioneering the Metaverse by investing in Artificial Intelligence (AI)-enabled telehealth care, Interactive Voice Response (IVR), and virtual assistants [11]. This paper presents the current trends in Metaverse research driven by applications and the need for convergence in several interdisciplinary technologies. The rest of the paper is organized as follows. Section 2 compares related work. In Section 3, we present Metaverse trends and a retrospective analysis of the Metaverse temporal evolution through its definitions in terms of requirements. Section 4 discusses the Metaverse's overall vision and the enabling technologies. We present a layered Metaverse architecture and explain its elements in Section 5. We compare the existing Metaverse development platforms and discuss their advantages and limitations for the realization of Metaverse applications in Section 6, followed by a taxonomy of Metaverse applications in Section 7. We discuss open challenges and propose future directions in Section 8 and conclude in Section 9. ## 2 Related Works There have been few surveys on the Metaverse [12, 13, 14, 15]. We categorize these surveys based on their focus into 3 categories: 1) definitions of the Metaverse [12], 2) requirements [13], and 3) Metaverse enabling technologies, applications, and challenges [14, 15]. [12] performed a systematic review of the literature to synthesize the definition of the Metaverse. The author defined the Metaverse as an immersive, synchronous, and persistent virtual world (overlapping with the physical world) that allows users, represented by avatars, to interact with each other and the environment. [13] described the current status of Metaverse characteristics, which are realism, ubiquity, interoperability, and scalability.
[14] discussed the existing security and privacy threats in the Metaverse. The authors classified the threats into authentication and access control, data management, privacy-related, network-related, economy-related, physical/social effects, and governance-related categories. Similarly, [15] analyzed different security and privacy issues in the Metaverse and presented some possible countermeasures. Furthermore, they described Metaverse enabling technologies and application areas. Different from the above related surveys [12, 13, 14, 15], we focus on a vision of achieving realtime and scalable Metaverse, its enabling technologies, requirements, an architecture perspective, challenges, and future directions. In this paper, we present a comprehensive survey of the Metaverse key requirements, enabling technologies, as well as architecture and development platforms to build realtime and scalable Metaverse. By discussing existing challenges and potential solutions, this survey provides critical insights on how to build realtime and scalable software solutions for developing green and dependable Metaverse. The contributions of this paper are six-fold. * We investigate the temporal evolution of Metaverse definitions to extract the key requirements (i.e., immersive and multisensory interaction, spatiotemporality, interoperability, scalability, heterogeneity, QoS (Quality of Service), and QoE (Quality of Experience)), which are fundamental to developing a realtime and scalable Metaverse by analyzing the temporal evolution of Metaverse definitions. * We discuss the components of Metaverse (i.e., environment, interface, interaction, and data security and privacy), and their enabling technologies (i.e., generative AI, deep learning and machine learning, IoT, blockchain, edge and cloud computing, digital twins, VR, AR, spatial computing, computer vision, web3, and network), to enable users to have an immersive, multisensory, interactive, and secure experience. * We propose a layered architecture, underpinned by the enabling technologies, toward building green and dependable Metaverse distributed computing applications. We divide the architecture into four decoupled layers (infrastructure, distributed computing, platform, and application), where each layer can evolve independently, to ensure the Metaverse applications requirements. * We survey the development platforms for Metaverse applications and compare them to provide insights and useful guidelines for developers into building domain specific Metaverse applications. * We provide a taxonomy of Metaverse applications based on their domains of development (i.e., education, smart healthcare, smart mobility, gaming and entertainment, business, social media, and manufacturing), and categorize the existing Metaverse application accordingly. * We discuss the critical challenges for Metaverse realization (i.e., realtime, scalability, high energy consumption, resource provisioning to optimize QoS and energy consumption, cost and complexity, security and data privacy, the need for governance against abuse, standardization and interoperability, as well as health-related risks), and outline possible solutions for future research directions. Table 1 summarizes the contribution of our work in comparison to the previous related surveys. 
## 3 Metaverse Trends, Definitions, and Requirements ### Trends According to a Gartner report [6], by 2027, over 40% of large organizations worldwide will be using a combination of Web3, spatial computing, and digital twins in Metaverse-based projects aimed at increasing revenue. These technologies are enabled by IoT, edge, and cloud computing integrated systems to satisfy the stringent requirements of Metaverse realtime applications in terms of QoS while respecting a Service-Level Agreement (SLA). We envision that the Metaverse will be rapidly and widely adopted thanks to the projection of being a vendor-independent/portable computing platform. It will have a great impact on the economy, as the Metaverse will increase potential customers' QoE and enable rapid, realtime interactions with users. In addition, it will have its own digital economy enabled by blockchain [16], where users can sell and buy their digital assets using digital currency. The Metaverse is pointed out as one of the 2023 emerging technology trends, as shown in Gartner's radar (Figure 1). The radar [6] shows the stages over time of each emerging technology from early adoption to majority adoption by applications. The range in the radar measures the number of years it will take the technology to cross over from emergence to maturity. The mass represents how substantial the impact of the technologies will be on existing products and markets. It is projected that the Metaverse will take 6 to 8 years for market adoption. As shown in Figure 2, during the last 5 years the search popularity of the Metaverse has been increasing, with a very sharp increase in January 2021 due to Facebook's announcement that it would change its name to Meta to realize the vision of the Metaverse; since then, VR has been falling behind [17]. This is thanks to the rapid technological advancement of VR and its wearable devices [18] (Figure 3), which are integral parts of the Metaverse, so the latter has overtaken VR. Wearable devices include VR Metaverse devices (such as Meta Quest 2 by Reality Labs in Meta, Valve Index, Sony PlayStation VR, HP Reverb G2 by Hewlett Packard, and Haptx Gloves) and AR Metaverse devices (such as Microsoft Hololens, Magic Leap, Mojo Vision, Epson Moverio BT-350, and Google Glass Edition 2) [19], [20]. Table 1: Summary of related surveys (a feature-by-feature comparison of [12], [13], [14], [15], and this survey). Figure 1: Emerging technologies trends (Source: Gartner 2023 [6]). Figure 2: Web search trends worldwide since 2018 for the terms “Metaverse” and “Virtual Reality (VR)”. Figure 3: Metaverse wearable devices.
### Definition, Evolution, and Requirements of Metaverse Over the last two decades, the Metaverse has undergone quite a few reformations/transformations, alongside technological advances, to address the need for a more reliable, seamless, and immersive experience, and accordingly many definitions have evolved. To extract these definitions, we reviewed published articles and reports on the Metaverse. This was done by searching relevant literature in the Association for Computing Machinery (ACM), Elsevier, Institute of Electrical and Electronics Engineers (IEEE), MEDLINE, PubMed, Scopus, and Web of Science databases. Table 2 presents the evolving definitions of the Metaverse. \begin{table} \begin{tabular}{c c c c} \hline **Number** & **Year** & **Source** & **Definition** \\ \hline 1 & 1996 & [21] & “a future version of the Internet which appears to its participants as a _quasi-physical world_. Participants are represented by _fully articulate human figures, or avatars_. Body movements of avatars are computed automatically by the system” \\ 2 & 1998 & [22] & “a _virtual reality world_ envisioned as a large cyber-planet. It contains _homes, corporate headquarters, nightclubs, and virtually every other type of building_ found in reality and some that are not. Individuals from around the world materialize on this cyber-planet, and are represented there by _avatars_” \\ 3 & 2007 & [23] & “a virtual world which is a genre of online community that often takes the form of a computer-based _simulated environment_, through which users can _interact_ with one another and use and create objects” \\ 4 & 2008 & [24] & “an extensive 3D networked virtual world capable of supporting a _large number of people simultaneously_ for social interaction [...] implies the interaction of real people with the virtual environments and agents including avatars with increasing levels of _immersion and presence_ [...] the word metaverse (Meta-Universe) suggests the emergence of a new class of augmented _social interaction_ which we term ‘augmented duality’” \\ 5 & 2008 & [25] & “a system of _numerous_, interconnected _virtual_ and typically _user-generated worlds_ (or Metaworlds) all _accessible through a single-user interface_” \\ 6 & 2010 & [26] & “a user can walk around _realistically_ through an _avatar_ [...] The most realistic interface is a 3D virtual world, also known as a ‘metaverse’” \\ 7 & 2013 & [13] & “refers to a fully _immersive three dimensional digital environment_ in contrast to the more inclusive concept of cyber space that reflects the totality of shared online space across all dimensions of representation” \\ 8 & 2013 & [13] & “an integrated _network of 3D virtual worlds_” \\ \hline \end{tabular} \end{table} Table 2: The definitions of Metaverse.
* [27] "a _3D-based virtual reality_ in which daily activities and economic life are conducted through _avatars_ representing the real themselves" * [28] "metaverse means a world in which virtual and reality _interact_ and co-_evolve_, and social, economic, and cultural activities are carried out in it to _create value_." * [29] "an evolving virtual world with unlimited _scalability_ and _interoperability_. The operators need to construct the basic elements, while innovative user-generated content (UGC) fulfill the universe through users. Therefore, _high efficiency_ content creation is another significant component for interactions between users and the metaverse." * [30] "the next evolution in _social connection_ and the successor to the _mobile internet_" * [31] "refers to a _created world_, in which _people can live_ under the rules defined by the creator" * [32] "is a _massively scaled_ and _interoperable network_ of _real-time rendered 3D virtual worlds_ which can be experienced synchronously and persistently by an _effectively unlimited number of users_ with an individual _sense of presence_, and with continuity of data, such as _identity, history, entitlements, objects, communications, and payments_" * [33] "is the post-reality universe, a perpetual and persistent _multiuser environment merging physical reality with digital virtuality_ [...] enable _multisensory interactions_ with virtual environments, digital objects and people such as virtual reality (VR) and augmented reality (AR) [...] is an interconnected web of social, _networked immersive environments in persistent multiuser platforms_. It enables _seamless embodied user communication_ in _real-time_ and _dynamic interactions_ with digital artifacts" * [34] "it can be something that transcends physical reality described in terms of _time and space_ [...] It can denote a universe distinct from the physical universe but referring to it by summarizing, condensing, or depicting its various aspects [...] It can refer to _one or more potential possible alternatives to the existing universe_" Based on these definitions, we provide a retrospective analysis of the Metaverse requirements which evolved over time. These requirements are as follows (Figure 4): * _Immersive and multisensory interaction:_ Metaverse should allow users to feel and experience emotional as well as psychosocial involvement by rendering a realistic virtual environment [26]. This is achieved through sensory perceptions (e.g., temperature, sound, touch, and sight) and expressions (for instance, gestures). Sensory images need to be produced fast enough for users to perceive them as a continuous flow rather than discrete events. Realtime rendering of Metaverse is crucial to the success of Metaverse, aiming at increasing the Metaverse QoS while respecting SLAs requirements and consequently improving users' QoE. * _Spatiotemporality:_ Metaverse should allow users to freely navigate across different digital worlds with disparate spatiotemporal dimensions which is not possible in the real physical world due to the finiteness of space and irreversibility of time [14]. It should break these boundaries of space and time. * _Interoperability:_ Metaverse should enable interoperability between digital assets and data, collected using different Metaverse wearable devices and IoT devices, disparate virtual spaces implemented using different platforms, for the rendering of a virtual world [13]. 
Consequently, users can seamlessly shuttle across different virtual worlds without interruption. * _Scalability:_ Metaverse should perform efficiently with the increasing number of simultaneous users or avatars, scene complexity, and interactions between users from different virtual worlds [13]. It should scale up automatically, supporting the increasing number of connections, processing, and Input/Output operations, to support realtime interactions and rendering of the virtual worlds, without affecting the QoS and the users' QoE. * _Heterogeneity:_ Metaverse should support the use of heterogeneous wearable devices (e.g., hand-based, non-hand-based, and head-mounted), heterogeneous data types (e.g., structured or non-structured, text or image), different communication modes and network protocols (e.g., cellular, Wi-Fi, 5G, and 6G), and diverse human perception, behavior, and psychology [34]. * _Quality of Service:_ Metaverse should satisfy QoS requirements in terms of ultra-low latency, high bandwidth, and high data rate for realtime rendering of complex scenarios for multiple users. The 5G and next-generation 6G networks provide promising ultra-rapid communication protocols for a highly heterogeneous and dynamic setup such as the Metaverse [4]. * _Quality of Experience:_ Metaverse should provide a seamless and uninterrupted experience to the users by ensuring there is no lag/delay in the interaction between the physical and virtual worlds, high visual quality, and a realtime tactile and control experience [35]. The users should have an effective and immersive virtual encounter using an easy-to-use interface. Figure 4: Requirements of Metaverse (_Note: The numbers represent the definition number from Table 2_). ## 4 Enabling Technologies for Metaverse Computing The Metaverse is a new computing era where the Internet becomes a shared, persistent, and immersive 3D virtual space that is dynamic, open, and interoperable, where people, avatars, robots, and the Internet of Things can interact as if they are in the physical world. Figure 5 depicts the Metaverse components, which comprise four basic elements: 1) environment, 2) interface, 3) interaction, and 4) data security and privacy [36]. The environment component is necessary to identify the sights and sounds from the physical world to design and render a digital world. The visual environment can be composed by recognizing and rendering objects and scenes, whereas the sound environment can be synthesized using sound and speech recognition. Furthermore, the movement of avatars in a virtual world is facilitated by motion rendering. The interface component enables users to have an immersive and multisensory experience. The interaction component aids in bi-directional interactivity between the physical and virtual worlds via multimodal interaction. In addition, multi-request interaction allows avatars to perform multiple tasks simultaneously, such as motion, conversation, and sensing. This component enables 3D interaction and human behaviour modelling to enrich the overall experience. The security and privacy component ensures the integrity and privacy of users' confidential data, such as biometric information and personal identities. Furthermore, it provides security for the communication network, wearable devices, and underlying software. Figure 5: Metaverse: Enabling Technologies. In the following, we explain different enabling technologies for the Metaverse (Figure 5).
* _Generative Artificial Intelligence:_ It uses learning models for examining patterns in existing data to generate new content. Generative AI will enable users to customize their virtual environments for a more personalized and engaging experience. It will make the Metaverse more accessible by providing tools for bi-directional text and speech translations, accommodating different languages and cultural backgrounds, as well as assisting differently-abled users while promoting equity, diversity, and inclusion. * _Deep Learning and Machine Learning:_ For the Metaverse to provide an immersive and personalized experience to avatars and users, it should support context awareness, which relies on smart observations of information related to avatars, such as their locations. For instance, it can provide recommendations on nearby virtual places to visit, or create a suitable virtual environment for a patient based on their health status and conditions (such as a particular disability) to provide comfortable user experiences. This can be realized using the power of machine and deep learning by developing context-aware predictive models. * _Internet of Things:_ IoT devices and sensors are used to generate data in the physical world for the creation of a virtual world. Context-aware data generated from a network of IoT sensors and devices would lead to different decision-making approaches, depending, for instance, on the conditions under which the data has been generated, e.g., heart acceleration for a running user versus a sitting one. * _Blockchain:_ It is a privacy-preserving enabler for data and associated transactions generated in the virtual world [16][37]. Furthermore, for Metaverse applications that involve financial transactions, a Blockchain-based marketplace can be used for buying and selling virtual goods known as Non-Fungible Tokens (NFTs) (art, music, videos, photography, trading cards, etc.) [38]. Transactions in the virtual worlds can be tracked via a persistent and immutable distributed ledger. * _Edge and Cloud Computing:_ It will be a crucial part of the Metaverse, providing an integrated computing system utility for the virtual platform. The Cloud is highly heterogeneous and provides highly scalable storage and computing capabilities [39] for Metaverse applications, with various tools to develop models for reactive and/or predictive data analytics to support a fully immersive and interactive virtual world, based on data generated from the physical world. Edge data centers, close to users, enable low-latency and realtime computing and inferences for Metaverse applications and avatars. * _Digital Twins:_ It is a digital replica of a physical entity or object that is continuously updated with the performance and maintenance data of the physical system [40], [41]. It will aid in determining and predicting the behavior of any physical entity in the Metaverse environment. * _Virtual Reality:_ It is an immersive experience where the physical space is replaced by a computer-simulated digital environment with realtime interaction [2]. A user interacts with the digital world using input devices such as hand-held controllers and haptic gloves. VR includes the digital world and excludes physical space. * _Augmented Reality:_ It supplements the physical world, instead of replacing it as in VR, by digitally superimposing digital information and objects on physical objects for simultaneous interactions with both virtual and physical spaces [3]. AR does not allow interactions with the digitally superimposed data.
In Enhanced AR, known as Mixed Reality, the physical world interacts with the superimposed digital data in realtime [42]. MR mitigates the limitations of both VR and AR by including the physical world and the ability to interact with the digital world. * _Spatial Computing:_ It is a form of human-computer interaction that retains and controls objects in the physical world. Spatial computing in the Metaverse will enable an immersive and interactive experience for users that will diminish the line of difference between the physical and virtual worlds [32]. * _Computer Vision:_ It refers to the class of AI that develops learning models for visual data such as images and videos. In the Metaverse, computer vision will be used to track users in the physical space and represent them as avatars in the virtual world [43]. * _Web3:_ It refers to the 3rd generation of the web, which aims to realize the vision of a decentralized web: web1 offered static content, web2 added dynamic content and Social Media features, and web3 advocates more decentralization so that users are in control of their data. Originally called the Semantic Web by Tim Berners-Lee, the inventor of the Web [44], it allows information to be processed more intelligently through Big Data, Machine Learning, and decentralized Blockchain ledger technology [45]. * _Network:_ Metaverse applications require ultra-reliability and a high data rate for the wireless system to ensure Quality of Experience. The previous-generation 4G network is not capable of providing low latency and high reliability for Metaverse applications. The Enhanced Mobile Broadband (eMBB), Massive Machine-type Communications (mMTC), and Ultra-Reliable Low-Latency Communications (URLLC) services of 5G networks can support VR and AR applications by provisioning large bandwidth and low latency with a certain level of reliability. However, 5G might degrade QoE for dynamic, interactive, and immersive applications. The Computation Oriented Communications (COC), Contextually Agile eMBB Communications (CAeC), and Event-Defined Ultra-Reliable Low-Latency Communications (EDuRLLC) application services of the 6G network [4] will meet the stringent visual and physical requirements of high-bandwidth and low-latency communication for an immersive Metaverse experience. ## 5 Metaverse: An Architecture Perspective Figure 6 presents an overview of the Metaverse layered architecture that enables an immersive experience for users in different application domains such as smart healthcare, smart mobility, smart education, business, manufacturing, gaming and entertainment, and social media. We divide the architecture into four layers: infrastructure, distributed computing, platform, and application. The infrastructure layer consists of IoT sensors and Metaverse wearable devices, edge and cloud computing systems, storage, and networking components. IoT sensors aid in collecting data from the physical world to construct a virtual world, whereas wearable devices provide an immersive experience to the user through a simulated avatar that replicates a user's natural reactions and emotions in a digital world. For instance, when playing an online game, the user's body language, heart rate, and breathing pattern will be replicated on the corresponding avatar, creating an immersive sensation and increasing QoE. IoT enables the creation of an immersive virtual ecosystem.
For instance, it enables the implementation of a virtual smart city Metaverse where avatars can participate in testing the development of the smart city ecosystem, such as testing vehicles in specific conditions, or the implementation of logistics deployments, such as fleets and traffic flow for urban planning and construction. Head-mounted wearable devices (HMDs), such as Meta Quest 2 and Valve Index, show the virtual world (3D images and videos) on the device display, and a beacon connects to mobile devices and provides location-based content services to the user [20]. Global Market Insights predicted that the beacon technology market would surpass US$25 billion by 2024 [46]. Combined with an HMD, beacons can provide content services to the HMD so that the user's virtual world is enriched with information of interest based on the user's location. Connected to the HMD via Bluetooth in a virtual museum tour, beacons exhibit content that allows visitors to better understand the artifacts on display, thus providing the grounds for building a Metaverse museum tour [47]. Metaverse wearable devices make use of the IoT, digital twins, VR, and AR technologies to satisfy the immersiveness and multisensory interaction requirement of the Metaverse. The gateway devices, such as Raspberry Pi, mobile phones, and computers, enable data authentication, aggregation, and preprocessing. The edge servers are placed within routers and base stations in proximity to IoT devices, whereas the cloud servers are remote and geo-distributed. The virtual world is rendered in the cloud data centers as they have higher computing and storage capabilities than edge servers. However, cloud servers result in higher latency than edge servers, as they are situated far from the end users. The edge and cloud storage components store the ledger that contains data for different events such as the creation of the Metaverse, simulation, optimization, prediction, monitoring, and controlling. The storage capabilities of cloud servers are higher than those of edge servers. The cloud and edge computing paradigms ensure the scalability requirement of the Metaverse. The network component provides communication of user behaviors and rendered virtual scenes among different components and end-users. Figure 6: Overview of Metaverse layered architecture. The distributed computing layer provides local access to data, replication for fault tolerance and availability, a consensus mechanism, and interoperability. A consensus protocol is used by the blockchain network to reach an agreement regarding data access, virtual world creation, and ledger updates in a peer-to-peer manner. In addition, this layer uses AI approaches for data sharing, context-aware data caching, and energy-performance-aware resource management, scheduling, and communication. The platform layer enables Remote Procedure Calls (RPC) [16], web Application Programming Interfaces (APIs) [16], REpresentational State Transfer (REST) APIs [16], Digital Twin APIs, blockchain APIs, and avatar APIs for the creation of virtual worlds and communication between the network participants. The applications developed for different domains such as smart healthcare, smart mobility, smart education, business, manufacturing, gaming, and social media can be accessed by users over a 6G network. The 6G network with high data rates and ultra-reliable low-latency communication enables an immersive experience for the users in realtime.
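To make the edge-versus-cloud tradeoff described above concrete, the following minimal sketch (our illustration rather than a mechanism from the surveyed systems; the site names, throughput figures, and latency budget are hypothetical) chooses where to render a frame by estimating the motion-to-photon latency at each candidate site as the network round trip plus the render time, and picking the fastest site that still meets the latency budget.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    network_rtt_ms: float       # headset <-> site round-trip time
    gpu_throughput_gops: float  # rendering throughput available at the site

def estimated_latency_ms(site: Site, frame_gops: float) -> float:
    """Estimated motion-to-photon latency: network round trip plus render time."""
    render_ms = frame_gops / site.gpu_throughput_gops * 1000.0
    return site.network_rtt_ms + render_ms

def place_rendering(frame_gops: float, sites: list[Site], budget_ms: float):
    """Pick the lowest-latency site; return (None, latency) if the budget cannot be met,
    in which case the application could degrade scene quality or split the workload."""
    best = min(sites, key=lambda s: estimated_latency_ms(s, frame_gops))
    latency = estimated_latency_ms(best, frame_gops)
    return (best, latency) if latency <= budget_ms else (None, latency)

if __name__ == "__main__":
    sites = [
        Site("edge",  network_rtt_ms=5.0,  gpu_throughput_gops=2_000.0),
        Site("cloud", network_rtt_ms=60.0, gpu_throughput_gops=20_000.0),
    ]
    for frame_gops in (50.0, 500.0):  # a light scene and a heavier scene
        site, latency = place_rendering(frame_gops, sites, budget_ms=100.0)
        print(f"{frame_gops:>6} GOPs -> {site.name if site else 'no site fits'} ({latency:.1f} ms)")
```

Under these toy numbers, the light scene stays on the nearby edge server, while the heavier scene is only worth sending to the cloud because the render-time savings outweigh the longer network round trip; a production resource manager would additionally account for queueing, bandwidth, and energy, as discussed in Section 8.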
The energy consumption of edge and cloud computing resources is increasing at an alarming rate, leading to high carbon emissions [48]. Consequently, the infrastructure layer uses AI approaches for energy-aware resource management, application communication, and execution. This ensures the sustainability of the underlying architecture and reduces global warming. VR and AR technologies are used across the platform and application layers to provide immersive and interactive experiences to users in virtual worlds. Furthermore, to provide security, blockchain technology is underpinned across all four layers for user authentication by encryption technique [49] and data integrity using hashing mechanism [50]. These layers use generative AI, DL, and ML approaches as the underlying hardware and software components are AI-enabled to server metaverse AI applications. The four layers are decoupled for flexibility where each layer can evolve independently. The realization of the Metaverse depends on a massive amount of data generated using wearable and IoT devices. This data can be categorized into 1) Big Stream transient data, such as realtime localization captured from GPS, and 2) Big Streams persistent data, such as digital twin replicas including avatars, digital sensors, and digital space stored in a cloud data center. Metaverse applications require realtime decision-making and motion rendering for immersive and user-centric QoE that eliminates the line of difference between the physical and virtual worlds. In addition, in the context of the collaborative Metaverse, there is a need for strict security and privacy requirements, adding overhead on how to identify avatars and how to process and transmit users' sensitive information over the network. These requirements in terms of ultra-low latency, high reliability, high data rates, security, and privacy make Metaverse applications bandwidth-compute-storage-hungry. Despite cloud computing can handle such processing and storage requirements for massive data, it is unable to guarantee the QoS (e.g., latency) and QoE (e.g., visual quality) requirements of applications. Consequently, edge computing is introduced to process applications close to the users to provide a realtime immersive experience. However, directing all users' requests from a Metaverse application to be processed by the edge may lead to overhead. Consequently, AI-based distributed computing and task offloading in an edge and cloud-integrated system has been introduced [51] which is used to satisfy the QoS, QoE, and interoperability requirements of Metaverse applications. ## 6 Metaverse Development Platforms Several platforms have been developed to aid in the creation of Metaverse applications. Though these platforms are still in their infancy, they are a step forward to the realization of Metaverse applications. We divide these platforms into 3 categories: 1) gaming, 2) social media, and 3) open platform to create new applications. Most of the advancement in the Metaverse so far is done in the domain of online gaming, where entertainment and socializing among peers take place [52]. Table 3 presents a comparison between the Metaverse development platforms. As stated in the table, the majority of the development platforms support immersive experiences supporting the creation of avatars [53, 54, 55, 10, 56, 57]. However, among those platforms, the Horizon workrooms development platform [9] does not provide supporting documents that would aid the developers. 
In addition to the creation of avatars, the development platforms [53, 54, 55, 10] involve interactive 3D VR and AR, visual editor/drag-and-drop functionality, realistic physics simulation, and visual and interactive elements. These platforms run on top of multiple operating systems and devices. In particular, [53] runs on the Windows, macOS, Linux, iOS, and Android operating systems, and on Xbox, PlayStation, and Nintendo Switch devices, [10] on the Microsoft Windows operating system and the Oculus Quest device, [54] on the Windows operating system and on VR headset devices including Oculus Rift and HTC Vive, [55] on the Windows, Linux, macOS, and Classic Mac OS operating systems, and [57] on the Android and iOS operating systems and VR devices. A report published in July 2023 stated that Roblox [58] is one of the most used Metaverse development platforms, with over 56 million daily active users [59]. Regarding security, Gather [60] stores data in a cloud using encryption techniques, whereas the Unreal engine uses blockchain functionalities to ensure security [61]. Table 3: Comparison between the Metaverse development platforms. ## 7 A Taxonomy of Metaverse Applications Metaverse is a user-centric computing paradigm that enables immersive virtual and physical interactions for users and their avatars in dynamic, realtime, immersive, and interactive smart city applications. It opens up opportunities for a thriving economy in a world where people can game, shop, meet, work, and learn from each other, all from their physical location. Metaverse goes beyond VR and AR by making applications' contextual contents persistent, so that users can leave and re-enter the Metaverse to resume collaborations and transactions that have been taking place. Figure 7 shows a taxonomy of Metaverse applications. We categorize these applications into education, smart healthcare, smart mobility, gaming and entertainment, business, real estate, social media, and manufacturing. * _Education:_ Metaverse can enhance the learning experience by allowing teachers to create 3D surroundings wherein students can interact with virtual objects in space, for example by inspecting, moving, and rotating them, to understand different concepts, which is important in topics such as biology, anatomy, cosmology, and geometry. Furthermore, students can simulate 3D space to conduct scientific experiments in different scenarios and analyze their results from different angles. In addition, the Metaverse can aid in training professionals: using 3D models in digital space will be effective, efficient, and safer compared to training them on physical machines. * Gather developed by Gather Presence Inc. [60] - It enables users to create virtual classrooms for education without an immersive experience. In a Gather virtual classroom, students and faculty are represented using 3D characters that can move around the digital space. Students can talk to each other and collaborate on projects, and faculty can hold open office hours and talk with students pre or post-lectures.
Figure 7: Taxonomy of Metaverse applications. * _Smart healthcare:_ Metaverse can revolutionize the healthcare industry by providing personalized patient-centric care regardless of patients' and healthcare providers' locations. Furthermore, 3D models can aid in more accurate diagnostics and surgeries as doctors can examine a patient from different angles in the digital world. Metaverse can ensure the security and privacy of patients' sensitive information by storing and sharing data in virtual environments using blockchain. To our knowledge, there is no deployed application in this category so far. * _Smart mobility:_ Metaverse can reshape transportation through the concept known as MetaMobility, in which smart devices or robots will interact with users to provide mobility services. It can aid in traffic management and parking by allowing users to view 3D models of different locations in realtime. In addition, the Metaverse can allow the creation of intelligent vehicles by creating and simulating 3D models before production. To our knowledge, there is no deployed application in this category so far. * _Gaming and entertainment:_ Metaverse-based virtual ecosystems connect gamers in a virtual shared space to interact, play games, and socialize. It gives users a multisensory, immersive gameplay experience. Furthermore, it allows users to create their own subgames in 3D space. * Second Life developed by Linden Lab [55] - Is an online multiplayer gaming platform for an immersive experience that enables users to create representative avatars and interact with other users and content created by other users within a multiplayer digital world. It simulates a free market economy as players can buy and sell virtual goods with virtual money. * Sandbox developed by Animoca Brands [63] - Is an online gaming platform that enables users to create games, monetize games, and play games created by other users. * Roblox developed by Roblox Corporation [58] - Is an Ethereum-based online gaming platform that enables users to program games and play games created by other users. Players can use Robux, a virtual currency, to make in-app purchases. * The Oasis [64] - Is an online multiplayer gaming platform that harnesses the power of NFTs and enables the creation of multiplayer social games with decentralized finance. * Minecraft developed by Mojang Studios, Xbox Game Studios, Telltale Games, 4J Studios, Double Eleven, and Other Ocean Interactive [65] - It is an online multiplayer gaming platform for an immersive experience that enables users to create and share their virtual world online. * Pokemon Go developed by Niantic, Inc. [66] - Is an AR-based multiplayer gaming platform for an immersive experience that uses a mobile device's GPS to locate, capture, train, and battle virtual creatures known as Pokemon. * Axie Infinity developed by Sky Mavis [67] - Is an NFT-based online video game that enables users to play games and make in-game purchases using Ethereum-based cryptocurrencies. Users can collect NFTs that represent digital pets in Axie Infinity, known as Axies, and can battle Axies collected by other users in the game. * Fortnite developed by Epic Games and People Can Fly [68] - Is an online gaming platform that enables users to create games and play games with other users. * Illuvium developed by Illuvium Decentralized Autonomous Organization [69] - It is an interoperable blockchain game developed on the Ethereum blockchain.
Users in Illuvium are immersed in a 3D digital world where they explore and collect digital beasts called Illuvials. Illuvials collected by players are represented with NFTs on the blockchain, and they are used to battle other players to win ether cryptocurrency. * Metahero developed by Metahero [70] - It is a 3D scanning technology that digitally renders a real-world object based on collected appearance data. Metahero uses NFT smart contracts to enable the creation of avatars. Users can use their avatars to play immersive games. * Sansar developed by Linden Lab and Metaverse Investment Ltd. [54] - It is a social virtual reality platform where users are represented using avatars with speech-driven facial emotions and motion-driven body animations. Sansar allows users to design interactive and immersive games for VR and desktop, and play games created by other users. * Horizon Worlds developed by Meta [10] - Is a VR online gaming platform that enables users to create online virtual games and move and interact with other players in the virtual world. The game uses full 3D motion and can be played using an Oculus Rift S or Meta Quest 2 VR headset. * Bloktopia developed by Bloktopia [71] - Is a platform designed as a decentralized 21-floor VR skyscraper that allows users (represented using avatars) to play games and earn revenue. * Dungeons & Dragons by Gary Gygax and Dave Arneson [72] - Is a series of online games which allow avatars to adventure into interconnected virtual worlds with their own regulations and rewards. * _Business:_ Metaverse can enable users to create virtual 3D stores where customers can remotely browse and order products. Furthermore, it can allow customers to engage with brands more personally, leading to trust. Virtual worlds can aid in creating interactive marketing advertisements that enable users to explore and interact with products digitally. Metaverse can also be useful for remote sales and meetings in a 3D virtual space where people from around the world can interact. It enables users to buy and sell properties virtually using NFTs. Realtors can create a 3D replica of a property, and potential buyers can take an immersive virtual tour. * Bloktopia developed by Bloktopia [71] - It is a platform designed as a decentralized 21-floor VR skyscraper that allows users (represented using avatars) to learn, play, and earn revenue by purchasing, selling, or leasing virtual real estate. * Horizon Workrooms developed by Meta [9] - It is a virtual and immersive office that allows team members to meet, collaborate, brainstorm ideas, and share presentations. Horizon Workrooms can be accessed using a Meta Quest VR headset or a web browser. * Gather developed by Gather Presence Inc. [60] - It enables users to create virtual headquarters for remote teams without an immersive experience. In a Gather virtual headquarters, team members are represented using 3D characters that can move around the digital space, communicate with other team members, and schedule meetings. It supports the integration of Slack, Google/Outlook calendar, and Outlook. * Decentraland developed by Decentraland Foundation [73] - Is a 3D virtual world where users can buy virtual real estate plots as NFTs using the MANA cryptocurrency based on the Ethereum blockchain. Furthermore, Decentraland enables designers to create and sell clothes and accessories for the avatars to use in the digital space.
* SuperWorld developed by SuperWorld [74] - Is a virtual estate marketplace where a virtual world in AR is digitally mapped over the earth. SuperWorld consists of approximately 64.8 billion blocks of virtual estate that users can buy or sell using a crypto wallet. Each virtual real estate transaction is recorded in the blockchain. * Voxels developed by Nolan Consulting Limited [75] - It enables users to build, explore, and buy digital arts and NFTs using the Ethereum blockchain. In addition, Voxels allows users to create costumes for the avatars and make friends in the digital space. * _Social media:_ Metaverse can allow users to interact with each other in an immersive and lifelike manner using avatars. * IMVU developed by IMVU [76] - Is an online and immersive social network that enables users to create 3D avatars, connect to other users, and chat with different users around the world. * Sansar developed by Linden Lab and Metaverse Investment Ltd. [54] - Is a social virtual reality platform where users are represented using avatars with speech-driven facial emotions and motion-driven body animations. Sansar allows users to have conversations in VR, watch videos, and play games with each other. Enables users to create virtual interactive sessions and events. * Uhive developed by Uhive [77] - It enables users to create virtual spaces that will allow them to share their content in digital space, view content created by other users, and interact with other users. It includes Uhive Token (HVE2), a native cryptocurrency of the Uhive social network, that allows users to buy virtual spaces, reward content creators, send tips or donations to other users, trade on cryptocurrency exchanges, and make in-app purchases. * _Manufacturing:_ Metaverse can be used to test products under different scenarios and perform predictive maintenance. It can play the role of assistant for workers in the field to manipulate objects virtually before the physical repair aiding in saving time. It can include digital twins for machines in the field to repair equipment across the globe using 3D virtual space without human intervention in the field. To our knowledge, there is no deployed application in this category so far. ## 8 Open Challenges and Research Directions The proposed realtime scalable-centric vision comprises an architecture that is user-centric and enables different users and avatars to interact in the Metaverse framework. It enables interaction in a way to increase the QoS and satisfy the SLA requirements. The framework promotes scaling up to meet the heterogeneous and dynamic requirements of Metaverse applications. The open challenges to the Metaverse framework include the following: * **Realtime:** Metaverse applications require ultra-low latency response for realtime virtual world rendering to provide multiple users with seamless and immersive QoE. These applications have stringent network bandwidth and ultra-low latency requirements for realtime responses. For instance, a latency of less than 100 ms is required for realtime gaming [78]. The latency requirements become more stringent with an increasing level of interaction with a Metaverse application. One of the solutions to achieve low latency is to process a portion of the Metaverse application on close by edge servers whereas the computation-hungry scene rendering can be performed in the cloud. Huynh et al. 
[79] proposed to optimize the offloading portion, edge caching policies, bandwidth allocation, and computation resources to guarantee the stringent low-latency requirements of the digital twin-enabled Metaverse. Yu, Chua, and Zhao [80] proposed a multi-agent reinforcement learning approach to ensure low latency in the Metaverse by optimizing computation offloading, transmission power, and channel allocation decisions. * **Scalability:** In the Metaverse market, the number of users is expected to reach about 1,461 million by 2030 [81]. The applicability of the Metaverse for scenarios that require the creation of multiple avatars simultaneously, for instance in a virtual conference, might be limited. This is because, with an increasing number of avatars, there will be a communication and computation overhead leading to violations of the Metaverse's stringent low-latency and high-bandwidth requirements. Consequently, the scalability of the Metaverse is a major issue. [82] found that Workrooms, a Metaverse platform, may suffer from scalability issues when there are more than 10 participants. This is due to an increase in the downlink bandwidth requirement, which leads to communication overhead and puts a burden on network scalability. To address the scalability issues, several solutions have been proposed in the literature. One of the most prominent approaches is the peer-to-peer communication and computing model [83], where data will be stored and processed on multiple network-wide peer servers and then combined on a centralized server to render the virtual world [82]. However, rendering multiple avatars on a centralized server will still be resource-hungry. This issue could be further addressed by performing rendering on remote cloud servers with higher computing capabilities [84]. Consequently, the entire scene with multiple users will be rendered on remote servers and only the scene will be transmitted to the client. However, data transmission from the cloud servers to the client requires ultra-low latency. Furthermore, the scalability of the blockchain, trading, and privacy-preserving layer for the Metaverse is a major issue. A blockchain network grows rapidly in terms of participants and data, leading to an increasing number of transactions and block validations. One of the notable approaches is to implement a lightweight peer-to-peer blockchain in which the network is divided into clusters based on the geographical locations of network participants [85]. A copy of the ledger is maintained per cluster by the cluster head. The scalability of a blockchain network can be further improved by using a non-encapsulated integrated blockchain-cloud architecture where the data from the physical world, used to render a virtual world, is stored in a cloud database while the metadata, such as data hashes, scene rendering events, and access control policies, is recorded in a blockchain ledger for security and privacy [86].
Possible solutions include introducing energy-efficient hardware components [89] and/or energy-aware resource provisioning strategies in edge and cloud data centers [48]. * **Resource provisioning to optimize QoS and energy consumption:** The Metaverse is expected to be accessed by multiple users who might be in different parts of the world. Rendering this huge amount of data is computationally expensive and requires careful provisioning of edge and cloud resources to optimize energy consumption and Metaverse QoS. For instance, when a user is mobile, the latency between the user's motion and the motion of their avatar, as perceived by other users, is a crucial QoS metric to optimize. One important approach is to devise algorithms that efficiently allocate resources in distributed and cloud computing environments for Metaverse applications and that can be evaluated using formal modelling [90]. In addition, several works in the literature have proposed QoS-aware [91], [92] resource provisioning and energy-aware [48], [93] approaches in an integrated IoT, edge, and cloud computing environment that aid in the realization of seamless Metaverse applications while optimizing the energy consumption of the edge and cloud infrastructure alongside the performance of IoT applications with stringent requirements. * **Cost and complexity:** The complexity of implementing and deploying a Metaverse application and the associated cost of hardware components and devices are major obstacles to the adoption of the technology. For instance, the Meta Quest Pro, a headset device, costs up to 1000 USD [94]. Cost-effective hardware devices and simpler implementation solutions should be developed to mitigate this issue. * **Security and data privacy:** The Metaverse requires users to provide their identification as well as biometric information to access headset devices, leading to security and privacy concerns. Traditional secure communication protocols, such as TLS and DTLS, will no longer be enough to ensure data privacy. Furthermore, the Metaverse will rely heavily on Digital Twin technology to communicate between physical and virtual worlds. If the learning models supporting digital twins are attacked in the virtual world, the consequences will propagate to the physical world, threatening security and data privacy. One possible solution to ensure the security and privacy of users' information and virtual models is the use of blockchain [16]. In addition, avatars and digital twins can hide identities, leading to inequalities and biases, making it challenging to achieve fairness in virtual worlds. * **The need for governance against abuse:** Harassment and bullying are major concerns in the Metaverse that require governance, as they may lead to mental health issues. The Metaverse is not limited to text- or voice-based bullying, as in social media, but also enables body movement-based harassment through avatars [95, 96]. To address this issue, Meta introduced a safety feature called Safe Zone, which allows users to activate a protective bubble when they feel threatened; other users are then unable to talk to, touch, or interact with the safe zone-enabled user [97]. However, existing solutions are limited to blocking/muting the user who is bullying/harassing or reporting community standard violations. There is a lack of governance around content moderation that should be addressed. 
* **Standardization and interoperability:** With increasing attention toward the Metaverse, various companies, developers, and organizations are developing their own Metaverse platforms. These platforms have disparate architectures and programming languages, and use different hardware components and IoT devices. Consequently, it becomes difficult to create avatars using data from different platforms and devices. Standardization will help organizations develop platforms that allow interoperability. To address this issue, initiatives such as the Metaverse Standards Forum have been established [98]. * **Health-related risks:** Extensive use of the Metaverse can lead to physical and/or mental health issues [99, 100, 101]. Physical health conditions include motion sickness, accidents due to collisions with nearby objects, and eye fatigue [99, 100]. Mental health problems, such as depression and anxiety, can be caused by a lack of physical interaction or dissociation from reality [101]. ## 9 Summary and Conclusions The Metaverse is an online immersive 3D virtual environment where people, via their avatars, can interact with digital objects as if they were real. Unlike VR/AR platforms, which focus on the development of monolithic centralized applications, the Metaverse platform enables the development of distributed VR/AR applications, giving rise to virtual 3D immersive social interactions between avatars representing humans. The evolution of the design of the upcoming generation of networks and mobile systems will depend on the innovation of users in designing new applications. The Metaverse is an ideal emerging technology to influence this domain by providing new types of immersive interactivity, dynamic and evolving data, and the required computation resources for creating revolutionary applications. With the Metaverse, the Internet goes beyond being a facilitator of the exchange of information and ideas among users, to being an enabler for sharing virtual objects and 3D immersive communication in a peer-to-peer network in realtime. In this paper, we presented a taxonomy of Metaverse applications and their limitations to shed light on smart city application domains where creative Metaverse applications should be developed. The proliferation of development platforms motivates us to provide a comparison of the existing platforms along with their features and limitations for developers. While AI is an enabler for developing immersive Metaverse applications, it is also a driving force to provide intelligent solutions for a realtime and scalable Metaverse, as highlighted by our proposed layered architecture. Empowering edge and cloud computing data centers with AI-based solutions is crucial to efficiently schedule Metaverse requests and dynamically provision the necessary resources for increasing the Metaverse QoS and QoE and satisfying its SLA requirements. In proposing a new framework for Metaverse applications, we highlighted the associated challenges, ranging from realtime performance, scalability, provisioning, and high energy consumption to cost and complexity, data privacy, the need for governance, standardization and interoperability, and health-related risks. ## Acknowledgement The first author would like to thank the HCI Lab team members of the School of Computing and Information Systems of the Faculty of Engineering and Information Technology at The University of Melbourne for the Lab tour of VR equipment and applications. 
This research is funded by the National Water and Energy Center of the United Arab Emirates University (Grant 12R126).
2303.05540
BRST Cohomology is Lie Algebroid Cohomology
In this paper we demonstrate that the exterior algebra of an Atiyah Lie algebroid generalizes the familiar notions of the physicist's BRST complex. To reach this conclusion, we develop a general picture of Lie algebroid isomorphisms as commutative diagrams between algebroids preserving the geometric structure encoded in their brackets. We illustrate that a necessary and sufficient condition for such a diagram to define a morphism of Lie algebroid brackets is that the two algebroids possess gauge-equivalent connections. This observation indicates that the aforementioned set of Lie algebroid isomorphisms should be regarded as equivalent to the set of local diffeomorphisms and gauge transformations. Moreover, a Lie algebroid isomorphism being a chain map in the exterior algebra sense ensures that isomorphic algebroids are cohomologically equivalent. The Atiyah Lie algebroids derived from principal bundles with common base manifolds and structure groups may therefore be divided into equivalence classes of isomorphic algebroids. Each equivalence class possesses a local representative which we refer to as the trivialized Lie algebroid, and we show that the exterior algebra of the trivialized algebroid gives rise to the BRST complex. We conclude by illustrating the usefulness of Lie algebroid cohomology in computing quantum anomalies, including applications to the chiral and Lorentz-Weyl (LW) anomalies. In particular, we pay close attention to the fact that the geometric intuition afforded by the Lie algebroid (which was absent in the naive BRST complex) provides hints of a deeper picture that simultaneously geometrizes the consistent and covariant forms of the anomaly. In the algebroid construction, the difference between the consistent and covariant anomalies is simply a different choice of basis.
Weizhen Jia, Marc S. Klinger, Robert G. Leigh
2023-03-09T19:04:06Z
http://arxiv.org/abs/2303.05540v3
# BRST Cohomology is Lie Algebroid Cohomology ###### Abstract In this paper we demonstrate that the exterior algebra of an Atiyah Lie algebroid generalizes the familiar notions of the physicist's BRST complex. To reach this conclusion, we develop a general picture of Lie algebroid morphisms as commutative diagrams between algebroids preserving the geometric structure encoded in their brackets. We illustrate that a necessary and sufficient condition for such a diagram to define a morphism is that the two algebroids possess gauge-equivalent connections. This observation indicates that the set of Lie algebroid morphisms should be regarded as equivalent to the set of local diffeomorphisms and gauge transformations. Moreover, a Lie algebroid morphism being a chain map in the exterior algebra sense ensures that morphic algebroids are cohomologically equivalent. The Atiyah Lie algebroids derived from principal bundles with common base manifolds and structure groups may therefore be divided into equivalence classes of morphic algebroids. Each equivalence class possesses a representative which we refer to as the _trivialized Lie algebroid_, and we show that the exterior algebra of the trivialized algebroid gives rise to the BRST complex. We conclude by illustrating the usefulness of Lie algebroid cohomology in computing quantum anomalies. In particular, we pay close attention to the fact that the geometric intuition afforded by the Lie algebroid (which was absent in the naive BRST complex) provides hints of a deeper picture that simultaneously geometrizes the consistent and covariant forms of the anomaly. In the algebroid construction, the difference between the consistent and covariant anomalies is simply a different choice of basis. Introduction The geometric analysis of gauge theories is a rich area of physics which is deeply interconnected with mathematics [1, 2, 3, 4, 5, 6]. The historical approach to quantifying topological behavior in gauge theories runs through the BRST formalism, which was originally introduced to facilitate the covariant quantization of gauge theories [7, 8, 9]. It was subsequently realized that the BRST formalism gives rise to an exterior algebra, later dubbed the BRST complex [10, 11, 12, 13, 14, 15, 16], which can be used to calculate cohomology classes relevant to quantum anomalies [17, 18, 19, 20, 21, 22, 23, 24, 25]. Starting from a principal bundle \(P(M,G)\), the basic objective of the BRST complex is to design an exterior algebra that combines the de Rham cohomology of the base manifold \(M\) with the cohomology of the local gauge algebra associated with the structure group \(G\). The BRST complex accomplishes this task in a series of steps. First, it takes a local section of \(P(M,G)\) to define the gauge field \(A\), which descends from a bona-fide principal connection. In this way, it forgets about the vertical sub-bundle of \(TP\), and restricts its attention only to the de Rham cohomology of the base manifold. Next, the vacuum left behind by the vertical sub-bundle is filled by introducing a graded algebra generated by a set of Grassmann valued fields \(c^{A}(x)\) called ghosts. In this way, one obtains the BRST complex as an exterior bi-algebra consisting of \(p\)-forms on \(M\) contracted with \(q\) factors of the ghost field, where the number \(q\) is referred to as the ghost number. A priori, the ghost fields have no geometric interpretation, rather being interpreted as a computational device. 
However, it has been argued that a geometric interpretation for the ghost fields exists as the "vertical components" of an extended gauge field [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]. The basic idea behind this interpretation is to contract the ghost fields with the set of Lie algebra generators \(c=c^{A}\otimes\underline{t}_{A}\) and define the extended "connection" form \(\hat{A}=A+c\) by appending the ghost field to the gauge field. Viewing \(\hat{A}\) as a connection, it is natural to define an associated curvature \(\hat{F}=\mathrm{d}_{\mathrm{BRST}}\hat{A}+\frac{1}{2}[\hat{A},\hat{A}]\), where the coboundary operator of the BRST complex is identified as \(\mathrm{d}_{\mathrm{BRST}}=\mathrm{d}+\mathrm{s}\), which is simply the combination of the de Rham differential \(\mathrm{d}\) and the BRST operator \(\mathrm{s}\). Enforcing the extra condition that the curvature should have extent only in the de Rham part of the BRST complex, one arrives at a pair of equations defining the action of the BRST operator which can be identified with the Chevalley-Eilenberg differential appearing in Lie algebra cohomology [40, 41, 42]. In addition, the action of \(\mathrm{s}\) on the gauge field \(A\) can be interpreted as that of an infinitesimal gauge transformation generated by \(c(x)\). With the "connection" \(\hat{A}\), "curvature" \(\hat{F}\), and coboundary operator \(\mathrm{d}_{\mathrm{BRST}}\) in hand, one can construct "characteristic classes" in the BRST complex by naively following the Chern-Weil theorem [43, 44]. Due to the fact that \(\hat{F}\) was manufactured to have zero ghost number, the Chern-Simons form associated with a given characteristic class in the BRST complex can be shown to satisfy a series of equations known as the descent equations [41, 45, 46, 47]. One of the resulting equations is the Wess-Zumino consistency condition [48], which ultimately determines the algebraic form of any candidate of quantum anomaly. The success of the BRST approach is undeniable. However, it motivates a series of questions. Why should the Grassmann valued fields \(c^{A}(x)\), which started their life in the BRST quantization procedure have an interpretation as the generators of a local gauge transformation? Why is it reasonable to combine the de Rham complex and the ghost algebra into a single exterior bi-algebra? On a related note, why is it reasonable to consider the combination \(\hat{A}=A+c\) as a "connection", and moreover what horizontal distribution does it define? Why should the "curvature" \(\hat{F}\) be taken to have ghost number zero, and why does enforcing this constraint turn the BRST operator \(\mathrm{s}\) into the Chevalley-Eilenberg operator for the Lie algebra of the structure group? These are the questions that we will answer in this paper. Quite serendipitously, we will show that there is not an answer to each of these questions individually, but rather each of these individual questions are resolved by the answer to a single question: What is the appropriate geometric interpretation for the BRST complex? Indeed, our main objective will be to demystify the BRST complex once and for all, and in doing so provide a unified geometric picture of quantum anomalies. The mathematical language which is up to this task is that of Lie algebroids [49, 50, 51, 52, 53, 54, 55], the existing uses of which in the context of gauge theories can be found in, e.g., [56, 57, 58, 59, 60, 61, 62, 63] and the citations therein. 
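For later reference, it is useful to record this computation explicitly (a standard manipulation, written here with the convention that \(\mathrm{d}\) and \(\mathrm{s}\) anticommute; signs depend on this choice). Expanding \(\hat{F}=\mathrm{d}_{\mathrm{BRST}}\hat{A}+\frac{1}{2}[\hat{A},\hat{A}]\) by ghost number gives \[\hat{F}=\Big(\mathrm{d}A+\tfrac{1}{2}[A,A]\Big)+\Big(\mathrm{s}A+\mathrm{d}c+[A,c]\Big)+\Big(\mathrm{s}c+\tfrac{1}{2}[c,c]\Big)\,,\] and demanding that \(\hat{F}\) carry zero ghost number yields \[\mathrm{s}A=-\mathrm{d}c-[A,c]\,,\qquad\mathrm{s}c=-\tfrac{1}{2}[c,c]\,,\] i.e., the action of an infinitesimal gauge transformation generated by \(c\) together with the Chevalley-Eilenberg differential on the ghost.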
In [63] it was argued that the exterior algebra of an Atiyah Lie algebroid derived from a principal \(G\)-bundle \(P(M,G)\) is a geometrization of the physicist's BRST complex. In this note we will provide a novel perspective on this correspondence by elaborating on the concept of the _trivialized Lie algebroid_, which pushes the discussion in [63] further. After reviewing the necessary background on Atiyah Lie algebroids, we demonstrate that every Lie algebroid derived from a principal bundle with base manifold \(M\) and structure group \(G\) is morphic to a trivialized Lie algebroid. Moreover, we demonstrate that such a Lie algebroid morphism is a chain map in the exterior algebra sense, and therefore preserves cohomological data. In other words, the trivialized Lie algebroid provides a universal model for the exterior algebra of any Atiyah Lie algebroid with connection. Then, we take up a detailed study of the exterior algebra of the trivialized Lie algebroid and illustrate that it reproduces the familiar formulae of the BRST complex. Finally, we illustrate the usefulness of Lie algebroid cohomology in computing quantum anomalies, placing an emphasis on its ability to quantify both the consistent and covariant anomaly polynomials. This paper is one in a series of ongoing projects intended to synthesize the local properties of gauge theories using the mathematical language of Atiyah Lie algebroids, en route towards a consistent approach to quantizing gauge theories including gravity. ## 2 Background on Atiyah Lie Algebroids In this section we provide an introduction to Atiyah Lie algebroids focusing on their exterior algebras. We begin by reviewing the construction of an Atiyah Lie algebroid derived from a principal bundle. We subsequently recall the formulation of the exterior algebra of an arbitrary Atiyah Lie algebroid and the coboundary operator \(\hat{\mathrm{d}}\). Here, our intention is to include enough detail relevant to the present paper; for more detailed discussions of Lie algebroids, see [63] or [55]. ### The Lie Algebroid Derived from a Principal Bundle Let \(P(M,G)\) be a principal \(G\)-bundle over the base manifold \(M\) with structure group \(G\). We will denote the Lie algebra of \(G\) by \(\mathfrak{g}\). The principal bundle \(P\) comes equipped with two canonical maps: \[\pi:P\to M\,,\qquad R:P\times G\to P\,, \tag{1}\] corresponding respectively to the projection and the free right action. The Atiyah Lie algebroid derived from the principal bundle \(P(M,G)\) is given by the vector bundle \(A\equiv P\times_{G}TP=TP/G\) over \(M\). In particular, \(A\) is obtained as the quotient of the tangent bundle \(TP\) by the canonically defined right action of \(G\). We note that while \(TP\) is a bundle over \(P\), \(A=TP/G\) is importantly a vector bundle over \(M\). Furthermore, \(A\) is a Lie algebroid because it inherits a bracket algebra from \(TP\), denoted by \([\cdot,\cdot]_{A}\), and possesses an anchor map \(\rho\) in the form of the pushforward by the projection, i.e., \(\rho=\pi_{*}:A\to TM\). Moreover, the map \(\rho\) can easily be seen to be surjective, and hence the algebroid \(A\) is automatically transitive. This means that we have the following short exact sequence of vector bundles over \(M\): \[\begin{CD}0@>{}>{}>L@>{j}>{}>A@>{\rho}>{}>TM@>{}>{}>0\,.\end{CD} \tag{2}\] \(L\) is the kernel of the anchor map \(\rho\), called the isotropy bundle over \(M\). 
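As a simple orienting example (a standard identification, not needed for the general construction below): for the trivial bundle \(P=M\times G\) one finds \[A=TP/G\simeq TM\oplus(M\times\mathfrak{g})\,,\qquad L\simeq M\times\mathfrak{g}\,,\] with the anchor \(\rho\) acting as projection onto the first factor, so that sections of \(L\) are simply \(\mathfrak{g}\)-valued functions on \(M\).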
The short exact sequence (2) therefore dictates that a section of \(A\) can be identified (locally) with the direct sum of a local gauge transformation generated by \(\underline{\mu}\in\Gamma(L)\) and a diffeomorphism generated by \(\underline{X}\in\Gamma(TM)\). The Atiyah Lie algebroid \(A\) has a canonically defined _vertical sub-bundle_\(V\subset A\) given by the image of \(L\) under the morphism \(j\) as \(V=j(L)\). This predicates the notion of a Lie algebroid connection as the choice of a _horizontal sub-bundle_ which is complimentary to \(V\). In the context of the Atiyah Lie algebroid, a connection is quantified by a pair of maps1\(\omega:A\to L\) and \(\sigma:TM\to A\) satisfying \(\ker(\omega)=\mathrm{im}(\sigma)\), defining a second short exact sequence in the direction opposite to the first one: Footnote 1: The map \(\omega\) is called the _connection reform_, and must also satisfy the condition \(\omega\circ j=-Id_{L}\). \[0\] \[\xy(0,0)*{\bullet}{\bullet}\] \[L\] \[\xy(0,0)*{\bullet}{\bullet}\] \[\xy(0,0)*{\bullet}{\bullet}\] \[TM\] \[\xy(0,0)*{\bullet}{\bullet}\] \[0\] . (3) In terms of the connection, the horizontal sub-bundle is given by \(H=\ker(\omega)=\mathrm{im}(\sigma)\), and the connection corresponds to a globally defined split of \(A\), namely \(A=H\oplus V\). ### The Exterior Algebra of an Atiyah Lie Algebroid The main focus of this work is to analyze the exterior algebra of \(A\), denoted by \(\Omega(A)=\oplus_{p=1}^{\mathrm{rank}\,A}\Omega^{p}(A)\). Each \(\Omega^{p}(A)\equiv\wedge^{p}A^{*}\) consists of totally antisymmetric \(p\)-linear maps from \(A^{\otimes p}\) into \(C^{\infty}(M)\). The exterior algebra \(\Omega(A)\) has a well-defined coboundary operator \(\tilde{\mathrm{d}}:\Omega^{p}(A)\to\Omega^{p+1}(A)\) determined by the anchor map \(\rho\) and the bracket on \(A\), via the Koszul formula [64, 40]: \[\hat{\mathrm{d}}\eta(\underline{\mathfrak{X}}_{1},\ldots, \underline{\mathfrak{X}}_{p+1})= \sum_{i}(-1)^{i+1}\rho(\underline{\mathfrak{X}}_{i})\eta( \underline{\mathfrak{X}}_{1},\ldots,\underline{\widehat{\mathfrak{X}}_{i}}, \ldots,\underline{\mathfrak{X}}_{p+1})\] \[+\sum_{i<j}(-1)^{i+j}\eta([\underline{\mathfrak{X}}_{i}, \underline{\mathfrak{X}}_{j}]_{A},\underline{\mathfrak{X}}_{1},\ldots, \underline{\widehat{\mathfrak{X}}_{i}},\ldots,\underline{\widehat{\mathfrak{ X}}_{j}},\ldots,\underline{\mathfrak{X}}_{p+1})\,, \tag{4}\] where \(\underline{\mathfrak{X}}_{1},\ldots,\underline{\mathfrak{X}}_{p+1}\) are arbitrary sections on \(A\), and \(\eta\) a section of \(\Omega^{p}(A)\), with \(\eta(\underline{\mathfrak{X}}_{1},\ldots,\underline{\mathfrak{X}}_{p})\in C ^{\infty}(M)\) the complete contraction of \(\eta\) with sections of \(A\). The exterior algebra \(\Omega(A)\) can be extended to \(\Omega(A;E)\), namely the exterior algebra on \(A\) with values in the vector bundle \(E\), by introducing a suitable differentiation of sections of \(E\). Such a notion comes in the form of a Lie algebroid representation, which is a morphism \(\phi_{E}:A\to\mathrm{Der}(E)\) compatible with the anchor. We note that \(\mathrm{Der}(E)\) is itself a Lie algebroid, with isotropy bundle given by \(\mathrm{End}(E)\) and bracket given via the composition of derivations. 
The morphism condition simply means that \(\phi_{E}\) has a vanishing curvature: \[R^{\phi_{E}}(\underline{\mathfrak{X}},\underline{\mathfrak{Y}}):=[\phi_{E}( \underline{\mathfrak{X}}),\phi_{E}(\underline{\mathfrak{Y}})]_{\mathrm{Der}( E)}-\phi_{E}([\underline{\mathfrak{X}},\underline{\mathfrak{Y}}]_{A})=0\,,\qquad \forall\underline{\mathfrak{X}},\underline{\mathfrak{Y}}\in\Gamma(A)\,. \tag{5}\] The compatibility condition ensures that \(\phi_{E}\) maps into a derivation by enforcing the Leibniz-like identity \[\phi_{E}(\underline{\mathfrak{X}})(f\underline{\psi})=f\phi_{E}(\underline{ \mathfrak{X}})(\underline{\psi})+\rho(\underline{\mathfrak{X}})(f)\underline{ \psi}\,,\qquad\forall\underline{\mathfrak{X}}\in\Gamma(A)\,,\quad f\in C^{ \infty}(M)\,,\quad\underline{\psi}\in\Gamma(E)\,. \tag{6}\] Given such a representation, there is a corresponding Koszul formula generalizing (4): \[\hat{\mathrm{d}}^{E}\eta(\underline{\mathfrak{X}}_{1},\ldots, \underline{\mathfrak{X}}_{p+1})= \sum_{i}(-1)^{i+1}\phi_{E}(\underline{\mathfrak{X}}_{i})\eta( \underline{\mathfrak{X}}_{1},\ldots,\widehat{\underline{\mathfrak{X}}_{i}}, \ldots,\underline{\mathfrak{X}}_{p+1})\] \[+\sum_{i<j}(-1)^{i+j}\eta([\underline{\mathfrak{X}}_{i}, \underline{\mathfrak{X}}_{j}]_{A},\underline{\mathfrak{X}}_{1},\ldots, \widehat{\underline{\mathfrak{X}}_{i}},\ldots,\widehat{\underline{\mathfrak{ X}}_{j}},\ldots,\underline{\mathfrak{X}}_{p+1})\,. \tag{7}\] The operator \(\hat{\mathrm{d}}^{E}\) can be seen to be nilpotent as a combination of (5) and the fact that the bracket on \(A\) satisfies the Jacobi identity. For simplicity, we will later refer to the coboundary operator as simply \(\hat{\mathrm{d}}\), leaving the particular representation \(E\) implicit. A connection on \(A\) specified by \(\omega\) and \(\sigma\) induces a Lie algebroid representation on any vector bundle \(E\) that furnishes a representation space of \(L\). Such a representation is determined through the combination of (1) a covariant derivative operator on \(E\), \(\nabla^{E}:TM\to\mathrm{Der}(E)\), and (2) an endomorphism on \(E\), \(v_{E}:L\to\mathrm{End}(E)\). In particular, we take [63] \[\phi_{E}(\underline{\mathfrak{X}})(\underline{\psi})=\nabla^{E}_{\rho( \underline{\mathfrak{X}})}\underline{\psi}-v_{E}\circ\omega(\underline{ \mathfrak{X}})\underline{\psi}\,. \tag{8}\] \(\phi_{E}\) being a Lie algebroid representation through (8) implies two things. Firstly, \(v_{E}\) must be a morphism, or in other words a linear representation of \(L\). Secondly, the curvature of \(\nabla^{E}\) viewed as a connection on \(TM\) is determined entirely by the curvature of the horizontal distribution \(H\): [63] \[R^{\nabla^{E}}(\underline{X},\underline{Y})=[\nabla^{E}_{\underline{X}}, \nabla^{E}_{\underline{Y}}]_{\text{Der}(E)}-\nabla^{E}_{[\underline{X}, \underline{Y}]_{TM}}=-v_{E}\circ\omega(R^{\sigma}(\underline{X},\underline{Y} ))\,. \tag{9}\] Given the covariant derivative \(\nabla^{E}\), the corresponding connection coefficients are given by \[\nabla^{E}_{\rho(\underline{X})}\underline{e}_{a}=\mathcal{A}^{b}{}_{a}( \underline{\mathfrak{X}}_{H})\underline{e}_{b}\,, \tag{10}\] where \(\underline{e}_{a}\) is a basis section of \(E\). Hence, we can see that the representation \(\phi_{E}\) acts as \[\phi_{E}(\underline{\mathfrak{X}})(\underline{e}_{a})=\Big{(}\mathcal{A}^{b}{ }_{a}(\underline{\mathfrak{X}}_{H})-(v_{E}(\omega(\underline{\mathfrak{X}}_ {V})))^{b}{}_{a}\Big{)}\underline{e}_{b}\,. 
\tag{11}\] ## 3 Lie Algebroid Morphisms Given that \(\hat{\text{d}}\) is nilpotent on \(\Omega(A,E)\), it provides a well-defined notion of cohomology, which we refer to as _Lie algebroid cohomology_. In this section, our intention is to explain how this cohomology is related to the usual notion of BRST cohomology. In [63], it was shown that the action of \(\hat{\text{d}}\) can be thought of as containing within it the BRST transformation. In this section, we will emphasize the role played by morphisms of Lie algebroids. We will show that two Lie algebroids with connection that are related by morphism are different representatives of a topological class, and the cohomology of the respective \(\hat{\text{d}}\) agree. In this sense, the \(\hat{\text{d}}\) cohomology is invariant under morphism. In [63] the notion of a local trivialization of a Lie algebroid was reviewed. This is a map \(\tau_{U}:A\Big{|}_{U}\to TU\oplus L\Big{|}_{U}\), through which the connection on \(A\) can be expressed locally as a gauge field. The map \(\tau_{U}\) can be regarded as the local description of a Lie algebroid morphism \(\tau:A\to A_{\tau}\). We will show below that it is in this description that the usual physics notation \(\hat{\text{d}}_{\tau}\to\text{d}+s\) makes sense. This morphism may then be used to relate Lie algebroid cohomology to the usual physics notions of BRST cohomology. ### Lie Algebroid Morphisms A Lie algebroid morphism is a map \(\varphi:A_{1}\to A_{2}\) between two Lie algebroids, which preserves the geometric structure of the Lie algebroids as encoded in their brackets. That is for all \(\underline{\mathfrak{X}},\underline{\mathfrak{Y}}\in\Gamma(A_{1})\), \[R^{\varphi}(\underline{\mathfrak{X}},\underline{\mathfrak{Y}}):=-\varphi([ \underline{\mathfrak{X}},\underline{\mathfrak{Y}}]_{A_{1}})+[\varphi( \underline{\mathfrak{X}}),\varphi(\underline{\mathfrak{Y}})]_{A_{2}}=0\,. \tag{12}\] Since \(\varphi\) is an isomorphism, it also has an inverse map which we denote by \(\overline{\varphi}:A_{2}\to A_{1}\). The map \(\varphi\) also induces a linear transformation on bundles associated to \(A_{1}\) and \(A_{2}\) to preserve Lie algebroid representations. Let \(E_{1}\) and \(E_{2}\) be isomorphic vector bundles over \(M\) which are associated, respectively, to \(A_{1}\) and \(A_{2}\) by Lie algebroid representations \(\phi_{E_{j}}:A_{j}\to\text{Der}(E_{j})\), with \(j=1,2\). Then, accompanying the Lie algebroid morphism \(\varphi\), there is a corresponding map on the associated bundles; we write this as \[g_{\varphi}:E_{1}\to E_{2}\,. \tag{13}\] Consider a set of Lie algebroids that share the same base manifold and structure group. In general, two such algebroids may be topologically distinct. In this paper we will emphasize that two algebroids \(A_{1}\) and \(A_{2}\) will be topologically _equivalent_ if there exists a morphism between them. Such a Lie algebroid morphism \(\varphi:A_{1}\to A_{2}\) is defined by the following commutative diagram: (14) Note that \(J=\sigma_{2}\circ\rho_{1}\) is a map from \(H_{1}\) to \(H_{2}\), while \(K=j_{2}\circ\omega_{1}\) is a map from \(V_{1}\) to \(V_{2}\). Hence, a Lie algebroid morphism respects the horizontal and vertical splittings of the two algebroids.2 As we will now see, the map \(\varphi\) will be a morphism if and only if the horizontal distributions of \(A_{1}\) and \(A_{2}\) as defined by their respective connections share the same curvature. 
This condition implies that there is a homotopic correspondence between the topologies of the underlying principal bundles from which the Lie algebroids were derived if such a morphism exists. Footnote 2: Here, we are discussing morphisms using an _active_ language; in the corresponding _passive_ description, a morphism would be understood as a change of basis for the same algebroid. Let \(\varphi^{*}:\Omega(A_{2};E_{2})\to\Omega(A_{1};E_{1})\) denote the Lie algebroid pullback map induced by the morphism \(\varphi\). Explicitly, given \(\eta\in\Omega^{r}(A_{2};E_{2})\) and \(\underline{\mathfrak{X}}_{1},\dots,\underline{\mathfrak{X}}_{r}\in\Gamma(A_ {1})\) we have \[(\varphi^{*}\eta)(\underline{\mathfrak{X}}_{1},\dots,\underline{\mathfrak{X}} _{r})=g_{\varphi}^{-1}\Big{(}\eta(\varphi(\underline{\mathfrak{X}}_{1}),\dots, \varphi(\underline{\mathfrak{X}}_{r}))\Big{)}\,. \tag{15}\] Using this notation we will establish two results that follow from \(\varphi\) being a morphism. The first is that \[\hat{\mathrm{d}}_{1}\circ\varphi^{*}=\varphi^{*}\circ\hat{\mathrm{d}}_{2}\,, \tag{16}\] which means that \(\varphi\) is a _Lie algebroid chain map_ in the exterior algebra sense. While this is not difficult to establish in general, we illustrate a simple example: consider \(\underline{\mu}_{2}^{(1)}\in\Gamma(A_{2}^{*}\times L)\) and take \(\underline{\mu}_{1}^{(1)}=\varphi^{*}\underline{\mu}_{2}^{(1)}\in\Gamma(A_{1} ^{*}\times L)\). Taking \(g_{\varphi}=Id_{L}\) in this case, we then have using (15) \[(\hat{\mathrm{d}}_{2}\underline{\mu}_{2}^{(1)})(\varphi( \underline{\mathfrak{X}}),\varphi(\underline{\mathfrak{Y}})) =(\varphi^{*}\hat{\mathrm{d}}_{2}\underline{\mu}_{2}^{(1)})( \underline{\mathfrak{X}},\underline{\mathfrak{Y}})\,, \tag{17}\] \[(\hat{\mathrm{d}}_{1}\underline{\mu}_{1}^{(1)})(\underline{ \mathfrak{X}},\underline{\mathfrak{Y}}) =(\hat{\mathrm{d}}_{1}\varphi^{*}\underline{\mu}_{2}^{(1)})( \underline{\mathfrak{X}},\underline{\mathfrak{Y}})\,. \tag{18}\] Thus (16) just implies that if \(\underline{\mu}_{2}^{(1)}\) pulls back through \(\varphi\), so does \(\hat{\mathrm{d}}_{2}\underline{\mu}_{2}^{(1)}\). It should be noted though if one evaluates these expressions using the Koszul formula, one actually finds agreement iff \(\varphi\) is a morphism. The second result is that the morphism condition implies \[\Omega_{1}=\varphi^{*}\Omega_{2}\,. \tag{19}\] This follows directly from the fact that we can express \[R^{\varphi}(\underline{\mathfrak{X}}_{H},\underline{\mathfrak{Y }}_{H}) =R^{\sigma_{2}}(\rho_{1}(\underline{\mathfrak{X}}_{H}),\rho_{1}( \underline{\mathfrak{Y}}_{H}))+j_{2}(R^{-\omega_{1}}(\underline{\mathfrak{X }}_{H},\underline{\mathfrak{Y}}_{H})\] \[=j_{2}(\Omega_{2}(\varphi(\underline{\mathfrak{X}}),\varphi( \underline{\mathfrak{Y}})))-j_{2}(\Omega_{1}(\underline{\mathfrak{X}}, \underline{\mathfrak{Y}}))\,, \tag{20}\] by using \(\varphi=J-K\) and Eq. (43) of [63], \[R^{\sigma}(\rho(\underline{\mathfrak{X}}),\rho(\underline{\mathfrak{Y}}))=j (\Omega(\underline{\mathfrak{X}},\underline{\mathfrak{Y}}))=-j(R^{-\omega }(\underline{\mathfrak{X}}_{H},\underline{\mathfrak{Y}}_{H}))\,. \tag{21}\] The curvature of a connection reform \(\omega\) is the horizontal \(L\)-valued form given by3 Footnote 3: We have introduced the graded Lie bracket between \(L\)-valued differential forms. 
For \(\alpha\in\Omega^{m}(A;L)\) and \(\beta\in\Omega^{n}(A;L)\), \([\alpha,\beta]_{L}\) is defined as \[[\alpha,\beta]_{L}(\underline{\mathfrak{X}}_{1},\ldots,\underline{\mathfrak{X }}_{m+n})=\sum_{\sigma}\text{sgn}(\sigma)[\alpha(\underline{\mathfrak{X}}_{ \sigma(1)},\ldots,\underline{\mathfrak{X}}_{\sigma(m)}),\beta(\underline{ \mathfrak{X}}_{\sigma(m+1)},\ldots,\underline{\mathfrak{X}}_{\sigma(m+n)})]_{L }\,, \tag{22}\] where \(\underline{\mathfrak{X}}_{1},\ldots,\underline{\mathfrak{X}}_{m+n}\) are arbitrary sections on \(A\), \(\sigma\) denotes the permutations of \((1,\ldots,m+n)\), and \(\text{sgn}(\sigma)=1\) for even permutations and \(\text{sgn}(\sigma)=-1\) for odd permutations. \[\Omega=\hat{\text{d}}\omega+\frac{1}{2}[\omega,\omega]_{L}\,. \tag{23}\] Eq. (19) again indicates that a Lie algebroid morphism of the form (14) involves a _topological_ consideration about the algebroids in question. In Section 4 we introduce a version of the Chern-Weil homomorphism which is applicable to Lie algebroid cohomology. This will provide a recipe for constructing Atiyah Lie algebroid cohomology classes in terms of characteristic polynomials in curvature. Recall that a characteristic class satisfies a so-called "naturality" condition, which essentially implies that the pullback commutes through the characteristic class; i.e., if \(\lambda(\Omega)\) is a characteristic class of a curvature \(\Omega\), then \[\lambda(\varphi^{*}\Omega)=\varphi^{*}\lambda(\Omega)\,. \tag{24}\] Hence, two Lie algebroids whose curvatures are related as (19) will possess an isomorphism between their cohomologies. Eq. (16) similarly implies that morphic Lie algebroids possess isomorphic cohomology classes. In light of these observations, we can view the Lie algebroid morphism as a device for organizing the set of Atiyah Lie algebroids with connection into topological equivalence classes. Let \((A,\omega)\) denote an Atiyah Lie algebroid \(A\) with connection reform \(\omega\). Then, \[[(A,\omega)]:=\{(A^{\prime},\omega^{\prime})\mid\exists\varphi:A\to A^{ \prime}\text{ s.t. }\Omega=\varphi^{*}\Omega^{\prime}\} \tag{25}\] can be regarded as the set of topologically equivalent Atiyah Lie algebroids with connection. Eqs. (16) and (19) are perhaps too formal for the casual reader to discern their meaning; by writing them out in terms of local bases of sections for the various bundles, it is straightforward to establish that the connection coefficients [see Eq. (10)] satisfy \[(\mathcal{A}_{1})_{\alpha_{1}}{}^{a_{1}}{}_{b_{1}} =J^{\alpha_{2}}{}_{\alpha_{1}}(g_{\varphi}^{-1})^{a_{1}}{}_{a_{2} }\Big{(}(\mathcal{A}_{2})_{\alpha_{2}}{}^{a_{2}}{}_{b_{2}}+\delta^{a_{2}}{}_{b _{2}}\rho(\underline{E}_{\alpha_{2}})\Big{)}g_{\varphi}^{b_{2}}{}_{b_{1}}\,, \tag{26}\] \[(v_{E}(\omega_{1}))_{\underline{A}_{1}}{}^{a_{1}}{}_{b_{1}} =K^{\underline{B}_{2}}{}_{\underline{A}_{1}}(g_{\varphi}^{-1})^{a_ {1}}{}_{a_{2}}(v_{E}(\omega_{2}))_{\underline{B}_{2}}{}^{a_{2}}{}_{b_{2}}g_{ \varphi}^{b_{2}}{}_{b_{1}}\,. \tag{27}\] That is, the components of \(\mathcal{A}\) and \(\omega\) transform like a gauge field and a gauge ghost, respectively. Eq. (26) is compatible with (19); recall that the curvatures of gauge fields related by a gauge transformation are equivalent up to a conjugation. In this respect, we can also identify the Lie algebroid morphism (14) as encoding the data of a gauge transformation. In other words, the set \([(A,\omega)]\) can be regarded as an orbit of gauge equivalent algebroids. 
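To make contact with more familiar notation (suppressing all indices and, for illustration, taking the horizontal relabelling \(J\) to be trivial so that \(g\equiv g_{\varphi}\) carries the entire transformation), Eq. (26) is simply the statement \[\mathcal{A}_{1}=g^{-1}\mathcal{A}_{2}\,g+g^{-1}\mathrm{d}g\,,\] the usual inhomogeneous transformation of a gauge field, whose curvature transforms by conjugation in accordance with (19).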
In a separate work [65], we use this remark to construct the _configuration algebroid_, which can be regarded as a concise definition of the space of gauge orbits of connections that can be employed in any gauge theory formulated in terms of Atiyah Lie algebroids. ### Trivialization and Trivialized Lie Algebroids In the last section we have shown that there exists a Lie algebroid morphism of the form (14) between Lie algebroids with connection whose horizontal distributions have curvatures related by (19). It is perhaps worth mentioning that this very same construction was used in constructing a representation of a Lie algebroid \(A\) by the Lie algebroid \(\mathrm{Der}(E)\), for some associated vector bundle \(E\). In fact, this is a slight generalization of what we presented above, in that whereas the morphism in question is \(\phi_{E}:A\to\mathrm{Der}(E)\), these two algebroids do not share the same isotropy bundle, but instead there is a morphism \(v_{E}:L\to\mathrm{End}(E)\) between them. Locally this morphism can be thought to give a matrix representation (on the fibres of \(E\)) of the Lie algebra. A local trivialization of a Lie algebroid can also be thought of as an example of a morphism, with the details presented in terms of local data. Indeed, using the notation of [63], on open sets \(U_{i}\subset M\), we have \[\tau_{i}:A^{U_{i}}\to TU_{i}\oplus L^{U_{i}}\,, \tag{28}\] and so local sections of \(A\) can be expressed in terms of local bases for \(TM\) and \(L\) \[\tau_{i}(\mathfrak{X}_{H})=\mathfrak{X}_{i,H}^{\alpha}{\tau_{i}}^{\mu}{}_{\alpha}(\partial_{\mu}^{U_{i}}+b_{i\mu}^{\;A}\underline{t}_{A}^{U_{i}})\,,\qquad\tau_{i}(\mathfrak{X}_{V})=\mathfrak{X}_{i,V}^{\underline{A}}{\tau_{i}}^{A}{}_{\underline{A}}\underline{t}_{A}^{U_{i}}\,. \tag{29}\] The coefficients \(b_{i\mu}^{\;A}\) are the components of a \(\mathfrak{g}\)-valued \(1\)-form on \(M\), which transforms on overlapping open sets as a gauge field. In order to use the language of morphisms presented above, we now introduce the _trivialized Lie algebroid_. This is an algebroid \((A_{\tau},\omega_{\tau})\) that we regard as morphic to \(A\), \(\tau:A\to A_{\tau}\). Suppose \(\{U_{i}\}\) is an open cover of \(M\), then \(A_{\tau}\) is constructed given an atlas for \(M\) by gluing \[A_{\tau}=\bigcup_{i}\big{(}TU_{i}\oplus L^{U_{i}}\big{)} \tag{30}\] in such a way that its exterior derivative \(\hat{\mathrm{d}}_{\tau}\) is related to \(\hat{\mathrm{d}}\) via this specific instance of Eq. (16), and its curvature is related to that of \(A\) by an instance of (19). The morphism \(\tau:A\to A_{\tau}\) should therefore be understood as a series of local morphisms \(\tau_{i}:A^{U_{i}}\to A_{\tau}^{U_{i}}\). As such, \(A_{\tau}\) is in the same topological class as \(A\), but its connection can be presented in a familiar form, as we will see below. Note that here we are using the notion of morphism in the active sense, and hence we distinguish \(A\) from \(A_{\tau}\). In what follows, the reader may find it profitable to think from a passive perspective: indeed our use of \(A\) versus \(A_{\tau}\) can be thought of as simply corresponding to a different choice of basis, the first natural from the \(H\oplus V\) split, the second natural from the local \(TU\oplus L\) split. Below, we will refer to these as the covariant and consistent splittings, respectively. 
Due to its simple structure, the trivialized Lie algebroid has an exterior algebra which gives rise to very efficient computations and transparent results that are particularly interpretable from a physical perspective. Indeed, below we will show that by considering the trivialized Lie algebroid directly, there is a sense in which one obtains the usual physics representation, namely \(\hat{\mathrm{d}}_{\tau}\to\mathrm{d}+\mathrm{s}\). To be precise about details, we will introduce explicit bases for the various vector bundles; although we will not indicate so, these should be understood to be valid locally on some open set of \(M\). Figure 1: A visualization of a Lie algebroid. A connection gives a global split \(A=H\oplus V\), which locally can be viewed as determined by a gauge field \(b\) defined with respect to “axes” corresponding to sub-bundles \(TM\) and \(L\). So we introduce the notation for bases for the bundles \(TM\) and \(L\) and their dual bundles: \[\begin{split} TM&=\operatorname{span}\{\underline{\partial}_{\mu}\}\,,\qquad T^{*}M=\operatorname{span}\{\mathrm{d}x^{\mu}\}\,,\qquad\mu=1,\dots,\dim M\,,\\ L&=\operatorname{span}\{\underline{t}_{A}\}\,,\qquad L^{*}=\operatorname{span}\{t^{A}\}\,,\qquad A=1,\dots,\dim G\,.\end{split} \tag{31}\] These bases are dual in the sense that \[\mathrm{d}x^{\mu}(\underline{\partial}_{\nu})=\delta^{\mu}{}_{\nu},\qquad t^{A}(\underline{t}_{B})=\delta^{A}{}_{B}\,,\qquad\mathrm{d}x^{\mu}(\underline{t}_{A})=0\,,\qquad t^{A}(\underline{\partial}_{\mu})=0\,. \tag{32}\] Given the above notation, we have a choice to make for a basis of sections of the trivialized Lie algebroid \(A_{\tau}\). We will refer to such choices as "splittings", and we will make reference to two natural choices which we refer to as the _consistent splitting_ and the _covariant splitting_, respectively. The relevance of this nomenclature will become clear shortly. These two splittings correspond in fact to the two sets of axes shown in Figure 1, and they are distinguished precisely because of the non-trivial connection on \((A_{\tau},\omega_{\tau})\). By a covariant splitting, we mean a split basis as described in [63]. Consider an algebroid \((A,\omega)\) for which we take a basis of sections \(\{\underline{E}_{\underline{\alpha}},\underline{E}_{\underline{A}}\}\) (where \(\underline{\alpha}=1,\dots,\dim M\), \(\underline{A}=1,\dots,\dim G\)). Such a basis has the virtue that \(\omega(\underline{E}_{\underline{\alpha}})=\underline{0}\) and \(\rho(\underline{E}_{\underline{A}})=\underline{0}\), namely they span \(H\) and \(V\) respectively. Given a morphism \(\tau:A\to A_{\tau}\), it is natural to choose a basis \(\{\tau(\underline{E}_{\underline{\alpha}}),\tau(\underline{E}_{\underline{A}})\}\) for \(A_{\tau}\). Since we will now deal directly with \(A_{\tau}\), we will for brevity denote such a basis by \(\{\hat{\underline{E}}_{\underline{\alpha}},\hat{\underline{E}}_{\underline{A}}\}\). Thus a covariant splitting corresponds to a choice of basis sections that are aligned with the global split \(A_{\tau}=H_{\tau}\oplus V_{\tau}\). 
Locally, these sections can be expressed in terms of the bases for \(TM\) and \(L\) as \[\hat{\underline{E}}_{\underline{\alpha}}=\rho_{\tau\underline{\alpha}}^{\mu}( \underline{\partial}_{\mu}+b_{\mu}^{A}\underline{t}_{A})\,,\qquad\hat{ \underline{E}}_{\underline{A}}=(j_{\tau}^{-1})^{A}{}_{\underline{A}} \underline{t}_{A}\,, \tag{33}\] while the dual bases can be written as \[\hat{\underline{E}}^{\underline{\alpha}}=(\rho_{\tau}^{-1})^{\underline{ \alpha}}\mathrm{d}x^{\mu}\,,\qquad\hat{E}^{\underline{A}}=j_{\tau}^{\underline {A}}{}_{A}(t^{A}-b_{\mu}^{A}\mathrm{d}x^{\mu})\,. \tag{34}\] Suppose \(\underline{X}=X^{\mu}\underline{\partial}_{\mu}\in\Gamma(TM)\) and \(\underline{\mu}=\mu^{A}\underline{t}_{A}\in\Gamma(L)\), then a section \(\underline{\mathfrak{X}}\) of \(A_{\tau}\) may be expressed in this covariant splitting as \[\underline{\mathfrak{X}}=X^{\mu}\sigma_{\tau}(\underline{\partial}_{\mu})- \mu^{A}j_{\tau}(\underline{t}_{A})=X^{\mu}\sigma_{\tau}^{\underline{\alpha} }\mu\hat{\underline{E}}_{\underline{\alpha}}-\mu^{A}j_{\tau}^{\underline{A} }A\hat{\underline{E}}_{\underline{A}}=X^{\mu}(\underline{\partial}_{\mu}+b_{ \mu}^{A}\underline{t}_{A})+\mu^{A}\underline{t}_{A}\,. \tag{35}\] On the other hand, by a consistent splitting, we mean a choice of basis for \(A_{\tau}\) that is aligned with the bases for \(TM\) and \(L\). That is, in the consistent splitting, we can write a section of \(A_{\tau}\) as \[\underline{\mathfrak{X}}=\mathfrak{X}^{\mu}\underline{\partial}_{\mu}+ \mathfrak{X}^{A}\underline{t}_{A}\,. \tag{36}\] By comparing to the covariant split (35), we see that \[\mathfrak{X}^{\mu}=X^{\mu}\,,\qquad\mathfrak{X}^{A}=\mu^{A}+X^{\mu}b_{\mu}^{A }\,, \tag{37}\] and thus in the consistent splitting, the gauge field is contained in an off-block-diagonal piece of \(\sigma_{\tau}\). In the current set up, the connection reform \(\omega_{\tau}\) which defines the horizontal distribution through its kernel can be written in the consistent splitting as \[\omega_{\tau}=\omega_{\tau}^{A}\hat{\underline{E}}^{\underline{A}}\otimes \underline{t}_{A}=\omega_{\tau}^{A}\triangle j_{\tau}^{\underline{A}}B(t^{B}- b_{\mu}^{B}\mathrm{d}x^{\mu})\otimes\underline{t}_{A}=(b_{\mu}^{A}\mathrm{d}x^{\mu}-t^{A}) \otimes\underline{t}_{A}=b-\varpi\,. \tag{38}\] where we defined \[\varpi=\varpi^{A}\otimes\underline{t}_{A}=t^{A}\otimes\underline{t}_{A}\,, \tag{39}\] which can be interpreted as the Maurer-Cartan form on \(L\). Eq. (38) explicitly shows that the connection reform can be understood as the sum of two pieces, the first related to the gauge field, and the second related to the Maurer-Cartan form of the gauge algebra, if we interpret it in the consistent splitting (i.e., in terms of the bases for \(TM\) and \(L\) and their duals). This equation should be compared with the idea of an extended "connection" in the BRST complex which is typically taken to be of the form \(\hat{A}=A+c\) where \(A\) is a local gauge field and \(c\) is the ghost field [26, 27, 28]. However, (38) has an advantage over the conventional extended "connection" because it possesses a manifestly geometric interpretation as a genuine connection in the algebroid context. ### The Cohomology of Trivialized Lie Algebroids We now turn our attention to the main focus of this section--understanding the exterior algebra of the trivialized algebroid. 
The bracket on \(A_{\tau}\) can be written explicitly for the basis sections as \[[\underline{\hat{E}}_{\underline{\alpha}},\underline{\hat{E}}_{ \underline{\beta}}]_{A_{\tau}} =\sigma_{\tau}\left([\rho_{\tau}(\underline{\hat{E}}_{\underline{ \alpha}}),\rho_{\tau}(\underline{\hat{E}}_{\underline{\beta}})]_{TM}\right)+j_{ \tau}(\Omega_{\underline{\alpha}\underline{\beta}})\,, \tag{40}\] \[[\underline{\hat{E}}_{\underline{\alpha}},\underline{\hat{E}}_{ \underline{B}}]_{A_{\tau}} =-j_{\tau}\left(R^{-\omega_{\tau}}(\underline{\hat{E}}_{ \underline{\alpha}},\underline{\hat{E}}_{\underline{B}})\right)=j_{\tau}\left( \nabla^{L}_{\underline{\hat{E}}_{\underline{\alpha}}}(\omega^{A}_{\tau} \underline{B}\underline{t}_{A})\right)=j_{\tau}\left(\phi_{L}(\underline{\hat {E}}_{\underline{\alpha}})(\omega^{A}_{\tau}\underline{B}\underline{t}_{A}) \right)\,,\] (41) \[[\underline{\hat{E}}_{\underline{A}},\underline{\hat{E}}_{ \underline{B}}]_{A_{\tau}} =j_{\tau}\left([\omega_{\tau}(\underline{\hat{E}}_{\underline{A }}),\omega_{\tau}(\underline{\hat{E}}_{\underline{B}})]_{L}\right)=-\omega^{A} _{\tau}\underline{\lambda}\omega^{B}_{\tau}\underline{B}f_{AB}{}^{C}\underline {\hat{E}}_{\underline{C}}j_{\tau}^{\underline{C}}\,. \tag{42}\] The coboundary operator for the complex \(\Omega(A_{\tau};E)\), denoted by \(\hat{\mathrm{d}}_{\tau}\), is defined precisely by the Koszul formula (7). In terms of the morphism \(\tau:A\to A_{\tau}\), we have, as in (16), \(\hat{\mathrm{d}}\circ\tau^{*}=\tau^{*}\circ\hat{\mathrm{d}}_{\tau}\). Working in \(A_{\tau}\), we now have two different ways of splitting \(\Omega(A_{\tau};E)\) into a bi-complex. Firstly, we can use the covariant splitting of \(A_{\tau}\) to identify \[\Omega^{p}(A_{\tau};E)=\bigoplus_{r+s=p}\Omega^{(r,s)}(H_{\tau},V_{\tau};E)\,, \tag{43}\] where \(\Omega^{(r,s)}(H_{\tau},V_{\tau};E)\) consists of bi-forms of degree \(r\) in the algebra of \(H_{\tau}\) and degree \(s\) in the algebra of \(V_{\tau}\). This is certainly the most natural splitting of the exterior algebra, as it is globally defined given a connection. We will show that this is equivalent to, but not the same as, the usual splitting, where \(r\) counts the de Rham form degree and \(s\) counts ghost number. Alternatively, using the consistent splitting for \(A_{\tau}\) we can identify \[\Omega^{p}(A_{\tau};E)=\bigoplus_{r+s=p}\Omega^{(r,s)}(M,L;E)\,, \tag{44}\] where \(\Omega^{p}(A_{\tau};E)\) now consists of bi-forms of degree \(r\) in the de Rham cohomology of \(M\) and degree \(s\) in the Chevalley-Eilenberg algebra of \(L\). To understand precisely how this works, we consider the action of \(\hat{\mathrm{d}}_{\tau}\) on sections of various bundles. We will show that the action of \(\hat{\mathrm{d}}_{\tau}\) can be interpreted as acting as \(\mathrm{d}+\mathrm{s}\) on the components of sections, reproducing the usual physics notation [63] (apart from the fact that the usual Grassmann quantities appear instead as forms). As a first example, we consider an \(E\)-valued scalar \(\underline{\psi}=\psi^{a}\underline{e}_{a}\in\Gamma(E)\). 
Using the Koszul formula, we have \[\hat{\mathrm{d}}_{\tau}\underline{\psi} =\hat{E}^{\underline{M}}\otimes\phi_{E}(\underline{\hat{E}}_{ \underline{M}})(\underline{\psi})\] \[=\rho^{\mu}_{\tau\,\underline{\alpha}}\left(\partial_{\mu}\psi^{a }+v_{E}(b_{\mu})^{a}{}_{b}\psi^{b}\right)\hat{E}^{\underline{\alpha}}\otimes \underline{e}_{a}-v_{E}(\omega_{\hat{A}})^{a}{}_{b}\psi^{b}\;E^{\hat{A}}\otimes \underline{e}_{a}\] \[=\left(\mathrm{d}\psi^{a}+v_{E}(\underline{t}_{A})^{a}{}_{b} \varpi^{A}\psi^{b}\right)\otimes\underline{e}_{a}\,, \tag{45}\] which we identify with4 Footnote 4: It should be noted that in [63] this was written as \(\hat{\mathrm{d}}\underline{\psi}=\nabla^{E}\underline{\psi}+\mathrm{s} \underline{\psi}\). These results are consistent, given that \(\hat{\mathrm{d}}\underline{\psi}=\nabla^{E}\underline{\psi}+\psi^{a} \underline{s}\underline{e}_{a}+\psi^{a}\otimes\underline{e}_{a}=\mathrm{d} \psi^{a}\otimes\underline{e}_{a}+\psi^{a}\otimes\underline{e}_{a}\). This is a general feature: by extracting the basis elements, the gauge fields in the covariant derivative are canceled by those coming from \(\underline{s}\underline{e}_{a}\). We will see this pattern repeated in additional examples. \[\hat{\mathrm{d}}_{\tau}\underline{\psi}=(\mathrm{d}+\mathrm{s})\psi^{a} \otimes\underline{e}_{a}\,, \tag{46}\] if we interpret \[\mathrm{s}\psi^{a}:=v_{E}(\underline{t}_{A})^{a}{}_{b}\varpi^{A}\psi^{b}\,. \tag{47}\] As a second example, consider a section \(\beta\in\Gamma(A_{\tau}^{*}\times E)\). Employing the Koszul formula (which is most easily employed by translating \(\alpha\) into the covariant split basis), we find \[\hat{\mathrm{d}}_{\tau}\beta =\frac{1}{2}\hat{E}^{\underline{M}}\wedge\hat{E}^{\underline{N}} \otimes\left(\phi_{E}(\hat{\underline{E}}_{\underline{M}})(\beta^{a}_{ \underline{N}}\underline{e}_{a})-\phi_{E}(\hat{\underline{E}}_{\underline{N}}) (\beta^{a}_{\underline{M}}\underline{e}_{a})-\beta([\hat{\underline{E}}_{ \underline{M}},\hat{\underline{E}}_{\underline{N}}]_{A_{\tau}})\right)\] \[=\left[\left(\mathrm{d}(\sigma^{\underline{\alpha}}_{\tau}\omega ^{a}_{\underline{\alpha}}-j^{B}_{\tau}B_{\beta}\underline{B}^{a}_{\underline{ B}}b^{B}_{\nu})+v_{E}(\underline{t}_{A})^{a}{}_{b}t^{A}(\sigma^{\underline{ \alpha}}_{\tau}\omega^{a}_{\underline{\alpha}}-j^{B}_{\tau}B_{\beta}\underline {B}^{a}_{\underline{B}}b^{B}_{\nu})\right)\wedge\mathrm{d}x^{\nu}\] \[\quad\quad+\left(\mathrm{d}(j^{B}_{\tau}B_{\beta}\underline{B}^{a }_{\underline{B}})+v_{E}(\underline{t}_{A})^{a}{}_{b}t^{A}(j^{B}_{\tau}B_{ \beta}\underline{B}^{b}_{\underline{B}})-\frac{1}{2}f_{AB}{}^{C}(j^{B}_{\tau }C\beta^{b}_{\underline{B}})t^{A}\right)\wedge t^{B}\right]\otimes \underline{e}_{a}\,. 
\tag{48}\] Recognizing \(\beta^{a}_{\nu}=\sigma^{\underline{\alpha}}_{\tau}\omega^{a}_{\underline{ \alpha}}-j^{B}_{\tau}B_{\beta}\underline{B}^{a}_{\underline{B}}b^{B}_{\nu}\) and \(\beta^{a}_{A}=j^{B}_{\tau}A\beta^{a}_{\underline{B}}\), we have \[\hat{\mathrm{d}}_{\tau}\beta=\left(\mathrm{d}\beta^{a}_{\nu}+v_{E}(\underline{ t}_{A})^{a}{}_{b}t^{A}\beta^{a}_{\nu}\right)\wedge\mathrm{d}x^{\nu}\otimes \underline{e}_{a}+\left(\mathrm{d}\beta^{a}_{B}+v_{E}(\underline{t}_{A})^{a}{} _{b}t^{A}\beta^{b}_{B}-\frac{1}{2}f_{AB}{}^{C}\beta^{a}_{C}t^{A}\right)\wedge t ^{B}\otimes\underline{e}_{a}\,, \tag{49}\] so we see that \[\hat{\mathrm{d}}_{\tau}\beta=(\mathrm{d}+\mathrm{s})\beta^{a}_{\mu}\wedge \mathrm{d}x^{\mu}\otimes\underline{e}_{a}+(\mathrm{d}+\mathrm{s})\beta^{a}_{A} \wedge t^{A}\otimes\underline{e}_{a}\,, \tag{50}\] if \[\mathrm{s}\beta^{a}_{\nu}=v_{E}(\underline{t}_{A})^{a}{}_{b}\varpi^{A}\beta^{ a}_{\nu}\,,\qquad\mathrm{s}\beta^{a}_{B}=v_{E}(\underline{t}_{A})^{a}{}_{b} \varpi^{A}\beta^{b}_{B}-\frac{1}{2}f_{AB}{}^{C}\beta^{b}_{C}\varpi^{A}\,. \tag{51}\] We note that this is of a similar form to the previous example in (46). As a final example, we consider the connection reform \(\omega_{\tau}\), which we regard as an element of \(\Omega^{1}(A_{\tau},L)\). We have \[\hat{\mathrm{d}}_{\tau}\omega_{\tau} =\hat{\mathrm{d}}_{\tau}(b-\varpi)\] \[=(\Omega^{A}_{\tau}-\frac{1}{2}f_{BC}{}^{A}\omega^{B}_{\tau} \wedge\omega^{C}_{\tau})\otimes\underline{t}_{A} \tag{52}\] \[=(\mathrm{d}b^{A}+f_{BC}{}^{A}\varpi^{B}\wedge b^{C}-\frac{1}{2}f _{BC}{}^{A}\varpi^{B}\wedge\varpi^{C})\otimes\underline{t}_{A}\,, \tag{53}\] where in the last line we made use of the result (38), writing \(\varpi=\varpi^{A}\otimes\underline{t}_{A}\). We note that if we identify5 Footnote 5: The reader should be uncomfortable with the term \(\mathrm{d}\varpi^{A}\). We note that this is consistent with \(\hat{\mathrm{i}}_{-j(\underline{\mu})}\varpi^{A}=-\mu^{A}\), \(\hat{\mathcal{L}}_{-j(\underline{\mu})}\varpi^{A}=0\) (where \(\hat{\mathcal{L}}_{\underline{\chi}}=\hat{i}_{\underline{\chi}}\hat{\mathrm{d} }+\hat{\mathrm{d}}\hat{\mathrm{i}}_{\underline{\chi}}\)) and \(\delta_{\underline{\mu}}b^{A}=\hat{i}_{-j(\underline{\mu})}sb^{A}=d \underline{\mu}+[b,\underline{\mu}]\). \[sb^{A}=\mathrm{d}\varpi^{A}+f_{BC}{}^{A}\varpi^{B}\wedge b^{C}\,,\qquad\mathrm{ s}\varpi^{A}=\frac{1}{2}f_{BC}{}^{A}\varpi^{B}\wedge\varpi^{C}\,, \tag{54}\] then we obtain \[\hat{\mathrm{d}}_{\tau}\omega_{\tau}=(\mathrm{d}+\mathrm{s})\omega^{A}_{\tau} \otimes\underline{t}_{A}\,. \tag{55}\] Therefore, starting from the formal definition (7) of the nilpotent coboundary operator in the algebroid exterior algebra, we established the identification between \(\hat{\mathrm{d}}_{\tau}\) and the BRST differentiation \(\mathrm{s}\). Again, we emphasize that this result is a natural consequence of the geometric structure of the algebroid. Anomalies from Lie Algebroid Cohomology We have now demonstrated that the fundamental features of the BRST complex are geometrically encoded in the Atiyah Lie algebroid. Working in the consistent splitting, the exterior algebra of the trivialized algebroid is a bi-complex consisting of differential forms on the base manifold \(M\) and differential forms in the exterior algebra associated to the local gauge group. This is the state of affairs described in the BRST complex but only after making a series of rather mysterious choices [8, 22, 24, 46, 66]. We have now shown why these choices are reasonable. 
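As a consistency check on the identification \(\hat{\mathrm{d}}_{\tau}\to\mathrm{d}+\mathrm{s}\) (a standard computation, carried out in the conventions of (54)), the nilpotency of \(\mathrm{s}\) on the ghost follows directly from the Jacobi identity: \[\mathrm{s}^{2}\varpi^{A}=\mathrm{s}\Big(\tfrac{1}{2}f_{BC}{}^{A}\varpi^{B}\wedge\varpi^{C}\Big)=\tfrac{1}{2}f_{BC}{}^{A}f_{DE}{}^{B}\,\varpi^{D}\wedge\varpi^{E}\wedge\varpi^{C}=0\,,\] since the total antisymmetry of the wedge product projects the structure constants onto the combination that vanishes by the Jacobi identity.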
For example, the counterpart of the extended "connection" \(\hat{A}=A+c\) is identified with \(\omega_{\tau}=b-\varpi\) in the algebroid context; \(b\) corresponds to the gauge field \(A\), and \(\varpi\) corresponds to the ghost field \(c\) (up to a sign difference). Significantly, \(\omega_{\tau}\) is a genuine connection which defines a horizontal distribution on the algebroid. Moreover, the appearance of the Maurer-Cartan form \(\varpi\) justifies the interpretation of the ghost field \(c\) in the BRST formalism as a generator of gauge transformations. As discussed in [63], the "Russian formula" central to the BRST analysis (see, for example, [27, 21]) is also simply a geometric fact in the algebroid context arising from the observation that the curvature of a Lie algebroid connection is zero when contracted with vertical vector fields. Working in the consistent splitting of the trivialized algebroid, this version of the Russian formula can be stated in a more familiar form as: \[\Omega_{\tau}=\hat{\mathrm{d}}_{\tau}\omega_{\tau}+\frac{1}{2}[\omega,\omega] _{L}=(\mathrm{d}+\mathrm{s})(b^{A}-\varpi^{A})\otimes\underline{t}_{A}+\frac{ 1}{2}[b-\varpi,b-\varpi]_{L}=\mathrm{d}b+\frac{1}{2}[b,b]_{L}=F\,, \tag{56}\] where \(F\equiv\mathrm{d}b+\frac{1}{2}[b,b]_{L}\) is the gauge field strength of the gauge field \(b\). In words, the curvature \(\Omega_{\tau}\) is automatically "ghost free" without the need to apply any artificial requirements. In the BRST context, the Russian formula leads to the descent equations which subsequently characterize anomalies from a topological point of view [22, 46, 48, 67, 24]. In this final section we will demonstrate how this story carries over into the algebroid language. Moreover, we will give an illustration of how the algebroid may afford us with a more complete picture by demonstrating that it is capable of geometrizing the covariant form of the anomaly as well as the consistent form. The conventional analysis of the BRST complex can only cover the former. Here we will be computing anomalies from a purely cohomological perspective which is independent of any particular physical theory. In other words, we simply mean that the consistent and covariant anomaly polynomials we derive have the correct topological and algebraic properties to be the anomalous divergences of the consistent and covariant currents that appear in physical considerations. ### Characteristic Classes and Lie Algebroid Cohomology The cohomological formulation of anomalies begins by considering characteristic classes and their associated Chern-Simons forms. In this section we will work in the context of an arbitrary Atiyah Lie algebroid \(A\), with connection reform \(\omega\). Recall that the curvature of the connection reform is given by \(\Omega=\hat{\mathrm{d}}\omega+\frac{1}{2}[\omega,\omega]_{L}\). We begin by computing \[\hat{\mathrm{d}}\Omega=-[\omega,\Omega]_{L} \tag{57}\] which can be recognized as the Bianchi Identity. The pair of equations \[\hat{\mathrm{d}}\omega=\Omega-\frac{1}{2}[\omega,\omega]_{L},\qquad\hat{ \mathrm{d}}\Omega=-[\omega,\Omega]_{L} \tag{58}\] imply that the ring of polynomials generated by \(\omega\) and \(\Omega\) form a closed subalgebra of \(\Omega(A)\). This is the basis of the Chern-Weil homomorphism, which states that one can formulate cohomology classes in \(\Omega(A)\) using such polynomials [68, 43, 69, 44]. 
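For completeness, the Bianchi identity (57) can be obtained in one line (using the graded Leibniz rule for \(\hat{\mathrm{d}}\) and the graded Jacobi identity of the bracket (22), in our sign conventions): \[\hat{\mathrm{d}}\Omega=\hat{\mathrm{d}}\Big(\hat{\mathrm{d}}\omega+\tfrac{1}{2}[\omega,\omega]_{L}\Big)=[\hat{\mathrm{d}}\omega,\omega]_{L}=\Big[\Omega-\tfrac{1}{2}[\omega,\omega]_{L}\,,\,\omega\Big]_{L}=-[\omega,\Omega]_{L}\,,\] where the last step uses \([[\omega,\omega]_{L},\omega]_{L}=0\) together with \([\Omega,\omega]_{L}=-[\omega,\Omega]_{L}\).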
To be exact, let \(Q:L^{\otimes l}\to\mathds{R}\) correspond to a symmetric, order-\(l\) polynomial function on \(L\) which is invariant under Lie algebroid morphisms. Such an object can be represented by a symmetric \(l\)-linear map in the tensor algebra of \(L\). In other words, given the dual basis \(\{t^{A}\}\) for \(L^{*}\), with \(A=1,\ldots,\dim(G)\), we can write \(Q=Q_{A_{1}\ldots A_{l}}\bigotimes_{j=1}^{l}t^{A_{j}}\). In terms of such a symmetric, invariant polynomial we can define the _characteristic class_ \[\lambda_{Q}(\Omega)=Q(\underbrace{\Omega,\ldots,\Omega}_{l})=Q_{A_{1}\ldots A_ {l}}\wedge_{j=1}^{l}\Omega^{A_{j}}\in\Omega^{2l}(A). \tag{59}\] The Chern-Weil theorem6 establishes that each \(\lambda_{Q}(\Omega)\) defines an element of the cohomology class of degree \(2l\) in the exterior algebra \(\Omega(A)\). Specifically, it consists of the following two statements [70]: Footnote 6: Strictly speaking, the Chern-Weil theorem is proven in the context of principal bundle cohomology. However, the basis of the proof hinges on the fact that the principal connection and curvature satisfy the same algebraic relations as the algebroid connection and curvature given in (58). Hence, the proof carries over to this case as well. See [64] for a more rigorous discussion. 1. Characteristic classes are closed \(2l\)-forms in \(\Omega(A)\): \[\hat{\mathrm{d}}\lambda_{Q}(\Omega)=l!Q(\hat{\mathrm{d}}\Omega,\underbrace{ \Omega,\ldots,\Omega}_{l-1})=l!Q(\hat{\mathrm{d}}\Omega+[\omega,\Omega]_{L}, \underbrace{\Omega,\ldots,\Omega}_{l-1})=0\,,\] (60) which follows from the symmetry of \(Q\) and the Bianchi identity. 2. Given two different connections \(\omega_{1}\) and \(\omega_{2}\), with respective curvatures \(\Omega_{1}\) and \(\Omega_{2}\), we have that \(\lambda_{Q}(\Omega_{2})-\lambda_{Q}(\Omega_{1})\in\Omega^{2l}(A)\) is \(\hat{\mathrm{d}}\)-exact. The relevant \((2l-1)\)-form potential is defined by introducing a one parameter family of connections \(\omega_{t}=\omega_{1}+t(\omega_{2}-\omega_{1})\) which interpolates between \(\omega_{1}\) and \(\omega_{2}\) as \(t\) goes from \(0\) to \(1\). Then, \[\lambda_{Q}(\Omega_{2})-\lambda_{Q}(\Omega_{1})=\hat{\mathrm{d}}\left[Q_{A_{1 }\cdots A_{l}}\int_{0}^{1}\mathrm{d}t\;(\omega_{2}-\omega_{1})^{A_{1}}\wedge_ {j=2}^{l}\left(\hat{\mathrm{d}}\omega_{t}+\frac{1}{2}[\omega_{t},\omega_{t}] _{L}\right)^{A_{j}}\right]\,.\] (61) An immediate corollary of the Chern-Weil theorem is that the characteristic class \(\lambda_{Q}(\Omega)\) will be globally exact if there exists a one parameter family of connections for which \(\omega_{2}=\omega\) and \(\omega_{1}\) is any connection that has zero curvature. This inspires the topological interpretation of the characteristic class which will be cohomologically trivial if and only if any connection \(\omega\) can be homotopically connected to the trivial connection. Nonetheless, it will always be true locally that any characteristic class can be written as \(\hat{\mathrm{d}}\) acting on a \((2l-1)\)-form defined using (61). That is, \[\lambda_{Q}(\Omega)=\hat{\mathrm{d}}\mathscr{C}_{Q}(\omega)\,, \tag{62}\] where \[\mathscr{C}_{Q}(\omega):=Q_{A_{1}\cdots A_{l}}\int_{0}^{1}\mathrm{d}t\,\omega ^{A_{1}}\wedge_{j=2}^{l}\left(t\hat{\mathrm{d}}\omega+\frac{1}{2}t^{2}[\omega,\omega]_{L}\right)^{A_{j}} \tag{63}\] is the Chern-Simons form associated with the symmetric invariant polynomial \(Q\). 
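A minimal sanity check of the local exactness statement (62) can be done in the simplest setting of a single abelian generator on flat space, where the second Chern class reduces to \(F\wedge F\) and the associated Chern-Simons form to \(A\wedge\mathrm{d}A\). The snippet below verifies the component identity \(\epsilon^{\mu\nu\rho\sigma}\partial_{\mu}(A_{\nu}F_{\rho\sigma})=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}\) for an arbitrary polynomial potential; the abelian, flat-space reduction and the particular \(A_{\mu}\) are illustrative assumptions that say nothing about the general algebroid statement, but they make the transgression relation tangible.

```python
import itertools
import sympy as sp

x = sp.symbols('x0:4')

# An arbitrary polynomial abelian potential A_mu(x); illustrative choice only.
A = [x[1] * x[2], x[0]**2, x[3] * x[0], x[1]**2 * x[2]]

def F(mu, nu):
    """Abelian field strength F_{mu nu} = d_mu A_nu - d_nu A_mu."""
    return sp.diff(A[nu], x[mu]) - sp.diff(A[mu], x[nu])

def levi_civita(p):
    """Signature of a permutation of (0, 1, 2, 3); zero if an index repeats."""
    if len(set(p)) < 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    return sign

indices = list(itertools.product(range(4), repeat=4))

# d(A ^ dA) in components:  eps^{mu nu rho sigma} d_mu ( A_nu F_{rho sigma} )
lhs = sum(levi_civita(p) * sp.diff(A[p[1]] * F(p[2], p[3]), x[p[0]]) for p in indices)

# F ^ F in components:      (1/2) eps^{mu nu rho sigma} F_{mu nu} F_{rho sigma}
rhs = sp.Rational(1, 2) * sum(levi_civita(p) * F(p[0], p[1]) * F(p[2], p[3]) for p in indices)

print(sp.expand(lhs - rhs))   # expected: 0
```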
Note that (62) indicates that there does not exist \(\gamma\in\Omega^{2l-2}(A)\) such that \(\mathscr{C}_{Q}=\hat{\mathrm{d}}\gamma\), and \(\mathscr{C}_{Q}\) can only be determined up to a \(\hat{\mathrm{d}}\) closed term. As we will see, a characteristic class \(\lambda_{Q}(\Omega)\) and its associated Chern-Simons form \(\mathscr{C}_{Q}(\omega)\) play central roles in the cohomological analysis of anomalies. ### Descent Equations and the Consistent Anomaly Now, let us move into the trivialized algebroid and work in the consistent splitting. As we have shown, in the consistent splitting \(\omega_{\tau}=b-\varpi\), and \(\hat{\mathrm{d}}_{\tau}\to\mathrm{d}+s\). It is therefore natural to organize the Chern-Simons form order by order in the bicomplex \(\Omega(M,L)\) as \[\mathscr{C}_{Q}(b-\varpi)=\sum_{r+s=2l-1}\alpha^{(r,s)}(b,\varpi)\,, \tag{64}\] where \(\alpha^{(r,s)}(b,\varpi)\in\Omega^{(r,s)}(M,L)\), and \(\alpha^{(2l-2,1)}(b,\varpi)=\mathscr{C}_{Q}(b)\). Combining (56) and (62) yields \[\hat{\mathrm{d}}_{\tau}\mathscr{C}_{Q}(b-\varpi)=\lambda_{Q}(\Omega)=\lambda_{ Q}(F)=\mathrm{d}\mathscr{C}_{Q}(b)\,. \tag{65}\] From this point it is straightforward to derive the descent equations simply by plugging (64) into (65), and enforcing the equality order by order in the bi-complex \(\Omega^{(r,s)}(M,L)\). The descent equations can be expressed as \[\mathrm{d}\alpha^{(r,s)}(b,\varpi)+\mathrm{s}\alpha^{(r+1,s-1)}(b,\varpi)=0\,, \qquad r+s=2l-1\,,\quad r\neq 2l-1\,, \tag{66}\] In particular, the term with \(r=2l-3\) yields the Wess-Zumino consistency condition: \[\mathrm{d}\alpha^{(2l-3,2)}(b,\varpi)+\mathrm{s}\alpha^{(2l-2,1)}(b,\varpi)=0\,. \tag{67}\] On the other hand, from the fact that \(\mathscr{C}_{Q}(b-\varpi)\) is not \(\hat{\mathrm{d}}_{\tau}\) exact we also have \[\alpha^{(2l-2,1)}(b,\varpi)\neq\mathrm{d}\gamma^{(2l-3,1)}(b,\varpi)+\mathrm{ s}\gamma^{(2l-2,0)}(b,\varpi)\,. \tag{68}\] The term \(\alpha^{(2l-2,1)}(b,\varpi)\) satisfying (67) and (68) is a candidate to be the density of the consistent anomaly (see [24, 46, 71] for a description from a physical and algebraic perspective). Thus, we have now demonstrated that the consistent anomaly arises naturally in the algebroid context: \[\mathfrak{a}_{\mathrm{con}}=\int_{M}\alpha^{(2l-2,1)}(b,\varpi)\,. \tag{69}\] ### The Horizontal-Vertical Splitting and the Covariant Anomaly Strictly speaking, the results discussed in the previous subsection are merely a reformulation of those obtained in the BRST analysis [72], although now they come from a transparent formal and geometric foundation which makes their origin and meaning clear. However, beyond simply improving our interpretation of the BRST analysis, we would now like to demonstrate that the algebroid approach has the potential to produce new results in the study of anomalies. As we have stressed, the trivialized algebroid has two relevant splittings. By analyzing the cohomology of the consistent splitting above we found the consistent anomaly. This inspires the question of whether the covariant splitting also has an interpretation related to an anomaly. Following the previous subsection, we can instead organize the Chern-Simons form on \(A_{\tau}\) order by order in the bi-complex \(\Omega^{(r,s)}(H_{\tau},V_{\tau})\). The most transparent way of doing this is by expanding the Chern-Simons form as a polynomial in the connection \(\omega\in\Omega^{1}(V;L)\) and its curvature \(\Omega\in\Omega^{2}(H;L)\). 
Here again we see the Russian formula playing a crucial role in dictating that the curvature can generate a sub-algebra of \(\Omega(H_{\tau})\). The expansion of the Chern-Simons form can now be written as \[\mathscr{C}_{Q}(\omega)=\sum_{r+s=2l-1}\beta^{(r,s)}(\omega,\Omega)\,, \tag{70}\] where \(\beta^{(r,s)}(\omega,\Omega)\in\Omega^{(r,s)}(H,V)\) contains \(r/2\) factors of the curvature and \(s\) factors of the connection. We will now show that the covariant splitting directly produces the covariant anomaly. As was established in [73, 74, 67] the covariant anomaly is obtained from the free variation of the Chern-Simons form with respect to the connection. Computing this variation in the algebroid context, one arrives at the following formula: \[\delta\mathscr{C}_{Q}(\omega)=l\beta^{(2l-2,1)}(\delta\omega,\Omega)+\hat{ \mathrm{d}}\Theta(\omega,\delta\omega). \tag{71}\] It can further be shown that \(\beta^{(2l-2,1)}(\delta\omega,\Omega)\) has the explicit form:7 Footnote 7: The derivation of this result was given in [67] in the context of the principal bundle, but the algebra carries over to the Lie algebroid case directly as we mentioned in the preceding footnote. \[\beta^{(2l-2,1)}(\delta\omega,\Omega)=Q(\underbrace{\Omega,\ldots,\Omega}_{l-1},\delta\omega)\,. \tag{72}\] Hence, the covariant anomaly can be read off from the first term in (71). We therefore recognize that the covariant anomaly is intimately related to the term of order one in the vertical part of the Lie algebroid exterior algebra appearing in the expansion of the Chern-Simons form. This establishes a pleasant symmetry between the covariant anomaly and the consistent anomaly, since the consistent anomaly was proportional to the "ghost number" one term in the expansion of the Chern-Simons form when viewed in the consistent splitting. We should note that from this point of view, the consistent and covariant anomalies do not coincide precisely because \(V^{*}\) is not canonical, depending on the connection. The covariant anomaly does not come with a series of descent equations that leads to a consistency condition. Instead, its defining property is that it is covariant with respect to the gauge transformation. In fact, we can now readily interpret the geometric difference between the consistent and covariant anomalies in the algebroid formulation. The former, being written in the consistent splitting of the algebroid, respects the nilpotency of the coboundary operator \(\hat{\mathrm{d}}\) in both factors of its associated bi-complex but spoils the gauge covariance. Conversely, the latter respects the covariant splitting defined by the connection \(\omega\) and this is endowed with the gauge covariance. Such a conclusion was not possible from the perspective of the BRST complex, precisely because it lacked a geometry for its connection to define a covariant splitting. ### Examples We close this section by exploring a pair of illuminating examples, namely the chiral anomaly and the (type A) Lorentz-Weyl anomaly in \(2d\). In both cases the covariant and consistent forms of the anomaly are deduced by analyzing an appropriate characteristic class and its associated Chern-Simons form. The analysis done here can easily be generalized to arbitrary even dimension. #### 4.4.1 Chiral Anomaly in \(2d\) The analysis of the chiral anomaly arises in the context of an Atiyah Lie algebroid \(A\) derived from a principal bundle \(P(M,G)\), where \(G\) is a semisimple Lie group. 
The characteristic class that is relevant to the chiral anomaly in \(2d\) is the second Chern class8 Footnote 8: For simplicity, we have taken a basis such that the second Killing form is given by \(\delta_{AB}\). \[\mathrm{ch}_{2}(\Omega)=\delta_{AB}\:\Omega^{A}\wedge\Omega^{B}\,. \tag{73}\] The Chern-Simons form associated with \(\mathrm{ch}_{2}(\Omega)\) can be deduced by employing the transgression formula (61): \[\mathscr{C}_{2}(\omega)=\delta_{AB}\left(\omega^{A}\wedge\hat{\mathrm{d}} \omega^{B}+\frac{1}{3}\omega^{A}\wedge[\omega,\omega]_{L}^{B}\right)\,. \tag{74}\] Using (74), we can easily determine the algebraic form of candidates for the covariant and consistent forms of the anomaly. To begin, still working in the algebroid \(A\) we can decompose (74) order by order in the bi-complex \(\Omega(H,V)\) by re-expressing it as a polynomial in the curvature and connection; that is, where there is a \(\hat{\mathrm{d}}\omega\) we will replace it by \(\Omega-\frac{1}{2}[\omega,\omega]_{L}\). The resulting expression is \[\mathscr{C}_{2}(\omega,\Omega)=\delta_{AB}\left(\omega^{A}\wedge\Omega^{B}- \frac{1}{6}\omega^{A}\wedge[\omega,\omega]_{L}^{B}\right)\,. \tag{75}\] In other words, the various terms in (70) are given by \[\beta^{(2,1)}(\omega,\Omega)=\delta_{AB}\;\omega^{A}\wedge\Omega^{B}\,,\qquad \beta^{(0,3)}(\omega,\Omega)=-\frac{1}{6}\delta_{AB}\;\omega^{A}\wedge[\omega, \omega]^{B}_{L}\,, \tag{76}\] from which we can read off by applying (71) that the covariant anomaly polynomial is given in terms of the curvature \(2\delta_{AB}\Omega^{B}\), as expected. To obtain the consistent anomaly polynomial, we pass to the trivialized Lie algebroid. That is, we specify a map \(\tau:A\to A_{\tau}\) along with its inverse map \(\overline{\tau}:A_{\tau}\to A\). Recall from Subsection 3.1 that such a morphism implies the following relationships between the connections, curvatures, and coboundary operators of the two algebroids: \[\overline{\tau}^{*}\omega=\omega_{\tau}=b-\varpi\,,\qquad\overline{\tau}^{*} \Omega=\Omega_{\tau}=F\,,\qquad\overline{\tau}^{*}\circ\hat{\mathrm{d}}=\hat{ \mathrm{d}}_{\tau}\circ\overline{\tau}^{*}\,. \tag{77}\] Trivializing the Chern-Simons form, it follows from (53) that \[\overline{\tau}^{*}\mathscr{C}_{2}(\omega)=\mathscr{C}_{2}(\omega_{\tau})= \mathscr{C}_{2}(b)+\delta_{AB}\left(-\varpi^{A}\wedge\mathrm{d}b^{B}-\frac{1} {2}b^{A}\wedge[\varpi,\varpi]^{B}_{L}+\frac{1}{6}\varpi^{A}\wedge[\varpi, \varpi]^{B}_{L}\right)\,. \tag{78}\] Then, the expansion (64) gives \[\begin{split}\alpha^{(3,0)}(b,\varpi)&=\mathscr{C} _{2}(b)\,,\qquad\alpha^{(2,1)}(b,\varpi)=-\delta_{AB}\varpi^{A}\wedge\mathrm{d} b^{B}\,,\\ \alpha^{(1,2)}(b,\varpi)&=-\frac{1}{2}\delta_{AB}b^ {A}\wedge[\varpi,\varpi]^{B}_{L}\,,\qquad\alpha^{(0,3)}(b,\varpi)=\frac{1}{6 }\delta_{AB}\varpi^{A}\wedge[\varpi,\varpi]^{B}_{L}\,.\end{split} \tag{79}\] The consistent anomaly polynomial can therefore be read off from the ghost number one contribution to (78), which is \(-\delta_{AB}\varpi^{A}\wedge\mathrm{d}b^{B}\). Recall that \(-\varpi^{A}\) corresponds to the ghost field, the consistent anomaly can be recognized \(\delta_{AB}\mathrm{d}b^{B}\), which is again in agreement with the known result. As promised, the covariant anomaly, which is written in terms of \(\Omega\), is _covariant_, while the consistent anomaly, which is written in terms of \(\mathrm{d}b\), is not. Moreover, it is straightforward to show that the series of terms \(\alpha^{(r,s)}(b,\varpi)\) satisfy the descent equations as introduced in (66). 
#### 4.4.2 Lorentz-Weyl Anomaly in \(2d\) To analyze the Lorentz-Weyl (LW) anomaly, let us begin by introducing the geometric framework and characteristic classes for a Lorentz-Weyl structure in arbitrary even dimension \(d=2l\). Consider an Atiyah Lie algebroid \(A\) derived from a principal \(G\)-structure with \(G=SO(1,d-1)\times\mathds{R}_{+}\subset GL(d,\mathds{R})\). Here \(SO(1,d-1)\) is the local Lorentz group, while \(\mathds{R}_{+}\) corresponds to local Weyl rescaling. The corresponding Lie algebra can be expressed as \(\mathfrak{g}=\mathfrak{so}(1,d-1)\oplus\mathfrak{r}_{+}\). The adjoint bundle of the group \(G\) is given by \(L=P\times_{G}\mathfrak{g}=\mathfrak{so}_{L}\oplus L_{W}\), where \(L_{L}=P\times_{SO(1,d-1)}L(1,d-1)\) and \(L_{W}=P\times_{\mathds{R}_{+}}\mathfrak{r}_{+}\) correspond to the Lorentz and Weyl factors, respectively. The connection reform on \(A\) will therefore split as \(\omega=\omega_{L}+\omega_{W}\) where \(\omega_{L}\) and \(\omega_{W}\) are the connection reform on the Lorentz and Weyl sub-algebroids, respectively. The curvature of the connection reform \(\omega\) will have two pieces \[\Omega=\hat{\mathrm{d}}\omega+\frac{1}{2}[\omega,\omega]_{L}=\Omega_{L}+ \Omega_{W}\,, \tag{80}\] where \(\Omega_{L}\in\Omega^{2}(H;L_{L})\) is related to the Riemann tensor and \(\Omega_{W}\in\Omega^{2}(H;L_{W})\) is the gauge field strength of the Weyl connection. We can see that the curvature \(\Omega\) remains horizontal. There are two natural invariant structures associated with \(L\). The Weyl factor \(L_{W}\) is an Abelian subalgebra of \(L\). Thus, the map \(\mathrm{tr}_{W}:L\to L_{W}\) which projects an element \(\underline{\mu}\in\Gamma(L)\) down to \(L_{W}\) will be invariant under the adjoint action of \(L\) on itself. In a linear representation of \(L\) given by \(v_{E}:L\to\mathrm{End}(L)\), the generators of \(L_{L}\) are represented by traceless antisymmetric matrices. Hence, as the notation indicates, the map \(\mathrm{tr}_{W}\) can also be understood by selecting a representation and computing the ordinary trace. In other words, for any representation \(E\) and given \(\mathrm{tr}:\mathrm{End}(E)\to C^{\infty}(M)\) we have \[\mathrm{tr}_{W}(\underline{\mu})=\mathrm{tr}\circ v_{E}(\underline{\mu})\,. \tag{81}\] Similarly, there is an invariant structure on \(L_{L}\) which will correspond to the Pfaffian. In particular we define \[\epsilon:L^{\otimes l}\to C^{\infty}(M)\,. \tag{82}\] One of the defining properties of the map \(\epsilon\) is that \(\epsilon(\underline{\mu}_{1},\ldots,\underline{\mu}_{l})=0\) if \(\underline{\mu}_{i}\in\Gamma(L_{W})\) for any \(i\). In other words, \(\epsilon\) only sees the orthogonal factor of \(G\), and is an invariant polynomial on this factor. As was the case with the trace, \(\epsilon\) can be computed by passing to a linear representation. To be precise, we should take a \(2l\)-dimensional representation space \(E\) equipped with an inner product \(g_{E}:E\times E\to C^{\infty}(M)\) of appropriate signature. Then, we can define the map \(w_{E}:L\to\wedge^{2}E^{*}\) such that given \(\underline{\psi}_{1},\underline{\psi}_{2}\in\Gamma(E)\) we have \[w_{E}(\underline{\mu})(\underline{\psi}_{1},\underline{\psi}_{2})=g_{E}\left( \underline{\psi}_{1},v_{E}(\underline{\mu})(\underline{\psi}_{2})\right)\,. \tag{83}\] Notice that \(w_{E}\circ\mathrm{tr}_{W}=0\), since a Weyl rescaling cannot be represented by an antisymmetric matrix. 
Given an oriented orthonormal basis \(\{\underline{e}_{a}\}\) for \(E\) along with its dual basis \(\{e^{a}\}\), with \(a=1,\ldots,2l\), we can define an \(SO(1,d-1)\) invariant volume form on \(E^{\mathfrak{g}}\) \[\mathrm{Vol}_{E}\equiv\epsilon_{a_{1}\cdots a_{d}}e^{a_{1}}\wedge\cdots\wedge e ^{a_{d}}\,. \tag{84}\] Thus, in this representation we can express: \[\epsilon(\underline{\mu}_{1},\ldots,\underline{\mu}_{l})=\epsilon_{a_{1}b_{1} \cdots a_{l}b_{l}}w_{E}(\underline{\mu}_{1})^{a_{1}b_{1}}\cdots w_{E}( \underline{\mu}_{l})^{a_{l}b_{l}}=\epsilon^{a_{1}}{}_{b_{1}}\cdots^{a_{l}}{}_ {b_{l}}v_{E}(\underline{\mu}_{1})^{b_{1}}{}_{a_{1}}\cdots v_{E}(\underline{ \mu}_{l})^{b_{l}}{}_{a_{l}}\,. \tag{85}\] This construction satisfies the above-mentioned properties since \(w_{E}\circ\mathrm{tr}_{W}(\underline{\mu})=0\) and \[\epsilon(\underline{\mu},\ldots,\underline{\mu})=\mathrm{Pf}(\underline{\mu} )\,. \tag{86}\] Note that this construction requires \(d\) to be even, as the \(\epsilon^{a_{1}}{}_{b_{1}}\cdots^{a_{l}}{}_{b_{l}}\) has an equal number of up and down indices (signifying its Weyl invariance). We are now prepared to introduce the relevant characteristic class for the LW anomaly. If we intend to derive the anomaly for a \(d=2l\) dimensional theory, we must construct a characteristic class of form degree \(d+2=2(l+1)\). Hence, we must construct a symmetric and invariant linear map \(Q^{LW,l+1}:L^{\otimes(l+1)}\to\mathbbm{R}\). As we have discussed, we have at our disposal two invariant objects corresponding to the trace (81) and the Pfaffian (82). We therefore obtain an \((l+1)\)-order symmetric invariant polynomial by taking the symmetrized product of these two maps: \[Q^{LW,l+1}(\underline{\mu}_{1},\ldots\underline{\mu}_{l+1})=\sum_{\pi} \epsilon(\underline{\mu}_{\pi(1)},\ldots,\underline{\mu}_{\pi(l)})\,\mathrm{ tr}_{W}(\underline{\mu}_{\pi(l+1)})\,, \tag{87}\] where \(\pi\) denotes the permutations of \((1,\ldots,l+1)\). The characteristic class associated with \(Q^{LW,l+1}\) is therefore given by \(\lambda_{Q^{LW,l+1}}(\Omega)\) as dictated in (59). While \(\lambda_{Q^{LW,l+1}}\) is the appropriate characteristic class in the LW context, in other situations (such as a simple or semi-simple group) one finds an Euler class.10 Let us now specialize to the case \(d=2\) and show that \(\lambda_{Q^{LW,2}}\) gives rise to the LW anomaly. The characteristic class of interest takes the following form: \[\lambda_{Q^{LW,2}}(\Omega)=\frac{1}{2}\left(\epsilon(\Omega)\wedge\mathrm{tr}_{W }(\Omega)+\mathrm{tr}_{W}(\Omega)\wedge\epsilon(\Omega)\right)\,. \tag{88}\] In the \(2d\) case, since the structure group \(G=SO(1,1)\times\mathbbm{R}_{+}\) is Abelian, we can write \(\Omega=\hat{\mathrm{d}}\omega\). Hence, the Chern-Simons form can be obtained as \[\mathscr{C}_{LW,2}(\omega,\Omega)=\frac{1}{2}\left(\epsilon(\omega)\wedge \mathrm{tr}_{W}(\Omega)+\mathrm{tr}_{W}(\omega)\wedge\epsilon(\Omega)\right)\,. \tag{89}\] To read off the covariant form of the anomaly polynomial let us pass to a representation on \(E\). In this representation, we have \[\omega=\hat{\omega}\epsilon^{a}{}_{b}+\hat{a}\delta^{a}{}_{b}\,,\qquad\Omega= \hat{R}\epsilon^{a}{}_{b}+\hat{f}\delta^{a}{}_{b}\,. \tag{90}\] Then using (81) and (85) we can write the covariant anomaly as [ignoring the factor of \(l=2\) in (71)] \[-\Omega_{W}\epsilon^{a}{}_{b}+\mathrm{Pf}(\Omega_{L})\delta^{a}{}_{b}\,. 
\tag{91}\] Noticing that \(\epsilon(\omega)\) and \(\mathrm{tr}_{W}(\omega)\) pick out the Lorentz and Weyl parts of the connection, respectively, the first term in the above result should be interpreted as the Lorentz anomaly, which vanishes when the Weyl connection is turned off; the second term is the Weyl anomaly in \(2d\), which is proportional to the Ricci scalar of the spacetime. Therefore, the LW anomaly is the mixed anomaly between the Lorentz and Weyl symmetries. In fact, it is easy to see that by adding a total derivative term, one can remove the Lorentz anomaly or the Weyl anomaly but cannot remove both simultaneously. To obtain the consistent form, we must employ a Lie algebroid trivialization. Under the trivialization we find that \[\overline{\tau}^{*}\omega=b-\varpi_{L}+a-\varpi_{W}\,,\qquad\overline{\tau}^{*}\Omega=R+f\,,\qquad\overline{\tau}^{*}\circ\hat{\mathrm{d}}=(\mathrm{d}+\mathrm{s}_{L}+\mathrm{s}_{W})\circ\overline{\tau}^{*}\,, \tag{92}\] where \(b\) and \(a\) are the spin connection and Weyl connection on \(M\), and \(R\) and \(f\) are their curvature 2-forms, respectively. The pairs \((\varpi_{L},\mathrm{s}_{L})\) and \((\varpi_{W},\mathrm{s}_{W})\) are the Maurer-Cartan forms and BRST operators for the \(SO(1,1)\) and \(\mathbbm{R}_{+}\) factors of \(L\). Let \(B=b+a\) and \(\varpi=\varpi_{L}+\varpi_{W}\) denote the combined gauge field and Maurer-Cartan forms. We subsequently identify the consistent LW anomaly from \(Q^{LW,2}(\varpi,\mathrm{d}B)\). Since in the index notation of the representation we have \[(\mathrm{d}B)^{a}{}_{b}=R\epsilon^{a}{}_{b}+f\delta^{a}{}_{b}\,, \tag{93}\] the consistent form of the LW anomaly is merely the pullback of the covariant form by the trivialization \(\overline{\tau}\), which reads \[-f\epsilon^{a}{}_{b}+\mathrm{Pf}(R)\delta^{a}{}_{b}\,, \tag{94}\] which has the same form as (91). This follows in this particular case from the fact that \(G\) is an Abelian group when \(d=2\). A simplified account of the LW anomaly in two dimensions appeared recently in Appendix A of [79]. Note that here we have focussed on the type A Weyl anomaly, and the type B Weyl anomaly remains an open question in general dimension. A more elaborate discussion is required since obstruction tensors are expected to make an appearance [80, 81, 82, 83, 84]. We expect to return to this issue, as well as other \(G\)-structures, in a future publication. ## Conclusions In the introduction we raised a series of questions about the BRST formalism. In the course of this paper we have provided answers to each of these questions by geometrically formalizing the BRST complex in terms of the Atiyah Lie algebroid. As we promised in the introduction, each answer follows immediately from the geometry of the Atiyah Lie algebroid. **Q:** Why should the Grassmann valued fields \(c^{A}(x)\), which started their life in the BRST quantization procedure, have an interpretation as the generators of local gauge transformations? And why is it reasonable to combine the de Rham complex and the ghost algebra into a single exterior bi-algebra? **A:** In the algebroid context the Maurer-Cartan form \(\varpi\in\Omega^{1}(L;L)\) plays the role of the gauge ghost, and is also a generator of local gauge transformations. Working in the consistent splitting, the exterior algebra of the trivialized algebroid \(A_{\tau}\) subsequently takes the form of a bi-complex \(\Omega^{(p,q)}(M,L;E)\), where \(p\) is the form degree with respect to the de Rham cohomology of \(M\), and \(q\) is the "ghost number".
The coboundary operator \(\hat{\mathrm{d}}_{\tau}\) takes explicitly the form \(\mathrm{d}+\mathrm{s}\) on this exterior algebra, where \(\mathrm{d}\) is the de Rham differential and \(\mathrm{s}\) is the BRST operator. **Q:** Why is it reasonable to consider \(\hat{A}=A+c\) as a "connection", and moreover what horizontal distribution does it define? **A:** Still in the context of the trivialized Lie algebroid, one can introduce a connection reform, \(\omega_{\tau}:A_{\tau}\to L\), defining the horizontal distribution \(H_{\tau}=\ker(\omega_{\tau})\) for which \(A_{\tau}=H_{\tau}\oplus V_{\tau}\). In the consistent splitting \(\omega_{\tau}=b-\varpi\), where \(b:TM\to L\) is a local gauge field, and \(\varpi:L\to L\) is the Maurer-Cartan form on \(L\). Hence, \(\omega\) reproduces the "connection" \(\hat{A}\) defined in the BRST complex, where again we see the role of the gauge ghost being played by the Maurer-Cartan form. **Q:** Why should the "curvature" \(\hat{F}\) be taken to have ghost number zero? And why does enforcing this requirement turn the BRST operator \(s\) into the Chevalley-Eilenberg operator for the Lie algebra of the structure group? **A:**\(\hat{F}\) in the context of the trivialized Lie algebroid is represented by \(\Omega_{\tau}=\hat{\mathrm{d}}_{\tau}\omega_{\tau}+\frac{1}{2}[\omega_{\tau},\omega_{\tau}]_{L}\), namely the curvature associated with \(\omega_{\tau}\), which is fully horizontal as a built-in geometric property of the algebroid. In the consistent splitting, this reproduces the Russian formula and the BRST transformation as we presented in (56). The culmination of all of these facts give rise to the descent equations (66) and the Wess-Zumino consistency condition (67). Given a characteristic class \(\lambda_{Q}(\Omega)\) with associated Chern-Simons form \(\mathscr{C}_{Q}(\omega)\) we have \[\hat{\mathrm{d}}_{\tau}\mathscr{C}_{Q}(\omega)=(\mathrm{d}+s)\mathscr{C}_{Q}( b-\varpi)=\mathrm{d}\mathscr{C}_{Q}(b) \tag{95}\] From the above equation, one can immediately compute the _consistent_ anomaly polynomial, which corresponds to the ghost number one contribution to \(\mathscr{C}_{Q}(b-\varpi)\), and can be shown to be an element of the first cohomology of the BRST operator \(s\) once integrated over a space of appropriate dimension. Furthermore, one can also obtain the _covariant_ form of the anomaly by viewing the Chern-Simons form in the covariant splitting and extracting the terms contributing with one exterior power in the vertical sub-bundle of the associated exterior algebra (multiplied by the order \(l\) of \(Q\)). Although the formulae for finding the consistent and covariant anomalies have been known [67], our approach to these anomalies provides a meaningful explanation as to why the consistent anomaly is consistent and the covariant anomaly is covariant. From the algebroid perspective, they just correspond to different choices of splitting. To understand the complete picture of the consistent and covariant anomalies as well as the anomaly inflow mechanism that relates them, we will have to further exploit the structure of the configuration space of Lie algebroid connections. In this paper we established a powerful approach for studying Lie algebroid morphisms in terms of commutative diagrams, which found a physical interpretation as a unified tool for implementing diffeomorphisms and gauge transformations. 
In a partner paper [65] we make use of this construction to define a new geometric formalism for understanding the extended configuration space of arbitrary gauge theories. We refer to this construction as the configuration algebroid. We demonstrated that the configuration algebroid provides a suitable quantification of the local degrees of freedom in a gauge theory, leading to a fully integrable algebra of charges associated with the local symmetries of a theory. From the point of view of the configuration algebroid, the presence of anomalies is associated with the question of whether the charge algebra is centrally extended. In forthcoming work we will combine the insights of this paper with [65] to describe anomalies as topological features of the configuration algebroid, and demonstrate how the anomaly inflow mechanism can be incorporated into the algebroid language. ## Acknowledgements We thank Luca Ciambelli, Pin-Chun Pai, Mike Stone and Manthos Karydas for conversations. This work was supported by the U.S. Department of Energy under contract DE-SC0015655.
2310.02317
Di-Higgs Signatures in Neutral Naturalness
The Higgs boson was the last fundamental piece of the Standard Model to be experimentally confirmed. LHC is embarked in a quest to probe the possibility that this particle provides a portal to new physics. One front of this quest consists in measuring the interactions of the Higgs with itself and with other SM particles to a high precision. In a more exotic front, the LHC is searching for the possibility that a pair of Higgses (HH) is the evidence of a new resonance. Such resonances are predicted in models with extended Higgs sectors, extra dimensions, and in models with exotic bound states. In this paper we show how scalar quirks in Folded Supersymmetry can give rise to HH resonances. We point out a viable sector of the parameter space in which HH is the dominant decay channel for these {\it squirkonium} bound states. We found that future runs of the LHC could discover HH resonances in the range of 0.5 - 1.6 TeV under reasonable assumptions. Furthermore, for a given mass and width of the HH signal, the model predicts the branching ratio of the subsequent decay modes of the heavy resonance. Finding the extra decay modes in the predicted pattern can serve as a smoking gun to confirm the model.
Mario W. Barela, Rodolfo Capdevilla
2023-10-03T18:00:07Z
http://arxiv.org/abs/2310.02317v2
# Di-Higgs Signatures in Neutral Naturalness ###### Abstract The Higgs boson was the last fundamental piece of the Standard Model to be experimentally confirmed. LHC is embarked in a quest to probe the possibility that this particle provides a portal to new physics. One front of this quest consists in measuring the interactions of the Higgs with itself and with other SM particles to a high precision. In a more exotic front, the LHC is searching for the possibility that a pair of Higgses (HH) is the evidence of a new resonance. Such resonances are predicted in models with extended Higgs sectors, extra dimensions, and in models with exotic bound states. In this paper we show how scalar quirks in Folded Supersymmetry can give rise to HH resonances. We point out a viable sector of the parameter space in which HH is the dominant decay channel for these _squikonium_ bound states. We found that future runs of the LHC could discover HH resonances in the range of 0.4 - 1.7 TeV under reasonable assumptions. Furthermore, for a given mass and intensity of the HH signal, the model predicts the branching ratio of the subsequent decay modes of the heavy resonance. Finding the extra decay modes in the predicted pattern can serve as a smoking gun to confirm the model. + Footnote †: preprint: FERMILAB-PUB-23-472 ## I 1. Introduction The current particle physics paradigm is that the Standard Model (SM) is a remarkable and, perhaps, the most successful existing physical theory. However, it is also known to be a low energy description of a much larger construction. This is because of the variety of phenomenological problems that the SM cannot address such as the Baryon asymmetry of the Universe, the mechanism for neutrino mass, flavor, and dark matter, to cite a few. One of the guiding principles in the search for physics beyond the SM has been Naturalness and the Hierarchy Problem (HP). This problem arises because the Higgs mass is quadratically sensitive to new physics scales, and becomes even more intriguing by the lack of evidence of new physics in ever increasing experimental energies. The SM is said unnatural for it does not contain a mechanism to stabilize the Higgs mass. Solutions to the HP typically feature top partners responsible for cancelling the quadratic contribution to the Higgs mass from top quark loops. This is the case in the Minimal Supersymmetric version of the SM (MSSM). Unfortunately, the fact that the mass of the top partners has been pushed to an uncomfortably high regime by current data gives rise to a smaller leftover tuning referred to as _Little Hierarchy Problem_. It is the strong interacting quality of the top partners that results in the powerful constraints on their masses. This observation triggered the proposition of _Neutral Naturalness_[1; 2; 3; 4] models in which the top partners are neutral with respect to one or various of the subgroups of the SM group. Folded Supersymmetry (F-SUSY) is an example of this type of construction in which top partners are not charged under the SM QCD, but under a _dark_ version of it. In this theory the Higgs mass is protected at the one loop level up to characteristic energies of tens of TeV. At this scale and above, it is possible to define an ultraviolet completion of F-SUSY with a fifth dimension compactified over an orbifold [2]. In F-SUSY the dark sector squarks are all heavier than the dark QCD hadronization scale. This causes them to behave as quirks (or squirks for its scalar nature). 
Pair production of these states results in excited _squirksonium_ bound states that relax down to the ground state and decay promptly at collider time scales [5]. Neutral squirkonium, here denoted as \(X_{q}^{0}\), can be produced via \(pp\to\gamma/Z\to\tilde{q}\tilde{q}^{*}\). Typically, these states preferentially decay into dark glueballs independently on the generation of the constituent squarks. Charged squirkonium \(X_{\tilde{q}}^{+}\), produced through \(pp\to W\to\tilde{q}^{\prime}\tilde{q}^{*}\), of the first and second generation will have a dominant branching ratio (BR) to \(W+\gamma\)[5; 6; 7; 8]. Now, third-generation charged squirkonium will undergo beta decay in a time scale much faster than relaxation [5], causing the system to decay to \(W+X_{q}^{0}\), where \(q\) represents the lighter between stop and sbottom. This final state shows promising results in a variation of the model where \(X_{\tilde{q}}^{0}\) is long-lived [9]. F-SUSY production of third generation squirks always derives in neutral squirkonium, either by direct production or via beta decay of charged ones. This neutral state then preferentially decays to dark glueballs. One feature of the model is that the \(0^{++}\) dark glueball state can mix with the Higgs boson through loops [10; 11]. This mixing causes the dark glueballs to have a naturally small coupling to SM particles, making them long-lived and a great signal for neutral naturalness models [12; 13; 14]. However, glueball production is known to decrease as the mass splitting between the two stop eigenstates increases [13]. This is the regime that we will explore in this paper. We will see how increasing the soft trilinear term \(A_{t}t_{L}t_{R}H\) that controls the mixing of the two eigestops, causes the neutral _stoponium_ state \(X^{0}_{t}\) to predominantly decay to a pair of Higgs bosons. A similar observation was made long ago in the context of the MSSM, where studies of stoponium bound states [15, 16, 17, 18, 19, 20, 21, 22, 23, 24] have shown that Higgs decay modes dominate for large stop mixing angles. However, stoponium bound states can only be realized in the MSSM for low stop masses, in a regime excluded by the LHC. Our study brings back the possibility that HH resonances have a connection with the third generation of (s)quarks and Naturalness. Furthermore, we will see how the prediction of the model lies in a range of masses that will be soon explored by the LHC. This paper is organized as follows: Sec. II gives a brief summary of the model and its unique phenomenological features. Sec. III presents our parametric setting where we define the benchmarks that we will analyze. We also show the theoretical bounds on the parameter space of interest from perturbative unitarity. Sec. IV shows squarkonium production cross section and decay modes. In Sec. V one can find our results for observability of HH resonances at the LHC. Finally, Sec. VI shows our conclusions and discussion. ## 2 Scalar Quirks in Folded Susy In this section we provide a synthesis of F-SUSY concepts that are important for our our analysis. For a complete treatment of the model, including a description of the full supersymmetric ultraviolet completion, we refer the reader to [2]. In F-SUSY, the low energy theory is symmetric under the group \(SU(3)_{c}\times SU(3)_{c^{\prime}}\times SU(2)_{L}\times U(1)_{Y}\). The representation content is that of the MSSM, but with squarks charged not under \(SU(3)_{c}\), but under the _dark_ color \(SU(3)_{c^{\prime}}\). 
The model comprises an additional octet of gluons corresponding to the new color sector. In order to understand the origin of the strange dynamics this results in, one must note that the two strong force groups are related to each other in the ultraviolet completion of the theory by a \(Z_{2}\) symmetry. This ensures that the theory is fully Supersymmetric in the UV. As a consequence, the characteristic scales where confinement dynamics kicks in are close to each other, \(\Lambda_{c^{\prime}}\sim\Lambda_{c}\). In general, a pair-produced particle-antiparticle system will hadronize when the energy density of the flux tube (or string) approaches or exceeds \(2m_{1}\), where \(m_{1}\) is the lightest quark-like particle in the theory. In contrast to QCD, the QCD\({}^{\prime}\) particle content does not comprise any species with a mass \(m\) smaller than the typical string tension \(\Lambda_{c^{\prime}}\). Because of this, pair creation from the vacuum is suppressed as \(\exp(-m_{1}^{2}/\Lambda^{\prime 2})\) and a produced pair of QCD\({}^{\prime}\) particles will form a bound state instead of hadronizing. Owing to this odd behavior, particles charged under a strong group whose confining scale is much smaller than the mass of the lightest charged species are called _quirks_[25] - and, in F-SUSY, since they are supersymmetric partners, _squirks_. At LHC energies and for lightest quirk masses of up to \(\sim 1\) TeV, the squarkonium will typically be produced in a highly excited state. A semiclassical analysis [5] of the strong force bound state shows that the probability of decay only becomes appreciable after relaxation, _i.e._, after the excess energy is radiated away through emission of photons or glueballs, and the 2-particle system is left in the lowest lying angular momentum state. The decay of the squirkonium to the lightest states will, then, most likely have an \(s\) wave contribution. The possibility of detecting the soft signals of the relaxation period has been discussed in [26], where the _antenna_ pattern is the smoking gun signature. Soon after the proposal of F-SUSY, the same authors showed that the \(W+\gamma\) final state is the dominant decay mode for the first and second generations of squirks. They also showed that it is not possible to have a charged squarkonium bound state of the third generation because the heavier constituent will beta-decay in a timescale faster than relaxation [5]. This indicates that only neutral squirkonium of the third generation is possible, a state which preferentially decays to dark glueballs. Now, the third generation is of great importance, for it is the one intrinsically tied to Naturalness and the hierarchy problem. Our work is motivated by this connection, and we would like to study decay channels of the neutral third-generation squirkonium in F-SUSY beyond those explored in the literature, where long-lived glueballs seem to be one of the most interesting signals [12]. We will study the large soft trilinear coupling limit for stoponium, where the decay mode to HH can dominate over glueball formation. Our study only involves interactions of the third generation quarks and squarks and of the Higgs and gauge bosons. We will not make any attempt to fix classical problems of the MSSM like the \(\mu\) problem or the Higgs mass [27, 28, 29, 30].
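To get a feel for the string-breaking suppression \(\exp(-m_{1}^{2}/\Lambda^{\prime 2})\) quoted above, it helps to put in numbers. The values of \(m_{1}\) and \(\Lambda_{c^{\prime}}\) used below are illustrative assumptions only (the text does not fix them here); the point is simply that the exponent is enormous for any quirk-like hierarchy of scales.

```python
import math

# Illustrative numbers only: a squirk mass of a few hundred GeV and a dark
# confinement scale of order the QCD scale (assumptions, not values from the text).
m1 = 400.0         # GeV, lightest dark-colored state
Lambda_dark = 5.0  # GeV, dark confinement scale

# exp(-m1^2 / Lambda'^2) expressed as a power of ten (evaluating the
# exponential directly would underflow to zero).
log10_suppression = -(m1 / Lambda_dark) ** 2 / math.log(10.0)
print(f"string-breaking suppression ~ 10^({log10_suppression:.0f})")
# With these inputs the suppression is ~ 10^(-2780): pair creation from the
# vacuum is completely negligible, so the squirk pair forms a bound state
# instead of hadronizing.
```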
Our simplified analysis assumes: 1) The lightest stop is the lightest third generation squirk; 2) A neutral stoponium is produced from proton-proton collision at the LHC; 3) This state, initially highly excited, will promptly radiate away energy and angular momentum relaxing down to its ground state; 4) Finally, this ground state squarkonium will decay to a variety of channels with a narrow total width (\(\sim 5-10\%\)). In order to determine if one of these channels can overcome glueball formation, we calculate the complete set of branching ratios and analyze their variation over an interesting sector of parameter space. We now discuss the parameter space of interest in the next section. ## 3 Parameter Space and Unitarity The interactions relevant to our study involve third generation squarks, gauge bosons, and the Higgs. These comprise, in principle, the following free parameters \(\{\tan\beta,\,\mu,\,A_{t},\,A_{b},\,m_{\tilde{Q}_{L}},\,m_{\tilde{t}_{R}},\,m_{ \tilde{b}_{R}}\}\), where \(\tan\beta\) (or simply \(t_{\beta}\)) is the ratio \(v_{q}/v_{d}\) of the vacuum expectation values (vev) of the two Higgses in the model, \(\mu\) is the parameter of the supersymmetric quadratic scalar term, \(A_{q}\) are the soft trilinear terms of the form \(A_{q}H\widetilde{Q}_{L}\widetilde{q}_{R}\), and \(m_{\widetilde{Q}_{L}},m_{\tilde{t}_{R}},m_{\tilde{b}_{R}}\) are the squark soft masses. In order to define practical benchmarks, we choose a scenario in which all soft masses are equal _i.e._, \(m_{\widetilde{Q}_{L}}=m_{\tilde{t}_{R}}=m_{\tilde{b}_{R}}\equiv\widetilde{m}_{ \rm soft}\) and there is no mixing in the sbottom sector, meaning \(m_{\tilde{b}_{1}}=m_{\tilde{b}_{2}}=\widetilde{m}_{\rm soft}\). These choices leave us with the following set of free parameters \[\{t_{\beta},\;\mu,\;A_{t},\;m_{\tilde{t}_{1}}\}, \tag{1}\] where \(m_{\tilde{t}_{1}}\) (or simply \(m_{\tilde{t}}\)) is the mass of the lightest eigen-stop. A given choice of these parameters will determine the mass of the heaviest stop, the soft (and sbottom) mass, and mixing angles. In our analysis, we will vary the mass of the lightest stop between 200 GeV up to 1 TeV and the soft trilinear parameter from 1 up to a few TeV. It could be argued that a _natural_ choice for the other parameters is \((t_{\beta},\mu)\sim(1,m_{h})\), where \(m_{h}\) is the mass of the SM-like Higgs particle. A _tuned_ choice of \((t_{\beta},\mu)\) could be defined as one that reflects a hierarchy between the two vev of the model and between \(\mu\) and the EW scale. Without a rigorous definition of tuning, here we define a set of benchmarks (B1, B2, B3, B4) that go from very small to some degree of tuning: \[\begin{split}\text{B1:}&\quad\mu=200\,\text{GeV},\, t_{\beta}=1\\ \text{B2:}&\quad\mu=200\,\text{GeV},\,t_{\beta}=10\\ \text{B3:}&\quad\mu=1\,\text{TeV},\,t_{\beta}=1\\ \text{B4:}&\quad\mu=1\,\text{TeV},\,t_{\beta}=10. \end{split} \tag{2}\] ### Perturbative Unitarity As mentioned above, \(A_{t}\) is the scalar trilinear coupling that controls the \(H\tilde{t}_{1}\tilde{t}_{1}^{*}\) vertex strength. Increasing this parameter increases the splitting between the two eigen-stops \(\tilde{t}_{1},\tilde{t}_{2}\) which, as we will see below, in turn increases the production and HH decay rates of the squirkonium states of interest. However, trilinear terms like \(A_{t}\) cannot be set to arbitrarily large values for these parameters tend to create problems like vacuum instability, tachyonic states, or violation of perturbative unitarity [31; 32]. 
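The statement that a point in the space (1) of free parameters fixes the rest of the spectrum can be made explicit with the tree-level stop mass matrix. The sketch below inverts it under the benchmark simplification of a common soft mass, with the additional assumption (ours, not the text's) that electroweak D-terms are neglected, in which case the stop mixing is maximal; the function name, the numerical inputs and the convention \(X_{t}=A_{t}-\mu/t_{\beta}\) are ours.

```python
import math

m_t = 173.0  # GeV, approximate top quark mass

def stop_spectrum(tan_beta, mu, A_t, m_stop1):
    """Invert the tree-level stop mass matrix for the benchmark scenario of a
    common soft mass, additionally neglecting electroweak D-terms (an extra
    simplification of ours).  Inputs and outputs in GeV; returns the common
    soft mass, the heavy stop mass and the mixing angle."""
    X_t = A_t - mu / tan_beta     # stop mixing parameter (our convention)
    # With equal diagonal entries the mass matrix is
    #   [[m_soft^2 + m_t^2,  m_t X_t], [m_t X_t,  m_soft^2 + m_t^2]],
    # so the mixing is maximal and the eigenvalues are split by 2 m_t |X_t|.
    m_soft_sq = m_stop1**2 - m_t**2 + m_t * abs(X_t)
    if m_soft_sq < 0.0:
        raise ValueError("no real soft mass for these inputs")
    m_stop2 = math.sqrt(m_stop1**2 + 2.0 * m_t * abs(X_t))
    theta = math.pi / 4.0         # maximal mixing, |sin(2 theta)| = 1
    return math.sqrt(m_soft_sq), m_stop2, theta

# Example: benchmark B1 (mu = 200 GeV, tan(beta) = 1) with a 400 GeV light
# stop and A_t = 2 TeV.
m_soft, m_stop2, theta = stop_spectrum(1.0, 200.0, 2000.0, 400.0)
print(f"m_soft ~ {m_soft:.0f} GeV, m_stop2 ~ {m_stop2:.0f} GeV, theta = {theta:.2f} rad")
```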
The first two problems are under control within our reasonable benchmark region, and to analyze the third we now study the partial wave unitarity of the model. We begin from the partial-wave expansion of the (azimuthally symmetric) scattering amplitude for the scalar \(2\to 2\) process \(i\to f\equiv\{a,b\}\to\{c,d\}\), here denoted by \(\mathcal{M}_{if}(\theta)\). The \(j\)-th coefficient of the expansion is \[a_{if}^{j}=\frac{1}{32\pi}\sqrt{\frac{4|\mathbf{p}^{i}||\mathbf{p}^{f}|}{2^{\delta_{ab}}2^{\delta_{cd}}}}\int d\theta\mathcal{M}_{if}(\theta)P_{j}(\theta), \tag{3}\] where \(P_{j}(\theta)\) are the Legendre polynomials and \(\mathbf{p}^{i},\mathbf{p}^{f}\) are the centre of mass three-momenta for the initial and final states, respectively. In a multi-process analysis one can construct the matrix \((a^{j=0})_{if}\) taking into account all the initial and final states. To satisfy the unitarity condition, the \(k\)-th eigenvalue of this matrix must obey \[\left|\text{Re}\left(a_{0}^{k}\right)\right|\leq\frac{1}{2},\;\;\forall\,k. \tag{4}\] Note that the constraint above must hold in the entire phase space. To obtain an estimate of the unitarity bounds, we consider the amplitude for the process \(\tilde{t}_{1}\tilde{t}_{1}^{*}\rightarrow\tilde{t}_{1}\tilde{t}_{1}^{*}\), which includes the 4-scalar vertex as well as \(s\)- and \(t\)-channel exchange of Higgs and dark gluons. The 0-th coefficient is given by1 Footnote 1: These approximate formulae ignore terms proportional to EW parameters suppressed by factors of \(m_{Z}^{2}/m_{t}^{2}\) and \(m_{\tilde{Z}}^{2}/m_{\tilde{t}_{1}}^{2}\). In our analysis and figures no approximations have been considered. \[a_{0}\sim-\frac{1}{24\pi s_{\beta}^{2}v_{h}^{2}}\sqrt{1-\frac{4m_{\tilde{t}}^{2}}{s}}(F_{0}+F_{1}+F_{2}+F_{3}), \tag{5}\] where \[\begin{split} F_{0}=&(3m_{t}^{2}s_{2\theta}^{2}+g_{s}^{2}s_{s}^{2}c_{2\theta}v_{h}^{2})\\ F_{1}=& e^{2}s_{\beta}^{2}(9c_{\theta}^{4}+8s_{W}^{2}(2s_{\theta}^{4}-c_{\theta}^{2}))/(12c_{W}^{2}s_{W}^{2})\\ F_{2}=&\frac{6m_{t}^{2}(c_{\alpha}m_{t}+s_{\alpha}c_{\theta}(A_{t}c_{\alpha}-s_{\alpha}\mu))^{2}}{s-m_{h}^{2}}\\ F_{3}=&-\frac{s-m_{h}^{2}}{s-4m_{\tilde{t}}^{2}}F_{2}\log\left[1+\frac{s-4m_{\tilde{t}}^{2}}{m_{h}^{2}}\right].\end{split} \tag{6}\] Here, \(m_{t}\) is the mass of the top quark, \(v_{h}\) is the SM-like Higgs vev, \(\theta\) is the stop mixing angle, and \(\alpha/\beta\) are the mixing angles of the neutral CP-even/odd components of the two Higgs multiplets in the MSSM [33]. Figure 1: Maximum \(A_{t}\) allowed by perturbative unitarity as a function of the lightest stop mass. Fig. 1 shows the unitarity bounds corresponding to our four benchmarks defined in Eq. 2. Below each line the model is unitarity safe. We found that for a stop mass of 200 GeV the bound on \(A_{t}\) varies between 2.5 and 3.5 TeV, depending on the benchmark. Note that by reducing \(\mu\) and increasing \(\tan\beta\) one may extend the allowed region of parameter space. For a more refined calculation, one can construct a \(5\times 5\) scattering matrix including \(hh\), \(\tilde{t}_{1}\tilde{t}_{1}^{*}\), \(\tilde{t}_{2}\tilde{t}_{2}^{*}\), \(\tilde{b}_{1}\tilde{b}_{1}^{*}\), \(\tilde{b}_{2}\tilde{b}_{2}^{*}\) initial and final states. In [31] the authors show how including some of these processes one can extend the unitarity bound on \(A_{t}\) up to 4.4 - 5 TeV for stop masses of 100 GeV.
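A rough numerical version of this bound is easy to set up from Eqs. (5)-(6). The sketch below drops the electroweak piece \(F_{1}\) and the dark-gluon piece of \(F_{0}\) (the latter is proportional to \(c_{2\theta}\) and vanishes for maximal mixing), takes the mixing angles \(\theta\) and \(\alpha\) as user-supplied inputs, and bisects in \(A_{t}\) for the largest value satisfying \(|\mathrm{Re}\,a_{0}|\leq 1/2\) over the scanned range of \(s\). The angle choices in the example and the extra truncations are assumptions on our part, so the output should only be read as reproducing the rough TeV scale of Fig. 1, not its exact curves.

```python
import numpy as np

m_t, m_h, v_h = 173.0, 125.0, 246.0   # GeV, approximate SM inputs

def a0(s, m_stop, A_t, mu, tan_beta, theta, alpha):
    """J = 0 partial wave for stop1 stop1* -> stop1 stop1*, following the
    approximate Eqs. (5)-(6).  The electroweak piece F_1 and the dark-gluon
    piece of F_0 (proportional to cos(2 theta), hence zero for maximal
    mixing) are dropped; this is an extra truncation of ours."""
    sb = np.sin(np.arctan(tan_beta))
    s2t, ct = np.sin(2.0 * theta), np.cos(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    F0 = 3.0 * m_t**2 * s2t**2
    F2 = 6.0 * m_t**2 * (ca * m_t + sa * ct * (A_t * ca - sa * mu))**2 / (s - m_h**2)
    F3 = -(s - m_h**2) / (s - 4.0 * m_stop**2) * F2 * np.log(1.0 + (s - 4.0 * m_stop**2) / m_h**2)
    return -np.sqrt(1.0 - 4.0 * m_stop**2 / s) * (F0 + F2 + F3) / (24.0 * np.pi * sb**2 * v_h**2)

def max_At(m_stop, mu, tan_beta, theta, alpha, sqrt_s_max=2.0e4):
    """Largest A_t (GeV) with |Re a_0| <= 1/2 over the scanned range of s,
    found by bisection; all angles and parameters are user-supplied inputs."""
    s_grid = np.linspace(4.001 * m_stop**2, sqrt_s_max**2, 4000)

    def violates(A_t):
        return np.max(np.abs(a0(s_grid, m_stop, A_t, mu, tan_beta, theta, alpha))) > 0.5

    lo, hi = 0.0, 2.0e4
    if not violates(hi):
        return hi    # bound lies beyond the scanned range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if violates(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: maximal stop mixing and an alignment-limit Higgs angle,
# alpha = beta - pi/2 (illustrative assumptions), for benchmark-B1-like inputs.
tb, mu = 1.0, 200.0
alpha = np.arctan(tb) - np.pi / 2.0
for m_stop in (200.0, 400.0):
    print(f"m_stop = {m_stop:.0f} GeV : A_t bound ~ {max_At(m_stop, mu, tb, np.pi/4, alpha):.0f} GeV")
```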
We will keep our calculation as a conservative constraint keeping in mind that the full calculation could in principle open a larger region of parameter space. ## IV 4. Stoponium Production and Decay Production We now discuss the production mechanisms for our squirkonium state of interest at the LHC. In the parameter space that we focus i.e. where the trilinear term \(A_{t}\) is large, the dominant production channels of stoponium \(X_{\tilde{t}}^{0}\) are \[q\bar{q}\,\mbox{fusion}: p(q)p(\bar{q})\to\gamma/Z\to\widetilde{t}\widetilde{t}^{*}\] \[gg\,\mbox{fusion}: p(g)p(g)\to h\to\widetilde{t}\widetilde{t}^{*}.\] The first process is the usual Drell-Yan, neutral gauge boson mediated, \(q\bar{q}\)-fusion. The second process is the \(gg\)-fusion that involves a triangle top-quark loop and a Higgs in the \(s\)-channel. In the limit of large \(A_{t}\) and high center of mass energy, the partonic cross section of the \(q\bar{q}\)-fusion is given by \[\hat{\sigma}(q\bar{q}\to\widetilde{t}^{*})\approx\frac{\pi\alpha^{2}}{3\hat{s }}\left(1-\frac{4m_{\tilde{t}_{1}}^{2}}{\hat{s}}\right)^{3/2}f_{q}(\theta), \tag{7}\] where \(f_{q}(\theta)=\alpha_{0}^{q}+\alpha_{2}^{q}s_{\theta}^{s}+\alpha_{4}^{q}s_{ \theta}^{4}\). The dimensionless coefficients \(\alpha_{i}^{q}\) are given in terms of SM constants and are numerically equal to \(\alpha_{0}^{u}=20.3,\alpha_{2}^{u}=-32.8,\alpha_{4}^{u}=18.2\), and \(\alpha_{0}^{d}=17.6,\alpha_{2}^{d}=-39.3,\alpha_{4}^{d}=23.4\). In the same limit of large \(A_{t}\) and \(\hat{s}\), the partonic cross section of the \(gg\)-fusion process is given by \[\hat{\sigma}(gg\to\widetilde{t}\widetilde{t}^{*})\approx\frac{6\alpha_{s}^{ 2}y_{t}^{2}m_{t}^{4}}{64^{2}\pi^{3}\hat{s}^{2}v_{h}^{2}}\left(1-\frac{4m_{ \tilde{t}_{1}}^{2}}{\hat{s}}\right)^{1/2}g_{t}(\hat{s}), \tag{8}\] \[g_{t}=\frac{s_{2\theta}^{2}A_{t}^{2}}{4t_{\alpha}^{2}\hat{s}}\left[-4+\left(1 -\frac{4m_{t}^{2}}{\hat{s}}\right)\log^{2}\left(-\frac{m_{t}^{2}}{\hat{s}} \right)\right]^{2}. \tag{9}\] In our calculation we included the effects of \(u,d,s,c,g\) partons convoluting the cross section above with the corresponding PDFs for which we used the MSTW2008 set [34]. The cross sections resulting from these channels may be observed in Fig. 2 (left). The \(q\bar{q}\)-fusion process (solid blue) occurs through gauge interactions and it is independent of \(A_{t}\). The \(gg\)-fusion channel (dashed lines) involves a \(H\tilde{t}_{1}\tilde{t}_{1}^{*}\) vertex and it is enhanced with increasing \(A_{t}\), reason why this channel dominates for an arbitrarily high value of this parameter. Note, for example, that for a mass of \(m_{\tilde{t}_{1}}=0.4\) TeV the \(gg\)-fusion process dominates for \(A_{t}>2\) TeV. Figure 2: Left: Production cross section of stoponium at the LHC. For low \(A_{t}\) values, the dominant process is \(q\bar{q}\)-fusion, whereas \(gg\)-fusion dominates for large \(A_{t}\). Right: Branching Ratios of the lowest lying energy state of the lightest stoponium into the various decay modes as a function of \(A_{t}\). ### Decay In order to calculate the BR of the different decay modes of \(X_{t}^{0}\) we will follow the method in [5]. We calculate the cross section \(\sigma(\vec{t}\bar{t}^{*}\to xy)\) for all possible combinations of \(xy\) given the interactions of the \(X_{t}^{0}\) state: \(g^{\prime}g^{\prime},HH,H\gamma,HZ,\gamma\gamma,\gamma Z,ZZ,WW,t\bar{t}\). 
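The partonic rates in Eqs. (7)-(9) are simple enough to evaluate directly. The sketch below codes them up and compares the two channels at a fixed partonic \(\sqrt{\hat{s}}\). This is a parton-level comparison only (no PDF convolution, so it does not reproduce the hadronic crossover of Fig. 2, where the large gluon luminosity strongly favours the \(gg\) channel); in addition, we read the middle term of \(f_{q}(\theta)\) as proportional to \(\sin^{2}\theta\), evaluate the squared bracket of Eq. (9) as the modulus squared of the complex logarithm, and pick purely illustrative numerical inputs.

```python
import numpy as np

m_t, v_h = 173.0, 246.0                  # GeV
alpha_em, alpha_s = 1.0 / 128.0, 0.1     # illustrative coupling values
y_t = np.sqrt(2.0) * m_t / v_h           # top Yukawa (standard normalization assumed)

# Coefficients of f_q(theta) quoted below Eq. (7).
coeff = {'u': (20.3, -32.8, 18.2), 'd': (17.6, -39.3, 23.4)}

def sigma_qq(shat, m_stop, theta, flavor='u'):
    """Partonic q qbar -> stop1 stop1* cross section of Eq. (7), in GeV^-2."""
    a0, a2, a4 = coeff[flavor]
    st2 = np.sin(theta)**2
    f_q = a0 + a2 * st2 + a4 * st2**2
    return np.pi * alpha_em**2 / (3.0 * shat) * (1.0 - 4.0 * m_stop**2 / shat)**1.5 * f_q

def sigma_gg(shat, m_stop, theta, A_t, tan_alpha):
    """Partonic g g -> stop1 stop1* cross section of Eqs. (8)-(9), in GeV^-2.
    The squared bracket of Eq. (9) contains log^2(-m_t^2/shat); we evaluate it
    with a complex logarithm and take the modulus squared (our reading)."""
    log2 = np.log(complex(-m_t**2 / shat, 0.0))**2
    bracket = -4.0 + (1.0 - 4.0 * m_t**2 / shat) * log2
    g_t = np.sin(2.0 * theta)**2 * A_t**2 / (4.0 * tan_alpha**2 * shat) * abs(bracket)**2
    return (6.0 * alpha_s**2 * y_t**2 * m_t**4 / (64.0**2 * np.pi**3 * shat**2 * v_h**2)
            * np.sqrt(1.0 - 4.0 * m_stop**2 / shat) * g_t)

# Illustrative parton-level comparison: m_stop = 400 GeV, sqrt(shat) = 900 GeV,
# maximal mixing and tan(alpha) = 1 (all assumptions, not values from the text).
m_stop, shat = 400.0, 900.0**2
for A_t in (1000.0, 2000.0, 4000.0):
    ratio = sigma_qq(shat, m_stop, np.pi / 4) / sigma_gg(shat, m_stop, np.pi / 4, A_t, 1.0)
    print(f"A_t = {A_t:5.0f} GeV :  sigma(qq)/sigma(gg) = {ratio:7.1f}")
```

As expected from Eq. (9), the \(gg\) channel grows like \(A_{t}^{2}\) relative to the \(A_{t}\)-independent \(q\bar{q}\) channel; the hadronic crossover near \(A_{t}\sim 2\) TeV quoted in the text additionally relies on the gluon PDFs used in Fig. 2.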
We then get the annihilation rate \(\left\langle\sigma v\right\rangle\) taking the limit where the relative velocity \(v\) of the \(t\bar{t}^{*}\) system goes to zero. Finally, the BR for the \(i\)-th decay mode is simply \(\text{BR}_{i}=\left\langle\sigma v\right\rangle_{i}/\sum_{j}\left\langle \sigma v\right\rangle_{j}\). A priori, one can guess that the dominant decay mode is \(g^{\prime}g^{\prime}\) due to strong nature of the interaction. Our task is to look for a region of the parameter space where HH can dominate. In the limit \(A_{t}\gg m_{\tilde{t}}\gg m_{t},m_{h}\), the \(g^{\prime}g^{\prime}\) and HH annihilation rates are equal to \[\left\langle\sigma v\right\rangle_{g^{\prime}g^{\prime}}\approx \frac{28\pi\alpha_{s}^{2}}{3m_{\tilde{t}}^{2}}, \tag{10}\] \[\left\langle\sigma v\right\rangle_{HH}\approx \frac{3y_{t}^{4}c_{s}^{4}s_{2d}^{4}A_{t}^{4}}{128\pi s_{\beta}^{ \prime}m_{\tilde{t}}^{6}}.\] Here we can observe that for large enough values of \(A_{t}\), the HH mode is expected to dominate. In agreement with this intuition we can see in Fig. 2 (right) how for large \(A_{t}\), the \(g^{\prime}g^{\prime}\) mode (solid orange) is highly suppressed whereas the HH mode (dot-dashed green) BR approaches one. The effect of increasing the stop mass \(m_{\tilde{t}}\) (not shown in the figure) is that all curves in the figure move to the right, meaning that the HH mode starts dominating at higher values of \(A_{t}\) than those shown in the figure. In the relevant parameter space, we found that the modes \(H\gamma,HZ,\gamma\gamma,\gamma Z\) were highly suppressed compared to those shown in Fig. 2. ### Di-Higgs Signals at the LHC The LHC performs both resonant and non-resonant searches for a pair of Higgs bosons in a variety of final states [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. One of the main motivations of HH searches is to accurately measure the self coupling of the Higgs. The SM has an unfortunate accidental cancellation between the two main diagrams that contribute to HH production, namely, the gluon fusion s-channel Higgs exchange that then splits into two Higgses via self coupling, and the gluon fusion to HH via a top quark box diagram. The total cross section for this process in the SM is about 32.7 fb [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58]. The main effect of the self coupling is more significant at lower HH invariant masses. Current bounds from non-resonant HH searches at the LHC constrain the trilinear coupling to be within 40% of the SM prediction [69; 70; 71; 72; 73; 74; 75; 76]. Now, the fact that HH has a small cross section in the SM opens an opportunity for new physics. In the large invariant mass regime one expects very little irreducible background events. Searches for HH resonances performed in the \(bbbb\) final states place bounds [43] on masses between 250 GeV and 5 TeV for spin 0 [77] and spin 2 [78] resonances. The bounds on the cross section times HH branching ratio range between a few pb for the lowest masses down to 1 fb for the heaviest mass. 2 Footnote 2: These bounds imply different lower bounds in the HH resonance mass in the context of different models [79; 80; 81]. In order to find the reach of the LHC on the parameter space of our model, we calculated the cross section for stoponium production and multiplied by the corresponding BR to HH in the plane \((m_{\tilde{t}},A_{t})\). Our results Figure 3: Exclusion contours on the \((m_{\tilde{t}},A_{t})\) plane for the two _natural_ benchmarks. 
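Returning to the limiting annihilation rates in Eq. (10), the statement that the HH mode eventually overwhelms dark-glueball production can be made quantitative by equating the two rates and solving for \(A_{t}\). In the sketch below the dark coupling \(\alpha_{s}^{\prime}\), the mixing angles, and the power of \(s_{\beta}\), which we take to be \(s_{\beta}^{4}\), are all assumptions made only for illustration, and the asymptotic formulas are used outside their strict regime of validity; the numbers should be taken as order-of-magnitude estimates of where the crossover in Fig. 2 occurs.

```python
import numpy as np

m_t, v_h = 173.0, 246.0
y_t = np.sqrt(2.0) * m_t / v_h   # top Yukawa (standard normalization assumed)

def At_crossover(m_stop, alpha_s_dark, tan_beta, theta, alpha):
    """A_t at which the asymptotic rates of Eq. (10) obey
    <sigma v>_HH = <sigma v>_g'g'.  The power of sin(beta) in the HH rate is
    taken to be 4; all inputs are illustrative assumptions."""
    sb = np.sin(np.arctan(tan_beta))
    ca, s2t = np.cos(alpha), np.sin(2.0 * theta)
    ratio = 28.0 * 128.0 * np.pi**2 * alpha_s_dark**2 * sb**4 / (9.0 * y_t**4 * ca**4 * s2t**4)
    return m_stop * ratio**0.25

# Maximal stop mixing, an alignment-limit Higgs angle alpha = beta - pi/2 and a
# representative dark coupling alpha_s' = 0.1 (all assumptions).
tb = 1.0
alpha = np.arctan(tb) - np.pi / 2.0
for m_stop in (200.0, 400.0, 800.0):
    At = At_crossover(m_stop, 0.1, tb, np.pi / 4.0, alpha)
    print(f"m_stop = {m_stop:4.0f} GeV  ->  HH overtakes g'g' near A_t ~ {At:5.0f} GeV")
```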
Our results are presented in Figs. 3 and 4, where we show the exclusion and projections for the different benchmarks defined in Eq. 2. For low \(t_{\beta}\) the LHC is expected to find low mass resonances in the range of (400, 800) GeV, corresponding to \(m_{\tilde{t}}\) in the range (200, 400) GeV; as \(t_{\beta}\) increases, heavier resonances are expected, so that for \(t_{\beta}=10\) a 1.7 TeV resonance is possible. We found that for the _natural_ benchmark B1, where \(\mu=200\) and \(t_{\beta}=1\) (Fig. 3 - left), current LHC data only cover a region of the parameter space that is disfavored by Unitarity. This benchmark predicts that HL-LHC will discover di-Higgs resonances in the range of 400-800 GeV corresponding to stop masses of 200-400 GeV. For the second _natural_ benchmark B2, where \(\mu=200\) and \(t_{\beta}=10\) (Fig. 3 - right), current data exclude resonances up to 1.4 TeV, corresponding to stop masses of 700 GeV. According to this benchmark, HL-LHC will discover HH resonances up to 1.7 TeV corresponding to stop masses of 850 GeV. The bottom line is that the LHC could discover di-Higgs resonances in the range of 400 - 1700 GeV in subsequent runs, in a reasonable _natural_ region of the parameter space. Furthermore, if the LHC finds a HH resonance in this range, according to our analysis, we would be able to infer the value of \(t_{\beta}\) and \(A_{t}\) within a small window. This in turn will allow us to infer the subsequent decay modes of the resonance according to the right panel of Fig. 2. As we can see in said figure, our resonance will have a significant BR to massive gauge bosons, and if this resonance were to be related to naturalness and the stops, it will also have a significant BR to a pair of top quarks. Finding the same resonance in any of these channels would amount to strong evidence in favour of the model. The situation for the more _tuned_ benchmarks B3 and B4, for which \(\mu=1\) TeV (Fig. 4), is quite similar to what happened for the _natural_ benchmarks; future runs of the LHC could discover HH resonances in the range of 400 - 1700 GeV as a function of \(t_{\beta}\). As discussed in previous sections, our calculations assume stoponium production, fast relaxation, prompt decay, and a narrow width so that our signal efficiency is comparable to that of the LHC searches. Except for the last, all these assumptions were proved to be valid for stoponium in Folded SUSY [5]. If the last assumption were not true and the resonance is broad, our results would not apply, and this represents an opportunity for future work. ### Final Remarks We showed how di-Higgs resonances are predicted in Folded SUSY in the limit of large \(A_{t}\), the parameter of the trilinear soft SUSY breaking term, in the stop sector. Our results are relevant for subsequent runs at the LHC, where these resonances could be discovered in the range of 400 - 1700 GeV under reasonable assumptions. These values correspond to stop masses between 200 and 850 GeV. The observation that stoponium bound states preferentially decay to HH has been made in the past in the context of the MSSM. However, these bound states can only be conceived in the MSSM for light stops, in a range of masses excluded by LHC searches. Our analysis brings back the possibility that stoponium bound states will produce HH resonances that the LHC will soon discover, but this time in the context of F-SUSY. This makes a direct connection between HH resonances, the third generation of (s)quarks, and Naturalness.
Although our analysis focuses on F-SUSY, we argue that the main ingredients of the model that led us to the main results are also present in other models of NN. In general, in NN models the Higgs is the portal between the SM and the _dark_ (or _mirror_) sectors. What we showed in this paper is that enhancing the parameter that connects the Higgs with the third generation quirks in the dark sector has two effects: it enhances the production of the corresponding squirkonium state, and it enhances its BR to HH. Figure 4: Similar to Fig. 3 but for the _tuned_ benchmarks. The results are similar to those of the _natural_ benchmarks because both production and decay of stoponium have a small dependence on \(\mu\) in the parameter space of interest. Once the LHC discovers an HH resonance, a thorough study of its decay modes will serve to unveil the underlying theory responsible for said resonance. A pattern like the one in the right panel of Fig. 2 will be a smoking gun pointing at F-SUSY, and it will help us determine some of the model parameters. In a different model of NN the squirkonium bound state will have a different pattern of decays that deserves detailed study in future work. ## Acknowledgment We thank Zackaria Chacko for valuable discussions at the early stage of this work. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of High Energy Physics.
2303.15753
Synchronization of spin-driven limit cycle oscillators optically levitated in vacuum
We explore, experimentally and theoretically, the emergence of coherent coupled oscillations and synchronization between a pair of non-Hermitian, stochastic, opto-mechanical oscillators, levitated in vacuum. Each oscillator consists of a polystyrene microsphere trapped in a circularly polarized, counter-propagating Gaussian laser beam. Non-conservative, azimuthal forces, deriving from inhomogeneous optical spin, push the micro-particles out of thermodynamic equilibrium. For modest optical powers each particle shows a tendency towards orbital circulation. Initially, their stochastic motion is weakly correlated. As the power is increased, the tendency towards orbital circulation strengthens and the motion of the particles becomes highly correlated. Eventually, centripetal forces overcome optical gradient forces and the oscillators undergo a collective Hopf bifurcation. For laser powers exceeding this threshold, a pair of limit cycles appear, which synchronize due to weak optical and hydrodynamic interactions. In principle, arrays of such Non-Hermitian elements can be arranged, paving the way for opto-mechanical topological materials or, possibly, classical time crystals. In addition, the preparation of synchronized states in levitated optomechanics could lead to new and robust sensors or alternative routes to the entanglement of macroscopic objects.
Oto Brzobohaty, Martin Duchan, Petr Jakl, Jan Jezek, Pavel Zemanek, Stephen H. Simpson
2023-03-28T06:22:08Z
http://arxiv.org/abs/2303.15753v1
# Synchronization of spin-driven limit cycle oscillators optically levitated in vacuum ###### Abstract We explore, experimentally and theoretically, the emergence of coherent coupled oscillations and synchronization between a pair of non-Hermitian, stochastic, opto-mechanical oscillators, levitated in vacuum. Each oscillator consists of a polystyrene microsphere trapped in a circularly polarized, counter-propagating Gaussian laser beam. Non-conservative, azimuthal forces, deriving from inhomogeneous optical spin, push the micro-particles out of thermodynamic equilibrium. For modest optical powers each particle shows a tendency towards orbital circulation. Initially, their stochastic motion is weakly correlated. As the power is increased, the tendency towards orbital circulation strengthens and the motion of the particles becomes highly correlated. Eventually, centripetal forces overcome optical gradient forces and the oscillators undergo a collective Hopf bifurcation. For laser powers exceeding this threshold, a pair of limit cycles appear, which synchronize due to weak optical and hydrodynamic interactions. In principle, arrays of such Non-Hermitian elements can be arranged, paving the way for opto-mechanical topological materials or, possibly, classical time crystals. In addition, the preparation of synchronized states in levitated optomechanics could lead to new and robust sensors or alternative routes to the entanglement of macroscopic objects. ## I Introduction Over the preceding decade, levitational optomechanics has emerged as a versatile platform for addressing crucial questions in the physical sciences, ranging from the macroscopic limits of quantum mechanics [1] to the thermodynamic limits of computation [2]. It makes use of optical forces, which are generated when light scatters from small particles. These forces can confine or suspend isolated particles in vacuum, or induce structured interactions, known as _optical binding forces_, amongst collections of them [3]. Supplementing optical with electrostatic forces [4], and combining with an optical cavity [5] results in a reconfigurable experimental system with widely tunable reactive and dissipative forces, capable of supporting dynamical effects across multiple physical regimes [6]. Most significantly, these techniques have recently enabled motional cooling of nanoparticles towards and into their quantum mechanical ground state, in single and multiple degrees of freedom [7; 8; 9; 10] with the future promise of macroscopic entanglement [11; 12; 13]. In the classical domain, the harmonic potentials associated with optical tweezers can confine cooled particles forming high Q oscillators with exquisite force sensitivity [14; 15]. However, optomechanical systems can exhibit far richer behaviour. Optical forces, including interaction forces, are, in general, non-conservative [16; 17; 18; 19], and can be holographically sculpted [20; 21]. In combination, arbitrarily structured non-conservative and non-linear forces, thermal fluctuations and dissipation provide the necessary ingredients for numerous stochastic and dynamic phenomena which are not only of intrinsic interest, but could also have novel applications in sensing and metrology. Examples include autonomous stochastic resonance [22], coherence resonance [23], stochastic bifurcations [24] and stochastic synchronization, all of which are exploited by nature in the sense apparatus of animals [25; 26]. 
In this article we demonstrate an archetypal non-equilibrium effect: synchronization of a pair of noisy limit cycle oscillators [27; 28]. The oscillators comprise polystyrene microspheres trapped in circularly polarized, counter-propagating optical beams, in vacuum. Each oscillator is linearly non-conservative, due to azimuthal components of momentum associated with inhomogeneous optical spin [29; 30; 31], and coupled through weak hydrodynamic and optical interactions. Analogous effects can be induced via birefringence [32], or phase difference [33; 34]. Below a critical, threshold power, we observe biased Brownian motion, featuring correlations which strengthen with increasing optical power. At threshold, a bifurcation occurs [35] and the stable trapping points are replaced with noisy limit cycles that form robust synchronized states with characteristic detuning behaviour [23]. The ability to form coherent, coordinated non-equilibrium states, such as the synchronized states described here, could have applications in sensor arrays [37], suppressing phase noise and natural variations in fundamental frequencies [38; 39]. These linearly non-conservative oscillators are particular examples of a broader class of non-Hermitian oscillator [40; 41], characterised by broken time reversal symmetry and a capacity to exchange energy with the environment. Arrays of such non-Hermitian units can form topological phases and exhibit exponential sensitivity through the well know skin effect [37; 42; 43; 44]. These phenomena have been successfully realized in the classical domain with the use of micro-robotics [45; 46]. Optical forces, such as those considered here, offer a route to realize similar effects, spontaneously, in the mesoscopic regime. Moreover, under appropriate conditions, such systems may also act like classical time crystals [47; 48]. Developing appropriate cooling techniques could take these effects towards the quantum regime and provide experimental access to mesoscopic quantum dynamic phenomena such as quantum synchronization or entanglement [13; 49; 50]. Figure 1: Overview of the experiment. (a) Basic geometry and coordinate system. Two pairs of counter-propagating circularly polarized laser beams (red arrows) (wavelength \(\lambda=1064\,\)nm and beam waist \(900\,\)nm) form two Gaussian standing waves. Two polystyrene particles (nominal radius \(a=425\) nm) are localized in the standing waves in the axial direction (\(z\)-axis), while in the lateral direction, they tend to orbit due to the azimuthal spin force at the ambient pressure \(17\,\)mbar (equivalent to an effective viscosity \(\mu\approx 1.15\,\mu\)Pa s [36]). Their synchronized motion emerges fromve the breathing (QM\({}_{1}\), QM\({}_{2}\)) and center of mass (QM\({}_{3}\), QM\({}_{4}\)) quasi-modes. Cumulative azimuthal angles \(\phi_{1}\) and \(\phi_{2}\) are used to quantify the level of their synchronization. (b) Schematic describing the stochastic quasi-modes in the linear (sub-threshold) regime. Increasing the laser power leads to slightly larger radii of the lateral particles’ trajectories and suppression of two quasi-modes QM\({}_{1}\), QM\({}_{3}\). (c) Experimentally observed biased stochastic motion, showing a tendency towards orbital rotation illustrated using spatial probability densities (PDFs) in the particle displacement basis and in the quasi-mode basis. 
The picture depicts excitation of the breathing mode QM\({}_{2}\) for beam separation \(d=8.6\,\mu\)m, corresponding to a condition where the threshold power \(P_{\rm c}\) for the center of mass mode QM\({}_{4}\) is larger than \(P_{\rm b}\) for the breathing mode. (d) At larger powers approaching the threshold, fluctuating limit cycles occasionally begin to be formed. At \(d=8.6\,\mu\)m the breathing quasi-mode QM\({}_{2}\) fulfilling \(P_{\rm c}-P_{\rm b}>0\) (red) is further developed stable while for \(d=8.9\,\mu\)m the center of mass quasi-mode QM\({}_{4}\) with \(P_{\rm c}-P_{\rm b}<0\) (green) further grows. (e) Short time trajectories of particles for both cases \(P_{\rm c}-P_{\rm b}\lessgtr 0\) from (d) demonstrating the rise of anti-phase (left) and in-phase synchronized motions (right). Results ### Qualitative Experimental Observations The geometry of the experiment is graphically represented in Fig. (1)a. The optical field consists of a parallel pair of counter-propagating Gaussian beams each of which has the same power, and forms a standing wave normal to its axis. Circular polarization gives rise to azimuthal components of optical spin momentum which swirl around the axis of each beam. One particle is confined in each counter-propagating beam, where it is subject to non-conservative, azimuthal spin-forces which drive its motion out of thermodynamic equilibrium. In addition, light scattering between the particles induces optical binding forces which, in addition to dissipative hydrodynamic interactions, couple their stochastic motion. The relative strength of these coupling interactions varies with the separation, \(d\) between the beams, allowing us to tune the form of behaviour manifested in the experiment. We observe a range of quintessentially non-equilibrium effects ranging from biased stochastic motion to the formation of synchronized limit cycle oscillations, Fig 1(b-f). The experimentally observed stochastic motion can be described in terms of two interconnected pairs of quasi-modes (QMs), whose properties are described in detail below and in the Supplementary Information. These pairs of QMs are referred to as the _Centre of Mass_ (CoM) and _breathing_ (BR) QMs. In combination, they describe in-phase (CoM) and anti-phase (BR) stochastic orbital rotation, in clockwise and counter-clockwise directions, Fig. 1b. Each pair of QMs has a threshold optical power, \(P_{\mathrm{c}}\) and \(P_{\mathrm{b}}\) for CoM and BR, respectively. Thermal fluctuations combine with non-conservative forces and excite the QMs to different degrees: the closer the optical power is to the threshold power of a QM, the more strongly it is excited and the greater is its mean squared amplitude. In our experiments, we continuously increase the optical power and make observations of the stochastic motion produced. The observed behaviour depends, therefore, on the relative magnitudes of the threshold powers, \(P_{\mathrm{c}}\) and \(P_{\mathrm{b}}\), see Fig. 1(b,c). For example, when \(P_{\mathrm{c}}<P_{\mathrm{b}}\), the threshold power of the CoM mode is approached first as the power increases. This causes the CoM mode to grow most rapidly until it dominates the observed motion. When \(P_{\mathrm{b}}<P_{\mathrm{c}}\), it is BR that becomes dominant. 
As described further below, the difference between the threshold powers, \(P_{\mathrm{c}}-P_{\mathrm{b}}\), oscillates about zero as the beam separation, \(d\), is increased so that \(P_{\mathrm{c}}<P_{\mathrm{b}}\) for some separations and \(P_{\mathrm{b}}<P_{\mathrm{c}}\) for others, Fig. 1(d). We are therefore able to tune the observed behaviour by adjusting \(d\). Increasing the power above one of the threshold powers results in a sudden bifurcation. Subsequently, each particle executes a limit cycle oscillation. Weak interaction forces cause these self-sustained oscillations to synchronize, Fig. 1(f). In the following sections we first outline some theoretical principles before applying them to experimental investigations of the sub-threshold (Fig. 1(c,d)) and above threshold regimes (Fig. 1e). Theoretical Considerations: Generalized Hooke's Law, Linear Stability and Limit Cycle Formation in Stochastic Optomechanics In Supplementary Note we provide a detailed analysis of the general stability properties and stochastic motion of multi-particle, levitated optomechanical systems in the linear regime. Below we summarize the results used throughout the rest of this article. For many optomechanical systems, including the one studied here, it is possible to identify a configuration in which the system is at mechanical equilibrium i.e. a configuration in which the external optical forces vanish and are locally restoring. For small displacements, the optical force can be linearly approximated by a generalized Hooke's law i.e. \[\mathbf{f}\approx-\mathbf{K}\cdot\mathbf{q}\equiv-P\mathbf{k} \cdot\mathbf{q}, \tag{1a}\] \[K_{ij}=-\frac{\partial f_{i}}{\partial x_{j}}\bigg{|}_{\mathbf{q }=0}, \tag{1b}\] where \(\mathbf{q}=\mathbf{r}-\mathbf{r}_{0}\) are small displacements with respect to the coordinates of the mechanical equilibrium, \(\mathbf{r}_{0}\). The _stiffness matrix_, \(\mathbf{K}\), is proportional to the optical power such that \(\mathbf{K}=P\mathbf{k}\), with \(\mathbf{k}\) the power normalized stiffness. Two qualitatively distinct cases emerge: Linearly conservative forces:In this case \(\mathbf{K}\) is symmetric with real eigenvalues. The motion of the system can be described in terms of a discrete, orthogonal set of _normal modes_, each satisfying the equipartition theorem, having energy \(k_{\mathrm{b}}T/2\) for any value of the optical power, \(P\). Linearly non-conservative forces:In this case, \(\mathbf{K}\) is non-symmetric and its eigenvalues can occur in complex conjugate pairs [51]. Pairs of such eigenvalues are associated with _quasi-modes_ (QMs) which are not orthogonal, and do not satisfy equipartition. Each QM has characteristic frequencies that can be approximated as, \[\omega_{i\pm}\approx\pm\sqrt{\frac{P}{m}}\Re(\lambda_{\mathrm{i}}^{1/2})+i \Big{(}\pm\sqrt{\frac{P}{m}}\Im(\lambda_{\mathrm{i}}^{1/2})+\frac{\xi_{i}}{2m }\Big{)}, \tag{2}\] where \(i\) indexes the QM, \(\lambda_{\mathrm{i}}\) is the associated complex eigenvalue of \(\mathbf{k}\) and \(\xi_{i}\) is the effective drag, directly proportional to the effective viscosity, \(\mu\). In the absence of thermal fluctuations, these frequencies, \(\omega_{i\pm}\), relate to damped oscillations in which the coupled coordinates spiral into the fixed point (Supplementary Note). The rate at which they do so, depends on the imaginary part of \(\omega_{i\pm}\), which describes motional damping. By increasing the power, \(P\), \(\Im(\omega_{i-})\) can be decreased towards zero before the changing sign. 
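To make Eq. (2) concrete, the following minimal sketch (with placeholder values for \(\lambda_{\mathrm{i}}\), \(m\) and \(\xi_{i}\), not the experimental parameters) evaluates \(\omega_{i-}\) for increasing optical power and shows \(\Im(\omega_{i-})\) being driven through zero, which is the threshold condition written out just below as Eq. (3).

```python
import numpy as np

# Minimal sketch of Eq. (2): the complex quasi-mode frequency omega_{i,-} of a
# single non-conservative oscillator as the optical power P is raised.  The
# eigenvalue lam, mass m and drag xi below are placeholder values (assumptions)
# chosen only to show Im(omega_-) crossing zero.

m, xi = 1.0e-14, 1.0e-10      # effective mass and drag, illustrative numbers
lam = 1.0 + 0.05j             # complex eigenvalue of the normalized stiffness k

def omega_minus(P):
    """The '-' branch of Eq. (2)."""
    root = np.sqrt(P / m) * np.sqrt(lam)
    return -root.real + 1j * (-root.imag + xi / (2.0 * m))

for P in (1e-6, 1e-5, 1e-4, 1e-3):    # optical power, arbitrary units
    w = omega_minus(P)
    print(f"P = {P:.0e}:  Re = {w.real:+.3e},  Im = {w.imag:+.3e}")
# Im(omega_-) > 0 corresponds to damped spiralling into the trap; once it
# changes sign the fixed point destabilizes and a limit cycle can form.
```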
As it does so, motional damping turns into exponential growth transforming the inward spiral to an outward spiral, destabilizing the trap. The condition for \(\Im(\omega_{i-})=0\) is, \[\frac{P}{\xi_{i}^{2}}=\frac{\Re(\lambda_{\mathrm{i}})}{m\Im(\lambda_{\mathrm{i}}) ^{2}}. \tag{3}\] As the power is increased towards this threshold, the interaction between thermal fluctuations and the non-conservative force causes the instantaneous variance, \(\langle a_{i}^{2}\rangle\) and decay time of the autocorrelation of the QM to increase, \[\langle a_{i}(t+\tau)a_{i}(\tau)\rangle\propto\frac{k_{\mathrm{b}}T}{\Im( \omega_{i-})}e^{i\Re(\omega_{i-})\tau}e^{-\Im(\omega_{i-})\tau}, \tag{4}\] where \(a_{i}\) is the amplitude of the QM, and \(\Im(\omega_{i-})\to 0\) (Supplementary Note ). Eventually, the amplitude of the dominant motion exceeds the range over which the forces are approximately linear. Given suitable curvature in the force field, stable limit cycles (i.e. isolated, closed paths in phase space describing self sustained oscillations), or orbits, can form [35] and, ultimately, the fixed point (i.e. the mechanical equilibrium) of the system is destabilized. Experimental observations of such a transition are shown graphically in Fig. 1(c,d). We next apply these principles to our pair of spin-driven oscillators, Fig. 1a. ### Sub-threshold behaviour, optical binding between non-conservative oscillators First we consider the sub-threshold behaviour, for which the motion remains within the linear range of the force field, where the generalized Hooke's law, Eq. (1) applies, see Fig. 1(c). A detailed account is provided in Supplementary Note The main theoretical results are summarized below and compared with experimental demonstrations. We confine attention to the \(xy-\)plane, the \(z\) motion corresponding to an uncoupled normal mode. In this plane the displacement coordinates are \(\mathbf{q}=(\mathbf{q}_{1},\mathbf{q}_{2})=(x_{1},y_{1},x_{2},y_{2})\) and the stiffness matrix for this system have the form, \[\mathbf{K}=\begin{bmatrix}\mathbf{K}^{\prime(1)}&\mathbf{A}\\ \mathbf{A}&\mathbf{K}^{\prime(1)}\end{bmatrix},\ \ \ \mathbf{K}^{(1)}=\begin{bmatrix}K_{r}&K_{\phi}\\ -K_{\phi}&K_{r}\end{bmatrix}. \tag{5}\] Here, \(\mathbf{K}^{(1)}\) is the stiffness of a single oscillator, comprising a single sphere in a counter-propagating, circularly polarized trap. The diagonal elements, \(K_{r}\), quantify the stiffness of the purely attractive gradient forces and the off-diagonal terms, \(K_{\phi}\), are connected with non-conservative, azimuthal forces deriving from inhomogeneous optical spin [31]. \(\mathbf{K}\) is the stiffness for the pair of Figure 2: Quantitative analyses of the experimental results below threshold. (a) The growth in the auto-correlation of the \(x_{\mathrm{b}}\) component of the excited breathing mode \(\langle x_{\mathrm{b}}(t+\tau)x_{\mathrm{b}}(\tau)\rangle\) with increasing optical power for the beam distance \(d=8.6\,\mu\)m giving \(P_{\mathrm{c}}-P_{\mathrm{b}}>0\). Increasing the power increases both the mean frequency of the oscillation and the time constant governing the loss of coherence of the oscillation. (b) Due to minor imperfections, the \(x_{\mathrm{c}}\) component CoM auto-correlation, \(\langle x_{\mathrm{c}}(t+\tau)x_{\mathrm{c}}(\tau)\rangle\), also grows slightly, but the effect is far weaker. 
(c) The cross-correlation of the \(x_{\mathrm{c,b}}\) components of CoM and BR, \(\langle x_{\mathrm{c}}(t+\tau)x_{\mathrm{b}}(\tau)\rangle\) shows weak coupling caused by minor imperfections in the system. (d) Cross-correlations of the BR coordinates are \(\pi/2\) phase shifted, indicating a tendency toward circular motion. (e,f) PDFs of the difference between the azimuthal coordinates of both particles, see Fig. 1(a), \(\Delta\phi=\phi_{1}-\phi_{2}\) for beam separations of \(d=8.6\,\mu\)m (e) and \(d=8.9\,\mu\)m (f) showing the emergence of phase locking approximating to BR and CoM modes, respectively, shown in Fig. 1(e). (g) Relative Shannon entropy \(S_{r}\) as a function of power for the PDFs (e) and (f). oscillators: the stiffness of each constituent oscillator is slightly modified by the proximity of its neighbour i.e. \(\mathbf{K}^{\prime}(1)\approx\mathbf{K}^{(1)}\), while \(\mathbf{A}\) describes the relatively weak coupling between the two particles. Note that \(\mathbf{A}\) is, itself, non-symmetric indicating that the interaction is intrinsically non-conservative. Its elements oscillate with the separation between the beams, as is common with conventional binding interactions. A parametric study of the elements of \(\mathbf{K}\), and their dependence on beam separation, \(d\), and particle radius, \(a\), is provided in Supplementary Note. The overall form of \(\mathbf{K}\) derives from the inversion symmetry of the system, and allows separation into two independent oscillators by transforming to the centre of mass (CoM) and breathing (BR) coordinates (\(\mathbf{q}_{\mathrm{c}}\) and \(\mathbf{q}_{\mathrm{b}}\) respectively) with, \[\mathbf{q}_{\mathrm{c}} =(\mathbf{q}_{1}+\mathbf{q}_{2})/\sqrt{2}, \tag{6a}\] \[\mathbf{q}_{\mathrm{b}} =(\mathbf{q}_{1}-\mathbf{q}_{2})/\sqrt{2}, \tag{6b}\] where \(\mathbf{q}_{\mathrm{c/b}}=(x_{\mathrm{c/b}},y_{\mathrm{c/b}})\). This transformation decouples the system stiffness, \(\mathbf{K}\), according to, \[\mathbf{K}=\begin{bmatrix}\mathbf{K}^{\prime(1)}&\mathbf{A}\\ \mathbf{A}&\mathbf{K}^{\prime(1)}\end{bmatrix}\rightarrow\begin{bmatrix}( \mathbf{K}^{\prime(1)}+\mathbf{A})&0\\ 0&(\mathbf{K}^{\prime(1)}-\mathbf{A})\end{bmatrix}. \tag{7}\] We refer to these two separate oscillators as CoM and BR oscillators. Each has two quasi-modes (QMs), with complex conjugate eigenvalues, which, together, describe stochastic orbital rotation of the coordinates, \(\mathbf{q}_{\mathrm{c}}\) or \(\mathbf{q}_{\mathrm{b}}\), about the origin. These stochastic motions correspond, respectively, to in-phase (CoM) and anti-phase (BR) circulation of the individual particles, in clockwise or counter-clockwise directions, see Fig. 1b. Treating the optical and the hydrodynamic interactions as perturbations, Eq. (3) gives the difference between the threshold powers for these oscillators as, \[P_{\mathrm{c}}-P_{\mathrm{b}}\approx-\frac{\xi_{0}^{2}}{m}\Big{(}\frac{4K_{r} \Im(\delta)}{K_{\phi}^{3}}+\frac{9}{2}\frac{a}{d}\frac{K_{r}}{K_{\phi}^{2}} \Big{)}, \tag{8}\] where \(\xi_{0}=6\pi\mu a\) is the Stokes drag on a single particle and \(\delta\) is a complex scalar quantity derived from elements of \(\mathbf{A}\), that describes the optical interaction. The first term in Eq. (8) is due to optical coupling and the second is caused by differences in the effective drag for the CoM and BR QMs (see Supplementary Note). 
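A compact numerical sketch of the structure in Eqs. (5)-(8) is given below: it assembles the \(4\times 4\) stiffness from placeholder values of \(K_{r}\), \(K_{\phi}\) and a weak non-symmetric coupling block, block-diagonalizes it with the CoM/BR transformation of Eq. (6), and estimates each oscillator's threshold power from Eq. (3). All numerical inputs are illustrative assumptions, not fitted experimental parameters.

```python
import numpy as np

# Numerical sketch of Eqs. (5)-(7) and the threshold condition Eq. (3): build
# the 4x4 (power-normalized) stiffness of the coupled pair, decouple it into
# centre-of-mass (CoM) and breathing (BR) blocks, and estimate each block's
# threshold power.  k_r, k_phi, the coupling block a, the mass m and the drag
# xi are placeholder values (assumptions), not the experimental parameters.

k_r, k_phi = 1.0, 0.05
k1 = np.array([[k_r,   k_phi],
               [-k_phi, k_r]])                 # single-oscillator stiffness K^(1), Eq. (5)
a = 1e-3 * np.array([[1.0,  0.5],
                     [-0.3, 1.0]])             # weak, non-symmetric coupling block A
K = np.block([[k1, a],
              [a, k1]])                        # two-particle stiffness, Eq. (5)

# CoM / BR coordinates of Eq. (6): q_c = (q1 + q2)/sqrt(2), q_b = (q1 - q2)/sqrt(2)
I2 = np.eye(2)
T = np.block([[I2, I2], [I2, -I2]]) / np.sqrt(2)
K_dec = T @ K @ T.T                            # block diagonal up to rounding, cf. Eq. (7)
K_com, K_br = K_dec[:2, :2], K_dec[2:, 2:]     # K^(1) + A  and  K^(1) - A

m, xi = 1.0, 0.1                               # effective mass and drag (placeholders)

def threshold_power(block):
    """Power at which Im(omega_-) = 0 for this block's quasi-mode, Eq. (3)."""
    evals = np.linalg.eigvals(block)
    lam = evals[np.argmax(np.abs(evals.imag))]  # one of the complex-conjugate pair
    return xi**2 * lam.real / (m * lam.imag**2)

print("P_c (CoM) ~", threshold_power(K_com))
print("P_b (BR)  ~", threshold_power(K_br))
```

Whichever block returns the smaller threshold is the quasi-mode expected to grow and dominate as the power approaches it, which is the \(P_{\mathrm{c}}-P_{\mathrm{b}}\) competition described above.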
The optical coupling parameter, \(\delta\), oscillates with beam separation, \(d\), while the hydrodynamic interaction decays monotonically with \(d\), serving to systematically reduce the threshold power of CoM relative to BR. As the beam separation, \(d\), is increased the CoM and BR oscillators alternately have the lowest threshold power, satisfying \(P_{\mathrm{c}}-P_{\mathrm{b}}>0\) or \(P_{\mathrm{c}}-P_{\mathrm{b}}<0\) (Supplementary Note). At low power all four of these QMs have approximately equal energy and the preference in the sense of stochastic orbital rotation is negligible. As the power is increased, the stochastic motion is increasingly biased towards circulation in the sense dictated by the azimuthal spin forces, although the energy in the CoM and BR QMs remains comparable. For further increases in power, the energy in the QMs with the lowest threshold power begins to grow until it becomes dominant. At this point, the observed motion consists of stochastic rotations of the microspheres around their respective beam axes which are either in phase (CoM dominant), or anti-phase (BR dominant), but always in the direction dictated by the azimuthal spin force. We investigate these phenomena experimentally in Fig. 1c. Fig. 1c shows two dimensional spatial probability distribution functions (PDFs) for the particles. On the top two rows, the PDFs are given in displacement coordinates [i.e. \((x_{1/2},y_{1/2})\)] and, on the lower rows, in the QM coordinates [\((x_{\mathrm{c/b}},y_{\mathrm{c/b}})\), Eq. (6)]. These results correspond to a separation of \(d=8.6\,\mu\)m, for which \(P_{\mathrm{b}}<P_{\mathrm{c}}\), so that BR grows to dominate, as is clear from the PDFs in the QM basis. Figure 2 describes the stochastic motion in more detail. Figs. 2a-c show time dependent correlation functions of \(x_{\mathrm{b}}\) and \(x_{\mathrm{c}}\), for increasing optical power. Written in terms of the components of \(\mathbf{q}_{\mathrm{c/b}}\) the auto-correlation of these QMs, Eq.4, is, \[\langle x_{\mathrm{c/b}}(t+\tau)x_{\mathrm{c/b}}(\tau)\rangle \propto\frac{k_{\mathrm{b}}T}{\Im(\omega_{i-})}\cos(\Re(\omega _{i-})\tau)e^{-\Im(\omega_{i-}\tau)} \tag{9a}\] \[\langle x_{\mathrm{c/b}}(t+\tau)y_{\mathrm{c/b}}(\tau)\rangle \propto\frac{k_{\mathrm{b}}T}{\Im(\omega_{i-})}\sin(\Re(\omega _{i-})\tau)e^{-\Im(\omega_{i-}\tau)}. \tag{9b}\] Equation (9a) describes the increased amplitude and coherence of the stochastically driven oscillations of \(x_{\mathrm{c}}\) and \(x_{\mathrm{b}}\), while the cross correlation, Eq. 9b, describes the growing tendency of \(\mathbf{q}_{\mathrm{c}}\) or \(\mathbf{q}_{\mathrm{b}}\) to circulate about the origin [52]. In Fig. 2a the increase in amplitude and coherence of the BR QM is shown and, in Fig. 2b, the relative stagnation of CoM, see Eq. (9a). The coupling between the CoM and BR oscillators is relatively weak, Fig. 2c, but indicates a slight departure from the ideal symmetry, assumed in Eq. (7), for which CoM and BR motions would be completely independent. The time dependent autocorrelation of \(x_{\mathrm{b}}\), \(\langle x_{\mathrm{b}}(t+\tau)x_{\mathrm{b}}(\tau)\rangle\), and the cross correlation of \(x_{\mathrm{b}}\) with \(y_{\mathrm{b}}\), \(\langle x_{\mathrm{b}}(t+\tau)y_{\mathrm{b}}(\tau)\rangle\) are shown in Fig. 2d, demonstrating the tendency for \(\mathbf{q}_{\mathrm{b}}\) to rotate about the origin, Eqns (9a,9b). 
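The correlators in Eqs. (9a,9b) can be estimated directly from sampled breathing-mode coordinates. The sketch below does this for a synthetic noisy rotating signal standing in for \((x_{\mathrm{b}},y_{\mathrm{b}})\); the signal parameters are arbitrary and only the estimator itself is the point.

```python
import numpy as np

# Estimate the correlators of Eqs. (9a,9b) from a uniformly sampled trajectory
# of the breathing coordinates (x_b, y_b).  The synthetic signal below (a noisy
# rotating mode) is a stand-in for measured data; its parameters are arbitrary.

rng = np.random.default_rng(1)
dt, n = 1e-5, 200_000
t = np.arange(n) * dt
omega = 2 * np.pi * 3e3                          # mode frequency (assumed)
phase = omega * t + 0.5 * np.cumsum(rng.standard_normal(n)) * np.sqrt(dt)
r = 1.0 + 0.1 * rng.standard_normal(n)
x_b, y_b = r * np.cos(phase), r * np.sin(phase)

def correlate(u, v, max_lag):
    """Biased estimate of <u(t+tau) v(t)> for lags 0 .. max_lag-1 samples."""
    u, v = u - u.mean(), v - v.mean()
    return np.array([np.mean(u[lag:] * v[:n - lag]) for lag in range(max_lag)])

lags = 2000
acf_xx = correlate(x_b, x_b, lags)   # ~ cos(Re(omega_-) tau) times a decay, Eq. (9a)
ccf_xy = correlate(x_b, y_b, lags)   # ~ sin(Re(omega_-) tau) times a decay, Eq. (9b)
# A pi/2 shift between acf_xx and ccf_xy indicates circulation of q_b about the origin.
```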
This motion corresponds to stochastic rotation of the individual particles about their beam axes, with a relative phase shift of \(\pi\) raads [31; 52], as illustrated in Fig. 1b. Figures 2(e,f) gives an analysis of the statistical behaviour of the azimuthal coordinates of the particles, \(\phi_{1/2}\), see Fig. 1(a), in the form of PDFs at discrete bins of \(\Delta\phi_{k}=\phi_{1}-\phi_{2}\), \(p(\Delta\phi_{k})\), as the optical power is increased. For \(d=8.6\mu\)m, BR grows to dominate the motion, while CoM is emphasized in for \(d=8.9\mu\)m. In Fig. 2(g), we plot the relative Shannon entropy, \(S_{r}=1-S/S_{max}\) where \(S_{max}=\ln N\), \(N\) is the number of bins in PDF, and \(S=-\sum_{k=1}^{N}p(\Delta\phi_{k})\ln p(\Delta\phi_{k})\)[53]. In this context, \(S_{r}\) measures synchronization strength, taking values between zero and one, where a value of one indicates perfect synchronization. For the sub-threshold regime, these values of \(S_{r}\) suggest a form of stochastic synchronization, arising prior to limit cycle formation, as the instability is approached. ### Above threshold behaviour, synchronization and phase locking of limit cycle oscillators As a dominant QM grows in amplitude, the particles begin to stray further from the beam axis, where the forces are non-linear, allowing for the formation of self-sustained, periodic trajectories or _limit cycles_[54]. Eventually the fixed point destabilizes and each particle forms its own limit cycle, resembling a circular orbit. These limit cycles exist independently of one another, and execute a complete cycle in a well defined time period, \(T\), with fundamental frequency, \(\Omega=2\pi/T\). The position of the particle on the limit cycle can be associated with a single scalar coordinate, the phase, \(\phi\). In our system, two limit cycles are formed, consisting of approximately circular orbits, Fig. 3a. In general, collections of weakly interacting limit cycles have a tendency to _synchronize_. That is, their slightly differing fundamental frequencies are drawn together so that the ensemble oscillates collectively with a single, unique frequency [54]. This process is the consequence of small phase adjustments which accumulate over the course of many time periods. In this respect, the mechanisms underpinning synchronization differ fundamentally from those that generate other forms of highly correlated motion as found, for instance, in conservative optomechanics [55], in which the interaction is more direct and the correlation is directly proportional to a coupling constant. In the mesoscopic regime, synchronization is always accompanied by significant levels of thermal noise. Since limit cycles are neutrally stable, the phase diffuses as it advances. In particular, the total change in phase over a time interval, \(\Phi\), has a variance that increases linearly with time [54]. Synchronization of stochastic systems therefore requires that the interaction forces are strong enough to overcome phase diffusion. Nevertheless, fluctuations will still give rise to _phase slips_, in which the relative phase of oscillators changes abruptly between phase locked states. This can be likened to Kramers hopping in Figure 3: Experimental behaviour of non-conservative oscillators, with the same fundamental frequencies, above threshold where the limit cycles are formed and synchronized. (a) Trajectories of both particles in \(x-y\) plane corresponding to 10 limit cycle periods. 
Positions of each particle were detected by an independent quadrant photodiode detector (QPD). (b) The correlated motion of both particles along \(x\) axis is visible to the naked eye in records from a fast CMOS camera plotted against time. Blue and red curves link the centers of particle images. (c) Accumulated phase difference, \(\Delta\Phi=\Phi_{1}-\Phi_{2}\), plotted against time, shows discrete phase jumps between phase-locked states. (d) Experimental demonstration of the phase diffusion properties comparing the linear time dependence of the vaccinated phase of a single oscillator i.e. \(\mathrm{Var}(\Phi_{1})\) (blue) with the variance in the accumulated phase difference between the synchronized oscillators i.e. \(\mathrm{Var}(\Delta\Phi)\). (e) Probability density function PDF of the relative phase \(\Delta\phi\) of the oscillators obtained by a fast CMOS camera and mapped on the interval \(\langle-\pi,\pi\rangle\). a potential [56; 57] although, in this case, the transitions take place between non-equilibrium states and a closer analogy is with stochastic motion in a tilted periodic potential [58; 59]. The above threshold synchronization of our twin oscillators is explored experimentally in figures (3) and (4). Accompanying simulations and theoretical comments are provided in Supplementary Note. We take the azimuthal coordinate of the particle displacement as the phase of the oscillator. This is less rigorous than the definition obtained from _phase reduction_ but is far simpler to describe and sufficiently accurate to capture the required phenomena (see Supplementary Note. In the following discussion we distinguish between the absolute phase of an oscillator, \(\phi_{i}\) with \(i=1,2\), which specifies the position of particle \(i\) on its limit cycle and takes values in the interval \((-\pi,\pi)\), and the accumulated phase, \(\Phi_{i}\), which specifies the total phase difference covered in a given period of time. Associated with these quantities are the absolute phase difference between the oscillators, \(\Delta\phi=(\phi_{1}-\phi_{2})\) (restricted again to the interval \((-\pi,\pi)\) and the accumulated phase difference, \(\Delta\Phi=(\Phi_{1}-\Phi_{2})\). Figure 3 describes an established synchronized state for oscillators with approximately equal fundamental frequencies. Experimentally measured trajectories are depicted in Fig. 3(a, b). The accumulated phase difference shows, in Fig. 3(c), a typical noise-induced phase slip. Figure 3(d) confirms the established linear relationship between the time dependence of the variance of the accumulated phase, \(\Phi_{1}\), for a single oscillator (blue line). In contrast, the variance in the difference of the accumulated phases of a pair of synchronized oscillators saturates quickly demonstrating that the interactions are strong enough to suppress phase diffusion in the synchronizing pair. The steady-state PDF of \(\Delta\phi\) appears in Fig. (3)(e), showing a sharply peaked phase difference with relative Shannon entropy \(S_{r}=0.39\). The detuning behaviour obtained when the limit cycles of the oscillators have differing fundamental frequencies is described in Figure 4. Figures 4a-c show the effect of varying the second beam waist radius between \(w_{0}=1.03\,\mu\)m and \(w_{0}=1.06\,\mu\)m, holding the first constant at \(w_{0}=1.03\,\mu\)m. This has the effect of continuously varying the fundamental frequency of the second limit cycle oscillator (see [31]). Figs. 
4a,b show the accumulated phase difference, \(\Delta\Phi\), for a series of detuned oscillators over different time intervals. Over long times, Fig. 4a shows a steady increase in the accumulated relative phase of the oscillators as the faster oscillator pulls ahead of the slower. Fig. 4b shows the detailed motion over shorter time scales. As described previously, the oscillators synchronize perfectly for short intervals before fluctuations induce a phase slip of \(2\pi\) radians, or an integer multiple. As the detuning increases the time between phase slips decreases until synchronization becomes impossible. The effect on \(p(\Delta\phi)\) is to lower and broaden the main peak, shifting it to slightly greater phase differences on average, Fig. 4c. Over the course of this variation the Shannon entropy changes from 0.35 to 0.17, Fig. 4d. These experimental results are supported by dynamical simulations (see Supplementary Note), which shed light on the synchronization mechanism. In particular, optical interactions alone cannot account for the observed effects. Dissipative, hydrodynamic interactions act cooperatively to generate and stabilize synchronized states. Figure 4: Behavior of the relative phase of two interacting limit cycle oscillators having detuned fundamental frequencies \(\Delta f=f_{2}-f_{1}\). (a) Long-time traces of the accumulated phase difference \(\Delta\Phi\) for a series of detuned oscillators. (b) Shorter-time traces of records (a) illustrating phase slips of an integer multiple of \(2\pi\) radians. As the detuning increases the time between phase slips decreases until synchronization becomes impossible. (c) Broadening of the PDF of the absolute phase difference, \(\Delta\phi\), for larger detuning. (d) Quantitative characterization of weakening of synchronization using the relative Shannon entropy \(S_{r}\). ## III Discussion In this article we have described the emergence of archetypal, non-equilibrium behaviour in a non-conservative optomechanical system consisting of a pair of non-Hermitian oscillators driven by optical spin momentum. We have shown that stochastic motion becomes progressively more biased, coherent and deterministic as the optical power is increased. Particular forms of motion, described by _quasi-modes_, begin to dominate. Further increases in power result in a collective Hopf bifurcation and the formation of limit cycle oscillations which interact and synchronize. This general behaviour is representative of a far wider class of systems than the particular example dealt with here. In addition, our results suggest that hydrodynamic interactions play a role in the formation of coordinated motion in both the linear and non-linear regimes. The dependence of hydrodynamic coupling, and therefore dissipation rate, on the configuration of the system appears to influence the formation of these non-equilibrium steady states. This effect could be analogous to the minimal dissipation principle of Onsager [60]. We note that hydrodynamic interactions have been inferred experimentally in similar systems [33, 55]. These considerations imply a fundamental difference between this system and the paradigmatic Kuramoto model for synchronization [28], in which the underlying mechanism relies on reactive forces alone. More generally, the combination of structured non-conservative forces and coupled dissipation opens up numerous new themes for continuing research in levitational optomechanics. 
These avenues range from the development of novel forms of mesoscale topological matter [40, 42], to the experimental exploration of emerging and controversial issues in the stochastic thermodynamics of non-equilibrium states, such as the synchronized states described here. Application of the cooling protocols previously applied to conservative systems could even push these effects towards the quantum regime, allowing experiments to probe the quantum-classical interface for dynamic phenomena such as limit cycle formation or synchronization. ## IV Methods ### Experimental details In order to optically confine the particles inside a vacuum chamber and characterize their optical binding, we used a source of infrared laser light operating at the vacuum wavelength of 1064 nm with low intensity noise (Coherent Mephisto). We used Thorlabs achromatic doublets with antireflection coating ACN254-XXX-C (L1 - L6), dielectric mirrors PF10-03 (M1 - M3) and aspheric lenses C240TME-C with antireflection coating (AS1). A collimated Gaussian beam from an infrared laser was expanded by a telescope formed by lenses L1 (\(f_{1}=150\) mm) and L2 (\(f_{2}=300\) mm) and projected on a spatial light modulator (SLM) (Hamamatsu LCOS X10468-07). The phase mask encoded at the SLM diffracted the beam into the \(\pm 1\) diffraction orders that were used to generate the two counter-propagating trapping beams; the zeroth and higher orders were blocked by a stop placed in the focal plane of lens L3 (\(f_{3}=400\) mm). Figure 5: Experimental set-up of two pairs of counter-propagating beams forming standing wave optical traps with circularly polarized light. Particles are trapped in a small vacuum chamber, dashed square in the inset, placed between the focusing aspherical lenses AS1,2. Positions of particles in \(x-z\) plane are magnified by a telescope and observed by CAM1. The two transmitted 1\({}^{\mathrm{st}}\) - order beams were reflected from prisms P1 and collimated by lenses L4 (\(f_{4}=200\) mm). These lenses formed telescopes with the lens L3, projecting the SLM plane on the mirrors M2. The SLM plane was then imaged onto the back focal planes of aspheric lenses AS1 (\(f=8\) mm, maximal NA = 0.5) by telescopes consisting of lenses L5 (\(f_{5}=100\) mm) and L6 (\(f_{6}=150\) mm). Two pairs of horizontal counter-propagating laser beams generated by splitting a single incident beam with a spatial light modulator (SLM) were focused inside the vacuum chamber by two aspheric lenses with NA=0.5, leading to the beam waist radii \(w_{0}\) adjustable in the range \(1-3\,\mu\)m. The focal planes of the four beams created in the trapping region were slightly displaced from each other along the beam propagation direction \(z\) (by approximately 5 \(\mu\)m, see red lines in Fig. 5) to increase the axial trapping stability [61] (see Fig. 5). Widths of the focused trapping beams in the sample chamber could be controlled by adjusting the area of the diffraction grating imposed upon the SLM. Polystyrene particles (Polysciences, mean diameter \(850\,\mathrm{nm}\)) were dispersed in isopropyl alcohol and after \(\sim 20\) min sonication of the suspension, droplets containing the particles were sprayed into the trapping region in the vacuum chamber employing an ultrasonic nebulizer (Beurer IH 50). We employed two quadrant photo diodes to record the motion of the particles. Trajectories were recorded for durations of \(2\,\mathrm{s}\) with \(1\,\mathrm{MHz}\) sampling frequency. 
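As an illustration of the kind of post-processing applied to such records, the sketch below converts sampled \((x,y)\) trajectories of the two particles into azimuthal phases, builds the accumulated phase difference \(\Delta\Phi\), and evaluates the relative Shannon entropy \(S_{r}=1-S/\ln N\) used above to quantify phase locking. The synthetic input arrays are hypothetical placeholders for the QPD signals; this is not the authors' analysis code.

```python
import numpy as np

# Sketch of the phase analysis: extract azimuthal phases from sampled (x, y)
# positions, unwrap them into accumulated phases, and compute the relative
# Shannon entropy S_r = 1 - S/ln(N) of the phase-difference histogram.
# x1, y1, x2, y2 below are synthetic stand-ins for the measured signals.

rng = np.random.default_rng(0)
n = 100_000
t = np.arange(n) * 1e-6                  # 1 MHz sampling, as in the experiment
f = 3e3                                  # nominal orbital frequency (assumed)
x1 = np.cos(2*np.pi*f*t) + 0.05*rng.standard_normal(n)
y1 = np.sin(2*np.pi*f*t) + 0.05*rng.standard_normal(n)
x2 = np.cos(2*np.pi*f*t + 0.3) + 0.05*rng.standard_normal(n)
y2 = np.sin(2*np.pi*f*t + 0.3) + 0.05*rng.standard_normal(n)

Phi1 = np.unwrap(np.arctan2(y1, x1))     # accumulated phase of particle 1
Phi2 = np.unwrap(np.arctan2(y2, x2))     # accumulated phase of particle 2
dPhi = Phi1 - Phi2                       # accumulated phase difference
dphi = (dPhi + np.pi) % (2*np.pi) - np.pi  # absolute difference on [-pi, pi)

def relative_shannon_entropy(samples, nbins=60):
    """S_r = 1 - S/S_max from the histogram of the phase difference."""
    counts, _ = np.histogram(samples, bins=nbins, range=(-np.pi, np.pi))
    p = counts / counts.sum()
    p = p[p > 0]
    return 1.0 - (-(p * np.log(p)).sum()) / np.log(nbins)

print("S_r =", relative_shannon_entropy(dphi))
# Phase slips would appear as ~2*pi steps in dPhi between locked plateaus.
```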
At the same time we used a fast CMOS camera (I-speed 5 series from IX Camera, exposure time was set to 1 \(\mu\)s and the frame rate was 300 kHz) to record the motion of the particles in the \(x-z\) plane. To enable position tracking of the optically trapped and bound particles, the sample was illuminated by an independent laser beam (Coherent Prometheus, vacuum wavelength 532 nm) propagating along the \(y\)-direction perpendicular to the imaging \(xz\)-plane. The large beam waist radius \(w_{0}=40\,\mu\)m and low power (approximately \(5\,\mathrm{mW}\) at the sample) of the green illuminating beam ensured its negligible contribution to the net optical force acting on the particles. Typically, we recorded at least 100 000 frames from the studied optically bound structures to obtain sufficiently long trajectories for the analysis of their motional dynamics. The off-line tracking of the particle position from the high-speed video recordings was based on the determination of symmetries in the particle images [62]. Briefly, since a spherical particle produces an azimuthally invariant image, we used the shift property of the Fourier transform and looked for the best horizontal and vertical symmetries in the particle image, which provided us with the information about the in-plane \(x\) and \(z\) coordinates. ## V Acknowledgement The Czech Science Foundation (GF21-19245K); Akademie ved Ceske republiky (Praemium Academiae); Ministerstvo Skolstvi mladeze a telovychovy (CZ.02.1.01/0.0/0.0/16.026/0008460). ## VI Author contributions SHS, OB, and PZ designed and developed the study from the theoretical and experimental aspects, OB, MD, PJ, and JJ upgraded the experimental setup and performed the measurements, SHS, OB, and PZ analyzed the experimental data and compared them to the theoretical results. SHS, OB, and PZ contributed to the text of the manuscript.
2303.04886
Some density results involving the average order of a finite group
Let $o(G)$ be the average of the element orders of a finite group $G$. A research topic concerning this quantity is understanding the relation between $o(G)$ and $o(H)$, where $H$ is a subgroup of $G$. Let $\mathcal{N}$ be the class of finite nilpotent groups and let $L(G)$ be the subgroup lattice of $G$. In this paper, we show that the set $\lbrace \frac{o(G)}{o(H)} \ | \ G\in\mathcal{N}, H\in L(G)\rbrace$ is dense in $[0, \infty)$. Other density results are outlined throughout the paper.
Mihai-Silviu Lazorec
2023-03-08T20:51:17Z
http://arxiv.org/abs/2303.04886v1
# Some density results involving the average order of a finite group ###### Abstract Let \(o(G)\) be the average of the element orders of a finite group \(G\). A research topic concerning this quantity is understanding the relation between \(o(G)\) and \(o(H)\), where \(H\) is a subgroup of \(G\). Let \(\mathscr{N}\) be the class of finite nilpotent groups and let \(L(G)\) be the subgroup lattice of \(G\). In this paper, we show that the set \(\{\frac{o(G)}{o(H)}\mid G\in\mathscr{N},H\in L(G)\}\) is dense in \([0,\infty)\). Other density results are outlined throughout the paper. **MSC (2010):** Primary 20D15; Secondary 20D60, 40A05. **Key words:** element orders, \(p\)-groups, nilpotent groups, density of a set ## 1 Introduction Let \(G\) be a finite group. In [5], A. Jaikin-Zapirain finds a super-logarithmic lower bound for the number of conjugacy classes \(k(G)\) of \(G\), when \(G\) is nilpotent. More exactly, Theorem 1.1 of the same paper states that \[k(G)>10^{-4}\cdot\frac{\log_{2}\log_{2}n}{\log_{2}\log_{2}\log_{2}n}\cdot\log _{2}n,\] where \(G\) is a nilpotent group of order \(n\geq 5\). One of the tools which plays a significant role in the proof of the above result is the so-called average order of \(G\), i.e. the quantity \[o(G)=\frac{1}{|G|}\sum_{x\in G}|x|,\] where \(|x|\) denotes the order of an element \(x\in G\). Among others, the author proves that \(o(G)\geq o(Z(G))\), for any finite group \(G\), and suggests that it would be interesting to further investigate the relation between the average order of \(G\) and the average orders of its subgroups by answering the following question: **Question 1.1.**_Let \(G\) be a finite (\(p\)-)group and let \(N\) be a normal (abelian) subgroup of \(G\). Is it true that \(o(G)\geq o(N)^{\frac{1}{2}}\)?_ Question 1.1 remained unanswered for nearly a decade. During 2021, E.I. Khukhro, A. Moreto and M. Zarrin published the paper [6] which provides a negative answer to a generalized version of Jaikin-Zapirain's question. More exactly, Theorem 1.2 of [6] states that given a real number \(c>0\) and a prime number \(p\geq\frac{3}{c}\), one can construct a \(p\)-group \(G\) with a normal abelian subgroup \(N\) such that \(o(G)<o(N)^{c}\). Hence, for \(c=\frac{1}{2}\), it is clear that there are counterexamples to Question 1.1. By following the notations in [6], these counterexamples are constructed by taking \(G\) to be a semidirect product of a homocyclic group \(U_{s}\) of exponent \(p^{s}\), where \(s=p+1\), and a so-called secretive \(p\)-group \(P\) (see [11] and Lemma 4.1 of [6]), while \(N\) is set to be \(U_{s}\). Let \(\mathscr{F}\) be the class of all finite groups, let \(\mathscr{N}\) be the class of all finite nilpotent groups and let \(L(G)\) be the subgroup lattice of a finite group \(G\). For a subset \(A\) of \(\mathbb{R}\), we denote by \(\overline{A}\) the closure of \(A\) with respect to the usual topology \(\tau_{\mathbb{R}}\) of \(\mathbb{R}\). If we work with a different topology, say \(\tau\), we denote the closure of \(A\), with respect to \(\tau\), by \(\overline{A}_{\tau}\). This paper also aims to investigate the relation between \(o(G)\) and \(o(H)\), where \(H\in L(G)\), by studying the density of the set \[O_{\mathscr{C}}=\left\{\frac{o(G)}{o(H)}\ \bigg{|}\ G\in\mathscr{C},H\in L(G)\right\}\] in \([0,\infty)\), where \(\mathscr{C}\) is a specific class of finite groups. 
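For small groups, the quantities above are easy to compute directly. The following brute-force sketch (not taken from the paper, just an illustration) evaluates \(o(G)=\psi(G)/|G|\) for a finite abelian group written as a direct product of cyclic factors, using the fact that the order of \((a_{1},\ldots,a_{k})\) is the least common multiple of the orders of its components, and then forms a ratio \(o(G)/o(H)\).

```python
from math import gcd
from itertools import product
from functools import reduce

# Brute-force computation of the average order o(G) = psi(G)/|G| for a finite
# abelian group G = C_{n_1} x ... x C_{n_k}.  The order of an element
# (a_1, ..., a_k) is the lcm over i of n_i / gcd(n_i, a_i).  Illustrative only.

def lcm(a, b):
    return a * b // gcd(a, b)

def element_order(element, moduli):
    return reduce(lcm, (n // gcd(n, a) for a, n in zip(element, moduli)), 1)

def average_order(moduli):
    """o(G) for G = C_{moduli[0]} x ... x C_{moduli[-1]}."""
    elements = product(*(range(n) for n in moduli))
    psi = sum(element_order(g, moduli) for g in elements)
    size = 1
    for n in moduli:
        size *= n
    return psi / size

# Example: G = C_4 x C_2 with subgroup H = C_2 x C_2.
oG, oH = average_order([4, 2]), average_order([2, 2])
print(oG, oH, oG / oH)   # for abelian G and H a subgroup, the ratio lies in [1, oo)
```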
We manage to show that \(O_{\mathscr{F}}\) is dense in \([0,\infty)\) as a consequence of our main result which is even stronger and states that: **Theorem 1.2.**_The set \(O_{\mathscr{N}}\) is dense in \([0,\infty)\)._ An immediate consequence of Theorem 1.2 is obtained as follows. Let \(\mathscr{C}\) be a class of finite groups such that \(\mathscr{N}\subseteq\mathscr{C}\). Then \(O_{\mathscr{N}}\subseteq O_{\mathscr{C}}\subseteq[0,\infty)\), so \(\overline{O_{\mathscr{N}}}\subseteq\overline{O_{\mathscr{C}}}\subseteq[0,\infty)\). Since \([0,\infty)\) is a closed set and \(\overline{O_{\mathscr{N}}}=[0,\infty)\), we get: **Corollary 1.3.**_Let \(\mathscr{C}\) be a class of finite groups such that \(\mathscr{N}\subseteq\mathscr{C}\). Then \(O_{\mathscr{C}}\) is dense in \([0,\infty)\). In particular, \(O_{\mathscr{F}}\) is dense in \([0,\infty)\)._ We end the introduction by mentioning that the average order of a finite group \(G\) may be also expressed as \[o(G)=\frac{\psi(G)}{|G|},\] where \(\psi(G)=\sum\limits_{x\in G}|x|\) is the sum of element orders of \(G\). During the last years, there was a growing interest in investigating this invariant. We refer the reader to [4] for a recent survey including relevant results concerning the sum of element orders of a finite group. ## 2 Proof of Theorem 1.2 and other results concerning the density of some sets As it was suggested in the first section, to obtain the density of \(O_{\mathscr{F}}\) in \([0,\infty)\), it would be sufficient to find a class of groups \(\mathscr{C}\subseteq\mathscr{F}\) such that \(\overline{O_{\mathscr{C}}}=[0,\infty)\). It is clear that \(\overline{O_{\mathscr{C}}}\subseteq[0,\infty)\). So, once we choose a candidate for the class \(\mathscr{C}\), it suffices to show that each \(a\in[0,\infty)\) is an adherent point of \(O_{\mathscr{C}}\), i.e. there is a sequence of groups \((G_{n})_{n\geq 1}\subset\mathscr{C}\) and a corresponding sequence \((H_{n})_{n\geq 1}\), where \(H_{n}\in L(G_{n})\), for all \(n\geq 1\), such that \[\lim_{n\to\infty}\frac{o(G_{n})}{o(H_{n})}=a.\] Our candidate for \(\mathscr{C}\) is \(\mathscr{N}\) and, in what follows, we justify this option. To expand our reasoning, we include the following preliminary result which is a consequence of the Proposition outlined on p. 863 of [7]. **Lemma 2.1**.: _Let \((x_{n})_{n\geq 1}\) be a sequence of positive real numbers such that_ \[\lim_{n\to\infty}x_{n}=0\ \ \text{and}\ \ \sum_{n=1}^{\infty}x_{n}=\infty.\] _Then the set containing the sums of all finite subsequences of \((x_{n})_{n\geq 1}\) is dense in \([0,\infty)\)._ We denote the \(n\)th prime number by \(p_{n}\). Lemma 2.1 is the main tool that is going to be used to show that each \(a\in[1,\infty)\) is an adherent point of \(O_{\mathscr{N}}\). Once this is done, it remains to cover the points \(a\in[0,1)\). For the first part, the main idea is to apply Lemma 2.1 for a sequence \((x_{n})_{n\geq 1}\), where \(x_{n}=\ln\frac{o(\widetilde{G_{n}})}{o(H_{n})}\). We are going to show that some suitable candidates for \((\widetilde{G_{n}})_{n\geq 1}\) and \((\widetilde{H_{n}})_{n\geq 1}\), such that the sequence \((x_{n})_{n\geq 1}\) defined above satisfies the hypotheses of Lemma 2.1, are \(\widetilde{G_{n}}=C_{p_{n}}^{m}\) and \(\widetilde{H_{n}}=C_{p_{n}}^{m-1}\) for a fixed integer \(m\geq 2\) (see the proof of Claim 2.5 below). 
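A quick numerical check of these candidates for \(m=2\) is given below (an illustration only; \(C_{p}^{2}\) is read here as \(C_{p}\times C_{p}\)). Since every non-identity element of a direct product of copies of \(C_{p}\) has order \(p\), one gets \(o(C_{p}^{r})=(1+(p^{r}-1)p)/p^{r}\) in closed form, and for \(m=2\) the resulting ratio reproduces the expression displayed in the proof of Claim 2.5 below; the sketch confirms that \(x_{n}\to 0\) while \(x_{n}\) behaves like \(1/p_{n}\), which is what Lemma 2.1 requires.

```python
import math

# Check, for m = 2, that x_n = ln( o(C_{p_n}^2) / o(C_{p_n}) ) tends to 0 while
# p_n * x_n tends to 1, so that x_n behaves like 1/p_n and the sum of the x_n
# diverges, as needed to apply Lemma 2.1.  C_p^r is read as the direct product
# of r copies of C_p: every non-identity element then has order p.

def o_elem_abelian(p, rank):
    size = p ** rank
    return (1 + (size - 1) * p) / size   # psi(G)/|G| with psi = 1 + (|G|-1)*p

for p in (2, 3, 5, 11, 31, 101, 1009):   # a few primes, for illustration
    x = math.log(o_elem_abelian(p, 2) / o_elem_abelian(p, 1))
    print(f"p = {p:5d}:  x = {x:.6f},  p*x = {p * x:.4f}")
```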
Consequently, by applying Lemma 2.1 and some calculus properties, we deduce that there exists a sequence \((G_{n})_{n\geq 1}\) of finite abelian groups and a corresponding sequence of subgroups \((H_{n})_{n\geq 1}\) such that \[\lim_{n\to\infty}\frac{o(G_{n})}{o(H_{n})}=a\in[1,\infty).\] This means that \([1,\infty)\subseteq\overline{O_{\mathscr{A}}}\), where \(\mathscr{A}\) is the class of finite abelian groups. The reverse inclusion also holds because \(\frac{o(G)}{o(H)}\geq 1\) for any finite abelian group \(G\) and any \(H\in L(G)\). Indeed, since \(G\) is self dual (see Chapter 8 of [9] or [2]), we know that for any \(H\in L(G)\), there is \(K\in L(G)\) such that \(H\cong\frac{G}{K}\). Hence, \[\frac{o(G)}{o(H)}=\frac{o(G)}{o(\frac{G}{K})}=\frac{1}{|K|}\cdot\frac{\psi(G)} {\psi(\frac{G}{K})}=\frac{\sum\limits_{x\in G}|x|}{\sum\limits_{x\in G}|xK|} \geq 1,\] so \(O_{\mathscr{A}}\subseteq[1,\infty)\) and this leads to \(\overline{O_{\mathscr{A}}}\subseteq[1,\infty)\). Thus, we state the following result. **Corollary 2.2**.: _The set \(O_{\mathscr{A}}\) is dense in \([1,\infty)\)._ We mention that Corollary 2.2 also holds if we replace \(\mathscr{A}\) with a class \(\widetilde{\mathscr{C}}\) of finite groups such that \(O_{\mathscr{A}}\subseteq O_{\widetilde{\mathscr{C}}}\subseteq[1,\infty)\). Finally, concerning the adherence property of the points \(a\in[0,1)\), we will mainly work with sequences formed of specific direct products of finite \(p\)-groups. Each such direct product has two main components: one is abelian, while the other one is a counterexample to Question 1.1 (see the proof of Claim 2.7 below). All finite groups that were highlighted in the last paragraphs are nilpotent and this consequently explains why our choice for \(\mathscr{C}\) is \(\mathscr{N}\). The following preliminary result includes some number theoretic and calculus properties which are going to be used further. **Lemma 2.3.** * _Let_ \(G_{1}\) _and_ \(G_{2}\) _be finite groups. If_ \((|G_{1}|,|G_{2}|)=1\)_, then_ \[o(G_{1}\times G_{2})=o(G_{1})\cdot o(G_{2}).\] * \[\sum\limits_{n=1}^{\infty}\frac{1}{p_{n}}=\infty.\] * _Let_ \((x_{n})_{n\geq 1},(y_{n})_{n\geq 1}\) _be sequences of positive real numbers. If_ \[\lim\limits_{n\rightarrow\infty}\frac{x_{n}}{y_{n}}\in(0,\infty),\] _then the series_ \(\sum\limits_{n=1}^{\infty}x_{n}\) _and_ \(\sum\limits_{n=1}^{\infty}y_{n}\) _have the same nature._ * _Let_ \((X,\tau)\) _and_ \((Y,\tau^{\prime})\) _be topological spaces, let_ \(f:X\longrightarrow Y\) _be a continuous function and let_ \(A,B\subseteq X\)_. If_ \(\overline{A}_{\tau}=\overline{B}_{\tau}\)_, then_ \(\overline{f(A)}_{\tau^{\prime}}=\overline{f(B)}_{\tau^{\prime}}\)_._ Concerning the previous lemma, we mention that item _i)_ states that the average order is a multiplicative function. This is a consequence of the multiplicativity of the sum of element orders (see Lemma 2.1 of [1]). A short proof of item _ii)_ may be found in [8]. For item _iii)_, one can check Theorem 10.9 of [3], while item _iv)_ is easily obtained using the characterization of the continuity of a function in terms of closure (see Proposition 6.12 of [10]). Let \(I=[1,\infty)\). Denote by \(\tau_{I}\) the subspace topology on \(I\). For a subset \(A\) of \(\mathbb{R}\), the closure of \(A\) with respect to \(\tau_{I}\) is \(\overline{A}_{\tau_{I}}=\overline{A}\cap I.\) By Corollary 2.2, we have \(\overline{O_{\mathscr{A}}}=I\). We deduce that \[\overline{O_{\mathscr{A}}}_{\tau_{I}}=\overline{I}_{\tau_{I}}. 
\tag{1}\] Since the function \[f:(I,\tau_{I})\longrightarrow(\mathbb{R},\tau_{\mathbb{R}}),\ \ \mbox{given by}\ \ f(x)=\frac{1}{x},\ \forall\ x\in I,\] is continuous, by Lemma 2.3, _iv)_, and (1), we get \[\overline{\left\{\frac{o(H)}{o(G)}\ \bigg{|}\ G\in\mathscr{A},H\in L(G)\right\}}=\overline{(0,1]}=[0,1].\] Therefore, one can state the following result. **Corollary 2.4.**_The set_ \[\left\{\frac{o(H)}{o(G)}\ \bigg{|}\ G\in\mathscr{A},H\in L(G)\right\}\] _is dense in \([0,1]\)._ We proceed now with the proof of the main result. **Proof of Theorem 1.2.** Recall that \(p_{n}\) denotes the \(n\)th prime number. We are going to complete some preliminary steps towards achieving our goal. **Claim 2.5.**_Let \(m\geq 2\) be an integer. The set_ \[\left\{\frac{o\big{(}\mathop{\bigtimes}\limits_{n\in I}C_{p_{n}}^{m}\big{)}}{o\big{(}\mathop{\bigtimes}\limits_{n\in I}C_{p_{n}}^{m-1}\big{)}}\ \Bigg{|}\ I\subset\mathbb{N}^{*},|I|<\infty\right\}\] _is dense in \([1,\infty)\)._ **Proof.** Consider the sequence \((x_{n})_{n\geq 1}\), where \(x_{n}=\ln\frac{o(C_{p_{n}}^{m})}{o(C_{p_{n}}^{m-1})}\), for all \(n\geq 1\). We have \[x_{n}=\ln\frac{p_{n}^{m+1}-p_{n}+1}{p_{n}^{m+1}-p_{n}^{m}+p_{n}^{m-1}}\in(0,\infty).\] As \(n\) approaches infinity, we get \[\lim_{n\to\infty}x_{n}=\ln 1=0. \tag{2}\] Further, take the sequence \((y_{n})_{n\geq 1}\) given by \(y_{n}=\frac{1}{p_{n}}\), for all \(n\geq 1\). Then \[\lim_{n\to\infty}\frac{x_{n}}{y_{n}}=\lim_{n\to\infty}\left(p_{n}\cdot\ln\frac{p_{n}^{m+1}-p_{n}+1}{p_{n}^{m+1}-p_{n}^{m}+p_{n}^{m-1}}\right)=1\in(0,\infty).\] By Lemma 2.3, _ii)_, _iii)_, we have \[\sum_{n=1}^{\infty}x_{n}=\infty. \tag{3}\] According to (2) and (3), the sequence \((x_{n})_{n\geq 1}\) satisfies the hypotheses of Lemma 2.1. Hence, we have \[\overline{\bigg{\{}\sum_{n\in I}x_{n}\ \bigg{|}\ I\subset\mathbb{N}^{*},|I|<\infty\bigg{\}}}=[0,\infty)\Longleftrightarrow\overline{\bigg{\{}\ln\bigg{(}\prod_{n\in I}\frac{o(C_{p_{n}}^{m})}{o(C_{p_{n}}^{m-1})}\bigg{)}\ \bigg{|}\ I\subset\mathbb{N}^{*},|I|<\infty\bigg{\}}}=[0,\infty). \tag{4}\] Since, by Lemma 2.3, _i)_, the average order is a multiplicative function, (4) becomes \[\overline{\bigg{\{}\ln\frac{o\big{(}\mathop{\bigtimes}\limits_{n\in I}C_{p_{n}}^{m}\big{)}}{o\big{(}\mathop{\bigtimes}\limits_{n\in I}C_{p_{n}}^{m-1}\big{)}}\ \bigg{|}\ I\subset\mathbb{N}^{*},|I|<\infty\bigg{\}}}=[0,\infty). \tag{5}\] Finally, since \[\exp:(\mathbb{R},\tau_{\mathbb{R}})\longrightarrow(\mathbb{R},\tau_{\mathbb{R}}),\ \ \text{given by}\ \ \exp(x)=e^{x},\forall\ x\in\mathbb{R},\] is continuous and (5) highlights the equality of two closed sets of \((\mathbb{R},\tau_{\mathbb{R}})\), we apply Lemma 2.3, _iv)_, to finish the proof of our claim, i.e. \[\overline{\left\{\frac{o\big{(}\mathop{\bigtimes}\limits_{n\in I}C_{p_{n}}^{m}\big{)}}{o\big{(}\mathop{\bigtimes}\limits_{n\in I}C_{p_{n}}^{m-1}\big{)}}\ \Bigg{|}\ I\subset\mathbb{N}^{*},|I|<\infty\right\}}=[1,\infty).\] **Claim 2.6.**_Let \(m\geq 2\) and let \(J\) be a finite non-empty subset of \(\mathbb{N}^{*}\). 
The set_ \[\left\{\begin{array}{c}o\big{(}\mathop{\bigtimes}C_{p_{n}}^{m}\big{)}\\ \frac{o\big{(}\mathop{\bigtimes}C_{p_{n}}^{m-1}\big{)}}{o\big{(}\mathop{\bigtimes }C_{p_{n}}^{m-1}\big{)}}\end{array}\right|\ I\subset\mathbb{N}^{*}\setminus J,|I|<\infty\right\}\] _is dense in \([1,\infty)\)._ **Proof.** This is obtained by repeating the proof of Claim 2.5 for the sequence \((\widetilde{x_{n}})_{n\in\mathbb{N}^{*}\setminus J}\), where \(\widetilde{x_{n}}=\ln\frac{o(C_{p_{n}}^{m})}{o(C_{p_{n}}^{m-1})}\), for all \(n\in\mathbb{N}^{*}\setminus J\). The same reasoning can be repeated since \((\widetilde{x_{n}})_{n\in\mathbb{N}^{*}\setminus J}\) is obtained by removing a finite number of terms from the original sequence \((x_{n})_{n\geq 1}\) taken in the proof of Claim 2.5, so \((\widetilde{x_{n}})_{n\in\mathbb{N}^{*}\setminus J}\) also satisfies the hypotheses of Lemma 2.1. **Claim 2.7.**_Any \(a\in[0,1)\) is an adherent point of \(O_{\mathscr{N}}\)._ **Proof.** Suppose that \(a=0\). As we outlined in the first section, for \(n\geq 4\) (i.e. for a prime greater than or equal to 7), if we take \(G_{n}=U_{s_{n}}P_{n}\) to be a semidirect product of a homocyclic group \(U_{s_{n}}\) of exponent \(p_{n}^{s_{n}}\), where \(s_{n}=p_{n}+1\), and a secretive \(p_{n}\)-group \(P_{n}\), one has \(o(G)<o(U_{s_{n}})^{\frac{1}{2}}\). According to the proof of Theorem 1.2 of [6], the following inequalities hold: \[o(G_{n})<p_{n}^{3}\ \ \text{and}\ \ o(U_{s_{n}})\geq p_{n}^{p_{n}},\ \forall\ n \geq 4.\] Hence, \[\frac{o(G_{n})}{o(U_{s_{n}})}<\frac{p_{n}^{3}}{p_{n}^{p_{n}}}. \tag{6}\] As \(n\) approaches infinity, (6) leads us to \[\lim_{n\to\infty}\frac{o(G_{n})}{o(U_{s_{n}})}=0, \tag{7}\] so \(a=0\) is an adherent point of \(O_{\mathscr{N}}\). Let \(a\in(0,1)\). By (7), there is a sufficiently large \(N\) such that \(a\geq\frac{o(G_{N})}{o(U_{s_{N}})}\). Consequently, \(a\cdot\frac{o(U_{s_{N}})}{o(G_{N})}\in[1,\infty)\). If we take \(J=\{N\}\) in Claim 2.6, it follows that there is a sequence of finite abelian groups \((\widetilde{G_{n}})_{n\geq 1}\) and a corresponding sequence \((\widetilde{H_{n}})_{n\geq 1}\), where \(\widetilde{H_{n}}\in L(\widetilde{G_{n}})\) for all \(n\geq 1\), such that \[\lim_{n\to\infty}\frac{o(\widetilde{G_{n}})}{o(\widetilde{H_{n}})}=a\cdot \frac{o(U_{s_{N}})}{o(G_{N})}. \tag{8}\] Finally, we consider the sequences \((G_{N}\times\widetilde{G_{n}})_{n\geq 1}\) and \((U_{s_{N}}\times\widetilde{H_{n}})_{n\geq 1}\). Note that \((|G_{N}|,|\widetilde{G_{n}}|)=(|U_{s_{N}}|,|\widetilde{H_{n}}|)=1\), for all \(n\geq 1\). Hence, by Lemma 2.3, _i_), and (8), we conclude that \[\lim_{n\to\infty}\frac{o(G_{N}\times\widetilde{G_{n}})}{o(U_{s_{N}}\times \widetilde{H_{n}})}=\frac{o(G_{N})}{o(U_{s_{N}})}\cdot\lim_{n\to\infty}\frac{o (\widetilde{G_{n}})}{o(\widetilde{H_{n}})}=a.\] Hence, any \(a\in(0,1)\) is also an adherent point of \(O_{\mathscr{N}}\) and this concludes the proof of our claim. By Claims 2.5 and 2.7, it follows that \([0,\infty)\subseteq\overline{O_{\mathscr{N}}}\). Since the reverse inclusion also holds, the proof of Theorem 1.2 is complete. We end our paper by posing a question concerning the class \(\mathscr{P}\) of finite \(p\)-groups. If the answer would be affirmative, our main result would also follow since \(\mathscr{P}\subset\mathscr{N}\). **Question 2.8**.: _Is the set \(O_{\mathscr{P}}\) dense in \([0,\infty)\)?_ **Acknowledgements.** The author is grateful to the reviewers for their remarks which improve the previous version of the paper. 
This work was supported by a grant of the "Alexandru Ioan Cuza" University of Iasi, within the Research Grants program, Grant UAIC, code GI-UAIC-2021-01.
2310.03009
Electronic structure evolution of the magnetic Weyl semimetal Co$_3$Sn$_2$S$_2$ with hole and electron doping
Co$_3$Sn$_2$S$_2$ has been established as a prototype of a magnetic Weyl semimetal, exhibiting a "giant" anomalous Hall effect in its ferromagnetic phase. An attractive feature of this material is that the Weyl points lie close to the Fermi level, so one can expect a high sensitivity of the topological properties to hole or electron doping. We present here a direct observation with Angle Resolved Photoemission Spectroscopy of the evolution of the electronic structure under different types of substitutions: In for Sn (hole doping outside the kagome Co plane), Fe for Co (hole doping inside the kagome Co plane) and Ni for Co (electron doping inside the kagome Co plane). We observe clear shifts of selected bands, which are due both to doping and to the reduction of the magnetic splitting by doping. We discriminate between the two by studying the temperature evolution from the ferromagnetic to the paramagnetic state. We discuss these shifts with the help of DFT calculations using the Virtual Crystal Approximation. We find that these calculations reproduce the evolution with In rather well, but largely fail to capture the effect of Fe and Ni, where local behavior at the impurity site plays an important role.
Himanshu Lohani, Paul Foulquier, Patrick Le Fevre, Francois Bertran, Dorothee Colson, Anne Forget, Veronique Brouet
2023-10-04T17:53:17Z
http://arxiv.org/abs/2310.03009v1
# Electronic structure evolution of the magnetic Weyl semimetal Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) ###### Abstract Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) has been established as a prototype of magnetic Weyl semimetal, exhibiting a "giant" anomalous Hall effect in its ferromagnetic phase. An attractive feature of this material is that Weyl points lie close to Fermi level, so that one can expect a high reactivity of the topological properties to hole or electron doping. We present here a direct observation with Angle Resolved Photoemission Spectroscopy of the evolution of the electronic structure under different types of substitutions : In for Sn (hole doping outside the kagome Co plane), Fe for Co (hole doping inside the kagome Co plane) and Ni for Co (electron doping inside the kagome Co plane). We observe clear shifts of selected bands, which are due both to doping and to the reduction of the magnetic splitting by doping. We discriminate between the two by studying the temperature evolution from ferromagnetic to paramagnetic state. We discuss these shifts with the help of DFT calculations using the Virtual Crystal Approximation. We find that these calculations reproduce rather well the evolution with In, but largely fail to capture the effect of Fe and Ni, where local behavior at the impurity site plays an important role. ## I Introduction Topological quantum materials have become a core area of research in the field of condensed matter physics in past decade. Weyl semimetals (WSM) form a part of this broad area of research. They feature inverted bands between which a gap closes at only a few non-degenerate Weyl points (WPs). These points act as sources or sinks of Berry flux, giving rise to novel magnetoelectric effects, such as the chiral anomaly [1]. They can be formed either by breaking the inversion symmetry (IS) or the time reversal symmetry (TRS). The best known examples of WSMs are due to IS breaking, as studied in TaS [2], TaP [3] or TaIrTe [4]. There are fewer examples of TRS broken WSMs, as they inherently display strong electronic correlations, which are more difficult to control. Recently Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) has been identified as a prototype example of a magnetic WSM [5]. It has a rhombohedral structure, where Co atoms form a kagome pattern layer. Below the Curie temperature of \(\sim\) 177 K [5], the Cobalt atoms host local moments M = 0.3 \(\mu_{B}\), which are ferromagnetically aligned along the c-direction [6]. More complex helimagnetic and antiferromagnetic phases have been suggested to form with doping [7]. Despite the rather small value of the magnetic moment, a record value of the anomalous Hall conductivity (AHC) has been measured by Liu _et al._[5], which is considered as evidence of the strong Berry curvature around the WPs contributing significantly to the AHC [8; 9]. Subsequently, Angle-resolved photoelectron spectroscopy (ARPES) has mapped linearly dispersive bulk Weyl bands crossing just above the Fermi level (E\({}_{F}\)) [10; 11], in fair agreement with DFT band structure calculations [12; 13]. Moreover, they observed surface states, which could form Fermi arcs between the Weyl points, although there remain discrepancies between the reports about which of these surface states are topological [10; 14]. Above \(T_{C}\), a \(\sim\) 100 meV shift of the bulk bands has been observed [14; 15], with the surface states either vanishing [14] or smoothly evolving to Z\({}_{2}\) type surface states [15]. 
An attractive feature of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) is that WPs lie only \(\sim\) 60 meV above E\({}_{F}\). Therefore, attempts have been made to tune E\({}_{F}\) position by changing the chemical potential by hole and electron like doping for fundamental study and applications [16; 17; 8; 19]. For doping, Co and Sn are two suitable substitution sites in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\). When Cobalt (atomic number 27) is replaced by Fe (atomic number 26) or Nickel (atomic number 28), one can expect a hole and electron like doping, respectively. However, these modifications take place directly in the active kagome Co plane and they may modify more deeply the electronic structure. On the other hand, substitution of In (atomic number 49) at Sn (atomic number 50) site provides another way to add holes in the system. Although there are two Sn sites inside and outside the kagome planes, In was found to substitute preferentially at the interlayer position [20]. Earlier experiments have shown that the T\({}_{c}\) of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) decreases by In [21; 8], Fe [17; 18] and Ni [22; 23] dopings. The AHC exhibits a non-linear change, _i. e._ it first increases at small doping and then decreases very fast at larger doping. The role of intrinsic [8] and extrinsic [17] contributions in these evolutions have been discussed. Some evidences of a local moment behavior at the dopant site have been reported, either by the observation with STM of a localized bound states at the In site [23] or by the appearance of a Kondo-like upturn below 50K in the resistivity of Fe-doped samples [17], which is absent for In doping [8]. Therefore, it remains to be understood to which amount the role of substitutions can be regarded as a simple doping effect or a larger perturbation. Will there be a simple rigid-like shift of the electronic structure? Will it be similar below and above T\({}_{c}\)? In this paper, we present direct observation with angle resolved photoelectron spectroscopy (ARPES) of the electronic band structure evolution of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) doped by In, Fe and Ni. We try to correlate these changes with changes in the magnetic moment by also studying the temperature dependence of the electronic structure. We compare our results with those of DFT-based first principles calculations using virtual crystal approximation (VCA) method [24] to mimic the doping effect. For clarity, we choose significantly doped samples _i. e._ x = 0.44 for In (Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\)), x = 0.42 for Fe (Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\)) and x = 0.6 for Ni (Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\)) for our study. We find that, as a first approximation, the evolution of the band structure can be described by a rigid-like shift, although the shift is different for Fe and In despite a similar hole doping value. For Ni doping, we observe new, previously unoccupied, electron pockets appearing near \(E_{F}\). With our temperature dependent study, we separate the amount of shift due to doping from the one due to magnetism. ## II Experimental details We have grown single crystals of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) and its doped compounds Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) by using Sn flux method, as reported before [5]. 
We checked crystallinity and stoichiometry of the samples by using XRD and energy dispersive X-ray (EDX) measurements. Magnetic properties were measured by Quantum Design SQUID system. Photoemission experiments were performed CASSIOPEE-B-ARPES beamline at SOLEIL synchrotron, using SCIENTA-R4000 electron analyser, where a liquid He based cryostat was used to control the sample temperature. The samples were cleaved _in-situ_ and measured under ultra high vacuum condition at base pressure \(\sim\) 5.0 \(\times\) 10\({}^{-11}\) mbar. The resolution was \(\sim\) 15 meV and 0.2\({}^{\circ}\) for energy and momentum respectively. We perform density functional calculations for Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) by using the WIEN2K code [25] with the experimentally determined structure of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\). We estimate the effect of doping using the VCA approximation at the Co site and neglecting structural changes. ## III Magnetic properties Fig.1(a)-(d) shows magnetization as a function of temperature in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) collected in field cooled conditions at the external field value of 1000 Oe. The external field was parallel to the c-axis of the crystals during the measurements. The magnetic moment rises sharply in all the samples indicating their ferromagnetic character, which is further confirmed by their ferromagnet hysteresis loop observed in the magnetic moment vs external field measurement at 10 K shown in Fig.1(e)-(h). We estimate ferromagnetic transition (T\({}_{c}\)) by using derivative plot (dM/dT vs T) and find that the T\({}_{c}\) of the pristine compound goes down by all types of doping. It is 177K, 140 K, 118 K and 20 K for pristine, Fe, In and Ni doping. Similarly, the magnetic moment per Cobalt atom obtained from saturation value of the magnetic moment from the hysteresis curves decreases. It is 0.33, 0.22, 0.15 and 0.04 \(\mu_{B}\) for pristine, Fe, In and Ni doping. Fig.1(i) shows that there is a nearly linear relation between T\({}_{c}\) and the saturated magnetic moment. On the other hand, the coercive field (H\({}_{c}\)) decreases by In and Fe doping but it increases in Ni doped samples as depicted in Fig.1(j). In a simple reasoning based on the number of available electrons, we expect nearly similar hole dopings for Fe and In (x = 0.42 and x = 0.44, respectively) and an electron doping x = 0.6 for Ni. A change of the total number of electrons can be simulated in a DFT calculation by the VCA method, where the average number of available electrons is assumed for all Co. In Fig.1(k), we compare the magnetic moment per cobalt atom calculated as a function of doping concentration x by using VCA calculation to the experimental values. As was shown in previous reports [9], for the pristine compound, the DFT calculation captures rather well the experimental moment, although it is slightly overestimated. Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) is a half metal, the near E\({}_{F}\) DOS is composed of Co-3d states, which acquire spin-up polarized character in the ferromagnetic state. In the VCA approach, the moment is simply proportional to the number of electrons filling these states, therefore it increases linearly with the electron filling and keeps increasing from the hole to the electron doped side. 
However, in the electron doped compound Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) the magnetic moment is only 17% of the value of pristine compound, which is quite contrary to this prediction. This clearly shows that this VCA approach is insufficient to treat the effect of doping associated with Co/Ni substitutions. Indeed, DFT calculation using a supercell approach [19; 22] predict the decrease of magnetic moment, associated with the disorder introduced by Ni. Furthermore, Ni and Co in the kagome plane hold quite different magnetic moments in these calculations, emphasizing that they cannot be treated as a single average atom. Interestingly, the calculated value with VCA for x = 0.4 (65% of its value in pristine compound) is rather close to the one found for In doping, where the magnetic moment decreases to \(\sim\) 57%. This suggests that for substitutions outside the Co network, a simple charge transfer can be assumed. On the contrary, the magnetic moment found in our experiment for Fe is somewhat higher (77%), suggesting that doping from Fe and In substitutions are quite different. Earlier report by Kassem _et al._Kassem _et al._ (2013) have suggested that T\({}_{c}\) decreases identically by Fe and In doping, which they considered as a proof of a similar effect of the two types of doping. While the nearly similar change in T\({}_{c}\) could be consistent with our findings, we observe that the change in the magnetic moment is relatively larger. As doping is an important knob to tune the topological properties of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), it is important to understand better how the electronic structure is modified by the different types of substitutions and particularly potential deviations from the rigid band filling picture. We will show that these substitutions indeed have a different impact on the electronic structure. ## IV Electronic band structure in ferromagnetic state We start our investigation by comparing the electronic structure at low temperatures among the four compounds. Fig.2(a-d) presents Fermi Surface (FS) taken at 30 K by ARPES with a photon energy of 117 eV and linear horizontal (LH) light polarization. In photoemission, the momentum perpendicular to the in-plane direction (k\({}_{z}\)) depends on the incident photon energy. We mapped the k\({}_{z}\) dispersion of bands in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) (see appendix A) and find that this photon energy of 117 eV corresponds to k\({}_{z}\simeq\) 0, in agreement with previous reports of the literatureKasahara _et al._ (2013). In all the FS, a similar pattern can be observed, with high intensity spots along the \(\Gamma\)-K direction and slightly above M point. For clarity, we use here a 2D Brillouin Zone (BZ), more details are given in Appendix B. For quantitative comparison of the band structure of the various dopings, we choose three representative cuts in the FSs, through the high symmetry direction \(\Gamma\) - K - M in first BZ (cut#1), through the high intensity point above M at k\({}_{y}\simeq\) 0.85 A\({}^{-1}\) (cut#2) and through \(\Gamma\) - K - M in second BZ (cut #3, this corresponds to k\({}_{z}\) = 2/3, see appendix B). In Appendix B, we give for reference the full electronic structure calculated by DFT along each cut. ARPES images taken along these different cuts are presented in Fig.2. In each case, we identify a similar structure, but with a shift that is doping dependent. 
Along cut#1 [Fig.2(e-h)], the band structure Figure 1: M vs T plot at external field value of 1000 Oe for Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) (a) Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\) (b), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) (c) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) (d). M vs H plot at temperature 10 K for Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) (e) Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\) (f), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) (g) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) (h). (i) Curie temperature Vs magnetic moment per Cobalt atom. (j) Coercive field as a function of doping concentration x, where x = 0.0, 0.42, 0.44 and -0.6 corresponds to Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) (red circle), Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\) (black square), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) (blue up triangle) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) (green down triangle) respectively. (k) Magnetic moment as a function of doping concentration x. DFT predicted value of magnetic moment of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) for hole (x \(>\) 0) and electron (x \(<\) 0) like doping is drawn in (k) by yellow colour line. Black colour dotted line in (i) and (j) is guide to eye for the linear decrement. in these experimental conditions is characterized by an electron-like band around \(\Gamma\) that we call \(\delta\) (black line) and an intense parabolic patch at higher binding energy (BE) that we call \(\epsilon\) (blue line). Along cut#3 [Fig.2(i-l)], one band (\(\alpha\), red) is identified near E\({}_{F}\) and a broad intensity patch (\(\gamma\), green) lies at higher BE. Finally, along cut#2 [Fig.2(m-o)], a clearly dispersing band (\(\beta\), orange) is visible and crosses E\({}_{F}\) with nearly linear dispersion. As will be justified later, this point is close to the Weyl crossing loop. This \(\beta\) band is superimposed to an electron like band, which is possibly a surface state (it is more clearly visible in circular polarization, see appendix C). To track the position of these bands, we extract data points either by fitting the momentum dispersion curves (MDCs) to Lorentzian peaks or by taking the maxima of energy dispersion curves (EDCs) on flat parts of the dispersion where MDC peaks are unresolved. The \(\gamma\), \(\delta\) and \(\epsilon\) points are fitted to parabolic dispersion to estimate their extreme position. The near E\({}_{F}\) bands \(\alpha\) and \(\beta\) have more complicated shapes, which are well captured by DFT calculations for the pristine compound after using a renormalization factor of 1.4, as detailed in Appendix B and in agreement with previous reports [10; 14]. We use these DFT bands as model and shift them to match with the \(\alpha\) and \(\beta\) bands in doped compounds. For Ni [Fig.2(p)], it is not possible to describe the \(\beta\) band by such a rigid shift. A new electron pocket appear near \(E_{F}\) presumably from a previously unoccupied band. In Fig.3, we summarize the evolution of the position of each band in the different compounds. We indicate the position of the bottom of the parabola for \(\delta\), the top of the parabola for \(\gamma\) and \(\epsilon\). For \(\alpha\) and \(\beta\) bands, we use the position of the DFT model at k\({}_{x}\) = 0.51 A\({}^{-1}\) and k\({}_{x}\) = 0.35 A\({}^{-1}\) respectively (see Appendix D). 
In Fig.3(a), we plot these values as a function of the doping expected in case of complete charge transfer, e.g. a hole concentration with x = 0.44 for Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\), x = 0.42 for Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) and an electron concentration x = -0.6 for Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\). We compare these evolutions with those expected for the DFT bands in the VCA calculation (solid lines). They shift almost linearly with doping, albeit with a different slope for spin-up (\(\alpha\), \(\beta\) and \(\epsilon\)) and spin-down (\(\gamma\) and \(\delta\)). The calculated shifts matches rather well with the observed ones on the hole doped side, Figure 2: (a-d) FS of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\), respectively, collected at 30 K by using 117 eV LH polarized light. Black lines indicate the 2D hexagonal BZ (see Appendix B for more details). (e-h), (i-l) and (m-p) ARPES images of the above compounds along the cuts indicated in (a) : cut#1 (k\({}_{y}\simeq\)0), #3 (k\({}_{y}\simeq\)1.35) and #2 (k\({}_{y}\simeq\) 0.85 Å\({}^{-1}\)), respectively. \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) and \(\epsilon\) bands are identified and marked by red, orange, green, black and blue colours respectively. A parabolic fit is used for the \(\gamma\), \(\delta\) and \(\epsilon\) bands and a DFT model for \(\alpha\) and \(\beta\) (see Appendix B). The DFT calculated bands are renormalized by a factor of 1.4 to match the experimental results. although it is systematically reduced in Fe (up triangles), compared to In (squares). The position of Fe is usually found half way between those of the pristine compound and the one of In, as if the effective doping was closer to x = 0.2. For Ni, the shift of the spin up band \(\alpha\) is opposite to what would be expected for electron doping and strongly deviates from the VCA curve. On the other hand, the position of the spin down bands (\(\gamma\) and \(\delta\)) and \(\epsilon\) at high BE are consistent with electron doping and the calculated values, within experimental accuracy. This problem arises because the magnetic splitting is overestimated for Ni by the VCA calculation [see Fig. 1(k)]. Indeed, as was clear from Fig.1, doping also modifies the magnetic properties of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), which of course will affect the position of the bands in the magnetic state. Therefore, we replot in Fig.3(b) the position of the bands with respect to magnetic moment, together with the position of the DFT bands at the VCA doping giving rise to this magnetic moment. The DFT predicted spin-up (\(\alpha\), \(\beta\) and \(\epsilon\)) and spin-down (\(\gamma\) and \(\delta\)) bands move towards the lower and higher BE respectively as magnetic moment per Cobalt atom decreases. The experimentally observed position of these bands do not lie exactly on the DFT predicted lines but they show a much better agreement, especially for \(\alpha\) in Ni. This means that shift with doping is dominated by the magnetic shift, which was incorrectly predicted for Ni by VCA. On the other hand, the position of \(\epsilon\) band in Ni is not well predicted. As this band overlaps with many other bands (see Appendix B), it is difficult to determine whether this difference is significant. 
## V Electronic band structure in paramagnetic state Next, we study the evolution with temperature from the ferromagnetic to the paramagnetic state. Fig.4(a-d) present ARPES images along the cut#3, in the same conditions as Fig.2, albeit at 250 K instead of 30 K, where all compounds are in paramagnetic state. In these images, one can notice that the separation becomes larger between the \(\gamma\) and \(\alpha\) band, spin down and up respectively, as compared to 30 K. In Fig.4(i), we compare the models observed for \(\gamma\) and \(\alpha\) bands at 30 K (solid lines) and 250 K (dotted lines). When the magnetic splitting closes, the spin up band moves up and the spin down band down, enlarging the gap between them. This larger gap between \(\gamma\) and \(\alpha\) is then a direct consequence of the magnetic transition. We can also see that at 250 K, when this magnetic shift is removed, the \(\alpha\) band in Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) (Fig.4(d)) lies at higher BE in comparison to its position in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) (Fig.4(a)), as expected from electron doping. When magnetic shift is present, as sketched in Fig. 4(j), the shift appears different due to the reduced magnetic splitting in the Ni doped sample in Figure 3: Position of the \(\alpha\) (red), \(\beta\) (orange), \(\gamma\) (green), \(\delta\) (black) and \(\epsilon\) (blue) bands defined in Figure 2 with respect to (a) doping (hole (x \(>\) 0) and electron (x \(<\) 0)) and (b) magnetic moment per Cobalt atom. The solid lines correspond to the DFT predicted doping dependent change in the respective bands calculated using the VCA approximation (in (b) the positions for a doping corresponding to the given magnetic moment is used). The position for \(\gamma\), \(\delta\) and \(\epsilon\) are defined at their extrema and for \(\alpha\) and \(\beta\) at special k-points (see Appendix D). comparison to the pristine compound. In Fig.4(e-h), images are shown at 250K along cut#2. The \(\beta\) band similarly exhibits a shift towards E\({}_{F}\) as temperature rises to 250 K in the pristine and hole doped compounds (Fig.4(e-g)). In Ni doped compound (Fig.4(h)), \(\beta\) is still unclear, as it was at 30 K, but the spectral weight we observed at 30K just below E\({}_{F}\) at k\({}_{x}\) = 0.0 A\({}^{-1}\) is now shifted above E\({}_{F}\). This reveals the structure of the bands just above \(E_{F}\) that cannot be observed in the other compounds. We summarize in Fig.5(a) the positions of the \(\alpha\), \(\beta\) and \(\gamma\) bands as a function of doping at 250 K, with dotted line representing their values in paramagnetic state obtained from VCA calculations. The VCA calculation predicts a smooth evolution and this is indeed the qualitative trend. For Ni, there is a clear electron shift for \(\gamma\) and \(\alpha\) bands. We again find a difference between Fe and In positions, Fe falling roughly between pristine and In. However, the experimental value for \(\beta\) found in Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\) lies clearly above this calculated line, suggesting a deviation for the near E\({}_{F}\) states with respect to the calculation. Indeed, even at the VCA level, this part of the band structure is not rigidly shifted (the shift is non-linear for \(\beta\) at x=0.6). The shape of the band also slightly changes (see Appendix D). This is probably due to the influence of the hybridized Sn states, and the In doping might amplify the effect. 
We also note that we neglected here structural evolution, which is not negligible for In [21]. We plot in Fig.5(b) the energy difference between the PM and FM states for the \(\alpha\), \(\beta\) and \(\gamma\) bands with respect to the magnetic moment. This difference is positive for the \(\alpha\) and \(\beta\) bands (spin-up) and negative for the \(\gamma\) band (spin-down), as expected from the different spin direction. In addition, the amplitude of the shift is different for the three bands and it shows reasonable agreement with VCA calculations. The larger FM-PM difference (experiment) for \(\beta\) band in comparison to \(\alpha\) and \(\gamma\) bands highlights that Weyl loop is particularly sensitive to magnetic strength compared to the rest of the electronic structure. To better visualize the relative importance of doping-induced and magnetic-induced shifts, we plot in Fig.5(c) the calculated and experimental positions for \(\beta\) as a function of doping in magnetic and non-magnetic states. The magnetic splitting reduces from 0.35 eV to 0.2 eV between x = 0 and x = 0.4, while \(\beta\) shifts towards \(E_{F}\) by 0.05 eV in the PM state and 0.15 eV for the up band in the ferromagnetic state. The magnetic shift is then the dominant effect. In our experiment, the magnetic splitting cannot be determined directly, as the two spin directions of one band are never observed simultaneously. Near E\({}_{F}\), one spin direction is unoccupied, above E\({}_{F}\), and therefore not observed. At higher BE, the two spin directions overlap too strongly to be resolved. The energy difference between PM state at 250 K and FM state at 30 K of each band gives an indication of the reduction of magnetic splitting, but it also depends on the position of \(E_{F}\) within the band, which depends on details of the semi-metallic structure. ## VI Discussion One of the remarkable feature of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) is the high value of AHC. It is believed to arise from the proximity of the Weyl points to the Fermi level. According to Kubo Figure 4: (a-d) ARPES images collected at 250 K along the cut#3 (as defined in Fig.2(a)), by using 117 eV LH light of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) respectively. \(\alpha\) and \(\gamma\) bands are marked by red and green colours. (e-h) Same along the cut#2 (\(\beta\) band). These images are divided by Fermi-Dirac distribution at 250 K to highlight the unoccupied region above E\({}_{F}\). The overlaid lines are the same as shown in Figure 2 but they are shifted to match with the 250 K data. (i) Comparison of the models of \(\alpha\) and \(\gamma\) bands for temperature 30 K (solid line) and 250 K (dotted line). (j) Comparison of the model of \(\alpha\) band in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) (left) and Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\) (right) for temperature 30 K (solid line) and 250 K (dotted line). formula : \(\sigma_{xy}=\frac{e^{2}}{\hbar}\sum_{n}\int_{BZ}\frac{d^{3}k}{(2\pi)^{2}}b_{nk}^ {z}f(E_{nk}-\mu)\) strong Berry curvature (b\({}_{nk}^{z}\)) around the nodal ring should contribute significantly to the AHC [13]. Here, \(f(E_{nk}-\mu)\) is the Fermi Dirac distribution, where \(\mu\) represents Fermi level, which can be tuned by doping. 
However, dopants not only modify the AHC by changing the Fermi level (intrinsic factor) but also due to scattering effects (extrinsic factor). The relative magnitude of these two contributions is still debated [22; 17; 26]. In this situation, our doping dependent ARPES study of the electronic structure of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) allows to directly locate quantitatively the main bands as a function of doping with respect to the Weyl loop [5]. The Weyl loop is Figure 5: (a) Position of the \(\alpha\) (red), \(\beta\) (orange) and \(\gamma\) (green) bands as a function of hole (x \(>\) 0) and electron (x \(<\) 0) doping at 250 K obtained from the data shown in Figure 4. Dotted lines represents their DFT predicted value obtained from VCA calculations in paramagnetic state. (b) Energy difference of the \(\alpha\), \(\beta\) and \(\gamma\) bands position between paramagnetic state (PM) at 250 K and ferromagnetic state (FM) at 30 K with respect to the magnetic moment per Cobalt atom obtained from the Fig.1(k). (c) \(\beta\) band position for different doping, where empty and filled symbols correspond to data at 250 K and 30 K respectively. The center dashed line is the \(\beta\) band position for paramagnetic calculation. This splits into spin-up and spin-down bands in ferromagnetic calculations and this spin orientation is marked by up and down headed black arrows. Doping dependent evolution of the \(\beta\) band in PM (0.09) and FM state (0.29) is obtained by linear fitting to the paramagnetic and spin-up calculations respectively and written in (c). Figure 6: (a) Position of the Weyl loop in the k\({}_{y}\) vs k\({}_{z}\) plane, as obtained from DFT calculation. The black lines indicate the 3D BZ, the points the position of the crossing between the two bands forming the Weyl loop, the color corresponds to the energy position of the crossing. (b) Band dispersion along \(\Gamma\) - M at k\({}_{z}\) = 0 (red) and k\({}_{z}\) =1/3 (green) (thin line without SOC and thick line with SOC). \(E_{F}\) for the Fe and In substitutions considered in this work at 30 K are indicated. The cuts of Fig. 2 are perpendicular to \(k_{y}\) at positions indicated by the dotted lines. (c) AHC as a function of doping reproduced from previous results in the literature for In Ref[8], Fe (filled up headed triangle Ref[17], empty up headed triangle Ref[26]) and Ni (filled down headed triangle Ref[22], empty down headed triangle Ref[19]). formed by the crossing of two inverted bands near E\({}_{F}\), the \(\beta\) band studied in this paper gives an example of these two bands close to a crossing, which takes place just above E\({}_{F}\). In Fig.6(a), we show the position of the loop around L in the k\({}_{y}\) - k\({}_{z}\) plane, as calculated by DFT similarly to previous studies [14; 27]. The color code indicates the energy position of the crossing with respect to E\({}_{F}\). The SOC opens a gap of \(\simeq\) 60 meV everywhere [27] except at the WPs indicated by asterisks. The band structures calculated at k\({}_{z}\) = 0 (red line) and k\({}_{z}\) = 1/3 (green line) along \(k_{y}\) are indicated in Fig. 6(b), with the renormalization of 1.4 indicated by ARPES, with and without SOC (thick and thin lines respectively). The cuts we considered in Fig.2 are perpendicular to this \(k_{y}\) direction at positions indicated on the figure. The typical position of E\({}_{F}\) observed for the different dopings are indicated on this plot. 
From this plot, we see that the Fermi level in the pristine compound lies just below the position of the Weyl crossing at k\({}_{z}\)=0 and above the Weyl crossing at k\({}_{z}\) = 1/3. With hole doping, the Fermi level will move to the energy region where the Weyl crossings occurs for k\({}_{z}\) = 1/3 rather than around L (green dispersion in Fig.6). This is where the crossing takes place the lowest energy and where the SOC gap is the largest. It was argued that strong Berry curvature around the nodal ring should contribute significantly to the AHC in these conditions [9]. Indeed, previous studies have found that the maximum of AHC occurs for a small hole doping. The previous results by Shen _et al._ for Fe [17] and Zhou _et al._ for In [8] are reproduced in Fig.6(c). Although the shape of the two hole-doped cases is similar, the peak occurs at different doping values : x = 0.05 in the case of Fe and x = 0.15 in the case of In. From our ARPES study we directly observe difference in the shifting amount of Weyl band \(\beta\) between the In (0.15 eV / 0.12 eV) and Fe (0.07 eV / 0.03 eV) doping at 30 K / 250 K, despite the nearly similar (\(\simeq\) 0.4) doping level. It suggests that the effective doping obtained from Fe is half that of In. If this was the reason for the different maximum of AHC, we would rather expect to find the Fe maximum at twice the value of In. Therefore, the increase of the AHC is probably due to extrinsic contribution, as was indeed suggested [18]. Fe atoms, going to kagome layer of Co atoms, enhances asymmetric scattering of the conduction electrons which is an extrinsic factor to generate AHC [18] much more efficiently than for In. The maximum value of AHC itself could depend on the details of the electronic structure at one particular doping. For Ni doping, the AHC measured by Shen _et al._[22] also increases slightly with small doping (it shows a maximum at x = 0.056) and then decreases. This peak was not as clear in a previous measurement [19]. This seems to confirm the extrinsic origin of this increase, irrespective of the \(E_{F}\) position. Recent preprint has also elaborated that the AHC evolution of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) by different (Fe and Ni) dopings depend on crystal growth method [26]. ## VII Summary Our comparative study of hole (Co\({}_{3}\)Sn\({}_{1.56}\)In\({}_{0.44}\)S\({}_{2}\), Co\({}_{2.58}\)Fe\({}_{0.42}\)Sn\({}_{2}\)S\({}_{2}\)) and electron (Co\({}_{2.4}\)Ni\({}_{0.6}\)Sn\({}_{2}\)S\({}_{2}\)) doped samples finds that the T\({}_{c}\) and magnetic moment of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) is suppressed by all forms of dopings. However, although the number of added holes is nearly the same between the two hole substitutions (In and Fe), T\({}_{c}\) and magnetic moment evolve differently. Furthermore, the Weyl band \(\beta\) shift towards E\({}_{F}\) with respect to pristine is \(\simeq\) 0.15 eV and 0.07 eV respectively by In and Fe doping in ferromagnetic state at 30 K. This shift reduces to \(\simeq\) 0.12 eV and 0.03 eV in the paramagnetic state at 250 K. These differences show that the dopant sites influences the type of electron transfer, which appears reduced for Fe. In the case of Ni, it is more difficult to evaluate the effective doping, because the \(\beta\) band is not clear. On the other hand, our study reveals bands lying close above \(E_{F}\), which are occupied by the added electrons. This complements our knowledge of the near E\({}_{F}\) structure of this material. 
Our investigation confirms that the DFT calculation gives a good starting point to describe the electronic structure, but we also noted some discrepancies about the near E\({}_{F}\) states, for example the position of \(\beta\) in the PM state for In or of the pockets above E\({}_{F}\) revealed by Ni doping. Although rather small, these deviations can have significant consequences for the shallow pockets governing the properties in this semi-metallic structure. In this ARPES investigation, we have given particular attention to representative bands of the spin up and spin down directions. This has clarified how doping and magnetism influence their relative positions. We find a shift of \(\simeq\) 0.29*x eV in the magnetic state for states near E\({}_{F}\) and \(\simeq\) 0.09*x eV in the paramagnetic state (here, x is the doping concentration). This complements the information already published on the pristine compound [11; 10; 11; 14; 27; 28], which focused essentially on the spin up near E\({}_{F}\) states. It is quite remarkable that DFT predictions describe rather well the magnetic transition. This contrasts with the situation in other ferromagnetic systems, like Fe\({}_{3}\)GeTe\({}_{2}\), where the magnetic splitting does not vanish clearly at T\({}_{c}\)[29; 30]. This suggests an itinerant origin of magnetism with relatively small correlation effects in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\). ## VIII Acknowledgment This work benefited from financial support by Investissement d'Avenir LabEx PALM (Grant No.ANR-10-LABX-0039-PALM). Appendix ### Estimation of the value of \(k_{z}\) at 117eV In photoemission, the dispersion perpendicular to the plane direction (k\({}_{z}\)) can be mapped by varying the photon energy [31]. One can convert the photon energy to the k\({}_{z}\) value by using formula \(k_{z}=\sqrt{\frac{2m}{\hbar^{2}}}\sqrt{(h\nu-\phi-E_{B})cos^{2}\theta+V_{0}}\), where \(\phi\), E\({}_{B}\), \(\theta\) and V\({}_{0}\) correspond to work function, BE of photoelectron, emission angle of photoelectron with respect to the sample normal and inner potential of the sample respectively. V\({}_{0}\) is estimated from the periodicity of the observed dispersion. We found that the value of V\({}_{0}\)=11eV allows a good description of the experimental periodicity, which is in agreement with earlier works [10]. Fig.7(a)-(e) show the \(\beta\) band dispersion for the photon energy range 118 eV - 125 eV at 30 K. We find that the \(\beta\) band position is minimum near 117eV and shifts towards \(E_{F}\) with increasing photon energy. As it does not drastically changes its shape, we follow its position by shifting the DFT dispersion at k\({}_{z}\)=0 (after renormalization by a factor of 1.4, see next section). This behavior, reported in Fig. 7(k) is in reasonably good correspondence with the DFT calculation for the \(\beta\) band. This allows to assign a value k\({}_{z}\)=0 to our experiment at 117eV. Note that the well defined k\({}_{z}\) variation of the \(\beta\) band implies it is a bulk band. ### Comparison to band structure calculations In this section, we compare the bands \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) and \(\epsilon\) described in the text with respect to the DFT calculation. Fig.8(a) displays the BZ boundaries expected when the normal incidence corresponds to k\({}_{z}\)=0 (\(\Gamma\) point). The black hexagon corresponds to the 2D BZ of the kagome plane, with \(\Gamma\)K and \(\Gamma\)M as high symmetry directions. 
In color, we represent the cut of the 3D BZ at different k\({}_{z}\) value, red for k\({}_{z}\) = 0.0, blue for k\({}_{z}\) = 2/3\(\pi\)/c and green for k\({}_{z}\) = -2/3\(\pi\)/c. Indeed, adjacent BZs have different k\({}_{z}\) value with respect to central BZ in the rhomboedral structure (see stacking in Fig. 6). The three cuts we consider in the main text are indicated and the ARPES intensity plots taken for the pristine compound at 30 K using LH polarized light of 117 eV along these cuts are presented in Fig.8(b), (c) and (d), respectively. The points extracted from the images are indicated. DFT calculated bands for ferromagnetic phase (normalized by a factor of 1.4) are superimposed in these images, where violet and marron colours represent spin-up and spin-down bands respectively. Along the cut #1 the \(\alpha\) (red) and \(\delta\) (black) bands are well reconciled with the near E\({}_{F}\) spin-up and spin-down bands respectively. Similarly, the \(\beta\) (orange) and \(\alpha\) bands along the cut #2 and #3 exhibit a reasonable matching with the DFT prediction. The intensity patch of \(\epsilon\) (blue) band falls in an energy window where more than one band are present. ### Surface state behind the \(\beta\) band Fig.9(a), (b) and (c) show FS of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) taken at 30 K by using linear horizontal (LH), circular left (CL) and circular right (CR) polarized light of photon energy 117 eV. ARPES intensity plots along cut#1 are displayed in Fig.9(d), (e) and (f) respectively. In LH polarization mainly the \(\beta\) band is visible but in circular polarization we can see another electron like band (SS) is present just behind the \(\beta\) band. No such electron like band is present in bulk DFT calculation along this cut, so most possibly this is a surface related band. The \(\beta\) and SS band show opposite response to circularly polarized light, _i. e._ left and right branch of the \(\beta\) and SS are more intense in CL light and vice versa in CR light. ### Determination of \(\alpha\) and \(\beta\) band position To determine the position of bands for Fig. 3 and 5, we use the position at k\({}_{x}\) = 0.51 A\({}^{-1}\) for \(\alpha\) and k\({}_{x}\) = 0.35 A\({}^{-1}\) for \(\beta\), both for experiment and calculation. These k-points are shown by black arrows on 10(a) and (b) for images taken for Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) along the cut#3 and #2 at 250 K. The DFT predicted bands, renormalized by 1.4, are shown for the PM state (dashed line) and for the FM state (solid line, spin up) with an up-shift needed to reach the experimental position. In Fig. 4, to describe the \(\alpha\) and \(\beta\) bands at 250 K, we shift the DFT predicted band calculated for ferromagnetic state (renormalized by 1.4) towards E\({}_{F}\) to match with the ARPES data at 250 K. Thus, we use the same model to estimate the \(\alpha\) band position at low T (30 K) and high T (250 K) and we can define a shift between them, independently of details of the calculation. However, we can notice there is a slight difference in the DFT calculated band shape in FM and PM state. This shows the limit of the rigid shift approach. The sensitivity of the near \(E_{F}\) structures beyond a simple shift can play an important role for this semi-metallic state with few carriers, but they are beyond the scope of this report.
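As a numerical illustration of the photon-energy-to-k\({}_{z}\) conversion given in Appendix A (the usual free-electron final-state form with the inner potential V\({}_{0}\) = 11 eV), the minimal sketch below evaluates k\({}_{z}\) at normal emission for the photon energies used in Fig. 7. The work function value is an assumed typical number, not one quoted in the text, so the exact k\({}_{z}\) values are indicative only.

```python
import numpy as np

PREFACTOR = 0.5123  # sqrt(2*m_e)/hbar expressed in 1/(Angstrom*sqrt(eV))

def kz(h_nu_eV, work_function_eV=4.5, E_B_eV=0.0, theta_deg=0.0, V0_eV=11.0):
    """Out-of-plane momentum k_z [1/Angstrom], free-electron final-state model.

    V0 = 11 eV is the inner potential quoted in Appendix A; the 4.5 eV work
    function is an assumed typical value, not taken from the paper.
    """
    e_kin = h_nu_eV - work_function_eV - E_B_eV
    return PREFACTOR * np.sqrt(e_kin * np.cos(np.radians(theta_deg)) ** 2 + V0_eV)

for h_nu in (117.0, 118.0, 125.0):  # photon energies appearing in Appendix A / Fig. 7
    print(f"h*nu = {h_nu:5.1f} eV  ->  k_z = {kz(h_nu):.3f} 1/Angstrom")
```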
2302.03218
Cycle-to-cycle variations in cross-flow turbine performance and flow fields
Cross-flow turbine performance and flow fields exhibit cycle-to-cycle variations, though this is often implicitly neglected through time- and phase-averaging. This variability could potentially arise from a variety of mechanisms -- inflow fluctuations, the stochastic nature of dynamic stall, and cycle-to-cycle hysteresis -- each of which have different implications for our understanding of cross-flow turbine dynamics. In this work, the extent and sources of cycle-to-cycle variability for both the flow fields and performance are explored experimentally under two, contrasting operational conditions. Flow fields, obtained through two-dimensional planar particle image velocimetry inside the turbine swept area, are examined in concert with simultaneously measured performance. Correlations between flow-field and performance variability are established by an unsupervised hierarchical flow-field clustering pipeline. This features a principal component analysis (PCA) pre-processor that allows for clustering based on all the dynamics present in the high-dimensional flow-field data in an interpretable, low-dimensional subspace that is weighted by contribution to overall velocity variance. We find that the flow-field clusters and their associated performance are correlated primarily with inflow fluctuations, despite relatively low turbulence intensity, that drive variations in the timing of the dynamic stall process. Further, we find no evidence of substantial cycle-to-cycle hysteresis. Clustering reveals persistent correlations between performance and flow-field variability during the upstream portion of the turbine rotation. The approach employed here provides a more comprehensive picture of cross-flow turbine flow fields and performance than aggregate, statistical representations.
Abigale Snortland, Isabel Scherl, Brian Polagye, Owen Williams
2023-02-07T03:03:40Z
http://arxiv.org/abs/2302.03218v2
# Cycle-to-cycle variations in cross-flow turbine performance and flow fields ###### Abstract Cross-flow turbine performance and flow fields exhibit cycle-to-cycle variations, though this is often implicitly neglected through time- and phase-averaging. This variability could potentially arise from a variety of mechanisms - inflow fluctuations, the stochastic nature of dynamic stall, and cycle-to-cycle hysteresis - each of which have different implications for our understanding of cross-flow turbine dynamics. In this work, the extent and sources of cycle-to-cycle variability for both the flow fields and performance are explored experimentally under two, contrasting operational conditions. Flow fields, obtained through two-dimensional planar particle image velocimetry inside the turbine swept area, are examined in concert with simultaneously measured performance. Correlations between flow-field and performance variability are established by an unsupervised hierarchical flow-field clustering pipeline. This features a principal component analysis (PCA) pre-processor that allows for clustering based on all the dynamics present in the high-dimensional flow-field data in an interpretable, low-dimensional subspace that is weighted by contribution to overall velocity variance. We find that the flow-field clusters and their associated performance are correlated primarily with inflow fluctuations, despite relatively low turbulence intensity, that drive variations in the timing of the dynamic stall process. Further, we find no evidence of substantial cycle-to-cycle hysteresis. Cycle-to-cycle performance variability occurs earlier in the cycle than flow-field variability, indicating the limits of co-temporal correlation between performance and flow fields, but clustering reveals persistent correlations between performance and flow-field variability during the upstream portion of the turbine rotation. The approach employed here provides a more comprehensive picture of cross-flow turbine flow fields and performance than aggregate, statistical representations. \({}^{*}\)Corresponding author email: [email protected] Contributing authors email: [email protected], [email protected], [email protected] \({}^{*1}\)University of Washington, Mechanical Engineering, 3900 E Stevens Way NE, Seattle, WA 98195, USA \({}^{2}\)University of Washington, Aeronautics and Astronautics, 3940 Benton Lane NE, Seattle, WA 98195, USA ## 1 Introduction Cross-flow turbines are able to harness the kinetic energy in wind, tidal currents and rivers. Relative to axial-flow turbines, cross-flow turbines, referred to a "vertical-axis" turbines in the wind sector, operate at lower rotation rates, are insensitive to inflow direction, and may be able to achieve higher power output per unit area within an array [1]. However, because cross-flow turbines rotate perpendicular to the inflow, the blades encounter a continually fluctuating angle of attack and relative inflow velocity that leads to the unsteady, non-linear phenomenon of dynamic stall [2, 3, 4, 5, 6, 7, 8, 9]. Similarly to stall on a static foil, flow separation from the blade is eventually accompanied by a significant loss of lift and increased drag. For cross-flow turbines, dynamic stall severity depends on the ratio of the blade tangential velocity to the inflow velocity - the dimensionless "tip-speed ratio". 
While cross-flow turbine hydrodynamics are functions of both the blade azimuthal position and the tip-speed ratio, cycle-to-cycle variability in performance and near-rotor flow fields is also observed. This is potentially caused by inflow fluctuations, hysteresis from previous cycles, and the sensitive and stochastic nature of dynamic stall itself. For example, any perturbations in the inflow velocity not only change the kinetic energy available in the flow but also change the instantaneous angle of attack and relative velocity encountered by the blade. This may, in turn, appreciably affect the timing and severity of dynamic stall. Similarly, Choudhry et al. [10] hypothesize that dynamic stall is influenced by the state of the boundary layer, which suggests that hysteresis from previous cycles, such as the extent of separated flow remaining on the blade at the beginning of the next cycle, may affect future dynamic stall. Cycle-to-cycle variability is commonly neglected in the cross-flow turbine literature. RANS simulations, while often employed to study these flows, are inherently unable to accurately model cycle-to-cycle variations, and the computational expense of LES and DNS precludes their wider use to characterize this variation over a large numbers of cycles [11]. For experiments, these variations are implicitly neglected when data are time- or phase-averaged. However, phase-averaging can remove information that would otherwise assist in interpreting power production, vortex shedding, and stall, as well as distorting the timing and character of the dynamic stall process by smearing out non-linear phenomena and post-stall events [12, 13, 14, 15]. As such, more sophisticated techniques that preserve cycle-to-cycle variability could provide additional insight into the flow-field physics. Some cross-flow turbine works do acknowledge the variability that is present in performance and/or flow-field measurements, but simply treat it as experimental uncertainty [16, 17, 18, 19, 20]. Our objective is to quantify the extent of cycle-to-cycle variation in cross-flow turbine performance and flow fields, its sources, and the relationships between performance and flow-field variability. To do so, we employ an unsupervised clustering pipeline, comprised of well-developed, data-driven methods, that identifies physically meaningful flow-field clusters with differing dynamics relevant to the dynamic stall process. Clustering was chosen since it can provide a basis for conditionally-averaging experimental data using all the dynamics present, rather than resorting to hand-engineered metrics (e.g., flow-field data based on phase-specific vortex separation from the foil surface). We are also able to correlate these flow-field clusters with performance and investigate the different sources of variability. While cycle-to-cycle variability has not been explicitly studied for cross-flow turbines, variability in dynamic stall is the subject of several recent works [14, 21, 12, 15, 22, 11]. By using pressure tap measurements on a flapping foil, Harms et al. [14] investigated variability in dynamic stall and concluded that the phase-average and a measure of the spread (e.g., standard deviation) were descriptive of the general dynamics for cases where bivariate pressure distributions did not exhibit bimodal behavior. They hypothesized that the cases exhibiting bimodal behavior were more sensitive to inflow conditions and boundary layer unsteadiness. Tsang et al. 
[21] employed a wavelet analysis on lift and drag data from a flapping foil, and showed that non-stationarity between cycles increased with stall severity. They hypothesized that this was the result of non-linear interactions between the fluid force and the pitching motion. Lennie et al. [12] used a Covolutional Neural Network and hierarchical clustering of pressure tap data to estimate cycle-specific vortex convection speeds, and to understand variability in the lift coefficient. They found that stall onset varied cycle-to-cycle, as did vortex convection speed, most prominently post-stall. Because of this, phase-averaging inadequately represented portions of the data set. Kuppers et al. [11] showed that using hierarchical clustering on pressure tap data on a flapping foil produced physically meaningful clusters (one exhibiting a higher secondary lift peak and the other a lower one) even though no clear bimodal behavior existed in the bivariate distributions. Ramasamy et al. [22] also employed clustering to investigate cycle-to-cycle variation in pressure data for a pitching foil. They focused on cases with bimodal behavior in bivariate distributions and found that the clusters diverged in the post-stall region. They also showed that cluster-conditional averages deviated substantially from phase-averages, and that the clusters differed in shedding timing of the dynamic stall vortex (inferred from the pressure data), lift production, and flow recovery. They concluded that the clusters were associated with physical processes. These studies highlight the presence and complexity of cycle-to-cycle variability in systems that exhibit dynamic stall, and also demonstrate that phase-averaging can generate misleading representations. However, because none of these studies involve flow-field or inflow velocity measurements, they could not quantify the sources of the observed variability, characterize the flow-field variability beyond inference from force/pressure data, or directly correlate force and flow-field variability. An analysis that considers both forces and flow fields is necessary to understand the extent and sources of the variability present and the sensitivity of the dynamics. However, unlike forces and pressures, flow-field data is high-dimensional in space and time, which poses apparent limitations on the use of clustering. Specifically, a "curse of dimensionality" arises for flow-field data because the algorithmic distances between all data points become approximately equal as the number of coordinates grows. This makes direct clusters difficult to define because all data points appear increasingly similar to one another [23]. As such, clustering of high-dimensional data requires adaptations for dimensionality reduction. Principle component analysis, PCA, is a well-known technique useful for decoupling the dynamics of complex, high-dimensional data sets, for feature selection, and for dimensionality reduction [24, 25, 26, 27, 28, 29, 30]. Additionally, several groups have demonstrated the use of PCA as a means of preserving dynamics otherwise smeared out by phase-averaging of unsteady, vortex-dominated flows [13, 31, 15]. Clustering in a PCA subspace is particularly useful because the PCA basis optimally maximizes variance in the data [22]. Because of this, PCA and clustering are commonly used in conjunction [32, 33, 34, 35, 36]. Additionally, clusters in a PCA subspace are a useful basis for producing probabilistic reduced-order models and for cluster-based feedback control [37, 38]. 
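For illustration, a minimal sketch of such a PCA-plus-hierarchical-clustering pipeline is given below; it is not the implementation used in this work. The input file name, the number of retained modes, the Ward linkage criterion, and the number of clusters are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical input: one row per turbine cycle, built by flattening that
# cycle's phase-resolved PIV velocity fields (shape: n_cycles x n_features).
snapshots = np.load("cycle_snapshots.npy")  # placeholder file name

# PCA pre-processor: project each cycle onto a low-dimensional subspace.
# With whiten=False (the default), each score retains its mode's variance, so
# Euclidean distances in this subspace are weighted by each mode's
# contribution to the overall velocity variance.
pca = PCA(n_components=10)                  # number of retained modes is illustrative
scores = pca.fit_transform(snapshots)

# Unsupervised hierarchical (agglomerative) clustering of cycles in PCA space.
Z = linkage(scores, method="ward")               # Ward linkage is an assumption
labels = fcluster(Z, t=2, criterion="maxclust")  # two clusters, illustrative

# Cluster-conditional averages: an alternative to phase-averaging that groups
# cycles by flow-field similarity before averaging.
for k in np.unique(labels):
    members = snapshots[labels == k]
    cluster_mean = members.mean(axis=0)
    print(f"cluster {k}: {members.shape[0]} cycles, "
          f"mean-field norm = {np.linalg.norm(cluster_mean):.3f}")
```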
We explore the topic of cycle-to-cycle variability in cross-flow turbines using near-blade flow fields and performance (i.e., power output) for two distinct operating conditions. Through this, we characterize the flow-field variability and its correlation with turbine performance using unsupervised, hierarchical clustering with a PCA pre-processor on the flow fields. The paper is laid out as follows. Section 2 provides a theoretical background for flow-field interactions with the moving rotor, then lays out the methodology for turbine performance and flow-field measurement, flow-field clustering, and correlations between flow-field clusters and performance. Section 3 quantifies the extent of cycle-to-cycle performance and flow-field variability, then explores how near-blade hydrodynamics, inflow velocity, and hysteresis from previous cycles contribute to the observed variability.

## 2 Methods

We begin with a discussion of the theoretical blade-level hydrodynamics in Section 2.1, then discuss the experimental acquisition of the simultaneous performance and PIV measurements in Sections 2.2-2.4, and conclude with a detailed description of the PCA analysis and flow-field clustering pipeline in Section 2.5.

### Theoretical Blade-level Hydrodynamics and Performance

To contextualize experimental flow fields and performance, it is instructive to consider the kinematics and dynamic stall theory relevant to the hydrodynamics of cross-flow turbines. A schematic of the blade geometric definitions pertinent to the kinematics is given in Figure 1a,b. Two key factors which govern the near-blade hydrodynamics are the nominal angle of attack, \(\alpha^{*}\) (affecting lift and drag coefficients), and the nominal incident velocity, \(U_{rel}^{*}\) (affecting the magnitude of the lift and drag forces). In the absence of any induced flow (i.e., proximate changes in streamwise and cross-stream velocities as a consequence of interaction with the turbine), the nominal angle of attack, defined as the angle between the chord line and \(U_{rel}^{*}\) at the quarter chord, \(C/4\), is

\[\alpha^{*}(\lambda,\theta)=\tan^{-1}\left[\frac{\sin(\theta)}{\lambda+\cos(\theta)}\right]+\alpha_{p}. \tag{1}\]

Here \(\alpha_{p}\) is the blade preset pitch angle, \(\theta\) is the blade azimuthal position (\(\theta = 0^{\circ}\) defined as a turbine blade pointing directly upstream), and \(\lambda\) is the tip-speed ratio. The tip-speed ratio is a non-dimensional representation of the turbine rotation rate, defined as

\[\lambda = \frac{r\omega}{U_{\infty}} \tag{2}\]

where \(r\) is the turbine radius and \(\omega\) is the rotation rate. The nominal incident velocity (relative velocity to \(C/4\)) is the vector sum of the tangential velocity, \(r\omega\), and the freestream velocity, \(U_{\infty}\), such that its non-dimensional magnitude is

\[\frac{||U_{rel}^{*}(\lambda,\theta)||}{U_{\infty}}=\sqrt{\lambda^{2}+2\lambda\cos(\theta)+1}. \tag{3}\]

Azimuthal variations in \(||U_{rel}^{*}||\) and \(\alpha^{*}\) over one turbine rotation are shown in Figure 1c,d. For negative \(\alpha^{*}\), on the upstream sweep, the lift vector points inward to the center of rotation and, therefore, the suction side of the blade is the inner surface. Conversely, for positive \(\alpha^{*}\) on the downstream sweep, the suction side of the blade is the outer surface. We refer to these as _nominal_ quantities because induction, manifesting as deceleration of the flow through the turbine rotor, is appreciable but unknown.
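The nominal kinematics in Eqs. (1)-(3) can be evaluated directly. The short sketch below, written in Python rather than the MATLAB environment used for the experiments, computes the nominal angle of attack and incident-velocity magnitude over one rotation for the two tip-speed ratios considered later; the function and variable names are illustrative and not taken from the authors' code.

```python
# Minimal sketch of the nominal kinematics in Eqs. (1)-(3); names are illustrative.
import numpy as np

def nominal_kinematics(tsr, theta_deg, alpha_p_deg=6.0):
    """Nominal angle of attack [deg] and |U_rel*|/U_inf over a rotation (no induction)."""
    theta = np.deg2rad(theta_deg)
    alpha_p = np.deg2rad(alpha_p_deg)
    alpha_star = np.arctan2(np.sin(theta), tsr + np.cos(theta)) + alpha_p   # Eq. (1)
    u_rel_star = np.sqrt(tsr**2 + 2.0 * tsr * np.cos(theta) + 1.0)          # Eq. (3)
    return np.rad2deg(alpha_star), u_rel_star

theta_deg = np.arange(0.0, 360.0, 1.0)
for tsr in (1.5, 2.5):  # tip-speed ratios from Eq. (2) considered in this work
    alpha_star, u_rel_star = nominal_kinematics(tsr, theta_deg)
    print(f"lambda = {tsr}: alpha* range = [{alpha_star.min():.1f}, {alpha_star.max():.1f}] deg")
```

As expected from the discussion that follows, the lower tip-speed ratio yields a wider swing in nominal angle of attack and a lower incident-velocity magnitude.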
Because \(\alpha^{*}\) and \(U_{rel}^{*}\) depend on \(\lambda\), the phase, duration, and severity of dynamic stall are influenced by this parameter. A decrease in \(\lambda\) reduces \(||U_{rel}^{*}||\) and increases the range of \(\alpha^{*}\) encountered during a cycle, which corresponds to earlier vortex shedding, increased stall severity, and delayed flow recovery. In severe, or "deep" dynamic stall cases (lower \(\lambda\), larger \(\alpha\) ranges), the near-blade flow field is characterized by the formation and roll-up of an energetic dynamic stall vortex that is on the order of the blade chord. This vortex grows to maturity and sheds before the maximum angle of attack is reached. After shedding, the blade experiences a sharp drop-off in lift and an increase in drag. In contrast, any vortex growth in "light" dynamic stall cases (higher \(\lambda\), smaller \(\alpha\) ranges) is prematurely terminated and shedding is induced by downward flow entrainment as \(\alpha\) begins to decrease [39, 40].

Provided turbine geometry and all other relevant non-dimensional parameters (e.g., Reynolds number, Froude number, blockage) are held constant, phase-averaged hydrodynamic power, \(P\), and the global velocity fields, \(\tilde{\mathbf{V}}\), are functions of \(U_{\infty}\), \(\lambda\), and \(\theta\) [16, 18]. Within a single turbine cycle, \(n\), hydrodynamic power is non-dimensionalized as the coefficient of performance \(\eta(\lambda,\theta,n) = \frac{P}{\rho U_{\infty}^{3}Lr}\) where \(\rho\) is fluid density and \(L\) is the blade span. The coefficient of performance is often presented as \(C_{P}\), but \(\eta\) is used here for notational simplicity and does not imply a "water-to-wire" efficiency. The upstream sweep (\(\theta = 0^{\circ}-180^{\circ}\)) is commonly referred to as the "power stroke" of the turbine as it produces most of the hydrodynamic power, while the downstream sweep (\(\theta = 180^{\circ}-360^{\circ}\)) is characterized by parasitic drag, post- and secondary-stall events, and boundary layer reattachment. The deviation between nominal and true values for \(\alpha\) and \(U_{rel}\) (Figure 1) is most pronounced on the downstream sweep because of momentum extraction by the upstream sweep.

### Experimental Facility

Experiments were performed in the Alice C. Tyler flume at the University of Washington, a rendering of which is shown in Figure 2a. The data presented in this paper utilized a mean dynamic water depth, \(h\), of 0.5 m, resulting in a channel cross-sectional area \(A_{C}\) of 0.375 m\({}^{2}\) (0.75 m width). The water temperature was maintained at \(36.3\pm 0.2\ ^{\circ}\)C, giving a \(\rho\) of 993.5 kg/m\({}^{3}\) and a kinematic viscosity, \(\nu\), of \(7.1\times 10^{-7}\) m\({}^{2}\)/s. An acoustic Doppler velocimeter (Nortek Vectrino), operating at a 100 Hz sampling rate and positioned approximately 5 diameters upstream of the turbine rotor, measured an average \(U_{\infty}\) of 0.69 m/s with a turbulence intensity, \(TI\), of 1.8-2.1%. These conditions corresponded to a Froude number, \(Fr = \frac{U_{\infty}}{\sqrt{gh}}\), of 0.31 where the gravitational constant \(g\) is 9.81 m/s\({}^{2}\).

Figure 1: (a,b) Blade geometric definitions, (c) normalized nominal relative velocity trajectories, and (d) nominal angle of attack for the two tip-speed ratios. The tangential velocity is defined tangent to the chord line (dashed line through blade). A positive pre-set pitch angle is depicted in (a), and (b) shows the angle of attack definition.
The static stall angle in (d) is for a foil in rectilinear flow at a similar Reynolds number (\(Re_{c} = 1.5\times 10^{5}\), [41]). We note that because of the rapidly varying angle of attack and appreciable induction, the comparison between \(\alpha^{*}\) and the steady-state stall angle is qualitative.

### Cross-flow Turbine

#### 2.3.1 Experimental Setup and Measurements

These experiments utilized a one-bladed (NACA 0018 foil) turbine. The turbine has a radius of 8.6 cm, a blade span of 23.4 cm, a blade chord length, \(C\), of 4.06 cm, and a \(6^{\circ}\) preset pitch. The support structure consisted of a NACA 0008 foil strut at the top of the blade span and a large acrylic plate (40 cm diameter) at the bottom. The plate facilitates PIV imaging of the mid-span from the cameras positioned below the flume (Figure 2a,c). Because turbine torque is measured in line with the drive train, a one-bladed turbine is necessary to directly tie performance variations to the near-blade flow fields (i.e., with a two-bladed turbine, the torque contribution from each blade is ambiguous). The blockage ratio, \(\beta=\frac{2Lr}{A_{C}}\), is 8.9% and the Reynolds number, \(Re_{C}=\frac{U_{\infty}C}{\nu}\), is \(3.95\times 10^{4}\). As shown in Figure 2c, the turbine rotation rate, \(\omega\), is regulated by a servomotor (Yaskawa SGMCS-02B servomotor, Yaskawa SGDV-2R1F01A002 drive). In these experiments, we are operating under constant speed control holding \(\omega\) constant, which requires the servomotor to apply a variable control torque, \(\tau_{c}\), which is measured by a hollow reaction torque cell (Futek FSH02595) rigidly coupled to the motor and flume cross beam. An air bearing (Professional Instruments Company Block-Head 4R low-inertia) absorbs the thrust moment while imparting a minimal bearing torque. MATLAB Simulink Desktop Real-Time was used for data collection and turbine control. For each control set point (i.e., constant \(\omega\)), torque cell data were acquired for 60+ seconds at 1 kHz. Blade position and \(\omega\) were measured by the servomotor encoder with a resolution of \(2^{18}\) counts/rotation, also sampled at 1 kHz.

#### 2.3.2 Blade-level Performance Calculation

Hydrodynamic power produced by the turbine is the product of the hydrodynamic torque, \(\tau_{h}\), and \(\omega\). At the turbine level, \(\tau_{h}\) is the net hydrodynamic torque produced by the blades, less the parasitic torque from support structure drag. For constant \(\omega\) and negligible bearing torque, the torque balance reduces to \(\tau_{h}=\tau_{c}\). Because the measured torque is dominated by the parasitic torque incurred by the large bottom plate, "full-turbine" performance is not a meaningful metric. To this end, "blade-level" \(\eta\) is calculated by subtracting phase-averaged performance for the turbine support structure (no blade), \(<\eta_{S}>\), at the same inflow conditions from the full-turbine performance measurements, \(\eta_{T}\) (Figure 3). Here the \(<>\) brackets denote the phase-average, which is conditional on \(\lambda\) and \(\theta\), or, in other words, an average for a single operating condition and azimuthal position across multiple cycles. Blade-level \(\eta\) is used throughout to describe cycle-to-cycle performance variability. This approach requires that secondary interactions between the blades and support structures are minimal (demonstrated in [42, 43]) and that cycle-to-cycle fluctuations are dominated by variation in blade performance and not variation from the support structure.
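As a rough illustration of the blade-level performance calculation, the sketch below phase-averages support-only performance in azimuthal bins and subtracts it from full-turbine samples, per \(\eta = \eta_{T} - <\eta_{S}>\). This is a minimal Python sketch, assuming synchronously sampled \(\theta\) and \(\eta\) arrays and that every azimuthal bin contains samples; the bin width and array names are illustrative, not the authors' implementation.

```python
# Minimal sketch of blade-level performance: eta = eta_T - <eta_S>.
import numpy as np

def phase_average(theta_deg, values, bin_deg=2.0):
    """Average `values` in azimuthal bins; returns bin centers and bin means.
    Assumes every bin contains at least one sample."""
    edges = np.arange(0.0, 360.0 + bin_deg, bin_deg)
    idx = np.digitize(theta_deg % 360.0, edges) - 1
    means = np.array([values[idx == i].mean() for i in range(len(edges) - 1)])
    return 0.5 * (edges[:-1] + edges[1:]), means

def blade_level_eta(theta_T, eta_T, theta_S, eta_S, bin_deg=2.0):
    """Subtract the support-only phase-average from each full-turbine sample."""
    centers, eta_S_bar = phase_average(theta_S, eta_S, bin_deg)
    eta_S_at_T = np.interp(theta_T % 360.0, centers, eta_S_bar, period=360.0)
    return eta_T - eta_S_at_T
```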
All performance measurements were filtered with a low-pass, zero-phase, Butterworth filter to remove high-frequency electromagnetic interference from the servomotor. Both the 75 and 30 Hz cutoff frequencies used for the turbine and support structure performance data, respectively, are 10+ harmonics faster than the blade passage frequency. Additionally, the Butterworth filter has no effect on the time-averaged performance. Therefore, it is unlikely the filter is removing any hydrodynamic power.

Figure 2: Annotated PIV experimental setup in the flume (a,b) and turbine free body diagram (c). The turbine is one-bladed but utilizes a top strut designed to support two blades.

In calculating \(\eta\) we note there is some ambiguity in the choice of \(U_{\infty}^{3}\). For example, one could attempt to use instantaneous velocity measurements, but there is a temporal mismatch between the time the freestream velocity is measured and when it interacts with the rotor. Additionally, we cannot use PIV to estimate inflow since our field of view does not extend far enough upstream to measure the undisturbed inflow. Our two candidate approaches are (1) to use a time-average of all of the cubed freestream velocity measurements acquired at a tip-speed ratio set point, or (2) to apply an advection correction, as in Polagye et al. [44], to compute the instantaneous freestream velocities and calculate cycle-specific kinetic power. The advection correction is based on a cross-correlation of \(U_{\infty}\) with measured power and the application of Taylor's frozen flow hypothesis. Option (2) performs poorly in these experiments, in that it generates greater relative variability in \(\eta\) than measured power, possibly because of the single-bladed configuration. Consequently, we utilize option (1), but in Section 3.3, we employ a hybrid approach to estimate "cluster-specific" kinetic power as an explanatory factor for cycle-to-cycle performance variation.

### Flow Fields

#### 2.4.1 PIV Data Acquisition

Two-dimensional, two-component, phase-locked, flow-field measurements were obtained in a streamwise plane at the mid-span of the turbine, simultaneously with turbine performance measurements. While the performance measurements were continuously recorded over the entire turbine rotation, each PIV acquisition only captured flow-field data during each cycle at discrete, prescribed \(\theta\) positions. PIV acquisition was controlled by TSI Insight and acquisition for each cycle commenced upon receipt of trigger pulses sent at a specified \(\theta\) from the Simulink model controlling the turbine. The PIV system returned pulses at each image pair capture instance (Figure 4a), the timing of which was logged by the Simulink model. Using these timing signals, the PIV snapshots were aligned with performance in post-processing. We note that this does not produce perfect phase-locking, but "phase jitter" between image pairs at the same \(\theta\) was on the order of \(0.009^{\circ}\), which we deem insignificant for the current analysis. The general arrangement of the PIV laser and cameras is shown in Figure 2a.

Figure 3: Schematic representations of the calculation of blade-level performance, \(\eta\), by subtracting support-only "performance", \(\eta_{S}\), from the full-turbine performance, \(\eta_{T}\), at the same conditions. \(\eta=\eta_{T}-<\eta_{S}>\).

A dual cavity, Nd:YLF laser
(Continuum Terra PIV) capable of a repetition rate of 10 kHz, illuminated the flow with an approximately 2 mm thick light sheet in the field of view, FoV. A high-speed camera (Vision Research Phantom v641) with 2560 x 1600 resolution and a 105 mm lens at f# 16, and a calibration of 11.67 pixels/mm resulted in a FoV of 21.9 x 13.7 cm [5.4\(C\) (\(1.3D\)) x 3.4\(C\) (\(0.8D\))]. The flow seeding (10 \(\mu\)m silver-coated hollow-glass beads) produced particle images of approximately 3 pixels in diameter. To minimize laser reflections at the blade surface, matte black paint was applied. As shown in Figure 4b,c, the combination of the camera lens focal length and streamwise extent of the laser sheet necessitated multiple FoVs to capture the important stages of dynamic stall and flow recovery. Each FoV outline is color-coded to denote the flow segment, \(S_{k}\) (where \(k\) denotes the segment letter), used in the clustering analysis detailed in Section 2.5. Sequences of 9 image pairs for \(S_{a}\), \(S_{b}\) and \(S_{e}\), and 6 image pairs for \(S_{c}\) and \(S_{d}\) were acquired per rotational cycle with prescribed angular displacements between frames ranging from approximately \(5-6.5^{\circ}\), depending on desired phase resolution. A total of 139 image pairs (\(N\ =\ 139\) turbine cycles) were acquired at each rotational phase. FoV positioning relied on a combination of camera and turbine movement. A motorized, three-axis gantry positioned the camera relative to the laser sheet and provided the dominant adjustment for cross-stream FoV positioning, as well as fine adjustments in the streamwise direction. The limited streamwise extent of the laser sheet (Figure 4b,c) necessitated shifting the turbine by \(\approx\frac{1}{2}D\) upstream to illuminate and capture the downstream blade sweep. Both the turbine shaft and blade cast shadows in the laser sheet. Therefore, to obtain data adjacent to the suction and/or pressure sides of the foil at all phases of interest, PIV measurements were repeated with the turbine spinning in both clockwise and counter-clockwise directions as depicted in Figure 4b,c. By changing the direction of rotation, we exploit the symmetry of the system to minimize the impact of blade shadows on the following analysis. PIV processing was performed in LaVision DaVis (version 10.1.1). Background subtraction using a Butterworth filter on phase-matched images mitigated residual reflections and background illumination variation. The shadowed regions, visible turbine structures, and remaining reflections were manually masked prior to PIV processing. The phase-specific masks applied are functions of \(S_{k}\), \(\lambda\), and \(\theta\), but constant for all \(n\). Processing utilized a multi-grid, multi-pass cross-correlation algorithm with adaptive image deformation at a 75% overlap and 32 x 32 pixel final interrogation window size resulting in a 0.69 mm vector spacing (approximately 60 vectors per blade chord). Spurious vectors (less than 2%) were removed with a universal outlier median filter utilizing a 9x9 filter region. Figure 4: (a) A simple timing diagram for the PIV acquisition. Field of view locations captured for (b) CCW rotation and (c) CW rotation. Example blade and support structure shadows are shown. (d,e) PIV segments captured for the different fields of view and visual representation of corresponding \(A\) matrix assignments for input into the PCA pre-processor. All vector field post-processing was performed in MATLAB (R2020b). 
To align the different flow segments and limit the contribution of blade rotation in the PCA pre-processing, each FoV was translated from the flume reference frame to a blade reference frame. Figure 5 provides an overview of this process. First, we located and aligned the center of rotation between the different FoVs. Tracking a small dot on the end of the shaft and registering all resultant PIV vector fields to a common shaft location corrected for phase-dependent shaft motion (slight run-out on the cantilevered turbine shaft). Because of parallax and differences in the index of refraction between air and water, the imaged center of the turbine shaft does not correspond to the center of rotation of the turbine at the imaging plane. Therefore, we determined a best-fit location of the center of rotation in each FoV by manually fitting blades to the masked regions. Second, with the center of rotation located, the flow fields were rotated to the blade-centric coordinate system by locating the turbine blade in each \(\vec{\boldsymbol{V}}\) field and rotating the entire field to a common blade position. Third, after rotation, a constant crop boundary and common, segment-specific mask were applied to each frame and the relative velocity fields with respect to the blade were computed as the vector sum of \(\vec{\boldsymbol{V}}\) with the blade tangential velocity component, \(r\omega\). The segment-specific mask was a function of \(\lambda\) only and was formed by combining all the phase-specific masks in a specific segment. Finally, the cropped fields were interpolated to a common grid relative to \(C/4\). The cropped relative velocity fields, \(\vec{\boldsymbol{\Phi}}(\lambda,\theta,n,x,y) = [u_{rel}(\lambda,\theta,n,x,y),\ v_{rel}(\lambda,\theta,n,x,y)]\), within each \(S_{k}\) are functions of \(\lambda\), \(\theta\), and \(n\).

### Correlating Turbine Performance and Flow-field Variability with Hierarchical Clustering

Cycle-to-cycle variations are present in performance and flow fields, but the correlation between the two is not obvious _a priori_. For example, if flow fields are observed in isolation, the significance of the observed flow structures on turbine performance is unknown. The flow fields are high-dimensional and a reduced order representation of \(\vec{\boldsymbol{\Phi}}\) is needed to compactly describe cycle-to-cycle variation. Clustering with a PCA pre-processor allows us to do so while considering all of the flow-field dynamics (and the interplay between them) in an unsupervised manner. Here we describe the flow-field clustering pipeline used for each flow segment. Using this pipeline, we correlate the variability between the simultaneously captured performance and flow fields.

Figure 5: Overview schematic of the flow-field rotation pipeline. The center of rotation between the different fields of view is aligned, then the fields are rotated and cropped into a common blade-centric reference frame. A common mask between all the fields of view in a flow segment is applied and the velocity fields relative to the blade are calculated.

#### 2.5.1 Flow-field PCA Pre-processor

Figure 6 describes the flow-field clustering pipeline. Here the PCA pre-processor enables clustering of all the dynamics using a low-dimensional subspace that is interpretable and weighted by contribution to overall velocity variance.

_PCA Setup:_ PCA represents the dynamics of complex data sets as the linear summation of orthogonal modes ranked by the amount of variance in the data they describe [24].
The singular value decomposition (SVD) is used to compute the PCA modes for a data matrix \(\mathbf{A_{\#}}\) as

\[\mathbf{A_{\#}}=\mathbf{\phi}\mathbf{\sigma}\mathbf{a^{*}}, \tag{4}\]

where the PCA modes are columns of \(\mathbf{\phi}\); the singular values, which quantify the contribution of each mode, are the diagonal entries of \(\mathbf{\sigma}\); and the phase evolution of the modes makes up the rows of \(\mathbf{a^{*}}\). Here \(*\) denotes the matrix transpose and \(\#\) specifies a specific data matrix. The percentage of the flow-field variance described by a specific mode, \(\phi_{j}\), is quantified using the corresponding singular value, \(\sigma_{j}\), as \(\Psi_{j} = \frac{\sigma_{j}}{\sum_{i=1}^{J}\sigma_{i}}\), where \(J\) is the total number of modes (smallest dimension of \(\mathbf{A_{\#}}\)). Here, each \(\mathbf{A_{\#}}\) is made up of reshaped \(\mathbf{\widetilde{\Phi}}\) column vectors, \(u_{rel}\) stacked on \(v_{rel}\). Any missing values (from the removal of spurious PIV vectors during processing) are linearly interpolated. The columns of each \(\mathbf{A_{\#}}\) were sorted chronologically [\((n=1,\theta_{1})\), \((n=1,\theta_{2})\)...\((n=N,\theta_{end})\)] with the row-wise mean subtracted prior to SVD decomposition. The presence of the shadowed regions meant a common mask across all \(\theta\) would occlude the majority of the FoV from analysis, so different segments, \(S_{k}\), at each \(\lambda\) were strategically grouped to maximize data yield. This results in five distinct PCA computations of \(\mathbf{A_{\#}}\) (Figure 4). While a few of the \(\mathbf{A_{\#}}\) contained multiple \(S_{k}\), each \(S_{k}\) was clustered separately after PCA pre-processing. This is because each \(S_{k}\) is an independent data set (e.g., \(n=1\) in \(S_{a}\) is unrelated to \(n=1\) in \(S_{b}\)).

_Computing Weighted Activation Pictures:_ To compactly describe how the dynamics evolve within a rotation, "weighted activation pictures" in the PCA subspace are constructed as \(\sigma\mathbf{a^{*}}\). This multiplication may also be interpreted as the projection of each \(\mathbf{\widetilde{\Phi}}\) onto the PCA basis, and encodes both the dominance and phase evolution of all the dynamics (PCA modes) of the system into lower-dimension representations for the hierarchical clustering algorithm. The result of this multiplication is 139 weighted activation pictures (one for each \(n\)) in each \(S_{k}\). Each "picture" is composed of column vectors of the individual weighted activation profiles across all of the computed PCA modes. The individual weighted activation profiles are functions of \(\theta\), and describe the dominance of each mode across the range of \(\theta\) included in each \(S_{k}\). The weighted activation profiles associated with a mode, \(\phi_{j}\), are denoted as \(a_{j}\).

Figure 6: Overview of the flow-field clustering pipeline. Everything upstream of the hierarchical clustering is considered an element of the PCA pre-processor. This involves performing a singular value decomposition on the relative velocity fields in the blade-centric reference frame. The weighted activations (multiplication of the singular values and the phase-varying weights) are then separated by cycle. For each flow segment, the hierarchical clustering algorithm identifies two clusters from this population of cycles.
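A minimal sketch of the PCA pre-processor is given below: snapshots are stacked into a mean-subtracted data matrix, decomposed with an SVD (Eq. 4), and the weighted activations \(\sigma\mathbf{a^{*}}\) are regrouped into one picture per cycle. The sketch is in Python with random stand-in data and illustrative matrix sizes; the original processing was performed in MATLAB.

```python
# Minimal sketch of the PCA pre-processor (Eq. 4); data and shapes are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_cycles, n_phases = 2000, 139, 9                     # grid points, cycles, phases per segment
A = rng.standard_normal((2 * n_points, n_cycles * n_phases))    # reshaped [u_rel; v_rel] snapshots as columns
A -= A.mean(axis=1, keepdims=True)                              # row-wise mean subtraction

phi, sigma, a_T = np.linalg.svd(A, full_matrices=False)         # A = phi @ diag(sigma) @ a_T
var_explained = sigma / sigma.sum()                             # Psi_j, contribution of each mode

weighted_activations = sigma[:, None] * a_T                     # sigma * a^T, modes x snapshots
# One weighted-activation "picture" per cycle (modes x phases), assuming chronological column order
pictures = weighted_activations.reshape(len(sigma), n_cycles, n_phases).transpose(1, 0, 2)
print(pictures.shape)                                           # (139, n_modes, n_phases)
```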
#### 2.5.2 Flow-field Clustering

After the PCA pre-processor, each \(\mathbf{\tilde{\Phi}}\) field is represented in a low-dimensional form suitable for clustering. The weighted activation pictures in each \(S_{k}\) were reshaped into column vectors and sorted into two clusters (the highest number of clusters that produces unique phase-averaged weighted activation profiles) via a hierarchical clustering algorithm with a Ward linkage method [45] to minimize variance and a Euclidean distance metric. We utilized the implementation within the MATLAB _clusterdata_ function.

#### 2.5.3 Cluster Assignments

After flow-field clustering, we have a set of cluster assignments in each \(S_{k}\) (Figure 6). We can use these cluster assignments to evaluate the correlation between the flow field, \(\mathbf{\tilde{\Phi}}\), and performance, \(\eta\), associated with the \(\mathbf{\tilde{\Phi}}\) clusters. This basis also allows us to investigate ties between the performance and flow-field variability and the potential sources of this variability by considering both conditional-averages and individual trajectories of different parameters based on flow-field cluster assignment. As shown in Figure 6, cluster assignments are denoted as \(\Phi_{\lambda=l}^{k,c}\), where \(k\) is the segment designation (a-e), \(c\) is the cluster number (1 or 2), and \(l\) is the tip-speed ratio set point. Conditional-averages across \(n\), based on cluster assignment, for any variable \(X\) (e.g., \(\eta\), \(\mathbf{\tilde{\Phi}}\)) within one \(S_{k}\) are expressed as \(<X|\Phi_{\lambda=l}^{k,c}>\). In cases where examining individual trajectories of \(X\) within a cluster is preferred, an equivalent set notation, \(\{\}\), denotes the subset of cycles of \(X\) assigned to the specified cluster. For example, a set of individual \(\eta\) trajectories based on \(\mathbf{\tilde{\Phi}}\) clusters is written as

\[\eta_{\Phi}(\lambda,\theta,n,c)=\{\eta(\lambda,\theta,n)|\Phi_{\lambda=l}^{k,c}\}. \tag{5}\]

For all segments, \(c=1\) always corresponds to the cluster with a higher associated time-averaged performance, \(\bar{\eta}\), for full revolutions. To quantify the statistical significance of the identified clusters, we utilized a two-sided Wilcoxon Rank Sum test [46] (MATLAB _ranksum_ function) with a 5% significance level. The null hypothesis is that the populations of \(X\) contained in each cluster are drawn from the same continuous distribution and have equal medians. Rejection of the null hypothesis means the difference between the populations of \(X\) contained in each cluster is statistically significant.

Considering how individual trajectories or cluster conditional-averages differ from the phase-averages over all \(n\) can illuminate cluster-specific characteristics. This is done for both performance and flow fields. Conditional performance perturbations, \(\eta_{\Phi}^{\prime}\), are defined as the differences between the individual performance trajectories in each cluster and the phase-averaged performance, \(<\eta(\lambda,\theta,n)>\), within the same \(S_{k}\). They are computed as

\[\eta_{\Phi}^{\prime}(\lambda,\theta,n,c)=\eta_{\Phi}(\lambda,\theta,n,c)-<\eta(\lambda,\theta,n)> \tag{6}\]

These perturbation quantities help highlight where the clusters are performing better or worse than the phase-average. Conditional difference fields, \(\Phi^{\prime}\), highlight how the conditional-averaged velocity magnitude fields, \(||<\mathbf{\tilde{\Phi}}|\Phi_{\lambda=l}^{k,c}>||\), differ from the phase-averaged fields, \(||<\mathbf{\tilde{\Phi}}>||\). As with the phase-averaged fields, the conditional-averaged velocity magnitudes are computed from the component averages (i.e., \(||<u_{rel}>,<v_{rel}>||\)).
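Before continuing to the conditional difference fields, the sketch below illustrates the clustering and significance-testing steps just described. The paper's implementation uses MATLAB's _clusterdata_ and _ranksum_; here SciPy's Ward-linkage hierarchical clustering and two-sided rank-sum test are used as stand-ins, with illustrative, randomly generated data.

```python
# Minimal sketch: Ward-linkage clustering of per-cycle weighted-activation pictures,
# followed by a two-sided rank-sum test on a cluster-conditioned quantity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import ranksums

rng = np.random.default_rng(1)
n_cycles = 139
pictures = rng.standard_normal((n_cycles, 30, 9))          # cycles x modes x phases (stand-in)
features = pictures.reshape(n_cycles, -1)                  # one flattened picture per cycle

Z = linkage(features, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")            # two clusters, as in the paper

x = rng.standard_normal(n_cycles)                          # e.g., cycle-specific time-averaged performance
stat, p = ranksums(x[labels == 1], x[labels == 2])
print(f"cluster sizes: {np.bincount(labels)[1:]}, rank-sum p-value = {p:.3f}")
```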
The conditional difference fields, computed at each point in space (\(x,y\)), are defined as \[\Phi_{\Phi}^{\prime}(\lambda,\theta,c,x,y)=||<\mathbf{\tilde{\Phi}}|\Phi_{ \lambda=l}^{k,c}>||-||<\mathbf{\tilde{\Phi}}>||. \tag{7}\] ## 3 Results We explore the three potential sources of cycle-to-cycle performance variability: variations in near-blade flow fields, variations in inflow velocity, and hysteresis from previous cycles. Section 3.1 describes the statistical variability in performance and flow fields, highlighting the influence of tip-speed ratio and the blade's phase in the rotation. In Section 3.2, we utilize flow-field clustering to correlate cycle-to-cycle performance and flow-field variability. We conclude with a discussion of the impact of the freestream velocity perturbations in Section 3.3 and hysteresis in Section 3.4. ### Performance and Flow-field Variability The tip-speed ratio affects both time-average performance and the properties of cycle-to-cycle variation around that average, as visualized in Figure 7. Mean performance and its variability both increase with tip-speed ratio but performance reaches a maximum near \(\lambda~{}=~{}2.9\), while variability continues to increase at higher tip-speed ratios. For the remainder of this work, we focus on two cases: \(\lambda~{}=~{}1.5\) (sub-optimal performance) and \(\lambda~{}=~{}2.5\) (near-optimal performance) which exhibit differences in both time-averaged performance and cycle-to-cycle variability. Both cases are relevant to turbine operation because maximizing efficiency (optimal tip-speed ratio) is a control objective when the freestream velocity is below the turbine's rated speed, while the sub-optimal case corresponds to a strategy for shedding power above the rated speed. We note that the larger variation associated with \(\lambda~{}=~{}2.5\) is due in part to the larger performance and address this later by normalizing by the time-averaged performance as the coefficient of variation. A defining feature of cross-flow turbine performance is periodic variation within a cycle (i.e., performance variation with \(\theta\)). The polar representations of performance and flow fields for both tip-speed ratios in Figure 8 show that the largest cycle-to-cycle variation in phase-specific performance occurs in the upstream sweep around the performance peak (\(\theta~{}=~{}30^{\circ}~{}-~{}135^{\circ}\) for \(\lambda~{}=~{}1.5\) and \(\theta~{}=~{}60^{\circ}~{}-~{}165^{\circ}\) for \(\lambda~{}=~{}2.5\)). In agreement with time-averaged performance (Figure 7), the \(\lambda~{}=~{}1.5\) case exhibits less cycle-to-cycle variability during the smaller, narrower performance peak (lower time-averaged performance) than the \(\lambda~{}=~{}2.5\) case. The phase-specific variation in performance is appreciable, but secondary in comparison to the range in performance over a cycle. For both tip-speed ratios, the bivariate distributions of performance are unimodal at each \(\theta\), suggesting that this local variation is adequately described by the mean and a descriptor of the spread [14]. The phase-averaged flow fields of global velocity magnitude around a cross-flow turbine (Figure 8) depend strongly on blade position and tip-speed ratio. Overall, dynamic stall severity and turbine performance are inversely proportional for the two cases (i.e., the higher tip-speed ratio decreases stall severity and increases performance). 
Specifically, coherent structures from dynamic stall are more apparent for the sub-optimal case, \(\lambda = 1.5\), than for the near-optimal case, \(\lambda = 2.5\). In comparing the flow fields with turbine performance, we see that for \(\lambda = 1.5\), the performance peak coincides with formation, growth, and shedding of the dynamic stall vortex. On the other hand, for \(\lambda = 2.5\), the flow remains primarily attached until maximum performance, beyond which we observe increasing separation at the trailing edge. For both tip-speed ratios, performance is at a minimum during the downstream sweep, consistent with lower incident velocities due to extraction of momentum from the flow during the "power stroke". The flow fields are in qualitative agreement with dynamic stall theory (Section 2.1), as well as prior experiments [5, 7, 2] and simulation [47, 3] for cross-flow turbines. Because the lower tip-speed ratio leads to a wide oscillation in angle of attack, the \(\lambda = 1.5\) case experiences "deep" dynamic stall. This is evidenced by a strong dynamic stall vortex that is shed before the maximum nominal angle of attack (Figure 8a), as well as prolonged post-stall flow separation. In contrast, for \(\lambda = 2.5\), the foil experiences only "light" dynamic stall with smaller vortex structures, limited flow separation until near the maximum nominal angle of attack, and faster post-stall flow recovery.

Figure 7: Time-averaged coefficient of performance for the one-bladed turbine investigated. The dashed line represents the time-average at each \(\lambda\). Near-optimal (green dot) and sub-optimal (purple dot) cases are highlighted. The bivariate distribution (yellow to blue shading) describes the range of time-average performance over individual cycles at each \(\lambda\).

To characterize the cycle-to-cycle variation in the flow fields, we compute the standard deviation fields, \(s_{\Phi}\). These are presented normalized by the mean, as the coefficient of variation, to facilitate comparisons with performance variability. Since the flow-field data is sparse in \(\theta\), we cannot calculate an accurate time-average, so we define the flow-field coefficient of variation as \(CV_{\Phi}(\lambda,\theta,x,y)=\frac{s_{\Phi}}{\bar{\Phi}}\), where \(\bar{\Phi}\) is the mean of all the PIV vectors collected at a given phase. Figure 9 demonstrates the spatial extent of flow-field variability as a function of \(\theta\) and \(\lambda\). In this figure, \(CV_{\Phi}\) ranges from 1% to 150%. During the early portion of the cycle (locations i and ii, \(S_{a}\)), variability is confined to a region on the order of the foil thickness close to the blade and in the wake (Figure 9a,b) for both tip-speed ratios. The two cases diverge around \(\theta = 90^{\circ}\) (beginning of \(S_{b}\)). There, for \(\lambda = 1.5\), flow-field variability expands in the vicinity of the dynamic stall vortex. In contrast, at the same \(\theta\), variability for \(\lambda = 2.5\) remains confined to the blade wake and the region of separated flow at the trailing edge. By the end of the cycle (location v, \(S_{e}\)), for \(\lambda = 2.5\), variation is once again primarily limited to the blade wake, while for \(\lambda = 1.5\), we observe high variability in a large region of separated flow (locations iv-v, \(S_{c}-S_{e}\)). We can compactly visualize the trends across phase by summation of \(CV_{\Phi}\) in space (i.e., \(\sum CV_{\Phi}\)).
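A minimal sketch of the coefficient-of-variation calculation at a single phase is shown below. It assumes the per-point standard deviation is taken over the local speed and that masked points have already been removed; array shapes and values are illustrative.

```python
# Minimal sketch of CV_phi = s_phi / phi_bar at one (lambda, theta) and its spatial sum.
import numpy as np

rng = np.random.default_rng(2)
n_cycles, ny, nx = 139, 120, 200
u = 0.7 + 0.05 * rng.standard_normal((n_cycles, ny, nx))    # u_rel snapshots at one phase (stand-in)
v = 0.1 + 0.05 * rng.standard_normal((n_cycles, ny, nx))

speed = np.hypot(u, v)
s_phi = speed.std(axis=0, ddof=1)     # cycle-to-cycle standard deviation field
phi_bar = speed.mean()                # mean of all PIV vectors at this phase
cv_phi = s_phi / phi_bar              # coefficient of variation field
print(f"sum(CV) = {cv_phi.sum():.1f}, max(CV) = {100 * cv_phi.max():.1f} %")
```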
As shown in Figure 9c, \(\sum CV_{\Phi}\) is higher at all resolved phases for \(\lambda\ =\ 1.5\) than \(\lambda\ =\ 2.5\). For \(\lambda\ =\ 1.5\), \(\sum CV_{\Phi}\) increases dramatically during the growth of the dynamic stall vortex (\(S_{b}\)) and, for the resolved phases of the power stroke, reaches a maximum at location iii. This is in agreement with prior work that has consistently demonstrated large increases in cycle-to-cycle variation post-stall [12, 11, 22]. In contrast, for \(\lambda\ =\ 2.5\), \(\sum CV_{\Phi}\) remains relatively low during the power stroke, and the maximum variability likely occurs in a portion of the flow field not included in these experiments. For both cases, the highest observed flow-field variability occurs in the downstream sweep. In summary, cycle-to-cycle flow-field variability increases with stall severity and is concentrated around coherent structures originating from the blade or in the blade wake. The increase in flow-field variability for the lower tip-speed ratio is expected given the higher stall severity, but is inverted from the trends in time-average and phase-specific performance variability, which are higher for \(\lambda\ =\ 2.5\) (Figure 7 and 8). However, the coefficient of variation for performance, \(CV_{\eta}=\frac{s_{\eta}}{\eta}\) (Figure 9d), is higher for \(\lambda\ =\ 1.5\) across most of the cycle, consistent with flow-field variability. In other words, while \(\lambda\ =\ 2.5\) exhibits higher absolute performance variability, the \(\lambda\ =\ 1.5\) case has higher relative variability. Finally, it is instructive to consider the phase correlation between flow-field and performance variability. For \(\lambda\ =\ 1.5\), \(\sum CV_{\Phi}\) lags \(CV_{\eta}\), particularly in \(S_{a}\) (Figure 9c,d), which suggests that corresponding flow-field variability present during maximum performance variability likely occurs too close to the blade to be resolved by these PIV measurements. Similarly, for \(\lambda\ =\ 2.5\), the \(\sum CV_{\Phi}\) is minimal during the phases of maximum performance variability. For both tip-speed ratios, the observed high flow-field variability in the downstream sweep is associated with lower relative performance variability (we address this contradiction further in Section 3.2). ### Flow-field Clusters and Their Associated Performance Due to the non-coincidence between the performance and flow-field variability, we must employ a different approach to gain insight into how the observed flow-field variability is correlated with performance variability in other portions of the cycle. By utilizing the flow-field clusters and their associated, cycle-specific performance trajectories, \(\eta_{\Phi}\), we are able to better characterize the flow-field and performance variability, as well as identify correlations between them. This is done in an unsupervised manner, without relying on hand-engineered metrics. For this analysis, we focus on segment \(S_{b}\), which encompasses the end of the power stroke for both tip-speed ratios and the shedding of a dynamic stall vortex for \(\lambda\ =\ 1.5\). Figure 10 summarizes the comparison of the flow-field clusters and their corresponding performance trajectories for the two tip-speed ratios within segment \(S_{b}\). The conditionally-averaged difference fields (i and ii) highlight the deviation between the cluster conditionally-averaged fields and the phase-averaged fields (iii). 
The performance trajectories (iv) and their perturbations from the phase-average (v) for each cycle reveal correlations between performance and the flow-field clusters. For both tip-speed ratios, the conditionally-averaged difference fields reveal opposing behaviors between the clusters (e.g., regions of lower-than-average relative velocities in one cluster are coincident with regions of higher-than-average relative velocities in the other). In line with the coefficient of variation fields (Figure 9), the differences are more pronounced for \(\lambda = 1.5\) than for \(\lambda = 2.5\). Yet, for both tip-speed ratios, performance trajectories are separated by their associated flow-field clusters (iv and v). More significantly, the flow-field clusters for both tip-speed ratios are correlated with differences in maximum performance (Figure 10iv), even though \(S_{b}\) does not fully span the performance peak. This suggests that the flow-field variability in \(S_{b}\) stems from the growth and shedding of hydrodynamic structures which originate in segment \(S_{a}\) but are unresolved there due to their proximity to the blade.

Figure 8: Bivariate distribution of phase-specific coefficient of performance (blue-to-purple shading with 0.05 \(\Delta\eta\) in the radial direction and \(2^{\circ}\) \(\Delta\theta\) in the \(\theta\) direction) for a deep dynamic stall case at \(\lambda=1.5\) and a light dynamic stall case at \(\lambda=2.5\). The orange line corresponds to \(\eta=0\), demarcating the power producing from power consumptive phases. Inset are phase-averaged global velocity magnitude fields (normalized by the freestream velocity) [NOTE: the blade path (blue dashed line) is not to scale]. The velocity fields are presented in the fixed global reference frame. Shaded regions at the periphery denote the radial extent of flow segments for PIV, \(S_{k}\).

Finally, we note that, for \(\lambda = 1.5\), the cluster-averaged performance perturbations reveal that cluster 1 and cluster 2 alternate phases of superior performance over the cycle, even though cluster 1 out-performs cluster 2 on a time-averaged basis. Considering all segments, the difference between time-averaged performance associated with the two clusters is highest in \(S_{a}\) (suggesting the greatest correlation between flow fields and performance is found here) and declines in segments later in the cycle for both tip-speed ratios. The higher correlation between time-averaged performance and segment-specific flow fields around the performance peak is expected, as the majority of the power is produced there. Similarly, given the low performance throughout the downstream region, resolvable flow-field variability in this region would have a limited impact on time-averaged performance. Additionally, beyond the performance peak, the hydrodynamic features associated with power production advect, diffuse, and are diluted in the post-stall region, thereby obscuring their connection to performance earlier in the cycle. The obscuring of structures originating from the power stroke is further exacerbated in the downstream sweep by the switching of the pressure and the suction sides of the blade. Thus it is unlikely that the hydrodynamics in the downstream sweep retain any clear correspondence to those in the upstream sweep. We now dive deeper into the two tip-speed ratio cases for segment \(S_{b}\) to describe how the flow fields differ between clusters and their relationship with performance.
We primarily focus on the \(\lambda = 1.5\) case since it exhibits strong flow-field variability. We then utilize the \(\lambda = 2.5\) case to highlight how, despite the lower flow-field variability and limited near-blade resolution, we are still able to extract useful insight from the flow-field clusters.

Figure 9: Coefficient of variation fields (colorbar has been truncated at 50% for visualization) at select \(\theta\) for (a) \(\lambda = 1.5\) and (b) \(\lambda = 2.5\), and their accompanying relative velocity fields (normalized by the freestream velocity). The locations of the \(\theta\) positions in (a-b) are labeled in (c-d), which show (c) the sum of the coefficient of variation fields (normalized by the number of data points) and (d) the coefficient of variation for performance for both \(\lambda\).

#### 3.2.1 \(\lambda = 1.5\)

For this tip-speed ratio, the deep dynamic stall results in growth and shedding of a dynamic stall vortex in segment \(S_{b}\). Difference fields for the cluster with higher time-averaged performance, cluster 1, highlight primarily lower-than-average velocities, with only small regions of higher-than-average velocity. The opposite behavior is seen for the poorer performing cluster, cluster 2. These velocity differences suggest several possible mechanisms: the dynamic stall vortices for the two clusters differ in strength, in their location with respect to the blade (a consequence of different shedding timing), or both. The difference in time-averaged performance between the two clusters is comparatively small (1.5% with respect to the time-average), but, as apparent in Figure 10a-iv, the performance peak for cluster 1 is slightly higher in amplitude and occurs slightly earlier in the cycle. The shift in phase of maximum performance is highlighted in Figure 11a. On average, maximum performance occurs at \(\theta \approx 83^{\circ}\) for cycles in cluster 1 and at \(\theta \approx 83.9^{\circ}\) for cycles in cluster 2. The difference between the distributions of the phase of maximum performance is statistically significant, as per the rejection of the null hypothesis for the Wilcoxon rank sum test (Section 2.5.3). These differences in the timing and amplitude of the performance peak between the clusters counter the phase-average trends over tip-speed ratio (Figure 11b), where a performance peak occurring earlier in the cycle has lower maximum and time-averaged performance, and is associated with operation at a reduced tip-speed ratio.

To gain further insight into how vortex dynamics differ between the two clusters, we first consider modal analysis. A key benefit of the PCA pre-processor is the ability to interpret the resulting clusters through their modes and modal coefficients. The directions of the largest variance in the data are described by the modes and sorted by the singular values. Thus the first mode describes the hydrodynamics with the highest variability, and the variation described decays as mode number increases. By considering the modal flow fields and their accompanying weighted activation profiles, we can identify the dynamics most associated with the variation and how their contribution changes with blade position and/or between clusters. Figure 12 shows the first three modes and the variability in their weighted activation profiles within segments \(S_{a}\) and \(S_{b}\). These three modes describe nearly 14% of the flow-field variance within these segments.
Figure 10: Clustering analysis overview for (a) \(\lambda = 1.5\) and (b) \(\lambda = 2.5\). (i,ii) Cluster conditional difference fields and (iii) phase averages for selected \(\theta\), with (iv) corresponding performance trajectories and (v) performance perturbations. The grid spacing in (i), (ii) and (iii) is \(C/4\). In (iv), each line is colored by cluster assignment and the black lines represent the cluster conditional-averages for performance or performance perturbations. The dashed rectangles denote the \(\theta\) range (\(S_{b}\)) where the flow fields in (i), (ii) and (iii) were captured.

We interpret modes 1 and 2 as related to dynamic stall vortex shedding while mode 3 is attributed to near-blade vortex dynamics occurring earlier in the cycle. In this interpretation, mode 1 represents shedding dynamics that occur once the vortex core is at least a foil thickness away from the blade while mode 2 occurs earlier in the shedding process. The high variance explained by mode 1 is consistent with the velocity magnitude (Figure 9a and 10a-iii) and coefficient of variation fields (Figure 9a), which are dominated by the shedding of the dynamic stall vortex. We observe that the weighted activation profiles for these three modes are well-converged in \(S_{a}\) but diverge between the flow-field clusters in \(S_{b}\). In \(S_{b}\), the better-performing cluster, cluster 1, has larger weighted activations in mode 1 and the weighted activations are shifted earlier in phase for mode 2. For mode 3, both clusters have similar mean-weighted activations everywhere except for \(\theta=90^{\circ}-120^{\circ}\). This is consistent with low flow-field variability in segment \(S_{a}\) and increased variability in \(S_{b}\) (Figure 9c and Figure 10). At \(\theta=119^{\circ}\), where the difference fields are most distinct, we see that the higher-than-average velocities closer to the blade for cluster 1 are captured by the higher mode 1 weighted activation. On the other hand, the lower mode 2 weighted activation for cluster 1 at this position captures the lower-than-average velocities farther from the blade.

Figure 11: Histograms of cycle-specific phase associated with maximum performance for (a) \(\lambda = 1.5\) and (c) \(\lambda = 2.5\). (b) Phase-averaged coefficient of performance for select tip-speed ratios.

Figure 12: (left) Weighted activation profiles across each cycle, colored by the flow-field cluster assignments for \(\lambda = 1.5\), are presented for \(S_{a}\) and \(S_{b}\). The opaque thick lines are the conditional-averages for each cluster and the black dashed line is the phase-average over all cycles. (right) Magnitude fields for the first three modes with variance explained for each mode noted.

To establish some physical intuition for the differences between the two clusters in segment \(S_{b}\), we explore the chord-wise and chord-normal position of the dynamic stall vortex (\(DSV_{C}\) and \(DSV_{\perp}\), respectively), and the reversed flow fraction, \(U_{\Phi}^{rev}\) (Figure 13a-c). The position of the vortex core is defined as the location of the maximum value of the swirling strength in individual flow fields reconstructed with 30 PCA modes. The truncation resulted in cleaner swirling strength fields which significantly improved tracking performance. There is no statistical correlation between the clusters and chord-wise DSV position. For \(\theta=107^{\circ}-137^{\circ}\), however, the chord-normal separation distance is statistically higher for cluster 1.
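The sketch below illustrates one plausible implementation of the vortex-core localization and reversed-flow metrics used here: swirling strength is computed as the imaginary part of the complex eigenvalues of the 2D velocity-gradient tensor, the core is taken as its maximum, and the reversed flow fraction counts points with negative chord-wise relative velocity (Eq. 8, discussed below). The PCA-based truncation and masking used in the paper are omitted, and the grid and variable names are illustrative.

```python
# Minimal sketch of swirling-strength vortex tracking and reversed flow fraction.
import numpy as np

def swirling_strength(u, v, dx, dy):
    """lambda_ci for a 2D field u(y, x), v(y, x) on a uniform grid."""
    dudy, dudx = np.gradient(u, dy, dx)          # derivatives along y (axis 0) and x (axis 1)
    dvdy, dvdx = np.gradient(v, dy, dx)
    tr = dudx + dvdy
    det = dudx * dvdy - dudy * dvdx
    disc = 0.25 * tr**2 - det                    # eigenvalues are complex where disc < 0
    return np.sqrt(np.maximum(-disc, 0.0))

def vortex_core(u, v, x, y):
    """Location of maximum swirling strength, a simple proxy for the DSV core."""
    lam = swirling_strength(u, v, x[1] - x[0], y[1] - y[0])
    j, i = np.unravel_index(np.argmax(lam), lam.shape)
    return x[i], y[j]

def reversed_flow_fraction(u_rel):
    """Fraction of grid points with negative chord-wise relative velocity (cf. Eq. 8)."""
    return np.mean(u_rel < 0.0)
```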
The larger chord-normal separation distance combined with the earlier change in the \(DSV_{\perp}\) slope for cluster 1 is indicative of earlier vortex shedding. These results are consistent with [22], who showed for a pitching foil that clusters based on pressure data were able to reveal earlier shedding of a dynamic stall vortex. An increase in reversed flow fraction can indicate more flow separation on the blade and therefore more severe stall. We define this quantity as

\[U_{\Phi}^{rev}\left(\lambda,\theta,n,c\right)=\frac{\sum\left[\left\{u_{rel}|\Phi_{\lambda=l}^{k,c}\right\}<0\right]}{N_{x}N_{y}}, \tag{8}\]

where the \([\ ]\) brackets represent Iverson bracket notation which, similar to the Kronecker delta, returns a value of 1 when the statement in the brackets is true and a value of 0 when false, and \(N_{x}N_{y}\) is the number of grid points in the field. Cluster 1 exhibits more reversed flow everywhere in \(S_{b}\) (statistically significant). This is consistent with earlier separation and the possibility that cluster 1 cases have stronger dynamic stall vortices, as evidenced by the higher-than-average velocities near the blade. In summary, we see clear differences between the clusters for modes that represent various periods of the dynamic stall process, and have shown that the dynamic stall vortices have different velocity distributions and are shed at different times. Specifically, cycles in cluster 1 have better performance, more reversed flow, higher near-blade velocities in the dynamic stall vortex, and dynamic stall vortex cores that are further from the blade. These hydrodynamics are indicative of an earlier and potentially stronger stall, which is consistent with the maximum performance occurring earlier in the cycle (Figure 11).

#### 3.2.2 \(\lambda = 2.5\)

The higher tip-speed ratio case presents an opportunity to test the flow-field clustering method when there is less apparent flow-field variability due to the lighter dynamic stall and weaker vortex shedding that remains closer to the blade. Despite this, the difference in time-averaged performance between the two flow-field clusters is 2% with respect to the total time-average. This is actually slightly higher than for \(\lambda = 1.5\) and consistent with the higher absolute performance variability for \(\lambda = 2.5\). Referring once again to Figure 10, the amplitude of the performance peak for the better-performing cluster, cluster 1, is slightly higher, but, unlike for \(\lambda = 1.5\), there is no apparent phase shift (Figure 11c). This is consistent with the convergence in the phase of maximum performance at higher tip-speed ratios in Figure 11b. Despite the relatively low variability, the flow-field clustering results in lower-than-average velocities at the trailing edge of the blade and slightly higher-than-average velocities near the leading edge for cluster 1, while the opposite is true for cluster 2. The first three modes capture nearly 14% of the total variance and depict spatial structures on the order of a foil thickness that remain close to the blade and in the wake (Figure 14). We interpret mode 1 as pertaining to attached flow on the blade throughout segment \(S_{a}\), followed by an increase in reversed flow at the trailing edge as \(\sigma_{1}a_{1}\) crosses zero (representing a change in sign of the mode), and the change in direction and magnitude of the relative velocity (Figure 10b-iii). Mode 2 represents increased flow along the entire blade when the weighted activations are positive, but enhanced separated flow at the trailing edge when negative.
Similarly, mode 3 describes subtle changes in velocities along the blade. The discontinuity in mode 3 between segments is not likely the result of a rapid change in the dynamics. It is instead most likely a consequence of uncertainty in establishing the blade position between segments, which manifests as a slight misalignment between flow fields of different segments during the translation to the blade-centric reference frame. The \(\lambda = 2.5\) case is more sensitive to this error because the energetic dynamics are located adjacent to the blade, and therefore more susceptible to occlusion by the common mask, much more so than for \(\lambda = 1.5\). The weighted activation profiles reveal minimal deviations between the clusters, with the exception of mode 2 in \(S_{a}\) and mode 3 in \(S_{b}\). For mode 3, the relatively lower weighted activation for cluster 1 captures the lower-than-average trailing edge velocities. These lower velocities are indicative of more flow separation (theoretically limiting lift production), which contradicts the higher time-averaged performance for the cluster. In summary, the \(\lambda = 2.5\) case exhibits less flow-field variability in comparison to \(\lambda = 1.5\), as well as more muted conditional-average difference fields and modes, and less distinct cluster-specific weighted activation profiles. Despite this, the flow-field clusters and their associated performance are distinct and meaningful.

Figure 13: Location of the dynamic stall vortex core in the chord-wise direction, \(DSV_{C}\), (a) and in the chord-normal direction, \(DSV_{\perp}\) (b), for both clusters. Vortex positions are normalized by the chord length, \(C\). (c) Reversed flow fraction. The solid lines are the conditional-averages associated with each cluster and the violin plots (_violinplot_ MATLAB function from [48]) at each \(\theta\) combine box plots and smoothed histograms to highlight the underlying distribution of the populations. The \(*\) denote phases where the result of the Wilcoxon rank sum test shows we cannot reject the null hypothesis that the clusters are samples from continuous distributions with equal medians at the 5% significance level (i.e., the difference between the two distributions may not be statistically significant).

### Impact of Freestream Velocity Perturbations

For both tip-speed ratios, we observe a dichotomy between the flow-field and performance trends. The flow fields for cluster 1 show evidence of earlier and potentially stronger stall, but cluster 1 has higher time-averaged performance. While the flow-field and performance differences between the clusters could be the result of the stochastic dynamic stall process, an alternative hypothesis is that these variations are the result of freestream velocity perturbations. Any changes in the freestream velocity impact both the kinetic energy available in the flow and the instantaneous tip-speed ratio. Cycle-to-cycle variation in the rotation rate is negligible (standard deviations in \(\omega\) are 0.02-0.03% of the time-average), so any perturbations in the freestream will result in cycle-specific tip-speed ratios that differ from the average. Figure 15 demonstrates that for both tip-speed ratios, the instantaneous freestream velocities (advection-corrected as in Section 2.3.2) are correlated with the flow-field clusters. The difference between the cluster means is 1.45% and 1.2% of the time-averaged velocity for \(\lambda = 1.5\) and \(\lambda = 2.5\), respectively.
These differences were found to be statistically significant. For both tip-speed ratio cases, cluster 1 has a higher conditionally-averaged inflow velocity. As a result, the blade is effectively operating at a lower tip-speed ratio for cluster 1 cycles while also encountering more kinetic energy in the flow. Since the coefficient of performance, \(\eta_{\Phi}\), is defined using the time-average of the cubed freestream velocity measurements acquired at the tip-speed ratio set point, it does not account for cycle-to-cycle inflow variability.

Figure 14: (left) Weighted activation profiles across each cycle, colored by the flow-field cluster assignments for \(\lambda = 2.5\), are presented for \(S_{a}\) and \(S_{b}\). The opaque thick lines are the conditional-averages for each cluster and the black dashed line is the phase-average over all cycles. (right) Magnitude fields for the first three modes with variance explained for each mode noted.

Figure 15: Histogram of freestream velocities for clusters derived from flow fields in \(S_{b}\) at (a) \(\lambda = 1.5\) and (b) \(\lambda = 2.5\).

Consequently, we consider a cluster-specific kinetic power, proportional to the conditional-average of the cubed freestream velocities associated with each cluster. Utilizing this, a cluster-specific coefficient of performance, \(\eta_{\Phi}^{*}\), is defined as

\[\eta^{*}(\lambda,\theta,n,c)=\frac{P}{\frac{1}{2}\rho\{U_{\infty}^{3}|\Phi_{\lambda=l}^{k,c}\}LD}. \tag{9}\]

Figure 16: (a) \(\lambda = 1.5\) and (b) \(\lambda = 2.5\). Individual (i) coefficient of performance and (ii) cluster-specific coefficient of performance trajectories around the peak for the two clusters. The solid lines are the phase-average for cluster 1 and the dashed lines are the phase-average for cluster 2. Cluster-specific histograms of (iii) cycle-specific, time-averaged coefficient of performance and (iv) time-averaged, cluster-specific coefficient of performance. The \(*\) denotes that results of the Wilcoxon rank sum test show we cannot reject the null hypothesis that the clusters are samples from continuous distributions with equal medians at the 5% significance level.

A comparison between the performance trajectories around the peak and histograms of cycle-specific, time-averaged performance for \(\eta_{\Phi}\) and \(\eta_{\Phi}^{*}\) is presented in Figure 16. For \(\lambda = 1.5\), the differences between the clusters in both the \(\eta_{\Phi}\) and \(\eta_{\Phi}^{*}\) trajectories around the peak are statistically significant (Figure 16a-i,ii). However, the locally higher \(\lambda\) cluster, cluster 2, subtly outperforms cluster 1 at maximum \(\eta_{\Phi}^{*}\). Similarly, the differences between the clusters in the time-averaged distributions of both \(\eta_{\Phi}\) and \(\eta_{\Phi}^{*}\) are statistically significant, but cluster 2 now outperforms cluster 1. These observations are now consistent with phase-averaged (Figure 11a) and time-averaged (Figure 7) performance trends across tip-speed ratios. The results are much the same for the \(\lambda = 2.5\) case, with the exception that the differences between the clusters in the \(\eta_{\Phi}^{*}\) trajectories at the performance peak are no longer statistically significant (Figure 16b-ii), though the time-averaged \(\eta_{\Phi}^{*}\) distributions remain distinct. In comparison to \(\lambda = 1.5\), we observe a better collapse in \(\eta_{\Phi}^{*}\) for the \(\lambda = 2.5\) case despite its higher performance variability.
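The cluster-specific normalization in Eq. (9) can be illustrated with the minimal sketch below, which normalizes cycle-averaged power in each cluster by the kinetic power based on the conditional average of the cubed freestream velocities. It shows only the time-averaged form, and the inputs and function name are illustrative rather than the authors' implementation.

```python
# Minimal sketch of a cluster-specific coefficient of performance (cf. Eq. 9).
import numpy as np

def cluster_specific_eta(P_cycle, u_inf_cycle, labels, rho, span, diameter):
    """P_cycle [W], u_inf_cycle [m/s], labels: per-cycle mean power, freestream velocity, cluster id."""
    eta_star = {}
    for c in np.unique(labels):
        u3 = np.mean(u_inf_cycle[labels == c] ** 3)      # conditional average of U_inf^3 for cluster c
        eta_star[c] = P_cycle[labels == c] / (0.5 * rho * u3 * span * diameter)
    return eta_star
```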
This is likely because the angle of attack profiles become less dependent on the tip-speed ratio as the tip-speed ratio increases (Figure 1). This means that, nominally, the sensitivity of the blade kinematics to inflow perturbations is inversely proportional to the tip-speed ratio. As a consequence, the hydrodynamics for \(\lambda~{}=~{}2.5\) are potentially less sensitive to inflow perturbations than for \(\lambda~{}=~{}1.5\). In summary, for both tip-speed ratios, performance based on a cluster-specific freestream kinetic power, \(\eta_{\Phi}^{*}\), confirms the hypothesis that the observed flow-field and performance differences between clusters are primarily caused by inflow velocity variations. This performance dependency on the inflow velocity is in agreement with the axial-flow literature which has consistently shown that turbine power output is correlated with inflow conditions [49]. Here, the increase in \(\eta_{\Phi}\) observed for cluster 1 is a consequence of not accounting for the increased inflow kinetic energy, which increases turbine power output even as the actual efficiency is degraded by the lower cycle-specific tip-speed ratio. While these results suggest that it might be preferable to calculate efficiency on a cycle-specific basis, we found that this approach increases performance variability in this data set. Consequently, we find that cluster-specific performance is more robust to uncertainty in the advection correction (Section 2.3.2). ### Cycle-to-Cycle Hysteresis As previously noted (Figure 9c-d, Section 3.2), the cycle-to-cycle flow-field variation during the downstream sweep does not coincide or correlate with performance variability. However, it is conceivable that flow-field variation at the end of one cycle could affect future cycles. To explore this possibility, we consider cluster conditionally-averaged flow fields in Figure 17, focusing on \(S_{e}\) for \(\lambda~{}=~{}1.5\), a case with distinct differences between the two clusters. Flow recovery in cluster 2 appears slower, with more separated flow and lower velocities near the trailing edge at \(\theta~{}=~{}356^{\circ}\). Despite this, we observe three factors that support a hypothesis of limited hysteresis between cycles. First, as the blade enters the next cycle (\(\theta~{}=~{}368^{\circ}\)), differences between the cluster 1 and cluster 2 flow fields has considerably diminished. Second, there is limited cycle-to-cycle performance variability at the beginning of the cycle (\(\theta~{}<~{}40^{\circ}\),(Figure 9). Third, there is limited variation in the weighted activation profiles for the flow fields at the beginning of the next cycle (Figure 12, \(S_{a}\)). If hysteresis from the flow-field variability in the prior cycle was present, we would expect flow-field variability to persist through \(\theta~{}=~{}0^{\circ}\) and to see attendant performance variability at the start of the cycle. Thus, it is unlikely that hydrodynamics in the previous cycle contribute to the observed variability in future cycles, especially in comparison to variation associated with the in freestream velocity. We note that this result is likely influenced by our choice of control scheme. Specifically, if turbine torque, rather than rotation rate, was the regulated quantity, the instantaneous rotation rate would vary within and between cycles, such that the blade kinematics would no longer be deterministic [44]. 
Under such a control scheme, we might see greater variation in the flow fields between cycles and stronger impacts on future cycles. Figure 17: Cluster conditionally-averaged flow fields in \(S_{e}\) for \(\lambda~{}=~{}1.5\) as the blade transitions from the downstream to upstream sweep. Discussion and Conclusion Cycle-to-cycle performance and flow-field variability for cross-flow turbines are often implicitly neglected through time- and phase-averaging, but, as demonstrated here, are statistically significant. This variability is potentially caused by dynamic stall's stochastic nature, freestream velocity perturbations, and hysteresis from previous cycles. In this work, we explored the extent and sources of cycle-to-cycle variability using near-blade flow fields and performance metrics for sub-optimal, \(\lambda\ =\ 1.5\), and near-optimal, \(\lambda\ =\ 2.5\), tip-speed ratios. The flow-field clustering technique developed for this purpose proved sensitive to cycle-to-cycle variations and highlighted correlations between performance and flow-field variability. This technique was effective for both tip-speed ratios despite the lower flow-field variability for \(\lambda\ =\ 2.5\), where dynamic stall is weaker. Overall, performance and hydrodynamic variability were found to be non-coincident in phase. Across all phases, the coefficient of variation in performance ranges from \(4\ -\ 20\%\). While an imperfect comparison, the phase-specific flow-field coefficients of variation are as high as \(150\%\), an order of magnitude greater. Performance variability is highest around the maximum performance within a cycle, but, because of limits to near-blade flow-field resolution, observable flow-field variability is minimal until beyond maximum performance. For \(\lambda\ =\ 1.5\), flow-field variability increases during the growth and shedding of the large dynamic stall vortex, and, for \(\lambda\ =\ 2.5\), during the growth of the separated flow region at the trailing edge. Despite this lag in the flow-field variability, clusters based on observed flow fields throughout the upstream sweep are correlated with time- and phase-averaged performance for both tip-speed ratios. This is not evident when considering only aggregate, statistical measures of variability. In contrast, the flow-field variability during the downstream sweep has little correlation with performance. Hysteresis and dynamic stall stochasticity may contribute to variability, but freestream velocity perturbations dominate the observed cycle-to-cycle variation in both the flow fields and performance. Given that the high flow-field variability at the end of the turbine rotation is not accompanied by high variability at the beginning of the rotation, it is unlikely that hysteresis impacts are substantial. While we cannot completely disentangle the impacts of freestream velocity perturbations and dynamic stall stochasticity, the cycle-specific freestream velocities are correlated with the flow-field clusters. These velocity perturbations are shown to impact the kinetic energy available in the flow and to perturb \(\lambda\) enough to influence the dynamic stall process. For \(\lambda\ =\ 1.5\), the better-performing cluster for \(\theta=95\ -\ 143^{\circ}\) exhibits higher inflow velocities, a higher reversed flow fraction, earlier dynamic stall vortex shedding, and a performance peak occurring slightly earlier in the cycle. 
All of these behaviors are consistent with the locally lower \(\lambda\); however, the higher performance is counter to the trend where a lower \(\lambda\) is associated with lower maximum and time-averaged performance. This contradiction is also present for the \(\lambda\ =\ 2.5\) case. When performance is calculated with a cluster-specific kinetic power, the apparent contradiction is resolved and the locally higher tip-speed ratio cluster outperforms the other. This suggests, for these conditions, that the change in the available kinetic power in the inflow has a greater influence than the perturbation to the local TSR. A clear performance and hydrodynamic dependence on assigned cluster is observed, despite the relatively low freestream turbulence intensity of 1.8-2.1%. That said, the differences in time-averaged performance between clusters are small (1-3% relative to the time-average of all data). Therefore, for these conditions, phase-averaging flow fields and performance is an effective way to investigate general trends. This work demonstrates that clustering is useful for more nuanced analyses that seek to understand the connections between observed flow fields and turbine performance. In field settings or laboratory conditions with higher turbulence, or with the use of torque control, cycle-specific tip-speed ratios could be perturbed further from the phase-average. For such conditions, performance and hydrodynamic variability may increase, making a cluster-specific analysis a more necessary alternative to phase-averaging. Additionally, the performance dependency on cluster points to the potential extension of this work to developing probabilistic reduced-order models for prediction and control. We must note that, in future studies, it would be important to consider the number of clusters as a free parameter. Here, two clusters proved appropriate, but this may not be optimal for all cases. For the current data set, two clusters produce unique phase-averaged weighted activation profiles in the principal component analysis. While we utilize clusters based on the flow fields, PIV data collection is time intensive and generally confined to laboratory settings. Investigating the benefits of clustering on the basis of performance could be a direction for future research. The flow-field clustering approach contributes to our understanding of the mechanisms responsible for the performance and hydrodynamic variability of cross-flow turbines. It provides a more comprehensive picture of the phase-varying flow fields than aggregate, statistical representations, and provides conditionally averaged groups that are not based on subjective, hand-engineered metrics. We observe physically meaningful clusters representing a series of distinct flow-field evolutions. These clusters are correlated with performance when based on flow fields captured during the turbine power stroke, and reveal variations in timing of the dynamic stall process. ## 5 Acknowledgments The authors thank the Alice C. Tyler Charitable Trust for supporting the research facility and acknowledge the substantial contributions of Benjamin Strom, Hannah Ross, Aidan Hunt, Carl Stringer, Erik Skeel, and Craig Hill to the development and upgrades of the experimental setup and code base. We also thank our colleagues in the Pacific Marine Energy Center for their continued support. 
Financial support was received from the United States Department of Defense Naval Facilities Engineering Systems Command and through the National Science Foundation Graduate Research Fellowship Program.
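
As a brief, hedged illustration of the cluster-specific efficiency defined in Eq. (9), the sketch below recomputes a per-cycle coefficient of performance using the conditional average of the cubed freestream velocity within each flow-field cluster, rather than the global time-average. All array names and numerical values are illustrative placeholders, not the experimental data or analysis code used in this study.

```python
import numpy as np

# Illustrative placeholders: per-cycle mechanical power, advection-corrected
# freestream velocity, and flow-field cluster labels (1 or 2).
rng = np.random.default_rng(0)
n_cycles = 200
u_inf = 0.7 + 0.01 * rng.standard_normal(n_cycles)    # m/s, per-cycle inflow (assumed)
labels = rng.integers(1, 3, size=n_cycles)             # cluster assignment per cycle
power = 2.0 + 0.1 * rng.standard_normal(n_cycles)      # W, cycle-averaged power (assumed)

rho, L, D = 1000.0, 0.234, 0.172                        # density and turbine dimensions (placeholders)

# Conventional efficiency: normalize by the kinetic power from the time-average of U^3.
eta = power / (0.5 * rho * np.mean(u_inf**3) * L * D)

# Cluster-specific efficiency (Eq. 9): normalize each cycle by the conditional
# average of U^3 over cycles assigned to the same cluster.
eta_star = np.empty(n_cycles)
for c in (1, 2):
    in_c = labels == c
    eta_star[in_c] = power[in_c] / (0.5 * rho * np.mean(u_inf[in_c]**3) * L * D)

for c in (1, 2):
    print(f"cluster {c}: <eta> = {eta[labels == c].mean():.3f}, "
          f"<eta*> = {eta_star[labels == c].mean():.3f}")
```

With measured per-cycle power, inflow, and cluster labels substituted in, this kind of calculation illustrates the distinction between \(\eta_{\Phi}\) and \(\eta_{\Phi}^{*}\) discussed above.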
2308.09008
Probing the impact of Delta-Baryons on Nuclear Matter and Non-Radial Oscillations in Neutron Stars
The presence of heavy baryons, such as $\Delta$-baryons and hyperons can significantly impact various properties of Neutron Stars (NSs), like oscillation frequencies, dimensionless tidal deformability, mass, and radii. We explored these effects within the Density-Dependent Relativistic Mean Field formalism. Our analysis considered $\Delta$-admixed NS matter in both hypernuclear and hyperon-free scenarios, providing insights into particle compositions and their effects on NS properties. Our study of non-radial $f$-mode oscillations revealed a distinct increase in frequency due to the additional baryons. The degree of increase was significantly influenced by the meson-baryon coupling strengths. Notably, the coupling between $\Delta$-resonances and $\sigma$-mesons played a highly influential role. In some cases, it led to an approximately 20\% increase in the $f$-mode oscillation frequency of canonical NSs. These couplings also affect other bulk properties of NSs, including mass, radii, and dimensionless tidal deformability ($\Lambda$). Comparing our results with available observational data from pulsars (NICER) and gravitational waves (LIGO-VIRGO collaboration), we found strong agreement, particularly concerning $\Lambda$.
Probit Jyoti Kalita, Pinku Routaray, Sayantan Ghosh, Bharat Kumar, Bijay K. Agrawal
2023-08-17T14:23:10Z
http://arxiv.org/abs/2308.09008v3
# Probing the impact of Delta-Baryons on Nuclear Matter and Non-Radial Oscillations in Neutron Stars ###### Abstract The presence of heavy baryons such as \(\Delta\)-resonances and hyperons within Neutron Stars (NSs) can significantly impact their various properties. To investigate this, we utilize the DD-MEX model within the formalism of Density-Dependent Relativistic Mean Field (DDRMF) theory. We analyze \(\Delta\)-admixed NS matter in both hypernuclear and hyperon-free scenarios, gaining insights into particle compositions, emergence processes, and their effects on NS properties. These baryon species, particularly the \(\Delta\)-resonances, notably influence the nucleon effective mass, which is especially important since we observe that the charge neutrality constraints favor the early emergence of negatively charged heavy baryons. Meson-baryon coupling parameters affect the NS equation of state, leading to significant differences in stellar radii and maximum mass configurations as we vary them. Furthermore, we study the dimensionless tidal deformability (\(\Lambda\)) and non-radial \(f\)-mode oscillation frequencies, exploring how the presence of \(\Delta\)-resonances and their coupling with the \(\sigma\)-meson can directly influence observable bulk properties of NSs. When comparing our results with the available observational data from pulsars by NICER and gravitational wave data from the LIGO-VIRGO collaboration, we find a strong agreement, especially concerning \(\Lambda\). ## I Introduction Neutron Stars come to be when massive stars reach the end of their life journey as core-collapse supernovae. This transformation sets the stage for a variety of events that trigger oscillations within the star. These oscillations possess sufficient energy to be picked up by instruments designed to detect gravitational waves. These initiating events could be linked to the star's magnetic configuration, dynamic instabilities, accumulation of matter, and fractures in its outer layer [1; 2; 3; 4]. Kip Thorne pioneered the study of these disturbances within massive stars using the principles of general relativity [5; 6; 7; 8]. Substantial efforts have been invested in extending the basic concepts of oscillation theory from Newtonian physics to the more intricate framework of general relativity. These extensions aims to determine the frequencies at which oscillations occur and quantify the energy emitted in the form of gravitational waves [9; 10; 11]. The exploration of these oscillation frequencies involves solving equations that describe fluid perturbations alongside equations that govern how matter and spacetime curvature interact in the presence of strong gravitational forces [12; 13; 14; 15; 16]. These oscillations are categorized into two primary types: radial and non-radial, both of which are subjects of active research. Radial oscillations involve expansions and contractions akin to a pulsating motion that helps maintain the star's spherical shape [17; 18; 19; 20; 21]. In contrast, non-radial oscillations manifest as asymmetric vibrations centered around the star's core [9; 10; 11; 22; 23; 24; 25; 26]. These vibrations are guided by a restoring force that brings the star back to its equilibrium state. Non-radial oscillations can manifest in various modes, denoted as \(f\), \(p\), \(g\), \(r\), and \(w\)-modes, although not all of them contribute to the emission of gravitational waves. These modes gradually lose energy and are referred to as quasi-normal modes. 
The frequencies of these oscillations are significantly influenced by the internal characteristics of the NS, making them valuable tools for probing its interior through the field of asteroseismology. This approach has already provided insights into the properties of the NS's outer layer [27; 28; 29; 30; 31; 32; 33; 34]. NSs hold promise for asteroseismological study via gravitational waves, with expectations that the observation of gravitational waves generated by these oscillations will enable the determination of key properties such as mass, radius, and equation of state (EoS) [35; 36; 37; 38; 39]. Among the diverse oscillation modes, the fundamental (\(f\)) mode stands out as an acoustic oscillation intricately tied to the star's average density (\(M/R^{3}\)) [35; 36; 40; 41]. The particle composition in the interior of NSs has been extensively studied since Landau, Baade, and Zwicky first proposed the concept of NSs [42; 43]. Over the years, significant work has been conducted in this area, and it has now become conventional to consider the presence of the spin-1/2 baryons octet, also known as hyperons, in the core of NSs [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. Additionally, recent studies have also explored the existence of other heavy baryons like the \(\Delta\)-particles [55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65]. These heavy baryons play a crucial role in satisfying the observational constraints on NSs, which have been set by studying massive NSs [66; 67; 68; 69], analyzing the NICER data obtained from various pulsars [70; 71; 72; 73], and examining gravitational wave data from the LIGO-VIRGO collaboration [74; 75]. Among these constraints, special attention is given to the dimensionless tidal deformability (\(\Lambda\)) of the binary NS merger event GW170817, where the reported value was found to be below 720 within the 90% confidence interval [76]. Achieving such a low value of \(\Lambda\) requires a "softening" of the NS matter's EoS. This softening can be achieved by including heavier particles such as hyperons [77; 78], \(\Delta\) baryons [55; 63; 79; 80; 81; 82; 83; 84; 85], or even (anti)kaons [86; 87; 88; 89; 90] in the matter composition. However, the presence of these particles introduces its own challenges. Notably, hyperons have a significant impact on NSs, as they lead to increased repulsive interactions within the core, resulting in a considerable softening of the EoS [91; 92; 93]. While this softening is crucial to meet the observed upper bound on \(\Lambda\), it also causes the maximum mass configuration that NSs can attain to drop below the observed massive NSs with a mass of \(2M_{\odot}\). This discrepancy is commonly referred to as the "hyperon puzzle". Additionally, owing to their masses lying in a similar range as the hyperons, it should be reasonable to include \(\Delta\)-baryons into the composition as well, and we can expect them to appear in the NS matter at a similar density range as hyperons [94; 95; 96; 81; 97; 98; 99; 100; 94]. While early works on the topic had ruled out the possibility of the presence of \(\Delta\)-baryons within NSs [84; 97], later works have shown that their presence inside NSs is actually possible given that the \(\Delta\)-baryon's coupling parameters are properly constrained via available experimental measurements [98; 94; 95; 94; 95; 96; 81; 57; 83; 85]. Similar to hyperons, adding the \(\Delta\)-baryons also leads to softening of the EoS thereby further decreasing the maximum mass that the NS can attain [83]. 
This calls for the need of some mechanism that can lead to EoSs that are soft enough at the intermediate density range to satisfy the tidal deformability constraints while being stiff enough to result in mass-radius relations that satisfy the observations from massive NSs. Different approaches have been taken with this regard, including but not limited to, adding a repulsive 3-body force [92], addition of repulsive interaction between hyperons via the \(\phi\) meson [103; 104; 77], a \(\sigma\)-cut scheme that aims to keep the EoS stiff at high densities [105; 106; 107; 108], and density-dependent coupling constants [55; 56; 61; 62; 90; 109; 110; 111; 112; 113]. The approach adopted in this work to attempt to solve the EoS problem is to use the DD-MEX model [114] to study the NS matter by including hyperons and \(\Delta\)-resonance within the framework of the density-dependent relativistic mean field (DDRMF) theory. We also investigate their effects on the various macroscopic properties of NSs, including the dimensionless tidal deformability (\(\Lambda\)) and the non-radial \(f\)-mode oscillations. Radial oscillations in NSs for different matter compositions has been an active area of study [115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128] with the matter composition being recently extended to include \(\Delta\)-resonances as well [18]. Through this work we are proceeding further by studying, for the first time, non-radial \(f\)-mode oscillations in NSs with \(\Delta\)-admixed hypernuclear as well as hyperon-free matter. We organize this paper as follows. First, we present the theoretical formalism on which our calculations are based. We follow it up by studying the effects of \(\Delta\)-baryons and hyperons on NSs with density-dependent couplings. Finally, based on the results obtained we provide some conclusions. ## II DDRMF Lagrangian and equation of state In our study, we use the density-dependent relativistic mean-field (DDRMF) formalism to describe the NS composition. Specifically, we consider that the high density inside the core of a NS facilitates the presence of nucleons (neutrons and protons), hyperons (\(\Lambda\), \(\Sigma^{+,0,-}\), \(\Xi^{0,-}\)) and delta baryons (\(\Delta^{++,+,0,-}\)), with the inter-baryon strong force being mediated by three types of mesons (\(\sigma\), \(\omega\) and \(\rho\)). The Lagrangian density resulting from this model is given by [119; 55; 56], \[\mathcal{L}= \sum_{b\in N,H}\bar{\Psi}_{b}\Big{[}\gamma_{\mu}\Big{(}\iota \partial^{\mu}-g_{\omega b}\omega^{\mu}-\frac{g_{\rho b}}{2}\vec{\tau}\cdot \vec{\rho}^{\mu}\Big{)}-(m_{b}-g_{\sigma b}\sigma)\Big{]}\Psi_{b}+\sum_{l} \bar{\Psi}_{l}(\iota\gamma_{\mu}\partial^{\mu}-m_{l})\Psi_{l}\] \[+\sum_{d}\bar{\Psi}_{d}\Big{[}\gamma_{\mu}\Big{(}\iota \partial^{\mu}-g_{\omega d}\omega^{\mu}-\frac{g_{\rho d}}{2}\vec{\tau}\cdot \vec{\rho}^{\mu}\Big{)}-(m_{d}-g_{\sigma d}\sigma)\Big{]}\Psi_{d}\] \[+\frac{1}{2}\big{(}\partial_{\mu}\sigma\partial^{\mu}\sigma-m_{ \sigma}^{2}\sigma^{2}\big{)}-\frac{1}{4}\Omega_{\mu\nu}\Omega^{\mu\nu}+\frac{1 }{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}-\frac{1}{4}\vec{\mathbf{R}}_{\mu\nu }\cdot\vec{\mathbf{R}}^{\mu\nu}+\frac{1}{2}m_{\rho}^{2}\vec{\rho}_{\mu}\cdot \vec{\rho}^{\mu} \tag{1}\] where we have used the Rarita-Schwinger-type Lagrangian density [120] for the \(\Delta\)-baryons, converting it to the form of a Dirac equation in the mean field approximation [121]. 
The baryon and lepton masses are represented by \(m_{i},\) where \(i\in n,p,l,H,D\), whereas the mesons masses are denoted by \(m_{\sigma}\), \(m_{\omega}\) and \(m_{\rho}\). The \(\omega\) and \(\rho\) meson field-strength tensors are given by \(\Omega_{\mu\nu}=\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}\) and \(\vec{\mathbf{R}}_{\mu\nu}=\partial_{\mu}\vec{\rho}_{\nu}-\partial_{\nu}\vec{ \rho}_{\mu}-g_{\rho}(\vec{\rho}_{\mu}\times\vec{\rho}_{\nu})\), respectively. The coupling constants \(g_{i}\) (\(i=\sigma,\omega,\rho\)) in the DDRMF model are scaled according to the baryon density (\(n_{b}\)) to reproduce the bulk properties of nuclear matter and this scaling is given by [122], \[g_{i}(n_{b})=g_{i}(n_{0})a_{i}\frac{1+b_{i}{(\eta+d_{i})}^{2}}{1+c_{i}{(\eta+d_ {i})}^{2}} \tag{2}\] for \(i=\sigma,\omega\) and \[g_{\rho}(n_{b})=g_{\rho}(n_{0})\exp{\{-a_{\rho}(\eta-1)\}}, \tag{3}\] where \(\eta=n_{b}/n_{0}\) and \(n_{0}\) is the nuclear saturation density. The parameter values along with the scaling coefficients corresponding to the DD-MEX model are listed in Table 1. The model's free parameters can be fitted by using ordinary nuclear matter composed of only neutrons, protons and electrons. We determine the hyperon-meson and \(\Delta\)-meson couplings by parameterizing them in terms of the nucleon-meson couplings, for which we introduce the ratio \(x_{ib}=g_{ib}/g_{iN}\), with \(i=\sigma,\omega,\rho\) and \(b=N,H,\Delta\), fixing \(x_{iN}\) at 1. Furthermore, the vector meson-hyperon couplings can be related to the vector meson-nucleon couplings via the SU(6) symmetry group as [55; 123], \[x_{\omega\Lambda} =x_{\omega\Sigma}=\frac{2}{3}\,,\quad x_{\omega\Xi}=\frac{1}{3}\,, \tag{4}\] \[x_{\rho\Sigma} =2\,,\quad x_{\rho\Xi}=1\,,\quad x_{\rho\Lambda}=0\,. \tag{5}\] The coupling constants for the scalar meson-hyperon couplings are fixed using the hyperon potential depths at saturation density defined as [124; 125; 126; 127; 128; 129] \[U_{H}^{(N)}=-g_{\sigma H}\sigma(n_{0})+g_{\omega H}\omega(n_{0})\,, \tag{6}\] and the values considered here are \(U_{\Lambda}=-30\)MeV, \(U_{\Sigma}=30\)MeV and \(U_{\Xi}=-14\)MeV. For the \(\Delta\)-meson couplings, the ratios \(x_{i\Delta}\) are varied within the ranges, \[0.8 \leq x_{\sigma\Delta}\leq 1.2\,,\] \[1.0 \leq x_{\omega\Delta}\leq 1.1\,, \tag{7}\] \[0.5 \leq x_{\rho\Delta}\leq 1.5\,.\] In order to satisfy the \(\beta\)-equilibrium condition in a NS with baryons and leptons, the chemical potentials of the particles must satisfy the following relations, \[\mu_{\Sigma^{-}}=\mu_{\Xi^{-}} =\mu_{\Delta^{-}}=\mu_{n}+\mu_{e}\,, \tag{8}\] \[\mu_{\mu} =\mu_{e}\,,\] (9) \[\mu_{\Lambda}=\mu_{\Sigma^{0}} =\mu_{\Xi^{0}}=\mu_{\Delta^{0}}=\mu_{n}\,,\] (10) \[\mu_{\Sigma^{+}}=\mu_{\Delta^{+}} =\mu_{p}=\mu_{n}-\mu_{e}\,,\] (11) \[\mu_{\Delta^{++}} =2\mu_{p}-\mu_{n}\,. 
\tag{12}\] These chemical potentials are given by, \[\mu_{b} =\sqrt{{k_{F}^{b}}^{2}+{m_{b}^{*}}^{2}}+g_{\omega b}\omega+g_{\rho b }\tau_{3b}\rho+\Sigma^{r}\,, \tag{13}\] \[\mu_{d} =\sqrt{{k_{d}^{d}}^{2}+{m_{d}^{*}}^{2}}+g_{\rho d}\tau_{3b}\rho+ \Sigma^{r}\,,\] (14) \[\mu_{l} =\sqrt{{k_{F}^{l}}^{2}+m_{l}^{2}}\,, \tag{15}\] where \(k_{F}\) is the Fermi momentum of the particle and \(\Sigma^{r}\) is a rearrangement term arising due to the density-dependent couplings given by, \[\Sigma^{r}=\sum_{b}\left[\frac{\partial g_{\omega b}}{\partial n_{b}}\omega n _{b}+\frac{\partial g_{\rho b}}{\partial n_{b}}\rho\tau_{3b}n_{b}-\frac{ \partial g_{\sigma b}}{\partial n_{b}}\sigma n_{b}^{s}+b\leftrightarrow d \right]. \tag{16}\] Here \(m_{b}^{*}\) and \(m_{d}^{*}\) are the effective mass given by, \[m_{b}^{*}=m_{b}-g_{\sigma b}\sigma\,,\quad m_{d}^{*}=m_{d}-g_{\sigma d}\sigma\,, \tag{17}\] and \(n_{i}^{s}\,(i\in b,\,d)\) is the scalar density given by, [126] \[n_{i}^{s}=\gamma_{i}\int\limits_{0}^{k_{F}^{i}}\frac{m_{i}^{*}}{\sqrt{k^{2}+{ m_{i}^{*}}^{2}}}\frac{k^{2}}{2\pi^{2}}\,\mathrm{d}k \tag{18}\] Alongside the chemical equilibrium condition, the NS matter also needs to satisfy charge neutrality condition which is imposed by the equation, \[n_{p}+n_{\Sigma^{+}}+2n_{\Delta^{++}}+n_{\Delta^{+}}=n_{\Sigma^{-}}+n_{\Xi^{- }}+n_{\Delta^{-}}+n_{e}+n_{\mu}\,. \tag{19}\] The equations of motion of the mesons are obtained using the relativistic mean-field approximation, \[m_{\sigma}^{2}\sigma =\sum_{b}g_{\sigma b}n_{b}^{s}+\sum_{d}g_{\sigma d}n_{d}^{s}\,, \tag{20}\] \[m_{\omega}^{2}\omega =\sum_{b}g_{\omega b}n_{b}+\sum_{d}g_{\omega d}n_{d}\,,\] (21) \[m_{\rho}^{2}\rho =\sum_{b}g_{\rho b}n_{b}\tau_{3b}+\sum_{d}g_{\rho d}n_{d}\tau_{ 3d}\,. \tag{22}\] \begin{table} \begin{tabular}{c c c c c c} \hline \hline Coupling & \(g_{\sigma N}\) & \(g_{\omega N}\) & \(g_{\rho N}\) & \(m_{\sigma}\) & \(m_{\omega}\) & \(m_{\rho}\) \\ Model & & & & (MeV) & (MeV) & (MeV) \\ \hline DD-MEX & 10.7067 & 13.3388 & 7.2380 & 547.3327 & 783 & 763 \\ \hline \hline \end{tabular} \end{table} Table 1: (a) Parameter values used in the DD-MEX model are listed. The meson-nucleon couplings for the \(\sigma\), \(\omega\) and \(\rho\) mesons included in the matter composition are given by \(g_{\sigma N}\), \(g_{\omega N}\) and \(g_{\rho N}\), respectively. The \(m_{\sigma}\), \(m_{\omega}\) and \(m_{\rho}\) are the meson masses and are given in units of MeV. (b) The coefficient values used in the scaling equations (2 and 3) for the DD-MEX model are listed. The energy density of the system can be written as, \[\varepsilon= \sum_{i\in b,\Delta}\frac{\gamma_{i}}{(2\pi)^{3}}\int\limits_{0}^{k _{\rm F}^{\rm F}}\sqrt{{m_{i}^{*}}^{2}+k^{2}}\,{\rm d}^{3}k\] \[+\sum_{l}\frac{1}{\pi^{2}}\int\limits_{0}^{k_{\rm F}^{\rm F}}k^{2 }\sqrt{m_{l}^{2}+k^{2}}\,{\rm d}k\] \[+\frac{1}{2}\big{(}m_{\sigma}^{2}\sigma^{2}+m_{\omega}^{2}\omega^{ 2}+m_{\rho}^{2}\rho^{2}\big{)}\,, \tag{23}\] while the pressure is given by, \[P= \sum_{i\in b,\Delta}\frac{\gamma_{i}}{3(2\pi)^{3}}\int\limits_{0}^ {k_{\rm F}^{\rm F}}\frac{k^{2}}{\sqrt{k^{2}+{m_{b}^{*}}^{2}}}\,{\rm d}k\] \[+\sum_{l}\frac{1}{3\pi^{2}}\int\limits_{0}^{k_{\rm F}^{\rm F}} \frac{k^{4}}{k^{2}+m_{l}^{2}}\,{\rm d}k+{n_{b}}\Sigma^{r}\] \[+\frac{1}{2}\big{(}{-m_{\sigma}^{2}\sigma^{2}+m_{\omega}^{2} \omega^{2}+m_{\rho}^{2}\rho^{2}}\big{)}\,. \tag{24}\] ## III Results and discussion We begin by exploring the characteristics of heavy baryons within NSs. 
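
As a side note for readers implementing the formalism of the previous section, the density dependence of the couplings in Eqs. (2) and (3) can be sketched in a few lines. The saturation couplings below are those listed in Table 1, while the scaling coefficients \(a_i\), \(b_i\), \(c_i\), \(d_i\), \(a_{\rho}\) and the saturation density are placeholder assumptions, not the fitted DD-MEX values.

```python
import numpy as np

n0 = 0.152  # fm^-3, assumed nuclear saturation density (placeholder)

def g_sigma_omega(n_b, g0, a, b, c, d):
    """Eq. (2): density-dependent scaling for the sigma and omega couplings."""
    eta = n_b / n0
    return g0 * a * (1.0 + b * (eta + d) ** 2) / (1.0 + c * (eta + d) ** 2)

def g_rho(n_b, g0, a_rho):
    """Eq. (3): exponential density-dependent scaling for the rho coupling."""
    eta = n_b / n0
    return g0 * np.exp(-a_rho * (eta - 1.0))

# Example evaluation across densities relevant to the NS core.
n_b = np.linspace(0.5 * n0, 6.0 * n0, 5)
print(g_sigma_omega(n_b, g0=10.7067, a=1.0, b=0.1, c=0.2, d=0.5))  # sigma-N, placeholder a-d
print(g_rho(n_b, g0=7.2380, a_rho=0.6))                            # rho-N, placeholder a_rho
```
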
The DD-MEX model, which was introduced in the previous section, provides a microscopic approach towards understanding the composition and possibility of occurrence of various heavy baryons in charge-neutral, \(\beta\)-stable NS matter. Specifically, our focus lies in understanding the behavior of NS oscillations with two different types of matter, hyperon-free NS matter composed of nucleons and \(\Delta\)-baryons (N\(\Delta\)) only, and \(\Delta\)-admixed hypernuclear matter, which encompasses nucleons, hyperons, and \(\Delta\)-baryons (NH\(\Delta\)). The behaviour of the nucleon effective mass with relation to the baryon density is a topic of significant interest when studying NS properties, such as the mass-radius relations and \(f\)-mode frequencies [130; 131]. In the absence of any other baryonic species, the nucleon effective mass (\(m_{n}^{*}\)) is expected to decrease asymptotically with baryon density \(n_{b}\). Addition of other baryonic species, such as hyperons or \(\Delta\)-resonances, causes the nucleon effective mass to decrease at a much faster rate due to the additional negatively contributing term from the scalar density dependence of the \(\sigma\) field in Eq. (17). In figures 1 and 2, we plot the normalized nucleon effective mass as a function of density to illustrate the effect of different baryons being present in the matter composition. We find that, keeping in agreement with the results obtained by Marquez _et al._[55], the value of \(m_{n}^{*}\) decreases to zero (at baryon densities above \(4.5n_{0}\)) for certain combinations of \(x_{b\Delta}\). This leads to the possibility that the nucleon effective mass could become zero at some density before the NS maximum mass configuration is reached. This can be solved by considering a phase transition to some exotic matter composition occurring at some density before \(m_{n}^{*}\) reaches zero, which is beyond the scope of the current work. Contrarily, we note that for many combinations of meson-\(\Delta\) coupling constants, the rate of decrease is less drastic than what was initially expected, leading to certain cases where \(m_{n}^{*}\) does not approach zero for any of the values of \(x_{\sigma\Delta}\) considered here. To gain deeper insights into the influence of the various particle species on the properties of NSs, we examine the population density of the different particles under consideration. Figures 3 and 4 present the plots for the threshold density at which these particles first appear in the system. For each baryonic species, we use a horizontal band which represents the variation caused by considering \(x_{\sigma\Delta}\in[0.8,1.2]\), with the mean value represented by the solid square. In \(\Delta\)-admixed NS matter (Fig. 3), we observe that after nucleons and leptons, the first particle to appear is the negatively charged \(\Delta^{-}\) baryon, which emerges near the \(2n_{0}\) mark. The charge neutrality condition imposed on the NS matter suppresses the presence of positively charged \(\Delta^{+}\) baryons, leading to the absence of \(\Delta^{++}\) baryons in combinations where \(x_{\omega\Delta}=1.1\). Furthermore, we find that in these combinations, the appearance of \(\Delta^{0}\) and \(\Delta^{+}\) baryons occurs only at the high-density limit and necessitates a large value of \(x_{\sigma\Delta}\gtrsim 1\). Moving on to \(\Delta\)-admixed hypernuclear matter (Fig. 4), we observe that the only hyperons present in the system are \(\Lambda\) and \(\Xi^{0,-}\). 
Similar to the N\(\Delta\) matter case, higher values of \(x_{\omega\Delta}\) have a comparable impact on the \(\Delta\)-baryons, causing them to appear at higher average densities. These results highlight that enforcing charge neutrality significantly favors the emergence of negatively charged baryons, with the spin-\(3/2\)\(\Delta^{-}\) being the most favored. The preference for \(\Delta^{-}\) over the lighter, neutrally charged \(\Lambda\) can be attributed to the more attractive potential of \(\Delta^{-}\) which can overcome the mass difference when replacing a neutron-electron pair. Moreover, increasing the value of \(x_{\omega\Delta}\) leads to a narrowing of the density range in which hyperons appear, thereby decreasing the average density at which they emerge. In this study, we applied the Tolman-Oppenheimer-Volkoff (TOV) equations of relativistic hydrostatic equilibrium [12; 13] to derive families of stars based on the equations of state (EoS) generated for different combinations of \(x_{\sigma\Delta}\), \(x_{\omega\Delta}\), and \(x_{\rho\Delta}\). The corresponding families of stars are illustrated in Fig. 5 for hyperon-free matter and Fig. 6 for \(\Delta\)-admixed hypernuclear matter. The color bar accompanying the figures indicates the varied \(x_{\sigma\Delta}\) values within the range \([0.8,1.2]\). In these plots, the solid black line represents the results from an EoS for matter containing only nucleons and leptons, while the black dashed line represents hypernuclear matter consisting of nucleons, leptons, and hyperons. The curves are plotted up to the maximum mass configuration obtained from their corresponding EoS. In addition to the mass-radius curves obtained from solving the TOV equations, we have incorporated observational constraints for comparison. The green horizontal band corresponds to constraints derived from the gravitational wave event GW190814 [74]. The two pink dashed boxes represent constraints obtained from 2019 NICER data of the pulsar PSR J0030+0451 [72; 73], while the blue dashed boxes depict constraints from 2021 NICER data of the pulsar PSR J0740+6620 [70; 71]. Despite the considerable uncertainties in the measurements, our models demonstrate agreement with the observational constraints for various matter composition scenarios, whether with nucleons and \(\Delta\)'s or with the inclusion of hyperons. From the figures, we see that the EoS of NS is affected by various couplings between mesons and baryons. In particular, the \(\Delta\)-resonances can play an important role, with the coupling constants \(x_{\sigma\Delta}\), \(x_{\omega\Delta}\), and \(x_{\rho\Delta}\) being the most relevant. The impact of these couplings on the stellar radius is shown in the figures, where we observe that increasing \(x_{\sigma\Delta}\) leads to a decrease in radius, as the attraction increases and the EoS softens at intermediate densities. Similarly, decreasing \(x_{\rho\Delta}\) results in smaller radii, as this reduces the repulsion associated with proton-neutron asymmetry. Notably, the presence of hyperons and \(\Delta\)'s together can increase the maximum mass limit beyond that of hyperonic matter if \(x_{\omega\Delta}\geq 1\), since the vector meson dominates at high densities and the \(\Delta\) coupling to the \(\omega\) meson is stronger than that of nucleons or hyperons. The relationship between these couplings and the maximum mass limit is complex and requires further discussion to be fully understood. 
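
The mass-radius families above follow from numerically integrating the TOV equations with a given EoS. As a hedged illustration of that step, the sketch below integrates the TOV system for a simple \(\Gamma=2\) polytrope rather than the DD-MEX EoS used in this work; the polytropic constant, central density, and geometric-units convention (\(G=c=M_{\odot}=1\), one length unit \(\approx 1.4766\) km) are textbook assumptions made only for this example.

```python
import numpy as np

# Assumed polytropic EoS for illustration: P = K * rho^Gamma, eps = rho + P/(Gamma - 1).
K, Gamma = 100.0, 2.0

def eos_eps(P):
    """Total energy density as a function of pressure for the polytrope."""
    P = max(P, 0.0)                      # guard against small negative overshoots near the surface
    rho = (P / K) ** (1.0 / Gamma)       # rest-mass density
    return rho + P / (Gamma - 1.0)

def tov_rhs(r, y):
    """Right-hand side of the TOV equations; y = (P, m)."""
    P, m = y
    eps = eos_eps(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return np.array([dPdr, dmdr])

def solve_star(rho_c, dr=1e-3):
    """RK4 integration outward from the centre until the pressure drops to ~zero."""
    Pc = K * rho_c ** Gamma
    r, y = dr, np.array([Pc, 0.0])       # start slightly off-centre to avoid division by r = 0
    while y[0] > 1e-10 * Pc:
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + dr * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        r += dr
    return r, y[1]                       # radius and gravitational mass in geometric units

R, M = solve_star(rho_c=1.28e-3)         # commonly used test central density (assumed)
print(f"M ~ {M:.2f} M_sun, R ~ {R * 1.4766:.1f} km")
```

Replacing the polytrope with a tabulated \(\beta\)-equilibrated EoS (pressure versus energy density) and sweeping the central density then traces out mass-radius curves of the kind shown in Figs. 5 and 6.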
When present in a binary system, NSs experience tidal effects caused by the companion's gravitational field. These effects can be quantified by means of the dimensionless tidal deformability (\(\Lambda\)), which is defined as \(\Lambda=\frac{2}{3}k_{2}C^{-5}\), where \(k_{2}\) is the tidal love number and C is the compactness [132, 133, 134]. We investigate \(\Lambda\) in two scenarios: (1) for \(\Delta\)-admixed NS matter, depicted in Fig.7, and (2) for \(\Delta\)-admixed hypernuclear Figure 1: Normalized nucleon effective mass as a function of density for NS matter composed of nucleons, leptons and \(\Delta\)-baryons. The sub-figures represent different combinations of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\) values while we vary \(0.8\leq x_{\sigma\Delta}\leq 1.2\) in all of them (shown in color-bar on right). The solid black line represents NS matter composition of only nucleons and leptons whereas the dashed black line is for NS matter composed of nucleons, hyperons and leptons. matter, presented in Fig.8. In both cases, we explore various combinations of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\), while varying \(x_{\sigma\Delta}\). To distinguish between nuclear and hypernuclear matter compositions, we use black solid and dashed lines, respectively, in our plots. Additionally, we include observational constraints on tidal deformability at the canonical mass (\(1.4M_{\odot}\)) from the gravitational wave events GW170817 [75] and GW190814 [74]. Our findings indicate that, for a given mass, increasing the coupling between the \(\sigma\) meson and the \(\Delta\)-baryons leads to a decrease in \(\Lambda\) compared to the scenario with nucleon-only NS matter. This reduction is due to the attractive interactions causing the EoS to soften. However, we observe that this decrease in \(\Lambda\) can be mitigated by enhancing the \(\omega-\Delta\) and \(\rho-\Delta\) coupling strengths, promoting repulsive interactions among the \(\Delta\)-baryons. Notably, among the coupling strengths, \(x_{\omega\Delta}\) proves to be the most effective in minimizing the decrease in \(\Lambda\), especially in the low mass region. Moving to the case of \(\Delta\)-admixed hypernuclear matter (Fig. 8), we find that the band of \(\Lambda\) at a given mass becomes broader due to the attractive interactions arising from the presence of hyperons. Remarkably, for values of \(x_{\sigma\Delta}\) greater than 1, the NS exhibits a comparatively lower \(\Lambda\) value than in the scenario with only nucleons and hyperons (\(NH\) only case). Additionally, we observe that increasing \(x_{\omega\Delta}\) above 1 has a noticeable effect on the maximum mass in this context. NS oscillations arising due to perturbations (either external or internal), cause emission of gravitational waves. These waves are emitted in different frequency modes with the fundamental mode being denoted by \(f\). Cowling approximation [135; 136; 14; 14; 15] is one of the most popular methods of solving the non-radial oscillations. Figures 9 and 10 illustrate the influence of interactions betwe Figure 2: Similar to Fig. 1 but for \(\Delta\)-admixed hypernuclear matter. mesons on the non-radial \(f-\)mode oscillation frequency for NSs composed of \(\Delta\)-admixed NS matter and \(\Delta\)-admixed hypernuclear matter, respectively. Consistent with the previous figures, the solid and dashed black lines represent \(N\) and \(NH\) matter compositions, respectively. 
As we increase \(x_{\sigma\Delta}\), the resulting star exhibits a smaller radius and lower mass, as depicted in Figs. 5 and 6. Consequently, the \(f-\)mode frequency is higher, as evident from our figures. Additionally, we observe that lower values of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\) lead to a wider variation in the \(f-\)mode frequency for a given mass, particularly in the low mass region. This variation is attributed to the presence of a greater number of \(\Delta\)-baryons in the NS core, resulting from the larger attractive interaction and smaller repulsive interaction. Conversely, higher values of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\) significantly compress the range of \(f-\)mode frequencies in the low mass region for a given mass, owing to the dominance of repulsive interactions. These observations are consistent with the effects of meson interactions on the NS radius, as shown in Figs. 5 and 6. Furthermore, similar to the dimensionless tidal deformability, the presence of hyperons also impacts the variation of \(f-\)mode frequency at a given mass. It notably increases the \(f-\)mode frequency significantly for \(x_{\sigma\Delta}\geq 1\). Figure 3: Densities at which the different \(\Delta\)-baryons first appear in the NS matter are depicted here. The sub-figures are the different combinations of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\). In order to represent the variation in the appearance density of the baryons caused by varying \(x_{\sigma\Delta}\) we use a horizontal band for each particle. The average density at which a particle is appearing is depicted by the square marker in each band. For cases where a particle does not appear, its appearance density is not considered for its band. ## IV Conclusion In this study, we have attempted to understand how the presence of heavy baryons impacts the properties of NSs while keeping them constrained using available observational data. For this we employed the DD-MEX model within the DDRMF framework which allowed us to systematically explore how \(\Delta\)-admixed hypernuclear and hyperon-free NS matter impacts NS oscillations, and has helped us uncover intriguing insights into particle compositions, their emergence processes, and their profound influence on key NS properties. Notably, from our models we find that there is a significant impact on nucleon effective mass due to the influence of the presence of various baryon species, especially the \(\Delta\)-resonances. Keeping in agreement with available literature, we find that introducing these baryons into the composition leads to the nucleon effective mass becoming zero with increasing density, which raises interesting possibilities regarding the phase transitions occurring within those stars. But we also find that there are some configurations where the models do not approach zero effective nucleon mass even at extremely high baryon densities. Imposing the charge neutrality condition on the matter composition has shown that negatively charged baryons, particularly the \(\Delta^{-}\), are inherently more likely to nucleate than neutral or positively charged baryons. In particular, we find that the spin 3/2 particle \(\Delta^{-}\) has enough excess attractive potential to overcome the mass difference in replacing a neutron-electron pair, and is thus favored over the lighter and neutral \(\Lambda\)-hyperon. The effect of meson-baryon Figure 4: Similar to Fig. 3 but for \(\Delta\)-admixed hypernuclear matter. 
couplings, especially those of the \(\Delta\)-baryons, on the equation of state and by extension the radius and maximum mass configuration of NSs has emerged as one of the key insights that we have gained from our results. Their intricate interplay results in considerable variation between NSs models generated by us, with \(x_{\sigma\Delta}\) having the most significant impact on the equation of state's softening, particularly in the intermediate density regime. By meticulously incorporating observational constraints, we demonstrated a remarkable degree of agreement between our models and currently available data which serves to validate our findings for the different matter compositions of \(N\Delta\) and \(NH\Delta\). Furthermore, we extend our exploration to the realm of the dimensionless tidal deformability (\(\Lambda\)), which is a key parameter in understanding the interior of NSs. We find that the value of \(\Lambda\) is directly influenced by the amount of attractive and repulsive interactions within the stellar matter. These interactions are linked to the strength of the couplings between the mesons and the \(\Delta\)-resonances, with the variation brought about by their effect being most prominent in the low mass region. In this work, we also explored the various effects of heavy baryons on the non-radial \(f\)-mode oscillations of NSs. We found that the \(\sigma\)-\(\Delta\) coupling strength has a direct correlation Figure 5: Mass-radius curves showing the effect of varying \(x_{\sigma\Delta}\) with different combinations of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\) for different compositions of NS matter with nucleons, leptons and \(\Delta\)-baryons. The solid and dashed black lines represent compositions of NS matter corresponding to nucleons and leptons, and nucleons, leptons and hyperons respectively. The value of \(x_{\sigma\Delta}\) taken for each curve is represented by the corresponding colour given in the color-bar on the right. Observational constraints are represented by the green band for GW190814 [74], the blue boxes for PSR J0740+6620 from the 2021 NICER data [70; 71] and the pink boxes for PSR J0030+0451 from the 2019 NICER data [72; 73]. with the frequency of the oscillation mode. This correlation can be attributed to the coupling's effect on the stellar mass and radius. The repulsive \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\) couplings were also found to contribute to the variation in frequency, especially for low mass NSs. Both of these results are consistent with our observations of the effects of meson interactions on NS radii. The variation in the fundamental mode oscillation frequency of NSs can be attributed to the quantity of \(\Delta\)-baryons present in the core of the NS. This is because we have observed that a larger \(x_{\sigma\Delta}\) value, coupled with smaller \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\) values, makes the conditions favorable for more \(\Delta\)-baryons to be nucleated in the stellar core, and vice versa. This perspective provides a novel understanding of how the presence of these resonances impacts NS properties and helps uncover some of their underlying dynamics. ## Acknowledgements P. J. K. thanks Khokan Singha and Sailesh Ranjan Mohanty for their help with the computations. Authors thank Prof. Constanca Providencia for her insightful discussion that enhanced the depth and quality of our work. B.K. acknowledges partial support from the Department of Science and Technology, Government of India, with grant no. 
CRG/2021/000101. Figure 6: Similar to Fig. 5 but for \(\Delta\)-admixed hypernuclear matter. Figure 7: Dimensionless tidal deformability (\(\Lambda\)) against NS mass for \(\Delta\)-admixed NS matter, showing the effect of varying \(x_{\sigma\Delta}\) with different combinations of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\). To represent the different \(x_{\sigma\Delta}\) values, we use the corresponding color given in the adjoining color-bar. A solid black line is used to represent NS matter containing nucleons and leptons only, whereas the dashed black line is for NS matter containing nucleons, hyperons and leptons only. Observational constraints are represented by the green error-bar and grey shaded patch for GW170817 [75], and the blue error-bar for GW190814 [74]. Figure 8: Similar to Fig. 7 but for \(\Delta\)-admixed hypernuclear matter. Figure 9: \(f\)-mode oscillation frequency against NS mass for \(\Delta\)-admixed NS matter, showing the effect of varying \(x_{\sigma\Delta}\) with different combinations of \(x_{\omega\Delta}\) and \(x_{\rho\Delta}\). To represent the different \(x_{\sigma\Delta}\) values, we use the corresponding color given in the adjoining color-bar. A solid black line is used to represent NS matter containing nucleons and leptons only, whereas the dashed black line is for NS matter containing nucleons, hyperons and leptons only. Figure 10: Similar to Fig. 9 but for \(\Delta\)-admixed hypernuclear matter.
2301.05339
A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models, that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training data sets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, Michael Neff
2023-01-13T00:20:05Z
http://arxiv.org/abs/2301.05339v4
# A Comprehensive Review of Data-Driven Co-Speech Gesture Generation ###### Abstract _Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology for creating believable characters in film, games, and virtual social spaces, as well as for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. The field of gesture generation has seen surging interest in the last few years, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text and non-linguistic input. Concurrent with the exposition of deep learning approaches, we chronicle the evolution of the related training data sets in terms of size, diversity, motion quality, and collection method (e.g., optical motion capture or pose estimation from video). Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development._ co-speech gestures, gesture generation, deep learning, virtual agents, social robotics **CCS Concepts** \({}^{\star}\) **Computing methodologies \(\rightarrow\) Animation; Machine learning; \({}^{\star}\) Human-centered computing \(\rightarrow\) Human computer interaction (HCI);** ## 1 Introduction This paper summarizes research on the synthesis of gesture motion, with a particular emphasis on more recent techniques using deep learning. The focus is on _co-verbal gesture_, gesture that accompanies speech. When considering the problem, a first reasonable question is "Why should we care about gesture at all?" Gesture serves at least three main functions. First, and most simply, it helps artificial agents and robots look more alive and be more engaging (this has been shown multiple times, e.g. [12, 13, 14, 15]). Second, it communicates functional information. This can include pointing or deictic gestures that establish reference; emblems that replace words, and imagistic metaphoric and iconic gestures that illustrate concepts and artifacts. Third, gesture communicates social information, including personality [16, 17, 18, 19], emotion [20, 21, 22, 23, 24, 25, 26, 27, 28] and subtext. Before summarizing work on gesture synthesis, it is worthwhile to consider how gesture can support a range of applications for virtual agents and robots. 
First, it is well established that gestures do indeed communicate [13, 14, 15] Hostetter's meta-analysis [16] presents three main findings for when gestures communicate: gestures depicting motor actions are more communicative than those depicting abstract topics; gestures that are not completely redundant have a larger impact on communication, and children benefit more from gesture than adults. Gestures communicate in a different manner than spoken language. They communicate particularly directly when being used to describe spatial concepts or object manipulation because there is a natural iconicity to these concepts, which is well portrayed in gestures. "Gesture permits the speaker to represent ideas that are compatible with its mimetic and analog format (e.g. shapes, sizes, spatial relationships) - ideas that may be less compatible with the discrete and categorical format underlying speech. Thus, when it accompanies speech, gesture allows speakers to convey thoughts that may not easily fit into the categorical system that their spoken language offers." [13]. The iconicity of gestures makes them more transparent than language, which is purely symbolic. Tversky argues that "[gestures] take advantage of human ability to make rapid spatial judgments and inferences. Neither depictions nor gestures can convey all the information surrounding an idea or set of ideas; this forces them to extract what is essential, to simplify the ideas, making them easier to comprehend and remember." [20]. They provide an additional code, a motor code for information, and additional codes are known to improve memory. Gestures are particularly congruent with actions or transformations [20]. A more detailed gesture typology is presented in Section 2. Nonverbal communication appears to be particularly important in providing appropriate social cueing [15]. Feelings, emotions and attitudes are often not made verbally explicit and must be inferred from nonverbal channels. The presence of nonverbal communication can radically change the outcome of an exchange. For example, a study comparing face-to-face and voice only union negotiations showed greater interpersonal communication in the face-to-face setting, whereas the speech-only communication focused more on content, was more impersonal, saw reduced praise, greater blame, more disagreement and was more likely to end in deadlock [16]. Additionally, gestures can be used to regulate a dyadic or group interaction by managing turn-taking [15, 16]. Kipp defines turn-taking as "assigning, holding or yielding a turn" in a dialog [14]. Bavelas [17] identified so-called "turn gestures" in dialogue interactions, with sub-categories of gestures that indicate: giving turn to the other speaker, accepting a turn from the other speaker or offering turn to any speaker in a group. Gesture can also support particular applications, for example, there is a growing body of evidence showing that gestures help people learn. There are at least three mechanisms by which this happens: learners watching a teacher gesture, learners performing gestures themselves, and teachers adapting their instruction based on information gained from the learner's gestures. For an excellent summary, see [14]. Gestures can link abstract concepts in the immediate environment, reduce cognitive load and enhance spoken communication [14]. 
A recent meta-analysis [16] of twenty experiments showed that the gesture already present in pedagogical agents is beneficial for student learning, with positive effects on transfer of knowledge, retention of learning and agent persona, but not on reducing cognitive load. Given the potential value of nonverbal communication, the question remains as to how to synthesize appropriate behavior in our computational systems. Formally, the problem is to generate an input to output mapping, where the input is some representation of the content to be expressed and the output is some representation of the behavior to perform. Most learning systems to date have assumed audio and/or text as the input, although we will see this has limitations. The output is generally either frames of pose data or a lexicalized representation of the gestures that should appear (e.g. accompany this sentence with a conduit gesture that displays the idea of conveying something to the interlocutor). The latter must then be converted to animation using some secondary system. Previous surveys have covered co-speech gesture synthesis with varying scope and emphasis [13, 14, 15, 16]. Wei et al. [13] focus on audio-visual learning for human-like machine perception, and briefly cover gesture synthesis. We focus on co-verbal gesture synthesis, including its theory, synthesis techniques with an emphasis on deep learning, and an eye towards application to virtual agents and robots. Ye et al. [16] surveyed deep-learning-based human motion modeling, and thus is related to our work via the emphasis on deep generative models. Our survey covers a larger scope of deep-learning-based gesture synthesis research and offers a more comprehensive set of key challenges that are specific to the problem. The survey by Liu et al. [17] is more closely related to our work, emphasizing co-verbal gesture generation for virtual agents and robots. Their work presents a scoping review of data-driven techniques for speech-to-gesture generation, related datasets, and evaluation metrics. We include a larger set of papers (40 vs. 19) and are able to provide a more in depth treatment of the technical material given the substantially longer STAR format. Overall, our survey makes the following contributions to the field: * A detailed discussion on the theory and motivation for co-verbal gesture synthesis. * A discussion on rule-based and statistical techniques, illustrating how these approaches can complement the strengths and weaknesses of recent deep-learning approaches. * An emphasis on deep-learning-based generation systems using input modality as an organizing principle for the research. * A discussion on the most commonly used speech-to-gesture datasets, collected via motion capture or pose estimation. * Identifying and detailing a set of key challenges for co-verbal gesture synthesis and potential research directions. The remainder of the paper begins by providing a deeper background on gesture, followed by a summary of synthesis techniques that have been developed to date and concludes with a discussion of major open problems. ## 2 Human gesticulation Manual Gestures are non-verbal, non-manipulative hand/arm movements that occur during speech [25, 26, 27]. We will refer to manual gestures as simply "gestures" in this work, although gestures can in general be performed by other body parts, such as the head. Gestures aid in the communicative intent and are closely linked to accompanying speech in terms of timing, meaning and communicative function. 
For instance, gestures can be used for pointing to resolve references to objects ("what is that") or illustrate concepts that would otherwise be difficult to explain verbally [13]. Therefore, gestures play an important complementary role to speech because they enable broader and more efficient expression of personality, emotion and motivation of the speaker [12, 13, 14]. Additionally, gesture plays an important cultural role, because members of a community can either identify with or easily understand the emotions and attitudes of those around them through these non-verbal cues [15]. Gestures can take many forms depending on the speaker, and the morphological rules governing their construction equally vary. Based on Kendon's gesture categorization [15], McNeill proposed "Kendon's Continuum" [25, 26] where gesture categories are sorted in increasing _lexicalization_, that is, the degree to which they adhere to formal, language-like grammatical rules, as illustrated in Figure 2. In this framework, the least lexicalized (conversational gestures) have obligatory speech, while the fully lexicalized (sign languages) have little or no obligatory speech as the gestures themselves gain explicit lexical properties. Most importantly, the fully lexicalized end of the spectrum (sign languages) has formal syntactic structure like spoken languages, but this is absent from coverbal gesticulations. This lack of structure is one of the challenges in producing coverbal gesture, as the behavior can be highly idiosyncratic.

There are many different forms of gesture and McNeill [25, 26] argues for a dimensional view in which the dimensions are iconic (images of the concrete), metaphoric (images of the abstract), deictic (pointing) and beat (flicks of the hand in time to the rhythm of the speech). See Figure 3. An iconic gesture might show the size of a box being discussed by drawing it in space, whereas a metaphoric gesture might indicate an abstract concept, such as all ideas are included, by making an umbrella shape. A given gesture may load on multiple of these dimensions, for example displaying both iconicity and deixis. An additional category is adaptors (or self-adaptors), which are self-manipulations such as scratching one's nose or bracing fingers. These are not designed to communicate, but do convey information about personality [26].

Kendon introduced a three-level hierarchy to describe the structure of gestures [15]. The top level is the _gesture unit_. Gesture units start in a rest pose, contain a series of gestures, and then return to a rest pose. The starting and ending rest pose need not be the same. A _gesture phrase_ encapsulates an individual gesture in this sequence. Each phrase can in turn be broken down into a sequence of _gesture phases_. These include: a _stroke_, which is the main meaning-carrying movement of the gesture and has the most focused energy; a _preparation_, which is a motion that takes the hands to the required position and orientation for the start of the stroke; a _prestroke hold_, which is a period of stillness in which the hands are held at the starting point of the stroke, before the stroke begins; a _poststroke hold_, in which the hands are held at the end position of the stroke; and finally, a _retraction_, which returns the hands to a rest pose. All phases are optional except the stroke. The pre- and poststroke holds function to synchronize the gesture with speech.
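To make this hierarchy concrete, the following minimal sketch shows one way an annotation or synthesis pipeline might represent gesture units, phrases and phases in code. It is purely illustrative; the class and field names are our own, not taken from any cited system.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class PhaseType(Enum):
    PREPARATION = "preparation"
    PRE_STROKE_HOLD = "pre_stroke_hold"
    STROKE = "stroke"            # the only mandatory phase
    POST_STROKE_HOLD = "post_stroke_hold"
    RETRACTION = "retraction"


@dataclass
class GesturePhase:
    phase_type: PhaseType
    start_time: float  # seconds, relative to the accompanying speech audio
    end_time: float


@dataclass
class GesturePhrase:
    """One individual gesture: a sequence of phases containing exactly one stroke."""
    phases: List[GesturePhase]

    def stroke(self) -> GesturePhase:
        # The stroke carries the meaning and is the phase most tightly synchronized with speech.
        return next(p for p in self.phases if p.phase_type is PhaseType.STROKE)


@dataclass
class GestureUnit:
    """Starts and ends in (possibly different) rest poses and spans several phrases."""
    start_rest_pose: str
    end_rest_pose: str
    phrases: List[GesturePhrase] = field(default_factory=list)
```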
There are many challenges for automatically synthesizing gesture, for instance to drive virtual agents in human-computer interaction. One theory on the origins of gesture, the growth point hypothesis [26], argues that gesture and language emerge from a common communicative intent. Some communication may take verbal form and some nonverbal, with some being replicated across both. Some agent architectures, such as SAIBA [16], have tried to model this communicative intent. This allows nonverbal communication to be unique, carrying different information than the verbal channel. Many gesture synthesis approaches, and all deep learning approaches that we are aware of, do not model a communicative intent. Instead, they synthesize gesture from audio, text or both. This necessarily means that the gestures will be redundant with these other channels, and thus more limited than actual human gesture. Another challenge is that gesture is idiosyncratic [26], so different people may gesture in very different manners. The same person may also generate different gestures even when delivering the same text. Finally, gestures are synchronized in time with their co-expressive speech. About 90% of the time, the gesture occurs slightly before the co-expressive speech [10] and rarely occurs after [11]. While the earlier occurrence of gesture is common in human behavior, and research on animated characters also indicates a preference for this slightly earlier timing, that research also indicates that people may not be particularly sensitive to timing errors within roughly +/- 0.6 seconds [25].

Figure 2: Kendon's Continuum of gesture categories, as described by McNeill [26].

## 3 Approaches for gesture synthesis

Synthesizing co-speech gestures is essential for creating interactive and believable virtual characters in human-computer interaction (HCI), graphics and social robotics. Thus, significant effort has been applied and progress has been made for applications in virtual agents [21, 22, 23, 24, 25] and humanoid robots [26, 27, 28, 29, 30]. Neff identifies the two main sub-problems of generating gesture as the _specification problem_ and the _animation problem_ [28]. The specification problem is concerned with determining _what_ gestures are to be performed by the character, and the animation problem entails _how_ to generate the appropriate hand motion. Gesture specification can use a range of inputs including speech, prosody, text and communicative intent, where rule-based, statistical and learning-based models have been used to determine the appropriate gesture. Similarly, gesture animation has used a range of procedural, physics-based or learning-based models to produce the hand motion. Gesture generation models can be divided into two main categories: rule-based and data-driven. Within the latter, there are two sub-categories, statistical and learning-based. Rule-based systems [29, 30, 31, 32, 33] use carefully designed heuristics to select the appropriate gesture for a given speech input. Data-driven systems instead learn to associate speech with corresponding gestures from the data, and we expound on the sub-categories next. Statistical systems [30, 31, 32, 33] typically precompute probabilities or assign a prior distribution over the given gesticulation data, and a gesture is sampled from the distribution based on speech input.
Learning-based models [30, 31, 32, 33, 34, 35, 36, 37, 38, 39] make the fewest assumptions about the distribution of gesticulations and instead optimize the parameters of a complex non-linear function to map the input speech into the appropriate gesture. This non-linear function is usually implemented as a deep neural network trained with some form of gradient-based optimization algorithm, and thus we simply refer to these as deep learning approaches. While animation may be synthesized using a range of methods for each technique, rule-based and statistical approaches have generally predicted a gesture label that is used to index either hand-animated or pre-recorded gesture clips that are then used to synthesize the final sequence. In contrast, deep-learning approaches have tended to synthesize motion on a per-frame basis. Figure 4 illustrates the development of the gesture generation field, specifically how different approaches handle the trade-off between naturalness and communicative efficacy. The early approaches were intent-driven and hence had high communicative efficacy [29, 30, 31]. They were not very natural, since they were mainly inserting pre-defined animations. Later approaches used statistics to analyze and retrieve gestures from large databases [30, 31, 32]. Statistical approaches improved gesture naturalness, while slightly compromising communicative efficacy. Finally, modern approaches are mainly deep-learning-based, making the fewest assumptions about the underlying distribution of gesture data [30, 31, 32]. Deep learning-based approaches can generate continuous and fairly natural gestures, but they are significantly less communicative.

Figure 3: Relational graph of gesture categories and their defining properties. Figure from Kipp [31] (used with permission).

Motivated by this challenging trade-off, recent notable research has proposed hybrid systems for generating natural and semantically meaningful gestures by combining rule-based and deep learning-based approaches [12, 13, 14, 15]. We review the seminal approaches in rule-based, statistical and learning-based generation next. In Section 4, we discuss what, in our estimation, are some of the most impactful rule-based systems. We discuss them in chronological order for ease of understanding and to emphasize how the approaches influenced one another. The selected works have in common that they pioneered approaches for speech-driven hand or facial animation by devising heuristics and domain-specific languages for modeling behavior intent, planning, and realization. Since the focus of the paper is on data-driven systems, we only review these selected works. For a more detailed review of rule-based systems, we recommend the review article by Wagner et al. [16].

## 4 Rule-based approaches

Cassell et al. [2] presented _Animated Conversation_, the first rule-based system to automatically generate context-appropriate hand gestures, facial movements and intonation patterns between multiple human-like agents. Notably, this work was one of the first to explore the latent relationship between speech and gesture for generating realistic animation. The system initiated a dyadic interaction between two agents through a dialogue generator and planner. The generated text was transformed into speech through a text-to-speech system [1] and deep symbolic representations were used to encode timing, intonation and the corresponding gesture prototypes. The gesture prototypes were used to perform the full gesture.
The result was agents with appropriate and well-synchronized speech, intonation, facial expressions and hand gestures. However, the system was limited to domain-specific dialogue generation between two agents, which not only restricted free-form conversation (by restricting the discourse) and gesture animation, but also precluded real-time interaction with a human user. Thórisson proposed _Ymir_ [17], which improved on the Animated Conversation framework by enabling multimodal input from a user, including speech, gaze, gesture and intonation. It consisted of multiple modules for input perception, dialogue generation, decision making and action scheduling in order to produce well-synchronized hand animation. However, although this offered more interactivity with a user, the system could only produce limited multi-modal output in real time. The work of Cassell et al. [2] subsequently improved on the two frameworks by integrating the real-time multi-modal interactivity of Ymir with the symbolic generation and richer multi-modal synthesis capability of Animated Conversation. The result was an embodied conversational agent framework that produced reactive characters that behaved intuitively and robustly in conversations, albeit still limited to dialogue deriving from a static knowledge base. Another of the seminal works in rule-based generation was the Behavior Expression Animation Toolkit (BEAT) proposed by Cassell et al. [2]. BEAT took typed text as input and could synthesize well-synchronized speech, gesture, facial animation and intonation. The system used contextual information latent in the text to choose pre-recorded hand, arm and facial movements by relying on a set of carefully designed heuristics from previous nonverbal conversational behavior research. BEAT was highly extensible, allowing animators to insert new rules that parameterize personality, movement characteristics, scene constraints and desired animation style. Alternatively, Kopp et al. [12] proposed a model-based approach for generating complex multimodal utterances (i.e. speech and gesture) from XML specifications of their form. Instead of relying on pre-recorded gestures, as the previously discussed approaches did, the system applied non-uniform cubic B-Splines to form a gesture trajectory that satisfies all velocity and position constraints. The authors demonstrated the multimodal capabilities of the system through Max: a virtual reality-based agent that interacts with and assists a human user in construction tasks, using prosodic speech, deictic and iconic gestures, gaze and emotive facial expressions [11]. Facial expressions, gaze direction and head movements are essential non-verbal behaviors that communicate the intent and emotional state of a speaker. They can also act as facial gestures, e.g. a "raised eyebrow", or gaze direction can be utilized to resolve a referent object or direction. Therefore, endowing virtual agents with such qualities can make them more anthropomorphic. Pelachaud et al. [2] developed Greta: a 3D virtual agent whose facial gestures communicated the agent's emotional state. The system was designed as a BDI agent (i.e. an agent with Beliefs, Desires and Intentions) [18]. A Dynamic Belief Network (DBN) modelled Greta's constantly evolving emotions and computed the triggering thresholds and evolution of her emotions, resulting in emotive verbal and non-verbal behaviour. The development of new rule-based systems often necessitated the development of a new domain-specific language (DSL), usually based on XML.
Examples of these include an XML processing pipeline in the BEAT system [2], MURML for multimodal behavioral planning and animation of Max [11, 12], APML for representing the agent's behaviour semantically [2], and RRL for representing simulations of multimodal dialogue [23]. However, these DSLs were often incompatible with each other even as the systems solved similar or overlapping objectives.

Figure 4: Overview of the development of the gesture generation field, as outlined by Stefan Kopp at the GENEA Workshop 2020. Figure by Stefan Kopp. Used with permission.

As a result, a group of researchers developed a unified language for multimodal behaviour generation for virtual agents, called the Behavior Markup Language (BML) [13]. BML was designed in the context of a comprehensive framework with intent planning, behavior planning and behavior realization stages. Within this framework, BML described the desired physical realization and thus connected behavior planning to behavior realization. BML became the standard format for rule-based systems, finding use in open-source frameworks like SmartBody [14], and other agent embodiments like humanoid robots [15]. The development of BML led to continued advances in rule-based systems, even as some research started to explore learning-based systems. For instance, Marsella et al. [12] generated facial expressions and behaviors (including gestures, head movements, eye saccades, blinks and gaze) for a 3D virtual character by analyzing the prosodic and acoustic features of speech, as well as a shallow analysis of the utterance text to determine rhetorical and pragmatic content. Ravenet et al. [10] generated metaphorical gestures by leveraging BML to extract metaphorical properties from input speech. Their system leveraged BML annotations to synchronize speech audio and gestures, and configure gesture characteristics (e.g., hand shape, movement, orientation) to convey the desired representational meaning during behavior realization. Overall, BML continues to be a standard domain-specific language for behavior planning and realization in rule-based gesture generation systems.

Rule-based gesture generation systems can produce high-quality gestures that are well synchronized with speech. Due to their reliance on pre-recorded motion, hand animation or carefully engineered systems for generating gestures, rule-based systems can have better motion quality than learning-based systems. Hand-tuned rules may also better preserve semantics within their limited domain. However, the gesture distribution is often not diverse. Moreover, the carefully designed rules require significant expert knowledge, which is laborious to encode and does not scale. Such systems are inflexible in that they can only produce a small set of plausible gestures for a particular speech input or scenario. Therefore, the inability to produce diverse gestures in a non-deterministic manner means the resulting virtual agents (or any other embodiments) can only behave in an expressive and naturalistic way for limited examples. Data-driven methods were proposed to try to overcome these limitations. Given the overall advances in deep learning, they may eventually also produce the highest quality motion. We review the two data-driven sub-categories next, statistical and deep learning-based methods.

## 5 Data-driven approaches

### Speech-Gesture Datasets

Any data-driven method is fundamentally limited by the data it is trained on.
The number of datasets suitable for machine-learning on human gesture data has been steadily rising, as has their size. Table 1 provides an overview of major datasets for gesture generation, and their characteristics such as size, motion format (2D or 3D), modalities, included annotation, and more. Dataset sizes have reached new heights of 100+ hours in recent years, and there is also greater diversity in terms of the number of speakers, thanks to 3D pose estimation from video. Unfortunately, only a small fraction of datasets contain high-quality finger motion, which is of great importance for generating expressive and meaningful gestures. There are two main methods for obtaining motion data for gesture synthesis: optical motion capture [14, 15, 16, LDM*19, JSCS19, FM18] or pose estimation from monocular video [14, 15, 16, 17]. Existing datasets recorded using motion capture are usually smaller, since that method of data collection is much more expensive and labor-intensive, and generally takes place in a controlled studio environment. Emotion is often acted. The main advantages of the resulting data are that movements are in 3D and of high quality. This method is also the best at capturing finger motion. Datasets instead obtained from pose estimation can be an order of magnitude larger, as they can be sourced from online videos. This enables finding genuine, in-the-wild gestures, and the material can be large enough to include much more diversity. The downsides are relatively lower motion quality (fingers being especially hard) and being limited to 2D motion only. Recent monocular video work has lifted the skeleton motion to 3D [15]. In practice, the amount of data needed is likely to depend on the application at hand. While gathering data from a specific target speaker of interest is usually better than having an equivalent amount of data from non-target speakers, the gesture manifolds of different speakers nonetheless often have a significant overlap. It has been found that, starting from a generative model trained on one individual style, one requires only two minutes of data to fine-tune a gesture generation model for another style [15]. Recent work has also demonstrated the possibility of learning to embed different gesture styles, which then can be used for zero-shot adaptation to the style of an unseen target speaker with no training data of the target speaker [13, 14]. Techniques for augmenting gesture data so as to increase the amount of motion data for training have also been studied, especially mirroring [14, 15].

### Statistical and early machine learning approaches

In statistical systems, the latent relationship between speech and gesture is modeled by the statistics of the underlying gesture distribution, instead of being encoded by an expert. Compared to rule-based systems, statistical approaches make fewer assumptions about the speech-gesture association, instead either pre-computing conditional probabilities for the gesture data or assigning a prior probability distribution. Similar to our approach in Section 4, we focus on a subset of statistical approaches that, in our estimation, are some of the most impactful in the field. The works are described in chronological order to illustrate advances in statistical systems. Kipp proposed one of the earliest statistical systems, which modeled an individual's gesture by analyzing an annotated co-speech dataset and producing a _gesture profile_ [16].
The data was annotated using the video annotation tool ANVIL [17] to define a gesture profile consisting of individual properties such as handedness, timing and communicative function. The gesture profiles were then modeled using statistical models inspired by work in speech recognition and dialogue act recognition [14]. The plausibility of a gesture was estimated using conditional probabilities on gesture bi-grams, and the occurrence of the gesture given semantics from input text. The result was statistical models for an individual's gesture properties like handedness, transitions and timing, forming an individual's gesture profile. The profiles were then used to generate plausible gestures from annotated input speech. The generation process had distinct stages: 1) assigning semantic tags to input text; 2) generating all possible gestures, adding them to an intermediate graph representation, and assigning probability estimates to the graph; 3) filtering and temporal placement of gestures using text-gesture associations and timing profiles, respectively. The final output was an XML action script that could be used in a downstream animation system. Extending this approach, Neff et al. [21] proposed a statistical system that learned gesture profiles but also added a characteristic-specific animation lexicon. The system had two distinct phases. The pre-processing stage started with a video corpus of a character, hand-annotated in ANVIL [15]. The annotation process was similar to that of Kipp [15], but with an additional English-speaking character. Given the annotated data, a gesture profile (a statistical model) and an animation lexicon were created, where the latter consisted of hand orientation, torso posture and data for afterstrokes (i.e. subsequent repeated hand movements after a prominent stroke), for each gesture lexeme. The fully automated generation phase had two distinct paths: 1) _re-creation_, which took in an annotated video as input and could re-create the gestures (observed in the video) in the animation system, useful for validating the annotations; 2) _gesture generation_, which could generate gesture from novel annotated text without the need for video input. Either path leveraged the character's gesture profile to generate a gesture script. The gesture script was used in the animation engine to generate the final animation through either a kinematic or a dynamic simulation algorithm. Bergman and Kopp proposed a different statistical approach for modeling the transformation of speech that describes objects into iconic gestures that resemble said objects [14]. The proposed system generated coordinated speech and gesture by leveraging propositional and imagistic knowledge representations for content planning and concrete speech and gesture formulation. Their work involved dyadic conversations where one speaker gives spatial directions to another after exploring a virtual environment in VR. The study investigated what contextual factors are important in the formation of speech and gesture describing physical objects. As part of their framework, they developed a Bayesian network for gesture formulation. The Bayesian network defined a probability distribution over gesture properties such as indexing, placing, shaping, drawing and posturing. The probability distribution also took into account the idiosyncratic patterns for mapping visuospatial referents onto gesture morphology, i.e. the specific way an individual might index, shape or draw a gesture when describing a referent object.
Gesture formulation resulted in fine-grained features including hand shape, wrist location, palm direction, extended finger direction, movement trajectory and direction. For the final animation, the framework leveraged the rule-based Articulated Communicator Engine (ACE) [13] to realize synchronized speech and gesture. Bergman and Kopp closely followed up with a hybrid framework combining data-driven and model-based techniques to model iconic gestures using Bayesian _decision_ networks [14]. They used a similar corpus of dyadic interactions with spontaneous speech and gesture employed for direction giving and landmark description. The corpus was richly annotated with temporal segments, gesture morphology and references to objects for iconic gestures. Extending their earlier work that used Bayesian networks [14], they used Bayesian decision networks, supplemented by decision network nodes [12]. Bayesian decision networks enabled them to formulate gesture generation as a finite sequential decision problem by combining probabilistic and rule-based components. For example, the decision to include a certain gesture or the morphology of the gesture could be encoded as a decision node (activated by a rule) or a chance node (activated by probability) with a specific probability distribution. To generate a gesture, the Bayesian network defined a probability distribution over gesture morphological features, based on object referent features, discourse context and the previously performed gesture.

Table 1: Overview of major datasets for gesture generation, listing for each dataset its size, number of speakers, motion format, modalities, availability of high-quality finger motion, dialog/monolog setting, and link.

Levine et al. proposed a hidden Markov model (HMM) to select the most suitable motion clip from a motion capture database, by using prosody-based features extracted from the original speech [10]. The trained HMM used prosody cues to select the most appropriate gesture sub-units from the motion capture, ensuring that the chosen sub-units transition smoothly and are appropriate for the tone of the current utterance. However, directly associating prosody with gesture sub-units created a dependence on the quality and amount of training data, which made the system susceptible to overfitting. Levine et al. [10] improved upon the previous system by proposing "gesture controllers" that decoupled the kinematic properties of gestures (e.g. velocity, spatial extent) from their shape. Gesture controllers inferred gesture kinematics using a conditional random field (CRF) that analyzed the acoustic features in the input speech and learned a distribution over a set of hidden states. The hidden states encoded the latent structure of gesture kinematics without regard for the morphology of the gesture, which reduced the number of false correlations and thus alleviated overfitting.
Finally, a Markov Decision Process (MDP) took the hidden states and the distribution over them as input and used an optimal policy (learned via reinforcement learning) to select the appropriate gesture clips. Chiu et al. [14] maintained the use of prosodic features to learn a probabilistic model for gesture generation. They restricted their study to learning gesture types that are associated with prosody, i.e. rhythmic movements (beats). The gesture generator was based on a modified Hierarchical Factored Conditional Restricted Boltzmann Machine (HFCRBM) [15]. They first built a compact motion representation by training a conditional Restricted Boltzmann Machine (CRBM) using an unsupervised learning algorithm. Then the HFCRBM generator autoregressively took in the previous gesture representation and a sequence of audio features extracted from the original speech to generate the gesture representation for every time step, until the full motion sequence was completed. Finally, they smoothed discontinuities between frames by reducing the acceleration of wrist joints if it exceeded a set threshold. However, their approach was restricted to rhythmic gestures and thus did not consider other commonly occurring gesture types such as iconic, pantomimic, deictic, emblematic and metaphoric gestures. Recently, Yang et al. [21] proposed a statistical motion-graph-based system that generated gestures and other body motions for dyadic conversations that were well synchronized with novel audio clips. They constructed a motion graph that preserved the statistics of a database of recorded dyadic conversations. During generation, the graph was used to search for the most plausible motion sequence according to three constraints of audio-motion coordination in human conversations: 1) coordination to phonemic clause; 2) listener response; 3) partner's hesitation pause. The system adapted motion graphs, successfully employed in locomotion [10, 11, 12], for free-form conversation gestures with a lot more stylistic variation. Their conversational motion graph was significantly larger than that for locomotion due to the richness of conversational gestures. Given such a large graph, the system balanced search efficiency and style diversity by leveraging a stochastic greedy search algorithm to find a high-quality animation, well synchronized with the audio.

Statistical models provide more flexibility than rule-based systems and capture the non-determinism found in conversational gestures. In fact, a lot of the statistical principles (i.e. learning a probability distribution over the gesture data, through maximum likelihood estimation (MLE)) are still useful and relevant to most state-of-the-art methods, currently dominated by deep learning-based models. However, statistical systems usually considered a limited number of independent variables based on painstakingly annotated gesture data. Deep learning-based models provide more flexibility through their greater representational capacity, as well as making even fewer assumptions about the statistics of the underlying data. We describe this family of models next.

### Deep learning approaches

Deep learning-based generative models recently gained interest because of their ability to synthesize data from abstract representations of training datasets. They are increasingly prominent in character animation applications, including character control in games, and facial or gesture animation conditioned on speech and text in virtual agents.
Such models typically make few assumptions about the underlying data distribution (except useful inductive biases), and learn their parameters to fit the data through gradient-based optimization of an objective function. The use of deep-learning approaches has moved the field forward substantially in terms of perceived naturalness, but arguably represents a step backwards in terms of communicative efficacy with respect to previous methods, as illustrated in Figure 4. Instead, the main targets of systems based on deep learning have been human-likeness and appropriateness for speech audio and semantic content. The former is the degree to which the generated gesture motion visually resembles believable human behavior, while the latter is how suitable it is for a given speech audio, text input, or other contextual information. Early deep-learning systems ignored semantics, instead focusing on improving human-likeness [1, 12, 13]. Later approaches have tried to incorporate semantics in order to generate meaningful gestures. The first attempts could generate only a handful of such gestures [11, 12, 13]. Although more recent work suggests that progress can [13] and has been made [10], appropriateness remains a challenge. This can be seen from the GENEA Challenge (where GENEA stands for Generation and Evaluation of Non-verbal Behavior for Embodied Agents), which is a recurring large-scale comparison of gesture synthesis systems, whose most recent iteration [21] found that the human-likeness of motion can now reach the level of human motion capture, while appropriateness is still barely above chance. The proliferation of deep learning in conversational gesture generation has led to a large number of approaches that can be grouped based on the input modalities, i.e. audio, text, audio and text, or audio with other non-communicative modalities, and control parameters. We employ this taxonomy to organize our exposition and give a summary of the models and their respective categories in Table 2. We only include approaches that produce hand gestures and were published before the submission deadline for our review. In Sections 5.3.1, 5.3.2, and 5.3.3 we discuss generation approaches that use audio-only, text-only, and a combination of audio and text input, respectively. Section 5.3.4 focuses on approaches that use non-linguistic input, i.e. input other than speech audio or text. Finally, Section 5.3.5 explores approaches that employ control input. The approaches within each modality section are presented in chronological order to reflect the evolution of the field.

#### 5.3.1 Audio input

Hasegawa et al. [17] proposed an autoregressive approach to generate gesture from audio utterances using a bi-directional LSTM [16]. The bi-directional LSTM learned audio-gesture relationships with both backward and forward consistencies over a long period of time. The model was trained with a then novel audio-gesture dataset, collected using a headset and marker-based motion capture [18]. The model predicted a full skeletal human pose from the utterance features input at every LSTM timestep. Temporal filtering was then used to smooth out discontinuities in the generated pose sequences. Kucherenko et al. [19] extended the work of Hasegawa et al. [17], removing the need for temporal smoothing through representation learning with an autoencoder. The proposed model transformed audio input into a gesture sequence in the form of 3D joint coordinates.
They achieved this by (i) learning a lower dimensional representation of human motion using a denoising autoencoder consisting of a motion encoder (called MotionE) and a motion decoder (called MotionD), and (ii) training a novel speech encoder (called SpeechE) to transform speech to the corresponding motion representation with reduced dimensionality. During inference, the SpeechE predicted the motion representations, based on a given speech signal, and the MotionD decoded the motion representations into gesture sequences. However, their approach was deterministic and thus unable to capture the commonly observed phenomenon where a person gesticulates differently at different points of the same utterance. Deterministic generative approaches usually learn their parameters using a regression objective, e.g. L1 (Mean Absolute Error) or L2 (Mean Squared Error). Optimizing with either of those objectives typically forces the model toward learning to generate the mean representation of the data, producing averaged motion for different inputs and resulting in undesirable results; this is usually called regression to the mean. Several approaches avoided this by incorporating probabilistic components into their objectives. Probabilistic components can increase the range of gesture motion in multiple ways, namely: (i) greater range of motion for different inputs, or (ii) stochastic motion for the same input. The most prominent are implicit log-likelihood evaluation via adversarial learning with Generative Adversarial Networks (GAN) [15], explicit log-likelihood evaluation via variational inference with Variational Autoencoders (VAE) [16], and exact log-likelihood evaluation via invertible transformations with Normalizing Flows [19, 20]. GANs aim to do implicit density estimation of the underlying distribution through the interplay of a generator that tries to produce samples that are representative of the data, and a discriminator that strengthens the generator by classifying samples as real (from the distribution) or fake (not from the distribution).
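As a concrete illustration of how an adversarial term is typically combined with a plain regression objective in this setting (a pattern adopted, with many variations, by several of the systems discussed below), here is a minimal PyTorch-style sketch. The network shapes, feature dimensions and the weight `lambda_adv` are illustrative assumptions of ours, not a reproduction of any specific published model.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: any audio-conditioned pose generator and any
# pose discriminator could take these roles.
generator = nn.GRU(input_size=26, hidden_size=128, batch_first=True)   # audio features -> hidden states
to_pose = nn.Linear(128, 45)                                           # hidden state -> 15 joints x 3D
discriminator = nn.Sequential(nn.Linear(45, 64), nn.ReLU(), nn.Linear(64, 1))

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.1  # illustrative weight on the adversarial term


def generator_loss(audio_feats, real_poses):
    """audio_feats: (batch, time, 26); real_poses: (batch, time, 45)."""
    hidden, _ = generator(audio_feats)
    fake_poses = to_pose(hidden)
    # The regression term pulls predictions toward the ground-truth poses ...
    rec = l1(fake_poses, real_poses)
    # ... while the adversarial term rewards poses that the discriminator accepts as
    # real, discouraging the over-smoothed "mean pose" that pure regression favours.
    scores = discriminator(fake_poses)
    adv = bce(scores, torch.ones_like(scores))
    return rec + lambda_adv * adv
```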
Multiple gesture generation approaches added an adversarial objective as a term in a composite loss function, which increased the range of gesture motion although the output remained deterministic for a given audio input. Other approaches instead attempted to capture the diversity in gesticulation, independent of audio information. In a similar vein, Ghorbani et al. [14, 15] used a VAE-based framework for style-controllable co-speech gesture generation conditioned by a zero-shot motion example, i.e., an instance of a motion style unseen during training. Given an audio input and a motion example, they generated an encoding of the audio and a style embedding from the motion, and the two latent codes were used to guide the generation of stylized gestures.
The variational nature of the style embedding enabled them to easily modify style through latent space manipulation or blending and scaling of style embeddings. Moreover, the probabilistic nature of the model enabled the generation of varied gestures for any audio and exemplar motion input. The resulting model performed favorably against state-of-the-art probabilistic techniques [1] in terms of naturalness of motion, appropriateness for speech, and style portrayal. Taylor et al. [17] adapted the conditional Flow-VAE framework [18], combining the advantages of the VAE and normalizing flow architectures, to generate spontaneous gesture movement for speaker and listener roles in a dyadic interaction. They used the Flow-VAE framework for modeling expressive gesture because of its ability to improve the generative capacity of the VAE by estimating the latent space with a highly complex distribution using a normalizing flow, instead of the standard Gaussian. Their autoregressive framework took a set of previously generated gestures, an audio input window and the dyadic role, i.e. speaker or listener, as input. The preceding gestures were encoded into a latent variable and then transformed into a complex distribution using a normalizing flow, conditioned by the audio window and role in the dyad. Their decoder then generated the next gesture based on the latent variable sampled from the complex distribution. The resulting model could generate expressive co-verbal gestures in a dyadic setting, conditioned on the speech audio and the speaker or listener role. Hybrid systems that combine deep learning and database matching components can also help tackle the regression to the mean problem [19, 20, 21, 22]. Indeed, this approach has been used effectively in motion synthesis problems, e.g. game animation where high fidelity motion is crucial [11]. In the context of conversational gesture, the intuition is that modeling the association between high-dimensional audio input and gestures, represented by exact joint positions or angles, using standard regression objectives (L1 or L2 loss) discourages the model from producing otherwise plausible gestures that do not exactly match the ground truth, thus greatly reducing the variety of generated gestures. Alternatively, the audio-gesture association can be modeled by predicting higher-level parameters for gesture motion. Ferstl et al. [12] realized this idea by learning to map audio to gesture via higher-level expressive parameters, specifically gesture velocity, acceleration, size, arm swivel angle, and extent of hand opening. First, they pre-trained a model to associate audio prosodic features to the expressive parameters. Then they predicted gesture timing by extracting pitch peaks in the audio signal. At inference time, the prosodic features were used to estimate the expressive parameters that were in turn used to search for a matching gesture in the database, and the pitch peaks were used to temporally position the matching gesture. Finally, synthetic preparation and retraction phases were added to connect the gestures in the sequence. Another interesting approach for preserving gesture form is through audio-based search in a video gesture database where the gestures are represented by video frames. Zhou et al. [23] explored this idea in a gesture reenactment task by generating a gesture video for an unseen audio input, using gesture frames from a reference video.
They first encoded the reference gesture video as a "video motion graph" - a directed graph where each node represented a video frame and corresponding audio features, and the edges represented transitions. The graph encoded how the reference video can be split and re-assembled into different graph paths. In order to increase graph connectivity, i.e., the diversity of plausible paths, they added synthetic edges based on a frame pose similarity threshold computed using the SMPL pose parameters [13]. Given unseen audio input as a guide, they traversed the graph using a beam search algorithm [14] to find the optimal path, i.e. the order of gesture frames that best matches the speech audio. For graph paths that contain temporally disjoint frames, they trained a pose-aware video blending network to synthesize smooth transitions between the frames.

#### 5.3.2 Text input

Approaches that used audio as the primary modality produced well-timed hand movements that tend to be highly associated with acoustics, largely corresponding to beat gestures. However, the lack of a text transcript means they were not informed by the structure and context inherent in the text, for example, semantic meaning and punctuation. Such structure can help produce more meaningful and communicative gestures. Therefore, next, we describe some approaches that used text as the primary input modality. Ishi et al. [10] proposed a text-based gesture generation approach for controlling a humanoid robot. They modeled the text-to-gesture motion translation by associating words to concepts, concepts to gesture categories (i.e. iconic, metaphoric, deictic, beat, emblem and adapter), and gesture categories to gesture motions. Further, they estimated conditional probabilities to model the association between word concepts and gesture categories, and between gesture categories and gesture motion clusters that were pre-computed with the k-means clustering algorithm. Yoon et al. [24] proposed an encoder-decoder approach that transformed speech text, from a dataset based on TED talks, into a sequence of gestures. They created the TED video dataset by picking video segments showing the speaker's upper body and hands. Then they performed pose estimation using OpenPose [15] and removed segments containing noisy or no estimations. Speech text was converted into a sequence of 300-dimensional word vectors using pre-trained GloVe embeddings [25]. Similarly, pose estimations were converted to 10-dimensional vectors using principal component analysis (PCA). Therefore, co-speech gesture generation became a sequence-to-sequence translation problem from word embeddings to human pose encodings. The encoder part of the network was a bi-directional GRU [23] taking in speech text one word (vector) at a time and capturing bi-directional context. The last hidden state of the encoder was passed into the decoder, also a bi-directional GRU. The decoder also took previous pose estimations to condition the prediction of the next pose, in addition to using soft-attention [1] to focus on specific words when predicting the next pose. Finally, the generated 2D poses were mapped to 3D and executed on the NAO humanoid robot. Recently, Bhattacharya et al. [1] used text transcripts to produce expressive emotive gestures for virtual agents in narration and conversation settings, using MPI-EBEDB, a dataset of actors performing multiple emotion categories (amusement, anger, disgust, fear, joy, neutral, pride, relief, sadness, shame, surprise) [26].
Their approach consisted of Transformer-based encoders and decoders [27], where the encoder took in the text transcript sentences (encoded as GloVe embeddings [25]) to produce an encoding which was concatenated with the agent attributes such as narration/conversation, intended emotion, gender and handedness. This encoded concatenation, together with the previous pose's 3D joint positions, was passed as input to the Transformer decoder to generate the next pose's joint positions. The process was repeated in a recurrent manner until the full pose sequence was generated.

Table 3: Training objective abbreviations in Table 2.

| Objective | Full name |
| --- | --- |
| Adv | Adversarial Loss |
| CCE | Categorical Cross Entropy |
| CC-NCE | Cross-modal Cluster Noise Contrastive Estimation |
| ETC | Edge Transition Cost |
| EM | Expectation Maximization |
| GeoD | Geodesic Distance |
| WGAN-GP | Wasserstein-GAN Gradient Penalty |
| Hamm | Hamming Distance |
| Huber | Huber Loss |
| IR | Imitation Reward |
| KL | Kullback–Leibler Divergence |
| L2 | L2 Distance |
| MAE | Mean Absolute Error |
| MLE | Maximum Likelihood Estimation |
| MSE | Mean Squared Error |
| NLL | Negative Log-likelihood |
| SIMM | Structural Similarity Index Measure |
| TR | Task Reward |
| Var | Variance |
| WCSS | Within-cluster Sum of Squares |

#### 5.3.3 Audio and text input

An interesting trade-off exists between audio-based and text-based gesture generation systems. Audio-based generators have access to intonation and prosody, which helps generate rhythmic or kinematic gestures (e.g. beats), but lack semantic context. Conversely, text-based generators have access to semantic context, which helps generate meaning-carrying gestures (e.g. iconic or metaphoric), but lack intonation and prosodic information. Therefore, combining the audio and text modalities enables a gesture generator to learn to produce semantically relevant and rhythmic co-speech gestures. Although generating meaning-carrying gestures using audio only is theoretically possible, it is unlikely, since prosody is suitable for kinematics but not sufficient to infer shape, which is associated with meaning [11]. As far as we know, meaningful gestures from speech audio alone have not been empirically demonstrated. Instead, combining audio with text appears to be the most promising approach to generating meaningful gestures to date. We, therefore, focus on approaches that combine these two modalities for generating meaning-carrying, communicative gestures. Chiu et al. [14] proposed an approach that combined the text and prosody of the speech to generate co-verbal gestures. Their model, called the Deep Conditional Neural Field (DCNF), was a combination of a fully-connected network, for representation learning, and a Conditional Random Field (CRF), for temporal modeling. For the gesture prediction task, the model took in a text transcript, part-of-speech tags and prosody features as input, and predicted a sequence of gesture signs which were a set of predefined hand motions. Leveraging the representation power of deep learning models for multimodal input (i.e. audio and text) for co-speech gesture generation was the next logical step. In fact, three groups of researchers independently proposed the first deep-learning-based gesture generators that used both audio and text to generate continuous gestures, namely Yoon et al. [14], Ahuja et al. [1] and Kucherenko et al. [15].
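Before turning to the individual systems, the sketch below illustrates the basic fusion pattern that most audio-plus-text generators share: encode frame-aligned acoustic features and word embeddings separately, concatenate the two streams, and decode a pose sequence. The architecture, layer choices and dimensions here are schematic assumptions of ours, not a reproduction of any of the cited models.

```python
import torch
import torch.nn as nn


class AudioTextGestureModel(nn.Module):
    """Schematic audio+text fusion for pose regression; all dimensions are illustrative."""

    def __init__(self, audio_dim=26, text_dim=300, pose_dim=45, hidden=256):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True, bidirectional=True)
        self.text_enc = nn.GRU(text_dim, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(4 * hidden, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, pose_dim)

    def forward(self, audio_feats, word_embs):
        # audio_feats: (B, T, audio_dim)  e.g. prosody or spectrogram features per frame
        # word_embs:   (B, T, text_dim)   word vectors repeated so they align with frames
        a, _ = self.audio_enc(audio_feats)
        t, _ = self.text_enc(word_embs)
        fused = torch.cat([a, t], dim=-1)   # frame-wise fusion by concatenation
        h, _ = self.decoder(fused)
        return self.to_pose(h)              # (B, T, pose_dim) joint positions or angles
```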
We discuss their pioneering work combining audio and text next, followed by subsequent efforts in the area. Yoon et al. [14] proposed a gesture generation approach that combines the tri-modal context of speech, text and speaker identity to produce gestures that were human-like and matched the content and rhythm of the speech. The model processed input speech and text with a speech and text encoder, respectively. The speaker identity was used to sample the intended speaker from a learned style embedding space. Together, the three features (i.e. speech encoding, text encoding, and style) were passed to a gesture generator to produce the sequence of poses. Closely related is the work of Liang et al. [16], whose framework utilized audio and text information in order to generate meaningful gestures by disentangling semantic and beat gestures. Their system consisted of two encoders, one that took in audio and text to encode semantics, and another that took in audio volume and beat to encode non-semantic information. The encoded information from both encoders ensured the disentanglement of semantic and beat gestures, while the decoder took this information and was trained to encourage the generation of meaningful semantic gestures. Ahuja et al. [1] identified two key challenges in an attempt to learn the latent relationship between speech and co-speech gestures. First, the underlying distributions for text and gesture are inherently skewed, which necessitates learning their respective long tails, accounting for rarely occurring text or gestures. Second, gesture predictions are made at the sub-word level, which necessitates learning the relationship between language and acoustic cues that may give rise to, or be accompanied by, a particular gesticulation. So motivated, they proposed the Adversarial Importance Sampled Learning (AISLe) framework, which combined adversarial learning with importance sampling to balance precision and coverage. The model took in speech and text transcripts and performed encoding and alignment between subwords and acoustics, using a multi-scale Transformer [20]. The resulting alignment was passed to the model's generator to predict the pose sequence, and an adversarial discriminator was used to determine if the pose was real or fake. For optimizing the adversarial objective, the AISLe framework scaled the loss function such that rarely occurring gesture samples, the long tail of the distribution, were weighted more than those that are more likely to occur. Kucherenko et al. [15] proposed an autoregressive generative model that combined speech acoustics and semantics to produce arbitrary acoustically-linked or semantically-linked gestures. The key insight of their approach was to envision a gesticulation system that encompasses so-called "representational" gesture types (i.e. iconic, metaphoric and deictic) that convey semantics, and beats that are synchronized with acoustics. Their approach took a concatenation of semantic features that were extracted using BERT [1] and acoustic features represented as log-power mel-spectrograms as input into an encoder. Then they integrated past and future context for each gesture pose frame via a sliding window operation over the encoded speech features. The model generated each pose autoregressively, with each pose conditioned on the three preceding frames to ensure motion continuity.
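The autoregressive rollout just described, in which each new pose is conditioned on a few preceding poses, is a recurring pattern across these systems. As a rough sketch (not the authors' exact implementation), assuming a trained per-frame predictor `step`, the generation loop might look as follows:

```python
import torch


def generate_autoregressive(step, speech_feats, seed_poses, context=3):
    """Roll out poses one frame at a time, conditioning on `context` preceding frames.

    step:         callable mapping (speech frame features, previous poses) -> next pose
    speech_feats: tensor of shape (T, feat_dim), one row per output frame
    seed_poses:   list of `context` initial pose tensors (e.g. a rest pose repeated)
    """
    poses = list(seed_poses)
    for t in range(speech_feats.shape[0]):
        # Conditioning on the preceding frames keeps the generated motion continuous.
        prev = torch.cat(poses[-context:], dim=-1)
        poses.append(step(speech_feats[t], prev))
    return torch.stack(poses[len(seed_poses):])  # (T, pose_dim)
```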
Their extensive evaluation indicated that autoregression for continuous motion and combining audio and text had the most significant positive impact on the quality of the generated gesticulations. Equally inspired by autoregressive generative models, Korzun et al. [1, 1] reimplemented the text-only recurrent framework by [20] to accommodate both text and audio input. The proposed model was a combination of a recurrent context encoder, inspired by [15], that generated hidden states for 3-second audio and text context windows and a recurrent encoder-decoder that took in the concatenated results of the context encoder and used an attention mechanism to condition the generation of the final gesture motion. Similar to Yoon et al. [15], they trained the model using the continuity and variance objectives to ensure fluid and natural-looking gestures. The resulting model produced gestures that were deemed natural and appropriate as part of the GENEA Challenge 2020 [15]. Designing generation systems that produce meaningful gestures is one of the major goals in non-verbal behavior research. Spurred on by this question, Kucherenko et al. [14] investigated whether contemporary deep learning-based systems could predict gesture properties, namely phase, type and semantic features, as a way to determine if such systems can consistently generate gestures that convey meaning. Their model used both audio and text for predicting gesture properties, through two distinct components that predicted the probability for gesticulation and probabilities for the aforementioned set of gesture properties. They conducted their experiments on a direction-giving dataset with a high number of representational gestures [13]. Their experiments showed that gesture properties related to meaning such as semantic properties and gesture type could be predicted from text features (encoded as FastText embeddings [1]), but not from prosodic audio features. Conversely, they found that rhythm-related gesture properties (e.g. phase) could be better predicted from audio features. In order to mimic the communicative intent of co-speech gestures, it is crucial to understand and model the complex relationship between speech acoustics, text, and hand movements. An interesting approach is to group gestures with distinct movement properties in order to find emergent rhetorical categories. Saund et al. [2] investigated this approach by modeling the rhetorical, semantic, affective and acoustic relationships between gestures and co-occurring speech audio and text, for a hypothetical gesture generation system. They first used k-means clustering to cluster speech-gesture pairs into functional domain clusterings (i.e. rhetorical, affective and semantic) based on functional tags generated from third-party natural language parsers. The speech-gesture pairs were refined into sub-clusters based on gesture motion. Therefore each speech-gesture pair belonged to at least one sub-cluster (based on motion), within one functional cluster (based on its assigned functional tags). At run-time, a hypothesized virtual agent would leverage the same pre-trained parsers and clusters to analyze an input speech and text transcription, select a functional cluster and from that a motion sub-cluster. The agent could then either choose an appropriate gesture from a pre-recorded library or the centroid gesture in the motion sub-cluster. Motion graphs, commonplace in conventional animation systems (e.g. 
[15, 16, 17]), can be effective at producing realistic non-verbal behavior because they rely on databases of high-quality motion capture or RGB video. As we discussed before, they were effectively employed for audio-driven gesture reenactment using video-based motion graphs [23]. Zhou et al. continued this trend for audio and text by adapting a motion-graph-based music-to-dance system [20] for co-speech gesture generation [21]. They first built a database of audio, text and gesture clips from 3-tuples of (audio, text transcript, gesture), using a splitting algorithm. For each audio clip, they generated a style signature using StyleGestures [1], and a rhythm signature using a binary encoding scheme that denotes the presence of words by leveraging the word-level timing information in the text transcript. For the corresponding gesture motion, they generated a style signature, parameterized by the same attributes as StyleGestures [1] (e.g. wrist speed, radius and height), and a rhythm signature using a similar binary scheme that denoted the presence of pausing, sharp turning or a stroke phase of the gesture. During synthesis, they computed the rhythm and style signatures for input audio and text and used a graph-optimization algorithm to find gesture clips that closely matched the generated style and rhythm in terms of Hamming distance, and minimized the motion transition in the graph. This model performed on par or better than motion capture data in terms of Naturalness in the GENEA Challenge 2022 [23]. Style transfer is a widely adopted optimization technique in deep learning for blending visual content and style, e.g. given a content image and a reference image that specifies the style, adjust the content image to match the style [1]. In the context of co-speech gesture generation, it might be desirable to transfer the speaking style of one speaker to the predicted gestures of another. Ahuja et al. [1] learned unique style embeddings for multiple speakers that enabled either generation of gestures consistent with the original speaker in the audio input, or style transfer by combining the audio input of one speaker with the style embedding of a different speaker. Although they proposed the PATS data where multiple modalities such as audio, gesture pose and text have style and content, they focused on gesture pose style to learn _unimodal_ speaker-specific style embeddings. Fares et al. [15] leveraged the multiple modalities in the PATS dataset to learn _multimodal_ style embeddings based on audio, text and gesture pose input. Their framework consisted of a speaker-style encoder that used speaker audio, text and gesture pose to learn a multimodal style embedding, and a sequence-to-sequence decoder that generated gestures based on audio and text, and conditioned on the desired speaker's style embedding. Furthermore, unlike the work of Ahuja et al. [1] that required the entire speaker's gesture data to learn the speaker's style embedding, their trained speaker-style encoder could generate style embeddings in a zero-shot manner i.e., for speaker styles not seen in the training set. A key tenet of semantically meaningful gestures is that they are appropriate for the given utterance. To achieve this, there needs to be a greater emphasis on generating precise gestures using audio and text as grounding (i.e. the appropriateness of the gesture to the utterance), versus generating diverse gestures. Lee et al. 
[1] investigated this approach and made an interesting observation about human gesticulation, that multiple semantically different utterances are often accompanied by the same gesture. They thus proposed a contrastive-learning framework that constrained the mapping of semantically different utterances to a smaller subset of relevant high-quality gestures. They introduced a novel contrastive learning objective that preserved similarities and dissimilarities of gestures in the latent representation. The objective ensured that latent language representations of two semantically different utterances were close together if they were accompanied by the same gesture. They first clustered gestures based on similarity or dissimilarity, then created positive (similar gesture poses) and negative (dissimilar gesture poses) required for the standard contrastive learning objective. Finally, they learned gesture-aware embeddings via a contrastive and adversarial objective. The resulting embedding space was used to generate gestures that were semantically relevant and closer to the ground truth. When designing 3D avatars, it may be desirable to have a holistic animation system that includes facial and full-body movement. Combining audio and text modalities can be effective at achieving this goal because of their rhythmic and semantic properties. Zhuang et al. [23] investigated this approach by proposing a hybrid system consisting of Transformer-based encoder and decoder modules, and motion-graph retrieval module to generate facial motion and full-body motion that included gestures. Their encoder used both audio and text, in the form of phoneme labels and Mel Frequency Cepstral Coefficients (MFCC) and Mel Filter Bank (MFB) features, to generate 3D facial parameters for synchronous lip movement. Simultaneously, the decoder used speech features, previous expression motion and semantic tags to generate 3D facial parameter for expression. The motion-graph retrieval sub-system used speech audio and text to find the most appropriate body motion segments, including gesture, that correspond to the text semantics and rhythm in the audio. Finally the facial and body motion were used to drive a skinned polygonal model. #### 5.3.4 Non-linguistic modalities Several deep learning-based systems complemented input audio or text with additional information that could reasonably be deemed relevant to co-speech gestures. This included speech context, speaker style, discourse or an interlocutor's movements. Sadoughi and Busso [20] proposed a system that bridges rule-based and learning-based techniques in order to select gestures that are communicative and well synchronized with speech. They proposed a Dynamic Bayesian Network (DBN) which took in speech and two constraints to condition the generation. The constraints were: 1) discourse function, which restricts the model to behaviors that are characteristic of that discourse class (e.g. questions); 2) prototypical behaviors, which restricted the model to certain target gesticulations (e.g. head nods). Given constraints on prototypical behaviors, the approach could be embedded in a rule-based system as a behavior realizer creating head and hand trajectories that are temporally synchronized with speech. In a dyadic conversation between interlocutors, there can be a lot of spontaneous non-verbal behavior that is influenced by the nature and tone of the interaction. 
Leveraging the co-adaptation of non-verbal behavior between interlocutors present in human-to-human interactions, cf. [1, 2, 3], can enable virtual agents to be naturally conversational and collaborative. Ahuja et al. [1] proposed the Dyadic Residual-Attention Model (DRAM), a framework that could interactively generate an avatar's gesticulation conditioned on its speech and also the speech and gesticulation of a human interlocutor in a telepresence setting. In order to generate natural behavior, the avatar had to consider its own speech as well as the speech and gesticulation of the human. The DRAM model generated natural dyadic behavior by taking in the speech and pose history of the avatar as well as the speech and pose history of the human to adapt the avatar's gesticulation accordingly. The idea of conditioning the motion of a deep-learning-based agent on interlocutor speech and motion has subsequently been used in several other works. Jonell et al. [1] used a model based on normalizing flows for generating head motion and facial expression, while Nguyen and Celiktutan [24] used conditional adversarial learning to drive full-body skeletons. Both of these works found that statistically significant improvements in generated behaviors were achieved by being interlocutor-aware. A similarly interesting dyadic scenario is human-robot interaction where one of the interlocutors is a social robot. In this case, the robot must exhibit natural non-verbal behavior in order to be engaging and interesting. Therefore, it is desirable for the robot to mimic human non-verbal motion with gestures that are natural and communicative. Deichler et al. [1] investigated this idea by proposing a combination of a data-driven and physically-based reinforcement learning (RL) framework to generate pointing gestures learned from motion capture data. Given a diverse motion capture dataset of pointing gestures and corresponding targets, they trained RL control policies adapted from [1, 2] to imitate human-like pointing motion while maximizing the reward based on pointing precision. Automatic synthesis and animation of gestures that accompany affective verbal communication can endow virtual agents with emotional impetus. Bozkurt et al. [1] directly mapped emotional cues in speech prosody into affect-expressive gestures. They investigated the use of three continuous affect attributes (i.e. activation, valence and dominance) for the speech-driven synthesis of affective gesticulation. They proposed a statistical model based on hidden semi-Markov models (HSMM) where states were gestures, and observations were speech prosody and continuous affect attributes. They first estimated the affective state from speech prosody and then used the state and speech prosody to predict gesture clusters. The gesture segments were animated using a unit selection algorithm [1], and discontinuities were smoothed using an exponential smoothing function. Finally, the smoothed sequence was animated in Autodesk MotionBuilder. Text encodes important semantic information, potentially useful for conveying meaningful emotion through gesture, although it encodes fewer cues about emotional state compared to audio, e.g., intonation and speech pauses. An interesting approach is to combine text with an intended emotion for affective gesture generation. Bhattacharya et al. [1] pursued this approach by combining text transcripts associated with narrative or conversational acting and emotion labels, to produce expressive emotive gestures for virtual agents. 
The emotions represented were amusement, anger, disgust, fear, joy, neutral, pride, relief, sadness, shame and surprise [24]. Their approach consisted of a Transformer [24] encoder and decoder, where the encoder took in the text transcript sentences, intended emotional state and agent attributes (e.g. narration/conversation, intended emotion, gender, handedness). The encoder output, concatenated with the previous pose's 3D joint positions, was passed as input to the Transformer decoder to generate the next pose's joint positions. The process was repeated in a recurrent manner until the full affective pose sequence was generated. A speaker's identity or style can affect how they gesticulate, as some speakers gesture a lot while others rarely do. Moreover, they may also prefer particular gesture forms, and use different hands or gesture sizes. Modeling such variation in non-verbal behaviour can help make virtual agents seem unique and have a personality. To this end, Yoon et al. [24] used speaker identity to guide gesture generation that matched the speaker's style. Their adversarial approach combined the tri-modal context of audio, text and speaker identity to produce gestures that were human-like, and matched the content and rhythm of the speech. The model processed input audio and text with an audio and text encoder, respectively. The speaker identity was used to sample the intended speaker from a learned style embedding space. Together the three features (i.e. audio encoding, text encoding, and style) were passed to a gesture generator to produce the sequence of poses. Similarly, Ahuja et al. [1] learned a mixture of adversarial generators, representing diverse gesticulation styles of speakers from talk-show hosts, lecturers and televangelists. Learning speaker-specific generators enabled one speaker's style to be aligned with, or transferred to, the audio of another speaker. Developing robust deep-learning-based gesture generators requires large amounts of diverse gesture data from real-world scenarios, captured either via motion capture or pose estimation from videos. However, capturing or estimating hand gestures is very challenging because of the intricate finger motion, relatively small size of hands with respect to the whole body and frequent self-occlusions [11, 12, 13, 14]. In contrast, capturing body motion (up to and including the arms) is less error prone because the joints are further apart and the articulations are relatively simpler. Therefore, the "upper body" motion as a modality can be an informative prior for generating conversational hand gestures. Ng et al. [12] investigated this idea while making the observation that body motion is highly correlated with hand gestures. Their proposed approach took in 3D upper-body motion (up to the wrist) and predicted 3D hand poses. In addition to upper-body motion, the model could take in 2D images of hands and produce the corresponding 3D hand pose estimations. Similar to [18], they used a combination of an L1 regression loss for the model training signal and an adversarial loss to ensure realistic motion. The learned body-motion-to-hands correlation was versatile enough for several use-cases, namely conversational hand gesture synthesis, single-view 3D hand-pose estimation and synthesizing missing hands in motion capture data and image-based pose estimation data. 
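As an illustration of the kind of training objective just described — an L1 regression term toward ground-truth hand poses combined with an adversarial term that keeps the predicted motion realistic — the sketch below shows one plausible way to combine the two losses. The weighting factor and the assumed discriminator outputs are stand-ins introduced for the example, not details taken from the cited work.

```python
import torch
import torch.nn.functional as F

def generator_loss(pred_hands, gt_hands, d_fake_logits, adv_weight=0.1):
    """L1 regression toward ground-truth hand poses plus a non-saturating
    adversarial term that rewards fooling the discriminator.
    adv_weight balances realism against regression accuracy (assumed value)."""
    l1 = F.l1_loss(pred_hands, gt_hands)
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return l1 + adv_weight * adv

def discriminator_loss(d_real_logits, d_fake_logits):
    """Standard GAN discriminator objective: real hand motion -> 1, generated -> 0."""
    real = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake
```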
#### 5.3.5 Control input Although control can take either linguistic or non-linguistic forms, it is distinct because it can convey the explicit design and execution intent of an animator. Multiple works in motion synthesis use control as an additional input either during the training phase or the inference phase of learning-based models (e.g. [15, 16]). Typically, during training, the control signal is used to train the system to generate animations with certain biomechanical constraints such as posture, gait, etc. During inference, control may be introduced to impose style-related constraints [17] or user input [15, 16, 17]. In the context of conversational gesture, Alexanderson et al. [1] trained a probabilistic model that generated spontaneous co-verbal gesture that was conditioned on control constraints such as wrist height, radial extent and handedness. However, the constraints were introduced at training time, meaning modeling a new constraint required re-training the entire model. Habibie et al. [18] provided a more flexible approach. They first learn a speech-to-gesture motion search through a kNN algorithm, and then refine the motion using conditional GAN. Style control can be exerted at runtime by dynamically restricting the portion of the database that the kNN algorithm is run on, allowing style variation even within an extended utterance without the need to retrain. Control can also be imposed by implicitly specifying the desired gestures by learning emergent prototypes of gesture shape or form. Qian et al. [19] explored this idea by learning conditional vectors, so-called "template vectors", that could determine the general appearance and thus narrow the potential range of plausible gestures. Their framework took in audio and a zero initialized condition vector, through a 1D UNet-based autoencoder, in order to generate the corresponding gestures as 2D joint positions. During training, they periodically updated the condition vector, through back-propagation, using the gradients computed on the L1 regression loss between the generated and ground-truth gestures. They regularized the template vector space through the KL-divergence between the vectors and a normal distribution. They also separately pre-trained a VAE to reconstruct ground truth gestures and used the resulting latent space to encode gestures into template vectors. At test time, they sampled arbitrary template vectors, either learned through back-propagation or extracted by the pre-trained VAE, to generate diverse gestures. Animators typically want to specify high-level style parameters to convey design intent e.g. energetic oratory gesticulations or subdued gestures to convey sadness. Additionally, it is desirable to specify the style once in the workflow and for the animation system to generate arbitrarily many motions for that specification. However, there is a gap between desired abstract design intent and existing deep-learning-based style control systems that tend to rely on biomechanical constraints such as wrist speed, radius or height [1, 18]. Style specification is also not data efficient, requiring as many samples as the size of the training set for the model to learn a style [1, 1]. We conclude this section by discussing several works that proposed approaches for data-efficient style specification [15, 16, 17]. Ghorbani et al. [15, 16] proposed a framework that improves on high-level style portrayal by using exemplar motion sequences that demonstrate the intended stylistic expression of gesture motion. 
Their framework was able to efficiently extract style parameters in a zero-shot manner, requiring only a single example motion, and was able to generalize to example motions (and therefore styles) unseen during training. Fares et al. [17] used an adversarial framework to learn a speaker-style encoder that could generate speaker-specific style embeddings from novel multimodal inputs - audio, text and gesture pose - not seen during the training phase. The framework generated co-speech gestures in a style that is either consistent with the original speaker in the audio or with a different speaker, depending on the chosen style embedding. Ahuja et al. [1] proposed an adversarial domain-adaptation approach for personalizing the gestures of a source speaker with plenty of data, with the style of a target speaker with limited data, using only 2 minutes of target training data. Given a model pretrained on a large co-speech gesture dataset, their framework could adapt the model's parameters using a smaller target dataset by modeling the cross-modal grounding shift, i.e., the change in distribution of speech-gesture associations, and the distribution shift in the target gesture space. The approach's ability to identify distribution shifts between the source and target domains for parameter updates enabled the model to extrapolate to gestures in the target distribution without having seen them in the source distribution during pretraining. ## 6 Key Challenges of Gesture Generation Animating co-verbal gestures is still a very challenging problem because gestures are spontaneous, highly idiosyncratic and non-periodic. Rule-based approaches generate well-formed gestures by leveraging recorded motion, but are inflexible and lack gesture diversity. Additionally, the hand-designed rules are non-exhaustive and often prescriptive, and hence may not be reflective of gestures which occur naturally and spontaneously. Data-driven approaches improve on diversity and flexibility but tend to produce marginally natural gestures that appear more like well-timed hand waving, are not communicative and have little meaning. Although state-of-the-art systems employ speech and/or text information, they still do not handle semantic grounding of gestures properly, evidenced by gestures that seem to lack meaningful information when compared to the ground truth. Furthermore, the probabilistic nature of gestures, their idiosyncrasies, and their rich semantic content make the evaluation process especially challenging and subjective. In this section, we discuss the limitations of the current work and possible future directions in the context of what we view as the key challenges of gesture generation, namely: 1. evaluation (in Section 6.1), 2. data (in Section 6.2), 3. human-like gestures (in Section 6.3), 4. multimodal grounding (in Section 6.4), and 5. multimodal synthesis (in Section 6.5). ### Evaluation Evaluation is of central importance to gesture generation, both for developing co-speech gesture generation systems and for assessing their performance and capabilities in various aspects, as well as those of the field as a whole. However, evaluating gestures is challenging due to the stochastic nature of gestures and the highly subjective nature of human gesture perception. A comprehensive review of evaluation practices in gesture generation can be found in [22]. We recommend that readers consult that review regarding best practices, but also provide an overview of key open challenges in gesture evaluation here. 
#### 6.1.1 Subjective Evaluation One important aspect to evaluate for gesture-generation systems is the human-likeness of the generated gestures, which is measured and compared through human perceptual studies, often with comparable stimuli presented side by side as in e.g. [22, 23, 24]. On the other hand, evaluating the other aspects such as the appropriateness and/or specificity of generated gestures in the context of speech and other multimodal grounding information (see Section 6.4) is quite challenging, especially since differences in the human-likeness of the motions being compared tends to interfere with perceived gesture appropriateness (cf. the results in [22]). To alleviate this challenge for appropriateness, a new evaluation paradigm of matched vs. mismatched gesture motion has recently been proposed [25, 26, 27, 28]. In this setup, human participants are asked to choose between two motion clips that both were generated by the same system, and therefore have similar appearance and human-likeness, but where one clip is intended to be appropriate to the situation (e.g., the motion in it corresponds to the actual speech audio in the video) whereas the other is chosen at random (e.g., it was generated by feeding unrelated speech audio into the same system instead, and does not match the actual audio track). The extent to which humans are able to identify the video that matches the situation can be used both to probe the strength of grounding in different modalities, and to assess gesture appropriateness for speech, rhythm, interlocutor behavior, etc., while controlling for human-likeness. We expect this methodology to gain wider adoption and advance the state of the art in subjective assessment of different aspects of co-speech gestures. Another compelling area for future work is to evaluate gesture generation in actual interactions, since the ultimate goal of embodied conversational agents is to enhance human-computer communication and interaction. Initial studies [22, 23] have found that embodied agents that perform gestures generated by data driven models as opposed to performing no gestures, attract more attention from the audience. A larger attention span on a gesticulating agent is indicative of a more engaging communicative quality of gestures and opens doors to evaluating gesture generation in a more natural setting. Although the situated and time-demanding nature of such interactions, coupled with their reliance on many non-gesture components necessary to create interactivity (e.g. human wizards or automatic speech recognition, dialogue systems, and text-to-speech), make proper interactive evaluation challenging and seldom done, it is an important long-term goal for evaluations in the field. Given the difficulties in comparing different research papers in the field, we think that controlled, large-scale comparisons [22, 23] with open data and materials are going to play an important role to develop the co-speech gesture field and its evaluation practices in the shorter term. This is similar to the role challenges have played in the development of text to speech [22] and the wide use of leaderboards and benchmarks across deep learning today. #### 6.1.2 Objective Evaluation While subjective metrics from appropriately designed user-studies are the gold standard in co-speech gesture evaluation [22], they are expensive and time consuming, and thus lack scalability. 
There is therefore interest in objective metrics to automatically assess synthetic motion, for example its human-likeness. Objective metrics are useful to measure progress during model development in a compute-heavy, data-driven learning setup. A natural metric is accuracy of prediction (i.e., how often the predicted position of a joint is within some tolerance of the joint position in a human motion capture clip), which is often called the Probability of Correct Keypoints (PCK). However, this quantity is often not indicative of performance due to the one-to-many nature of the gesture-generation problem. Two examples of human motion for the same speech might involve very different joint positions, and thus have low mutual agreement. Measuring the mean squared error (MSE) between generated motion and human motion capture suffers from the same issue. Statistics of motion properties such as acceleration and jerk have been used as an alternative for quantifying and comparing generated gesture distributions [14], but there is no compelling evidence that these metrics correlate with subjective assessments of motion human-likeness. To improve the measurement of distributional similarity of gestures, new objective quality metrics based on innovations from image processing, namely the Frechet Inception Distance (FID) [15] and the Inception Score [16], were proposed in [1], [2] and [1], respectively. Among these proposals, only [2] computes the Frechet distance in a learned space. There has also been work in learning to estimate the human-likeness of gestures from databases of gesture motion and associated subjective ratings data [1]. However, learning to predict human preference can be difficult even from relatively large training databases, as seen in similar research into predicting the subjective ratings of synthetic speech [17]. Since the above approaches depend on motion data only, they can only give an indication of whether or not generated motion is statistically similar to the human motion capture in the database, but not how appropriate the motion is for the context in which it occurs (whether it is _grounded_ in that context). The methods can therefore not assess whether or not the motion is synchronized with the co-occurring speech, whether the motion is semantically relevant, etc. In general, unlike human-likeness, not many techniques have been proposed for objectively quantifying properties like gesture diversity or different kinds of motion appropriateness. One exception is the recent Semantic Relevance Gesture Recall (SRGR) metric from [1], which proposes to quantify the semantic relevance of gesture by using semantic scores, annotated in the speech text data, to weight the probability of correct keypoints between the predicted and ground-truth gestures higher when the ground-truth gesture has a high semantic score. This is a step in the right direction for evaluating semantic appropriateness, but may suffer from the same issues as regular PCK due to the idiosyncratic, one-to-many nature of gesticulation. Given the impact that the Inception Score and the Frechet Inception Distance have had in driving progress in image generation, reliable metrics that estimate gesture human-likeness and especially appropriateness for e.g. the rhythm and semantics of co-occurring speech are an important continuing challenge, where recent and future innovations are likely to have significant impact on the field. 
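To make the two families of metrics discussed above concrete, the sketch below computes a Frechet distance between Gaussians fitted to real and generated gesture features (the FID recipe transplanted to motion or latent features) and a simple PCK score. The choice of feature representation and the tolerance threshold are assumptions; published variants differ in both.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets of shape
    (num_samples, feature_dim), e.g. pose statistics or learned embeddings."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

def pck(pred: np.ndarray, target: np.ndarray, tol: float = 0.1) -> float:
    """Probability of Correct Keypoints: fraction of predicted joints within
    `tol` (same units as the data) of the reference; arrays are (frames, joints, 3)."""
    dist = np.linalg.norm(pred - target, axis=-1)
    return float((dist < tol).mean())
```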
### Data Compared to machine-learning applications in text, speech, and images, gesture-generation is currently a data-limited field. A particular bottleneck is finger motion, which is difficult to capture accurately even through motion capture; cf. Table 1. When finger motion is unreliable or unavailable, a possible mitigation might be to predict finger motion from other information, for example the rest of the body as in [15]. In general, motion capture data is high quality, but laborious to capture, particularly when considering large-scale data corpora. Other issues arise due to the high variation in gesture behavior. It can vary based on the individual, the environment, the number of people interacting, their emotional state and the topic of the conversation. Some of this variation is grounded in information that cannot be effectively recorded because it, e.g., is internal to a speaker (such as their emotional state), or that is rarely captured, such as properties of the space in which an interaction is taking place. But even if one were to capture or control for many of these sources of variation, a great diversity in gesture behavior and realization would persist, which will be difficult to cover in any database we can record. In the long term, if we can achieve sufficiently reliable 3D gesture extraction from monocular, in-the-wild online video, that will be a game-changer for the field of co-speech gesture generation. It promises to have a transformative impact on both perceived authenticity and model capabilities, similar to how very large datasets for deep learning have powered recent advances in generative models for text and images, such as GPT-3 [2], DALL-E [21, 22], and Stable Diffusion [23]. At present, works that study the use of in-the-wild data for gesture synthesis exist, for example [24, 25, 26, 27], but the quality of the data and the gestures do not yet amount to such a leap forward. ### Human-Like Gestures The most prominent research target in deep-learning-based co-speech gesture generation has long been perceptual quality. This is similar to the focus on perceptual surface quality in other areas such as image generation [14, 21, 22, 23] and speech synthesis [28, 29]. One reason for this focus might be that perceptual surface quality is easier to estimate using standardized procedures, compared to quantities such as "gesture appropriateness for speech". See, especially, the rapid quality improvements in the image-synthesis field, once reasonable objective metrics such as the Inception Score [15] and the Frechet Inception Distance [15] became available. Just like deep generative methods in general have advanced greatly in recent years, there is strong evidence from large evaluations that the human-likeness of the best gesture-generation systems is improving as well [24, 25]. The better the visual quality of the avatar and the greater the range of expressive motion, the easier it should be to spot differences between natural and synthetic motion. From this perspective, head motion (which only has three degrees of freedom) might for example be easier to make indistinguishable from human head motion than it is to generate convincing arm and finger motion. In this light, the achievement of GestureMaster [22] in the GENEA Challenge [24] is particularly noteworthy, since the synthesized upper- and full-body gestures produced by this model were rated higher than the original motion capture from the human speaker. 
Although a very impressive result, this may partly be attributed to the presence of some motion clips with motion-capture artifacts, especially for the fingers, that may reduce the perceived human-likeness of the notional human reference motion. At the same time, even "high quality" gesture motion on a high-fidelity avatar is still judged as being far from human: in the GENEA Challenge 2022 [24], neither the human motion capture nor the best-performing system came near the rating of 100 that would correspond to being "completely human-like". More specifically, the median human-likeness ratings of the best-performing synthesis system were 69 for upper-body motion and 71 for full-body motion, with scores of 63 and 70 for human motion capture, respectively. Our statement comes with several caveats. Some of the gap up to a score of 100 might be attributable to shortcomings of motion capture when it comes to capturing the full range of human expression. For example, how an avatar moves, and its lack of face, mouth, gaze and lip motion, can impact the perceived visual quality of the avatar. Even in the case of speech synthesis, where recreating human behavior is as easy as playing back an audio recording, it is well known that humans tend to rate the human-likeness of high-quality recordings of human speech as around or below 4.5 on a 5-point scale; see for example the naturalness scores in the large and careful evaluation in [14]. Complete human-likeness may thus in practice be achieved at a score below the maximum on any given rating scale. All that said, we believe that human-likeness can and will improve further in the future, especially with more accurate motion capture and more lifelike avatars to display motion on. As for the path that gesture generation will take towards achieving new heights in human-likeness, we can look to history, and to other fields. Data-driven generative modeling like [1, 19, 20, 21] took over as the state of the art in co-speech gesture generation with the advent of publicly available motion capture datasets suitable for training deep-learning architectures. Since then, a variety of deep generative approaches have been applied (see Table 2), and human-likeness keeps improving [21, 22]. There is no doubt interesting work to come in applying recent diffusion models [23, 24, 25], already considered for general motion synthesis [26], to gesture generation. While generated gestures from data-driven machine learning models are convincing, a lack of large-scale gesture datasets currently limits the human-likeness of these approaches. Hence, in the short term, we may expect hybrid systems such as GestureMaster [25, 26] to be the leaders in human-like gesture generation. Specifically, these are systems where machine learning decides which general properties are needed of the gestures, but the actual gesture motion is primarily realized by assembling pre-recorded motion clips and frames, like in motion graphs [2, 2, 27] and motion matching [28]. In the long-term, however, purely deep learning models are likely to take over. This would match the trajectory followed by text-to-speech synthesis, where hybrid systems once gave the best perceptual quality [26], but pure deep-learning-based approaches trained on very large speech databases have recently taken the crown [29]. ### Multimodal Grounding Visually human-like gesticulation is not the only goal of gesture generation. 
As discussed in the introduction to this article, a key goal with generating co-speech gestures is to facilitate communication, in much the same way as gestures enrich human communication. This requires gestures that not only exhibit human-like movement on the surface but also are appropriately _grounded_ in the context of the interaction, so that they can contribute to it. In more engineering-oriented terms, systems must take many relevant modalities as input, and make use of this information in an adequate way, to obtain synthetic gestures that can fulfill the same communicative roles as human gesticulation does. It can be difficult to capture this information both in training data and at synthesis time, as well as to make meaningful use of it in the gesture generation. Grounding information can take many forms. Consequently, this section discusses challenges in grounding gesture-generation in a variety of relevant multimodal aspects (system inputs), beginning with aspects internal to the speaking agent, and then discussing grounding in other parties in the conversation as well as in the surrounding space. More specifically, we cover grounding in 1. temporal information (Section 6.4.1); 2. semantic content (Section 6.4.2); 3. speaker identity, personality, emotion, and style (Section 6.4.3); 4. interlocutor behavior (Section 6.4.4); and 5. spatial information (Section 6.4.5). We also discuss some derived challenges posed by the often weak correlation between grounding information and the gesture motion (Section 6.4.6), and how gestures may be grounded in the creative intent of a system designer (Section 6.4.7). #### 6.4.1 Temporal Grounding Gestures are temporal, which is a result of their correlation with a heavily temporal acoustic modality, along with the fact that they might depict occurrences or trace out paths or shapes over time. The rhythmic nature of gestures (i.e. beat gestures) in the context of acoustic prosody has been studied heavily since the era of rule-based gesture synthesis [23, 24]. Moving forward to data-driven synthesis, some approaches explicitly rely on extracted prosodic features [21], while others [18, 2] learn implicit embeddings from acoustics, of which prosody is one of the key components. It seems clear that gesture production must be grounded in the rhythm of audio data, and appropriate beat gestures will be challenging to achieve from text transcriptions alone, without timing information [21]. Alternatively, both audio and gesture must be synthesized to have comparable rhythmic structure. #### 6.4.2 Semantic Grounding Beyond the rhythmic nature of gestures, there is often a semantic meaning associated with the performed gesture. The small size of gesture-generation databases, and the complicated relationship and weak correlation between speech semantics and gesture form (see Section 6.4.6), mean that it is unrealistic to expect systems to learn to generate semantically appropriate gestures driven by speech acoustics alone. Text, on the other hand, is a compact way to represent much of the semantic content behind co-speech gestures, and has been heavily studied since the era of rule-based gesture synthesis [23] as well as in data-driven synthesis [2, 2, 24, 25, 26, 27]. Current data-driven approaches typically attempt to gain semantic awareness by relying on deep-learning-based language models trained on large amounts of text, such as [25, 26]. 
Recent large language models based on large amounts of text [BMR*20] have indeed been capable of generating text with surprisingly coherent semantics, suggesting that they can capture lexical meaning to a significant extent. While the inclusion of text has improved human perception of automatically generated gestures [KJvW*20, ALIM20, YCL*20, AGL*22], it is still not trivial to measure the semantic content of gestures (see the discussion in Section 6.1). Hence, it is unclear how much (if any) of the improved human perception can be attributed to the semantic awareness created due to the use of language models, nor how much of the bottlenecks that exist may be removed with continuing progress in neural language models. More broadly, there is a need for gesture synthesis models to perform better with regard to semantics. Gesture is most powerful when it conveys information, and doing this effectively has been a challenge for most deep learning systems; cf. Figure 4. #### 6.4.3 Identity, Style, Emotion, and Personality Co-speech gestures are idiosyncratic. The manifold of gestures performed by a speaker are not just a function of the content of the speech, but are also dependent on the identity, emotional state and the context of the speaker. Generating personalized gestures based on speaker identity became possible with the influx of large scale multi-speaker datasets [YCL*20, ALNM20]. Several GENEA Challenge 2022 [YWK*22] systems also make use of speaker identity. A deeper analysis of the impact of speaker identity input [KNN*22] shows that different speakers have different gesture-property prediction certainty, evoking even more interest in the idiosyncrasies of co-speech gestures. More recently, it was also shown that a short motion clip can be used for style control in "zero-shot style adaptation" [FGP022, GFC22, GFH*1]. For many applications, it is desirable for the designer to be able to control the nature of the motion. This goes beyond replicating idiosyncratic motion recorded of an individual to being able to specify novel characters. We are far from having ways to author a character with an imagined personality for a particular application. Apart from the speaker identity, the emotional or affective state of a speaker also impacts the gestures performed by them. A striking example of this is the large range of expressive motion variation with the same lexical message explored in the Mimebot data [AONB17]. Building emotionally aware embodied agents is a common research direction [CBFV16, SZGK18]. More recently, data-driven models have been explored where affective cues were learned using a dedicated encoder in an adversarial setup [BCRM21] to imitate these patterns of affective behavior. It is important to be able to drive these emotions in a way that is consistent with a character's personality and to be able to shift mood and emotion over time. One way forward might be to leverage findings from the literature of gesture and motion perception, which has identified many useful properties of gesture motion that correlate with the perception of personality [Lip98, KG10, SN17] and emotion [NLK*13, CN19]. By changing these properties in synthesized gestures, we may exert some control over the perceived speaker personality and emotion [AHKB20, HES*22]. Again, speech synthesis provides an analogy, where it was recently shown that simple and easy additions of filler words and pausing can meaningfully and reliably be used to alter listeners' perception of speaker certainty [KLSG22]. 
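As a minimal illustration of the speaker-identity conditioning discussed in this subsection, the sketch below looks up a learned per-speaker style embedding and concatenates it with speech features before decoding a pose sequence. The plain embedding table, the recurrent decoder, and all dimensions are simplifying assumptions rather than any specific published architecture.

```python
import torch
import torch.nn as nn

class StyleConditionedDecoder(nn.Module):
    """Sketch: a per-speaker style vector conditions pose decoding, so the same
    speech input can be rendered in different speakers' gesturing styles."""
    def __init__(self, num_speakers, speech_dim, style_dim=16, hidden=256, pose_dim=45):
        super().__init__()
        self.style = nn.Embedding(num_speakers, style_dim)
        self.rnn = nn.GRU(speech_dim + style_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, speech_feats, speaker_id):
        # speech_feats: (batch, frames, speech_dim); speaker_id: (batch,)
        style = self.style(speaker_id).unsqueeze(1)          # (batch, 1, style_dim)
        style = style.expand(-1, speech_feats.size(1), -1)   # repeat over time
        h, _ = self.rnn(torch.cat([speech_feats, style], dim=-1))
        return self.out(h)                                   # (batch, frames, pose_dim)
```

Swapping `speaker_id` at inference time is then one simple way to obtain a rudimentary form of style transfer for the same speech input.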
#### 6.4.4 Interlocutor-Aware Gestures While non-verbal behavior is impacted by the internal state of the speaker, the external context also guides the types of gestures a speaker might perform. In a dyadic conversation, the model must be aware of the behavior of the interlocutor while generating the relevant gestures [AMMS19, JKHB20, NC22, YYYH20]. This includes modeling appropriate listener behavior as well as speaker behavior. Characters must modify their behavior to react to the content, mood and timing of interlocutors. Characters must be able to be surprised, angered, pleased, etc. based on what their interlocutor may say. Given the increasing availability of dyadic datasets with motion capture for both conversational parties, we expect to see more research in this direction in the next few years. #### 6.4.5 Spatially Aware Gestures Even more generally, the context could also include spatial understanding of the environment. For example, the correctness of deictic gestures relies on information about objects and directions in a scene. To carry communicative value, most of these gestures will therefore require access to visual and/or spatial information beyond what may be contained in the speech - think about a phrase such as "You need to go _that_ way", which completely lacks information about which direction the system should point. People also use spatial configurations in complex ways while gesturing, for example, placing ideas in a referential space in front of them and then referring to ideas by referring to the space they have been located in. While studies that involve external contexts are quite common for downstream tasks like navigation [SKM*19], non-verbal behavior generation in multiple external contexts is up and coming [DWAB22, KNN*22], which makes it a promising research direction, if relevant data can be obtained. #### 6.4.6 Weak Correlations with Grounding Information Let's imagine that we have access to all the variables discussed thus far that impact the dynamics of co-speech gestures, such as acoustics, text, speaker identity, emotional state, and external contexts. Further imagine that we are able to gather large-scale datasets with all these variables, which is unlikely to ever happen due to the combinatorial explosion of possible combinations of different factors. Would having this rich input information and broad data coverage be sufficient to confidently predict the specific co-speech gestures that a given speaker will perform? The best we can likely say is "Maybe!" While large-scale datasets may enable us to minimize our epistemic uncertainty about gesticulation, it is unclear how significant the stochasticity, i.e. the aleatoric uncertainty, of these gestures will be. The situation is analogous to the problem of prosody in text-to-speech, where there can be many possible acoustic realizations and intonation contours for the same lexical input [YWK15, LTHY17, WSZ*18]. Significant variation persists even when a speaker is asked to read the same text several times under exactly the same circumstances [HMS*14]. To handle ambiguity in gesture realization, it is compelling to consider probabilistic models, since they can "hallucinate" the missing information and stochastic components of non-verbal behavior, as a way to resolve the one-to-many problem for motion and gesture generation [14, 15]. #### 6.4.7 Grounding Gestures in Creative Intent Gesture authoring enables an animator or system creator to design and edit motion, e.g. 
making a character appear less nervous or stressed, thus grounding the animation within the designer's creative intent. Typically, animation design intent is captured through key-framing or motion capture. However, these approaches are difficult to scale for nonverbal behavior because the former requires specialized animation skills, while the latter requires expensive camera setups and laborious post-processing. Automatic gesture generation approaches in part solve the scalability issue through their ability to generate abundant motion data, but they struggle with high-level control. For instance, attempts at handling control either bake in mechanistic, low-level control signals like wrist height, wrist velocity, and radial extent [1], or they generate gestures that deviate from the intended control specifications [17]. Moreover, in multi-speaker scenarios, they are unable to capture the variability of different speakers' gesticulation, and cannot distinguish gesture types used in a certain scenario (e.g. deictic gestures for a lecturer in front of a display) from gesture style differences between speakers [1]. Yoon et al. [20] recently proposed an innovative approach to this challenge: an authoring toolkit that balances gesture quality and authoring effort. The toolkit combines automatic gesture generation using a GAN-based generative model [21] and manual controls. The generative model first produces a gesture sequence from speech input, and an animator can interactively edit the motion through low-level pose control and coarse-level style parameters. We think similar gesture authoring approaches that maximize design intent and gesture quality while minimizing authoring effort will be important for grounding non-verbal behavior within the animator's creative intent. ### Multimodal Synthesis Human communicative behavior is not only grounded in multiple modalities and information streams, but is also expressed through multiple modalities. A complete virtual agent will need to listen, observe, decide, speak, and move. On the generation side, verbal behavior generation is considered separate from non-verbal behavior, and the generation of non-verbal behavior is in turn typically broken into several smaller sub-problems treated in isolation. Head motion might be treated separately from lip motion, facial expression, and gaze; finger motion might be treated separately from arm motion; and lower-body motion might be separated from the motion of the upper body. A long-term goal would be to bring these sub-problems together, to create more coherent synthetic behavior with a wider range of possible expressions, and eventually unify the synthesis of these expressions with verbal behavior generation. Recent work has explored learning full-body gesture motion (including the head and the lower body), e.g. [1] and the submissions to the full-body tier of the GENEA Challenge 2022 [20]. Another line of work has considered training verbal (text-to-speech) and non-verbal (speech-to-gesture) synthesis systems on the same data [1] and, subsequently, merging them into one single network that generates both speech audio and gesture motion [21]. Given the strides that have been made in generating convincing speech audio from text [20], adapting successful text-to-speech methods to simultaneously generate both acoustics and joint rotations, as was done in [21], seems like a compelling direction for future work. 
This not only brings advantages in terms of modeling efficiency (the gesture-generation systems will possess information about, e.g., prosodic prominence without having to learn to extract that information from speech audio), but also more closely resembles models of human communication such as the growth-point hypothesis [22], and could enable gestures that not only complement but, as in Kendon's continuum (see Figure 2), replace or augment speech with novel information. This may require even deeper representations of communicative intent, as approaches that generate gesture based on text and/or audio are restricted to redundant gestures, but gesture that is non-redundant with the spoken audio is a key part of human behavior. ## 7 Broader Impact High-quality gesture synthesis can advance a range of applications by allowing computational systems to leverage nonverbal communication. This can allow more natural and fluid communication of both functional and affective information, which will prove useful in a range of assistive applications, employing both agents and robots. These include tutors, rehabilitation trainers, relational agents for health and eldercare, and personal assistants. They can also support richly interactive entertainment experiences in which you can have meaningful interactions with virtual characters. The development of the technology also raises potential ethical issues which must be given careful consideration. Some of the issues are common to many deep learning approaches that involve human data. For instance, what kind of bias is in the data that is used? Does it represent the full range of human nonverbal behavior, or only specific language groups, ethnicities and social strata? Will people using these models take care to match the input data with the desired output representation or will the data be mismatched, using the wrong gender, ethnicity, age, etc. on synthesized characters? What are the ownership rights associated with data that may be scraped from a web source? Do you own your gesture style? How can consent be obtained for online data? The technology could also make it easier to generate deepfakes, i.e., synthetic media that mimics the likeness of real people, especially of politicians and other public figures that have a lot of video data online. Prominent examples include photorealistic lip motion from audio [23], real-time facial expression re-enactment [24] and talking-head video synthesis [25]. The technology can be adapted to create synthetic nonverbal motion for nefarious purposes such as political propaganda, financial fraud and fake news. Moreover, a consideration more specific to nonverbal behavior results from people's tendency to entrain to their interlocutors. If they entrain to synthetic models they may interact with, does this have any impact on their own behavior? It is important for both researchers and developers of this technology to devise ways to mitigate these risks. ## 8 Conclusion This paper summarizes the history of gesture generation, from early work on rule-based systems to the explosion of recent work using deep learning approaches. Deep learning approaches have employed a range of inputs, including text, audio and various control signals, and used a wide set of architectures. Most systems have focused on monologue generation, but work is beginning to explore dialog and richer notions of context. Despite substantial progress, the field is still young and there are very significant challenges to solve. 
These include better datasets, improved subjective and objective evaluation practices, higher-quality motion, producing more meaningful gestures, adequately addressing the stochasticity of gesture, providing adequate control over the output, and matching the rich set of grounding that supports human gesture, from multi-person interaction to adequately representing the spatial context of the conversation. There is much exciting work to come. ## Acknowledgments S. N. was partially supported by an IBM PhD fellowship award. G. E. H. was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. S. N. and M. N. were partially supported by the National Science Foundation under grant IIS 2232066. The authors are grateful to Stefan Kopp for Figure 4, and to Konrad Tollmar and the anonymous reviewers for reviewing the manuscript.
2301.03990
Transition from chemisorption to physisorption of H2 on Ti functionalized [2,2,2]paracyclophane: A computational search for hydrogen storage
In this work, we studied the hydrogen adsorption-desorption properties and storage capacities of Ti-functionalized [2,2,2]paracyclophane (PCP222) using density functional theory and molecular dynamics simulation. The Ti atom was bonded strongly with the benzene ring of PCP222 via Dewar interaction. Subsequently, the calculation of the diffusion energy barrier revealed a significantly high energy barrier of 5.97 eV preventing Ti clustering over the PCP222 surface. On adsorption of hydrogen, the first H2 molecule was chemisorbed over PCP222 with a binding energy of 1.79 eV with the Ti metal. Further addition of H2 molecules, however, exhibited their physisorption over PCP222-Ti through the Kubas-type H2 interaction. The charge transfer mechanism during the hydrogen adsorption was explored by the Hirshfeld charge analysis and electrostatic potential map, and the PDOS and Bader's topological analysis revealed the nature of the interaction between Ti and H2. The PCP222 functionalized with three Ti atoms showed a maximum hydrogen uptake capacity of up to 7.37 wt%, which was fairly above the US-DOE criterion. The practical H2 storage estimation revealed that at ambient conditions, a gravimetric density of up to 6.06 wt% of H2 molecules could be usable, and up to 1.31 wt% of adsorbed H2 molecules were retained with the host. The ADMP molecular dynamics simulations assured the reversibility by desorption of adsorbed H2 and the structural integrity of the host material at temperatures sufficiently above the desorption temperature (300 K and 500 K). Therefore, the Ti-functionalized PCP222 can be considered a thermodynamically viable and potentially reversible H2 storage material.
Rakesh K. Sahoo, Sridhar Sahu
2023-01-10T14:39:00Z
http://arxiv.org/abs/2301.03990v1
Transition from chemisorption to physisorption of H\({}_{2}\) on Ti functionalized [2,2,2]paracyclophane: A computational search for hydrogen storage. ###### Abstract In this work, we studied the hydrogen adsorption-desorption properties and storage capacities of Ti functionalized [2,2,2]paracyclophane (PCP222) using density functional theory and molecular dynamic simulation. The Ti atom was bonded strongly with the benzene ring of PCP222 via Dewar interaction. Subsequently, the calculation of the diffusion energy barrier revealed a significantly high energy barrier of 5.97 eV preventing the Ti clustering over PCP222 surface. On adsorption of hydrogen, the first H\({}_{2}\) molecule was chemisorbed over PCP222 with a binding energy of 1.79 eV with the Ti metals. Further addition of H\({}_{2}\) molecules, however, exhibited their physisorption over PCP222-Ti through the Kubas-type H\({}_{2}\) interaction. Charge transfer mechanism during the hydrogen adsorption was explored by the Hirshfeld charge analysis and electrostatic potential map, and the PDOS, Bader's topological analysis revealed the nature of the interaction between Ti and H\({}_{2}\). The PCP222 functionalized with three Ti atoms showed a maximum hydrogen uptake capacity of up to 7.37 wt%, which was fairly above the US-DOE criterion. The practical H\({}_{2}\) storage estimation revealed that at ambient conditions, the gravimetric density of up to 6.06 wt% H\({}_{2}\) molecules could be usable, and up to 1.31 wt% of adsorbed H\({}_{2}\) molecules were retained with the host. The ADMP molecular dynamics simulations assured the reversibility by desorption of adsorbed H\({}_{2}\) and the structural integrity of the host material at sufficiently above the desorption temperature (300K and 500K). Therefore, the Ti-functionalized PCP222 can be considered as a thermodynamically viable and potentially reversible H\({}_{2}\) storage material. Computational Materials Research Lab, Department of Physics, Indian Institute of Technology (Indian School of Mines) Dhanbad, India **Keywords:** Hydrogen storage, DFT, ADMP, [2,2,2]paracyclophane, PCP222, ESP, Chemisorption, Physisorption ## 1 Introduction Extensive use of fossil fuels not only results in the depletion of those energy resources but also leads the world towards an alarming environmental catastrophe in terms of pollution and global warming. These consequences have motivated researchers across the globe to search for alternative sustainable and environment-friendly energy resources. Therefore, hydrogen drew the attention because it is considered as an ideal, pollution-free, and sustainable energy carrier, which can replace fossil fuels by fulfilling the energy need of the world, and thus can resolve the pollution due to fossil fuels[1, 2]. However, the major difficulty in hydrogen energy as fuel for domestic and vehicular application is its efficient storage and delivery at ambient conditions. Hydrogen can be stored mainly in two ways: system-based and material-based. System-based storage methods which is being adopted by few industries require huge volume vessels which should be made of composite material to withstand high pressure (~70 MPa) making the process quite expensive. However, compressed hydrogen storage systems are reported to have low volumetric densities, even at high pressure [3], and hydrogen storage in liquid state requires a very low temperature (~-253\({}^{\circ}\)C) under high pressure (~250-350 atm) which is highly prone to safety concerns. 
On the other hand, the solid-state material-based hydrogen storage method is substantiated as efficient alternative to use hydrogen energy provided it adsorbs and desorb a desirable amount of H\({}_{2}\) at ambient conditions [4]. In solid-state materials, hydrogen is usually adsorbed by the physisorption or chemisorption process. In the physisorption process, the adsorbed hydrogen binds in molecular to the surface of host materials through weak interaction (adsorption energy ~ 0.1-0.8 eV/H\({}_{2}\)). However, in the chemisorption process, the H\({}_{2}\) molecules dissociate into individual H atoms and migrate to the host materials by producing a strong chemical bond (with a binding energy of \(>\)1 eV/H\({}_{2}\)) with the host atoms. Another type of adsorption process observed is similar to the physisorption, in which the inter-atomic H-H bond in the H\({}_{2}\) molecule is elongated but not dissociated and adsorbed by Kubas-type orbital interactions[2]. It enhances the H\({}_{2}\) adsorption energy and makes most of the H\({}_{2}\) storage capacities that fulfil the target of the US department of energy (DOE-US) [5, 6]. Since last few years, researchers are engaged extensively to study various materials, including carbon nanostructures [7, 8], metal hydrides [9, 10], graphene [11, 12], metal alloys[13, 14], metal-organic frameworks (MOF)[15, 16], and covalent-organic frameworks [17], etc. for the reversible hydrogen storage at ambient condition. However, it has been reported that these materials often have several limitations, including poor storage capacity, instability at significantly high temperatures, and low reversibility at normal temperatures. For example, Mg-based metal hydrides showed a high storage capacity of up to 7.6 wt% under ambient condition, however; it could be used only for 2-3 cycles [18].Similarly, metal alloys have very poor reversibility when used as hydrogen storage materials [19]. Using MOFs as H\({}_{2}\) storage materials, researchers could attain up to 15 wt% of storage capacity at temperatures and pressures of 77 K and 80 bar. However, under normal environmental condition its gravimetric and volumetric storage capacity remained very low [20]. To address the aforesaid issue and to develop commercially effective hydrogen storage materials, the experimentally synthesized organic compounds functionalized with transition metals (TMs), such as TM-doped organometallic buckyballs, TM-ethylene, etc., were introduced and investigated extensively [21, 22]. Early reports show that the TM atoms form a strong bond with the \(\pi\) -electron delocalized compounds through the Dewar mechanism and adsorb hydrogen molecules via Kubas interaction[23, 24]. For example, Chakraborty _et al_. studied the hydrogen storage in Ti-doped \(\Psi\)-graphene and reported an H\({}_{2}\) uptake capacity of up to 13.1 wt% with an average adsorption energy of -0.30 eV/H\({}_{2}\)[25]. Dewangan _et al_. predicted up to 10.52 wt% of H\({}_{2}\) adsorption in Ti-functionalized holey graphyne via the Kubas mechanism with adsorption energy and desorption temperature of 0.38 eV/H\({}_{2}\) and 486 K, respectively[26]. Numerous theoretical and experimental studies revealed that metal-adorned small organic molecules like C\({}_{\text{n}}\)H\({}_{\text{n}}\) could capture a large number of H\({}_{2}\) molecules. For example, Zhou _et al_. estimated hydrogen uptake capacity up to 12 wt% in TiC\({}_{2}\)H\({}_{4}\) with H\({}_{2}\) binding energy of 0.24 eV/H\({}_{2}\)[27, 28]. 
High capacities of H\({}_{2}\) storage in TMC\({}_{2}\)H\({}_{4}\) (M = Ti, Sc, V, Ni, Ce, Nb) complexes was reported by Chaudhari _et al_.[29, 30, 31]. At low benzene pressure (35 millitorrs) and ambient temperature, TiC\({}_{6}\)H\({}_{6}\) was experimentally shown to absorb up to 6 wt% hydrogens [32]. Phillips _et al_. obtained an H\({}_{2}\) uptake of up to 14 wt% and quick kinetics at room temperature on TiC\({}_{6}\)H\({}_{6}\) by laser ablation; however, the experiments did not discuss the desorption process[33]. Recently, Ma _et al_. theoretically studied an interesting combination of chemisorption and physisorption in Ti-doped C\({}_{6}\)H\({}_{6}\) and reported an uptake capacity of 6.02 wt % with complete desorption at 935 K [34]. Mahamiya _et al_. revealed the H\({}_{2}\) storage capacities of 11.9 wt % in K and Ca decorated biphenylene with an average adsorption energy of 0.24-0.33 eV [35]. Y atom doped zeolite showed high capacity adsorption of H\({}_{2}\) with binding energy 0.35 eV/H\({}_{2}\) and the desorption energy of 437K for fuel cells[36]. Macrocyclic compounds, like paracyclophane (PCP), a subgroup derivative of cyclophanes, comprises aromatic benzene rings with number of -CH\({}_{2}\)- moieties linking the subsequent benzene rings [37]. The PCPs are easier to synthesize in the laboratory; they can be functionalized with metal atoms due to the presence of aromatic benzene rings in the geometry, making them a feasible alternative for hydrogen storage prospects. For instance, Sathe _et al._ studied the Sc and Li decorated PCP and reported the molecular H\({}_{2}\) physisorbed via Kubas-Niu-Jena interaction resulting in up to 10.3 wt% H\({}_{2}\) uptake capacity [38]. The hydrogen storage transition metal (Sc, Y) functionalized [1,1]paracyclophane was investigated by Sahoo _et al._ and reported a storage capacity of 6.33-8.22 wt%, with an average adsorption energy of 0.36 eV/H\({}_{2}\) and desorption temperature of 412 K - 439 K[39]. The H\({}_{2}\) storage on Li and Sc functionalized [4,4]paracyclophane shows an uptake capacity of 11.8 wt% and 13.7 wt%, as estimated by Sathe _et al._[40]. Kumar _et al._ revealed the combination of physisorption and chemisorption of hydrogen on Sc and Ti functionalized BN-analogous [2.2]PCP[41]. They showed the first hydrogen molecule chemisorbed on the host material followed by physisorption of other H\({}_{2}\), resulting in a storage of ~8.9 wt% via Kubas interaction. Numerous other metal-decorated macrocyclic compounds have been explored as hydrogen storage possibilities, with storage capacities above the DOE requirement; however, only a few have shown practical H\({}_{2}\) capacity at varied thermodynamic conditions. Though few PCP-based hydrogen storage systems are available in the literature, the [2,2,2]paracyclophane, which is experimentally synthesized by Tabushi _et al.[42]_ is yet to be explored as a hydrogen storage material. In the present work, we investigated the chemisorption and physisorption properties of hydrogen molecules on [2,2,2]paracyclophane (PCP222) functionalized with Ti atoms and estimated their hydrogen uptake capacity at varied thermodynamics. In paracyclophane, there are many molecules in the group and are named after their pattern of arene substitution. The preceding square bracket number, "[2,2,2]" in [2,2,2]paracyclophane, indicates that the consecutive benzene rings (3 benzene rings) in paracyclophane are linked with two (-CH\({}_{2}\)-) moieties. 
The linking bridges are relatively short; thus, the separation between consecutive benzene rings is small, which develops a strain in the aromatic rings. This strain in the rings can be utilized for Ti functionalization over the aromatic benzene ring. Due to the strain and metal functionalization, the aromatic benzene rings lose their inherent planarity. We choose to functionalize Ti metal atoms over the PCP222, as the d-block transition metal elements are well known for reversible hydrogen adsorption and can bind the H\({}_{2}\) molecules via Kubas interaction[25, 26]. Though a few reports are available on hydrogen storage in macrocyclic organic compounds and other Ti-doped nanostructures, our work is the first to investigate the efficiency of Ti-functionalized PCP222 using atomistic MD simulation, practical storage capacity, and diffusion energy barrier estimation. ## 2 Theory and Computation We have performed the theoretical calculations on [2,2,2]paracyclophane (PCP222) and its hydrogenated structures within the framework of density functional theory (DFT)[43]. In the computation, the advanced hybrid \(\omega\)B97XD functional is used, and molecular orbitals (MO) are expressed as linear combinations of atom-centered basis functions, for which the valence diffuse and polarization function 6-311+G(d,p) basis set is used for all atoms. \(\omega\)B97XD is a range-separated version of Becke's 97 functional that includes long-range corrections and Grimme's D2 dispersion correction[44, 45]. It is important to note that the \(\omega\)B97XD technique is a trustworthy method for studying non-covalent interactions, organometallic complexes, and their thermochemistry. To ensure the studied structures are in the true ground state on the potential surface, the harmonic frequencies of all the systems are determined and are found to be positive. All the theoretical computations are performed with the computational program Gaussian 09[43]. In order to investigate the binding strength of titanium (Ti) atoms on the PCP222, we have calculated the average binding energy of decorated Ti atoms by using the following equation. \[E_{b}=\frac{1}{m}\left[E_{PCP222}+mE_{Ti}-E_{PCP222+mTi}\right] \tag{1}\] where \(\text{E}_{\text{PCP222}}\), \(\text{E}_{\text{Ti}}\), and \(\text{E}_{\text{PCP222+mTi}}\) are the total energies of PCP222, the Ti atom, and Ti-decorated PCP222, respectively, and m is the number of Ti atoms added to PCP222. The average adsorption energy of molecular hydrogen with the metal atoms is calculated as[46]: \[E_{ads}=\frac{1}{n}\left[E_{PCP222+mTi}+nE_{H_{2}}-E_{PCP222+mTi+nH_{2}}\right] \tag{2}\] where \(\text{E}_{\text{PCP222+mTi}}\), \(\text{E}_{\text{H2}}\), and \(\text{E}_{\text{PCP222+mTi+nH2}}\) are the total energies of the host material, the hydrogen molecule, and the hydrogen-trapped complex, respectively, and n is the number of H\({}_{2}\) molecules adsorbed in each complex. The global reactivity descriptors such as hardness (\(\eta\)), electronegativity (\(\chi\)), and electrophilicity (\(\omega\)) were estimated and used to study the stability and reactivity of Ti-functionalized PCP222 and its hydrogen-adsorbed derivatives [47, 48]. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is computed to assess the kinetic stability of the studied systems. Further, to understand the electronic charge transfer properties, the Hirshfeld charge and electrostatic potential map (ESP) were explored. 
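As a brief illustration of the bookkeeping behind Eqs. (1) and (2), the sketch below evaluates the average binding and adsorption energies from total energies. The function names and all numerical total energies are invented placeholders, chosen only so that the outputs land near the magnitudes reported in this work (about 2.2 eV for Ti binding and about 0.47 eV/H\({}_{2}\) for the most weakly held H\({}_{2}\)); they are not the actual Gaussian 09 outputs.

```python
# Minimal sketch of Eqs. (1)-(2); all total energies below are hypothetical placeholders.
HARTREE_TO_EV = 27.2114

def binding_energy_per_Ti(E_host, E_Ti, E_host_mTi, m):
    """Eq. (1): average Ti binding energy, E_b = (E_PCP222 + m*E_Ti - E_PCP222+mTi) / m."""
    return (E_host + m * E_Ti - E_host_mTi) / m

def adsorption_energy_per_H2(E_host_mTi, E_H2, E_complex, n):
    """Eq. (2): average H2 adsorption energy, E_ads = (E_host+mTi + n*E_H2 - E_complex) / n."""
    return (E_host_mTi + n * E_H2 - E_complex) / n

# Placeholder total energies in hartree (illustrative only, not computed values):
E_PCP222, E_Ti, E_H2 = -924.000, -849.300, -1.175
E_PCP222_Ti = -1773.381        # chosen so that E_b comes out near 2.2 eV
E_PCP222_Ti_6H2 = -1780.534    # chosen so that E_ads comes out near 0.47 eV/H2

print(binding_energy_per_Ti(E_PCP222, E_Ti, E_PCP222_Ti, m=1) * HARTREE_TO_EV)            # ~2.20
print(adsorption_energy_per_H2(E_PCP222_Ti, E_H2, E_PCP222_Ti_6H2, n=6) * HARTREE_TO_EV)  # ~0.47
```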
Moreover, partial density of states (PDOS) investigation was also carried out to further understand the process of hydrogen interaction. The topological parameters were studied using Bader's theory of atoms in molecules (AIM) to analyze more about the nature of the interaction between the metal on PCP222 and the adsorbed hydrogen molecules. To obtain the hydrogen uptake capacity, the gravimetric density (wt%) of hydrogen is calculated using the following equation[49]: \[H_{2}(wt\%)=\frac{M_{H_{2}}}{M_{H_{2}}+M_{Host}}\times 100 \tag{3}\] Here M\({}_{\mathrm{H2}}\) represents the mass of the total number of H\({}_{2}\) molecules adsorbed and M\({}_{\mathrm{Host}}\) represents the mass of metal-doped PCP222. ## 3 Results and Discussion ### Structural properties of PCP222 The optimized geometrical structure of PCP222 is depicted in Figure 1(a). PCP222 has three benzene rings connected by two -CH\({}_{2}\)- moieties as a bridge between the consecutive rings. The distance between the two consecutive -CH\({}_{2}\)- moieties and between the -CH\({}_{2}\)- groups across the benzene ring are found to be 1.54 A and 5.84 A respectively, which is consistent with the earlier experimentally reported value by Cohen-Addad _et al._[50]. To validate the \(\pi\) aromaticity of the optimized molecule, we computed the Nucleus Independent Chemical Shift (NICS) of PCP222 before functionalization by any metal atom. The NICS values are determined in 1 A increments from the center to 3 A above the three benzene rings. NICS(1) is found to be maximally negative (-10.1 ppm), suggesting the aromatic nature of PCP222. This indicates that the benzene rings of PCP222 are \(\pi\) electron-rich and can bind a metal atom outside the benzene rings. Next, we explore different possible adsorption sites of pristine PCP222, such as the C-C bridge of the benzene ring (B1), the CH\({}_{2}\) moiety and benzene bridge (B2), the CH\({}_{2}\) - CH\({}_{2}\) bridge (B3), and above the center of the benzene ring (R\({}_{\rm c}\)), which are depicted in Figure 1(a). To design the host material for hydrogen adsorption, a single Ti atom is positioned about 2 A above the regioselective sites of PCP222, and the resulting structure is re-optimized. The binding energy between Ti and PCP222 calculated using Equation 1 at different adsorption sites shows that the Ti atom is stable at two positions, the B3 and R\({}_{\rm c}\) sites of PCP222, with binding energies of 0.37 eV and 2.20 eV, respectively, which fairly agrees with the previously reported value of Ti on CNT by Yildirim _et al_. [51]. Hence, the most favourable site for Ti atom functionalization is at the R\({}_{\rm c}\) site above the benzene ring of PCP222. Figure 1: (a) Optimized structure of PCP222 with all possible adsorption sites marked, (b) Ti-functionalized PCP222. #### 3.2.1 Bonding mechanism of Ti on PCP222 To understand the binding mechanism of Ti on PCP222, we analyzed the partial density of states (PDOS), electrostatic potential map (ESP), Hirshfeld charge, and Bader's topological parameters of the Ti-functionalized PCP222 system as discussed below. #### Density of states The Ti atom is functionalized on PCP222 via the Dewar mechanism, in which \(\pi\)-electron density gets transferred from the highest occupied molecular orbitals (HOMO) of the substrate to the vacant d-orbital of Ti, followed by the back-donation of charges from the partially filled d-orbital of Ti to the empty \(\pi^{*}\) anti-bonding orbitals of the benzene ring of PCP222[26]. 
To understand the orbital interaction between the Ti and C atoms of PCP222, we have performed the partial density of states (PDOS) calculation of PCP222-Ti, and the result is plotted in Figure 2. Figure 2 clearly shows that the electronic states of the Ti atom and the C atom of PCP222 overlap below and above the Fermi level (E = 0). The transferred electrons partially fill the unoccupied states of PCP222, as seen by the intense peaks near the Fermi level. This implies an orbital interaction between the Ti and C atoms of PCP222 mediated by charge transfer. This is also expected because Ti has a lower ionization potential than the C atom. Figure 2: Density of states plot of the Ti and C atoms on PCP222. ### ESP and Hirshfeld charges To get a picture of the electronic charge distribution over the PCP222 during Ti functionalization, we plotted the electrostatic potential (ESP) map over the total electron density, as shown in Figure S1. The variation of electron density in the ESP map is shown by using different colour codes, which follow the pattern of accumulation and reduction of electron density as: red (maximum electron density) \(>\) orange \(>\) yellow \(>\) green \(>\) blue (minimum electron density). In the ESP plot (Figure S1), the red region over the benzene ring of PCP222 implies the aggregation of electron density. After the functionalization of the Ti atom, the region changed to dark blue, indicating the deficiency of electron density around the metal, making it susceptible to binding with guest molecules. Moreover, the region around the carbon atoms of PCP222 turns from red to green, supporting the charge transfer discussed above. The estimated Hirshfeld charges on the C and Ti atoms are computed to be -0.121 e.u and +0.511 e.u, respectively, which makes the Ti atom nearly ionic, opening the possibility for H\({}_{2}\) adsorption. #### 3.2.2 Diffusion energy barrier calculation According to earlier reports, the aggregation of transition metal atoms on the substrate may lower the ability of the host material for hydrogen adsorption. So, before hydrogen adsorption on the surface of PCP222, it is necessary to study the possibility of metal clustering on the substrate. If the Ti atom is displaced from its stable adsorption position on PCP222 due to an increase in temperature, there is a strong possibility of metal clustering. Since the Ti binding energy on PCP222 (2.2 eV/Ti) is lower than the cohesive energy of an isolated single Ti atom (4.85 eV), we evaluated whether or not there is an energy barrier for Ti atom diffusion on PCP222. The diffusion energy barrier is calculated by displacing Ti to a finite neighbourhood (\(\delta r\)) over the adsorption site of PCP222, as shown in Figure 3. The difference in energy calculated between the initial position and that of the close neighbourhood is then plotted against the diffusion coordinates, as shown in Figure 3. The figure illustrates the diffusion energy barrier to be 5.97 eV, which is sufficient to prevent the diffusion of the Ti atom over PCP222 and therefore avoid Ti-Ti clustering, which is also supported by the works of Dewangan _et al._[26] and Chakraborty _et al._[25]. Figure 3: Ti diffusion energy barrier over the PCP222. Therefore, the present Ti-functionalized PCP222 can be considered a suitable candidate for hydrogen adsorption. **3.3 Adsorption of H\({}_{2}\) molecules on PCP222-Ti** To investigate the hydrogen adsorption on the surface of Ti-functionalized PCP222, we added the H\({}_{2}\) molecules sequentially to PCP222-Ti. 
First, we added a single H\({}_{2}\) molecule at about 2 A above the Ti atom functionalized on PCP222 and allowed the system to relax. It is observed that the H\({}_{2}\) molecule dissociates into two H atoms and forms chemical bonds with the Ti atom. The Ti-H bond length is found to be 1.75 A, which is close to the experimental result for titanium monohydride [52]. The H-H bond distance is noted to be about 2.8 A (Figure 4(a)). The binding energy between Ti and H is calculated to be 1.79 eV, which lies in the range of chemisorption mediated by Kubas-type interaction [2, 38]. A similar result was also reported by Ciraci _et al_. for the adsorption of a single H\({}_{2}\) molecule on Ti-decorated SWNT (and SWBNT), where the H\({}_{2}\) molecule dissociates into individual H atoms with a binding energy of 0.83 eV/H (0.93 eV/H) and an H-H distance of 2.71 A (3.38 A)[51, 53]. However, when two H\({}_{2}\) molecules are simultaneously added to the sorption center, the calculated average adsorption energy is reduced to 0.95 eV/H\({}_{2}\), with the average H-H bond length stretching from 0.74 A to 0.8 A. This result is consistent with the observation of the adsorption of the molecules in the vicinity of the Ti atom. \begin{table} \begin{tabular}{c c c c c c} \hline **Name of complex** & **Bridge C-C** & **R\({}_{\text{c}}\)-Ti** & **Ti-H** & **H-H** & **E\({}_{\text{ads}}\) (eV)** \\ \hline PCP222-Ti & 1.542 & 1.566 & & & \\ PCP222-Ti-2H & 1.540 & 1.800 & 1.750 & 2.796 & 1.797 \\ PCP222-Ti-2H\({}_{2}\) & 1.540 & 1.765 & 1.770 & 0.884 & 0.953 \\ PCP222-Ti-3H\({}_{2}\) & 1.540 & 1.798 & 1.830 & 0.852 & 0.784 \\ PCP222-Ti-4H\({}_{2}\) & 1.540 & 1.818 & 1.905 & 0.806 & 0.672 \\ PCP222-Ti-5H\({}_{2}\) & 1.540 & 1.842 & 2.332 & 0.816 & 0.554 \\ PCP222-Ti-6H\({}_{2}\) & 1.540 & 1.842 & 2.633 & 0.804 & 0.467 \\ PCP222-Ti-2H-1H\({}_{2}\) & 1.540 & 1.822 & 1.926 & 0.800 & 0.480 \\ PCP222-Ti-2H-2H\({}_{2}\) & 1.540 & 1.837 & 1.868 & 0.803 & 0.474 \\ PCP222-Ti-2H-3H\({}_{2}\) & 1.540 & 1.851 & 1.899 & 0.801 & 0.406 \\ PCP222-Ti-2H-4H\({}_{2}\) & 1.540 & 1.837 & 2.840 & 0.774 & 0.256 \\ \hline \end{tabular} \end{table} Table 1: Average bond distance between carbon bridge (C-C), center of PCP222 benzene ring (R\({}_{\text{c}}\)) and Titanium atom (R\({}_{\text{c}}\)-Ti), Titanium and hydrogen molecules (Ti-H\({}_{2}\)), and hydrogen-hydrogen (H-H) in Å. Average adsorption energy of H\({}_{2}\) on PCP222-Ti. The reduced adsorption energy clearly indicates the adsorption process to be physisorptive. This is because of the reduced interaction strength between Ti atoms and H\({}_{2}\) molecules caused by the screening effect. From the ESP analysis (Figure 7) it is obvious that the simultaneous presence of two H\({}_{2}\) molecules reduces the charge densities of Ti and H\({}_{2}\), thereby inducing a weak charge polarization which causes the physisorption of hydrogen on the surface of Ti-functionalized PCP222. Another way of generating a similar isomeric configuration is chemisorption-induced physisorption of H\({}_{2}\) molecules on Ti-functionalized PCP222, in which one H\({}_{2}\) molecule is adsorbed over PCP222-Ti-2H (Figure 5(b)). Interestingly, this configuration is 0.37 eV lower in energy than that of PCP222-Ti-2H\({}_{2}\), and the H\({}_{2}\) is adsorbed with a lower adsorption energy (0.48 eV). Therefore, we proceed with both configurations for further hydrogen adsorption. Sequential adsorption of H\({}_{2}\) molecules on PCP222-Ti results in maximum adsorption of up to 6 H\({}_{2}\) molecules. 
The adsorption of the 3rd, 4th, 5th, and 6th H\({}_{2}\) molecules to PCP222-Ti reduces the average H\({}_{2}\) adsorption energy to 0.784, 0.68, 0.554, and 0.467 eV/H\({}_{2}\), respectively. On the other hand, successive addition of H\({}_{2}\) molecules to PCP222-Ti-2H leads to maximum adsorption of four hydrogen molecules. Adding more H\({}_{2}\) molecules beyond these maxima in both cases causes them to fly away from the sorption center. It is observed that the average adsorption energy decreases with an increase in the number of H\({}_{2}\) molecules in the system, which is due to the steric hindrance within the adsorbed H\({}_{2}\) crowd and the increase in distances between the H\({}_{2}\) and the sorption centers. The estimated adsorption energies and geometrical parameters of all the bare and hydrogenated systems are presented in Table 1. **3.3.1 Partial density of states** The partial density of states (PDOS) of the Ti and H atoms of the hydrogen-adsorbed PCP222-Ti with chemisorbed and physisorbed hydrogen is plotted in Figure 6. The chemisorption of the first H\({}_{2}\) on the host arises from the strong overlap of the H and Ti orbitals near -9 eV. Upon adsorption of another H\({}_{2}\) molecule over PCP222-Ti-2H, the peaks of the \(\sigma\)-orbital (HOMO) of hydrogen and the Ti orbital appear at around -15.7 eV below the Fermi level, and the \(\sigma^{*}\) (LUMO) of hydrogen interacts with the orbitals of Ti and the chemisorbed H above the Fermi level (Figure 6(b)), which can be explained by the Kubas mechanism, in which a small charge transfer occurs from the \(\sigma\) (HOMO) orbital of H\({}_{2}\) to the vacant 3d orbital of the Ti atom, followed by a back-donation of charges in the other direction from the partially filled 3d orbitals of Ti to the \(\sigma^{*}\) (LUMO) of the H\({}_{2}\) molecules. When two H\({}_{2}\) molecules are introduced simultaneously to the PCP222-Ti, similar DOS peaks are observed, suggesting H\({}_{2}\) adsorption via the Kubas mechanism. Figure 6: Partial density of states of Ti and H atoms of (a) PCP222-Ti-2H, (b) PCP222-Ti-2H-1H\({}_{2}\), (c) PCP222-Ti-2H\({}_{2}\), and (d) PCP222-Ti-6H\({}_{2}\). However, here the \(\sigma\) orbital of H\({}_{2}\) splits into several peaks in the range of -15.2 to -6.2 eV and moves closer to the Fermi level, indicating a weaker interaction strength. On adsorption of 6 H\({}_{2}\) molecules to Ti-functionalized PCP222, the \(\sigma\) orbitals split into numerous peaks in a broad range of -16.3 eV to -6.1 eV with enhanced intensity. This signifies that the adsorption strength gets weaker with an increase in the quantity of H\({}_{2}\) molecules in the host systems. #### 3.3.2 Electrostatic potential and Hirshfeld charges To obtain a qualitative depiction of the electronic charge distribution over the bare and hydrogenated PCP222-Ti, we generated and plotted the electrostatic potential (ESP) map on the total electron density, as shown in Figure 7. The charge distribution is used to determine the active adsorption region for the guest hydrogen molecules. The dark blue zone above the Ti atom on PCP222-Ti (Figure 7(a)) and the dark red region over the first adsorbed hydrogen atom indicate a strong interaction between them, leading to chemisorption of the hydrogen atoms. 
Upon adsorption of two H\({}_{2}\) molecules simultaneously, the region over Ti turns from dark blue to light blue, suggesting that positive charge is transferred from the Ti atom to the adsorbed H\({}_{2}\) and the C atoms of PCP222, thereby inducing charge polarization which causes physisorption of the second H\({}_{2}\) molecule. On further addition of H\({}_{2}\) molecules to PCP222-Ti, the region over the Ti atom turns bluish-green and then green, indicating further charge transfer (depletion of electron density near Ti), and the yellow region over the adsorbed H\({}_{2}\) represents a small accumulation of electron density at the hydrogen molecules[26]. Figure 7: Electrostatic potential maps of (a) PCP222-Ti, (b) PCP222-Ti-2H, (c) PCP222-Ti-2H\({}_{2}\), (d) PCP222-Ti-3H\({}_{2}\), (e) PCP222-Ti-4H\({}_{2}\), (f) PCP222-Ti-5H\({}_{2}\), (g) PCP222-Ti-6H\({}_{2}\). Figure 8 shows the average Hirshfeld charges on the Ti atom, the adsorbed H\({}_{2}\) molecules, and the C atoms of the benzene ring (the Ti-functionalized site) as a function of the number of H\({}_{2}\) adsorbed on the host. The average charge on the C atoms of the benzene ring is initially computed to be -0.031 e, which then changes to -0.121 e with the functionalization of the Ti atom. The charge on the Ti atom of PCP222-Ti is found to be +0.511 e, indicating the transfer of electronic charges from the Ti atom to the C atoms of the benzene ring. On chemisorption of the first hydrogen on PCP222-Ti, the electronic charges on the Ti and H atoms are +0.41 a.u and -0.24 a.u, implying a strong attractive interaction between them, as discussed above. Adding more H\({}_{2}\) molecules gradually lessens the Hirshfeld charges on the Ti and H atoms, implying a polarization-induced weak interaction between them (Figure 8). Figure 8: Hirshfeld charges before and after hydrogen adsorption on PCP222-Ti. #### 3.3.3 Bader's topological analysis The topological analysis at the bond critical point (BCP) is used to investigate the nature of the interactions between the Ti-functionalized PCP222 and the adsorbed H\({}_{2}\) molecules employing Bader's quantum theory of atoms in molecules (QTAIM). The topological descriptors associated with the electronic distribution, such as the electron density (\(\rho\)), Laplacian (\(\nabla^{2}\rho\)), and total energy density (H) (calculated as the sum of the local kinetic energy density G(\(\rho\)) and potential energy density \(V(\rho)\)), at the BCPs are presented in Table S1. Kumar _et al._ reported that a positive value of the Laplacian of the electron density (\(\nabla^{2}\rho\)\(>\)0) at the BCP indicates a decrease in \(\rho\) in the bonding region, suggesting an interaction of closed-shell (non-covalent) type [56]. For PCP222-Ti-6H\({}_{2}\), the values of \(\rho\) and \(\nabla^{2}\rho\) at the BCP between Ti and the adsorbed H\({}_{2}\) are found to be 0.057 a.u and 0.208 a.u, respectively, which indicates a closed-shell interaction between Ti and H\({}_{2}\). Moreover, the negative value of H\({}_{BCP}\) and \(-\frac{\text{G}(\rho)}{\text{V}(\rho)}\)\(>\) 1 at the BCP of Ti and H\({}_{2}\) confirm the closed-shell interaction between the sorption center and H\({}_{2}\), as proposed by Koch _et al._ (Table S1) [57]. For the C-C and C-Ti bonds, the average \(\rho\) value shows very nominal changes after the hydrogen adsorption, which suggests the post-adsorption chemical stability of the host material. 
Additionally, the average \(\rho\) at the BCP of the H-H bond in PCP222-Ti-6H\({}_{2}\) is 0.231 a.u, which is almost the same as that of an isolated bare H\({}_{2}\) molecule (0.263 a.u). This implies that the adsorbed hydrogens are in quasi-molecular form during the adsorption, which is also reflected in the H-H bond elongation of 0.06-0.14 A. ### Thermodynamically usable H\({}_{2}\) capacity #### 3.4.1 Storage capacity To examine the maximum H\({}_{2}\) gravimetric storage capacity of the system, we have functionalized a Ti atom on each benzene ring of PCP222, resulting in the structure of PCP222-3Ti as shown in Figures 9 and S3. Further, we added H\({}_{2}\) molecules sequentially to each Ti atom functionalized on PCP222, as discussed in the previous section (3.3). Figure 9: Optimized geometry of hydrogen-saturated 3Ti-functionalized PCP222. The calculated average H\({}_{2}\) adsorption energies and the changes in geometrical parameters are presented in Table 2. The adsorption of H\({}_{2}\) on PCP222-3Ti is observed to behave similarly to that on a single Ti atom on PCP222. On saturation of the H\({}_{2}\) uptake capacity of PCP222-3Ti, each sorption center is found to hold a maximum of 6 H\({}_{2}\) molecules, with a gravimetric storage capacity of 7.37 wt%. Since the first H\({}_{2}\) molecule on each Ti atom dissociates into two H atoms that bond strongly with the Ti atom, the 1.31 wt% of hydrogen adsorbed via the chemisorption process is difficult to desorb. However, the concurrent addition of two or more H\({}_{2}\) molecules to each Ti atom over PCP222 results in physisorption-type adsorption. Further, to confirm the stability of the maximally hydrogenated systems, the energy gap (Eg) (the HOMO-LUMO gap) and global reactivity parameters such as \(\eta,\chi,\) and \(\omega\) were estimated using the Koopmans theorem[58]. Notably, the studied systems follow the "_maximum hardness and minimum electrophilicity principle_," ensuring their chemical stability (Figure S4)[59]. \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{4}{c}{nH\({}_{2}\) (n=3,6,9,12,15,18)} \\ \hline **Name of complex** & **Bridge C-C** & **Rc-Ti** & **Ti-H** & **H-H** & **Eads (eV)** & **Edes (eV)** \\ \hline PCP222\_3Ti & 1.543 & 1.590 & - & - & - & - \\ PCP222\_3Ti-3H\({}_{2}\) & 1.537 & 1.799 & 1.747 & 2.824 & 1.824 & 1.824 \\ PCP222\_3Ti-6H\({}_{2}\) & 1.537 & 1.756 & 1.776 & 0.880 & 0.988 & 0.152 \\ PCP222\_3Ti-9H\({}_{2}\) & 1.537 & 1.790 & 1.832 & 0.849 & 0.813 & 0.464 \\ PCP222\_3Ti-12H\({}_{2}\) & 1.536 & 1.824 & 1.801 & 0.821 & 0.700 & 0.360 \\ PCP222\_3Ti-15H\({}_{2}\) & 1.535 & 1.825 & 2.332 & 0.806 & 0.570 & 0.050 \\ PCP222\_3Ti-18H\({}_{2}\) & 1.536 & 1.838 & 2.622 & 0.803 & 0.482 & 0.043 \\ \end{tabular} \end{table} Table 2: Average bond distances between the carbon bridge (C-C), the center of the PCP222 benzene ring (R\({}_{\rm c}\)) and the titanium atom (R\({}_{\rm c}\)-Ti), titanium and hydrogen molecules (Ti-H\({}_{2}\)), and hydrogen-hydrogen (H-H) in Å, together with the average adsorption energy and successive desorption energy of PCP222-3Ti-nH\({}_{2}\). For practical storage, a desirable amount of H\({}_{2}\) should be adsorbed by the host material at attainable adsorption conditions, and the adsorbed H\({}_{2}\) molecules should be desorbed effectively at a suitable temperature (T) and pressure (P). Thus, it is essential to estimate the number of hydrogen molecules usable at a wide variety of T and P. 
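Before turning to that temperature- and pressure-dependent estimate, the capacities quoted above can be cross-checked directly from Eq. (3) with standard atomic masses. The short sketch below assumes the PCP222 composition is C\({}_{24}\)H\({}_{24}\) (three C\({}_{6}\)H\({}_{4}\) rings joined by three -CH\({}_{2}\)-CH\({}_{2}\)- bridges), which is inferred from the structure description rather than stated explicitly above.

```python
# Cross-check of the gravimetric densities via Eq. (3), assuming PCP222 = C24H24.
M_C, M_H, M_Ti = 12.011, 1.008, 47.867      # standard atomic masses, g/mol
M_H2 = 2 * M_H

def wt_percent(n_H2, M_host):
    """Eq. (3): 100 * M_H2,total / (M_H2,total + M_host)."""
    m_h2 = n_H2 * M_H2
    return 100.0 * m_h2 / (m_h2 + M_host)

M_host = 24 * M_C + 24 * M_H + 3 * M_Ti     # PCP222-3Ti, ~456 g/mol

full = wt_percent(18, M_host)               # 6 H2 per Ti        -> ~7.37 wt%
retained = wt_percent(3, M_host)            # 1 chemisorbed H2 per Ti -> ~1.31 wt%
print(round(full, 2), round(retained, 2), round(full - retained, 2))  # ~7.37, ~1.31, ~6.06
```

The difference between the fully loaded and retained values reproduces the roughly 6.06 wt% of usable hydrogen discussed below.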
We have estimated the usable hydrogen gravimetric density of the studied system by calculating the number of H\({}_{2}\) molecules stored in PCP222-3Ti at different T and P using the empirical value of the H\({}_{2}\) gas chemical potential (\(\mu\)). The H\({}_{2}\) gravimetric density is estimated from the occupation number (N) by the following equation and plotted for various T and P in Figure 10[60]. \[N=\frac{\sum_{n=0}^{N_{max}}n\,g_{n}\,e^{\,n(\mu-E_{ads})/k_{B}T}}{\sum_{n=0}^{N_{max}}g_{n}\,e^{\,n(\mu-E_{ads})/k_{B}T}} \tag{4}\] Here N\({}_{\max}\) is the maximum number of H\({}_{2}\) molecules adsorbed on each Ti atom on PCP222, n and g\({}_{n}\) represent the number of H\({}_{2}\) molecules adsorbed and the configurational degeneracy for a given \(n\), respectively, \(k_{B}\) is the Boltzmann constant, and \(-E_{ads}\) (\(>\)0) is the average adsorption energy of H\({}_{2}\) molecules over PCP222-3Ti (i.e., \(E_{ads}\) enters Eq. (4) as a negative binding energy). \(\mu\) is the empirical value of the chemical potential of H\({}_{2}\) gas at a specific T and P, obtained by using the following expression [61]. \[\mu=H^{0}(T)-H^{0}(0)-TS^{0}(T)+k_{B}T\ln\left(\frac{P}{P_{0}}\right) \tag{5}\] Here H\({}^{0}\)(T) and S\({}^{0}\)(T) are the enthalpy and entropy of H\({}_{2}\) at pressure P\({}_{0}\) (1 bar). Figure 10: Hydrogen occupation number for PCP222-3Ti at various T and P. From Figure 10 it is clear that PCP222-3Ti can store 18 H\({}_{2}\) molecules at temperatures up to 80 K and 10-60 bar pressure. Under these thermodynamic conditions, the maximum H\({}_{2}\) storage capacity of the studied system is estimated as 7.37 wt%, which is consistent with the experimentally reported value for Pd-functionalized carbon nanotubes [62] and is fairly above the target set by the US-DOE (5.5 wt% by 2025). On raising the temperature above 80 K, the H\({}_{2}\) molecules start to desorb from the PCP222-3Ti, and the system retains \(>\)5.5 wt% of H\({}_{2}\) up to a temperature of 120 K under 30-60 bar. On a further rise in temperature, the system maintains an H\({}_{2}\) gravimetric density of 5 wt% (close to the target of the US-DOE) throughout a temperature range of 120-300 K and a pressure range of 3-60 bar. This thermodynamic condition may be treated as an ideal storage condition for H\({}_{2}\) on PCP222-3Ti. At a temperature of 400 K and a pressure of 1-10 bar, the system retains the 1.31 wt% of hydrogen that is adsorbed via the chemisorption process and may be desorbed only at very high temperatures. Thus, a total gravimetric density of 6.06 wt% of H\({}_{2}\) (the difference in gravimetric density at 80 K and 400 K) is usable under ambient conditions, which is fairly higher than the US-DOE target. This result indicates that Ti functionalization over PCP222 can yield a potential reversible hydrogen storage material. ### 3.5 Molecular dynamics simulations Figure 11: (a) Potential energy trajectories of hydrogenated PCP222-3Ti and (b) time evolution trajectory of the average bond length between the Ti atom and the C atoms of PCP222 at 300K and 500K. We have performed molecular dynamics (MD) simulations using the atom-centered density matrix propagation (ADMP) method to check the desorption of hydrogen from PCP222-3Ti-nH\({}_{2}\) and the structural integrity of the host. During the simulations, the temperature was maintained by the velocity scaling method and was checked and corrected at every time step of 10 fs. Figures 11(a) and S5 show the time-variation potential energy trajectories and system snapshots, respectively. 
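As a brief aside before the MD results, the sketch below shows how Eqs. (4)-(5) can be evaluated numerically. Consistent with the sign note above, the adsorption energy is entered as a binding term, i.e. the exponent is written as \(n(\mu+E_{ads})/k_{B}T\) with the positive \(E_{ads}\) of Eq. (2). The function names, the thermochemical inputs, and the single averaged \(E_{ads}\) in the example call are placeholders for illustration only; Figure 10 is generated from the empirical data of Ref. [61] and the full set of adsorption energies, so this sketch does not reproduce that map.

```python
import math

KB_EV = 8.617333e-5  # Boltzmann constant, eV/K

def chemical_potential(T, P, dH_eV, S_eV_per_K, P0=1.0):
    """Eq. (5): mu = [H0(T) - H0(0)] - T*S0(T) + kB*T*ln(P/P0), all per molecule in eV."""
    return dH_eV - T * S_eV_per_K + KB_EV * T * math.log(P / P0)

def occupation_number(mu_eV, E_ads_eV, T, n_max, g=None):
    """Eq. (4): thermal-average number of H2 per sorption center (E_ads_eV > 0, as in Eq. (2))."""
    g = g or [1] * (n_max + 1)
    w = [g[n] * math.exp(n * (mu_eV + E_ads_eV) / (KB_EV * T)) for n in range(n_max + 1)]
    return sum(n * wn for n, wn in zip(range(n_max + 1), w)) / sum(w)

# Placeholder H2 gas thermochemistry near 300 K (illustrative values, not the Ref. [61] data):
mu_300K_1bar = chemical_potential(T=300.0, P=1.0, dH_eV=0.088, S_eV_per_K=1.35e-3)  # ~ -0.32 eV
print(round(mu_300K_1bar, 3), round(occupation_number(mu_300K_1bar, 0.48, 300.0, n_max=6), 2))
```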
The MD simulations at 300K and 1 ps reveal that 2 H\({}_{2}\) molecules fly away from each Ti atom, and each Ti continues to hold three physisorbed H\({}_{2}\) molecules and two chemisorbed hydrogen atoms. When the temperature is elevated to 500 K, almost all the H\({}_{2}\) molecules get desorbed, and each sorption center holds one physisorbed H\({}_{2}\) and two chemisorbed H atoms. Since the first physisorbed H\({}_{2}\) is bound strongly to the host material, it may desorb at a higher temperature or on a longer time scale. This indicates that the system PCP222-3Ti is not completely reversible at normal temperatures and may show 100% desorption at a higher temperature. For a practical hydrogen storage material, it is necessary that the host material keep its structural integrity above the average desorption temperature. To examine the structural integrity of the host material (PCP222-3Ti), we carried out MD simulations with the host material at 300 K and significantly above room temperature (500 K) using ADMP. With a time step of 1 fs, the ADMP-MD simulations are carried out for 1 ps. Figure 11(b) depicts the time-variation trajectory of the average distance between the Ti atom and the carbon atoms of the PCP222 benzene rings. We observe that the PCP222-3Ti maintains its structural stability at 500 K, and the C-C and C-H bond distances essentially remain unchanged. The time evolution trajectories of the average distance between the Ti and C atoms of PCP222 were noticed to oscillate about the mean value (2.32 A) with little variance. This illustrates that the host material's structural stability is maintained significantly above room temperature. In light of this, we believe that PCP222-3Ti can be a viable option as a hydrogen storage material. ## 4 Conclusion In this study, we investigated the thermodynamical stability and hydrogen storage properties of Ti-functionalized [2,2,2]paracyclophane using density functional theory. The Ti atoms are strongly bonded to the PCP222 via the Dewar mechanism, and no clustering of Ti atoms over PCP222 was noticed. The first H\({}_{2}\) molecule is chemisorbed with binding energy of 1.797 eV, while the
2310.04766
Quantifying Independence Redundancy in Systems: Measurement, Factors, and Impact Analysis
Redundancy represents a strategy for achieving high availability. However, various factors, known as singleness factors, necessitate corresponding redundancy measures. The absence of a systematic approach for identifying these singleness factors and the lack of a quantifiable method to assess system redundancy degrees are notable challenges. In this paper, we initially present methodologies to evaluate system redundancy, specifically quantifying independent redundancy in complex systems. This approach considers the interactions among various factors that influence redundancy, treating different factors as distinct dimensions to comprehensively account for all potential impact factors. Additionally, we propose methodologies to calculate the Independent Redundancy Degree (IRD) when combining or removing system components, offering insights into system resilience during integration or separation. Furthermore, we broaden the scope of known singleness factors by exploring time and space dimensions, aiming to identify additional related singleness factors. This process helps us pinpoint critical system aspects that necessitate redundancy for enhanced fault-tolerance and reliability. The verification results underscore the influence of different dimensions and reveal the significance of addressing weak dimensions for enhancing system reliability.
Hong Su
2023-10-07T10:00:56Z
http://arxiv.org/abs/2310.04766v1
# Quantifying Independence Redundancy in Systems: Measurement, Factors, and Impact Analysis ###### Abstract Redundancy represents a strategy for achieving high availability. However, various factors, known as singleness factors, necessitate corresponding redundancy measures. The absence of a systematic approach for identifying these singleness factors and the lack of a quantifiable method to assess system redundancy degrees are notable challenges. In this paper, we initially present methodologies to evaluate system redundancy, specifically quantifying independent redundancy in complex systems. This approach considers the interactions among various factors that influence redundancy, treating different factors as distinct dimensions to comprehensively account for all potential impact factors. Additionally, we propose methodologies to calculate the Independent Redundancy Degree (IRD) when combining or removing system components, offering insights into system resilience during integration or separation. Furthermore, we broaden the scope of known singleness factors by exploring time and space dimensions, aiming to identify additional related singleness factors. This process helps us pinpoint critical system aspects that necessitate redundancy for enhanced fault-tolerance and reliability. The verification results underscore the influence of different dimensions and reveal the significance of addressing weak dimensions for enhancing system reliability. Independence Redundancy, System Reliability, Singleness Factors, Independence Redundancy Degree ## I Introduction The evolution of computing technologies has led to an increasing demand for highly reliable and resilient systems that can provide uninterrupted services [1][2]. In the early days of computing, a single hardware component and software program were sufficient to execute tasks. However, the susceptibility of both hardware and software to errors and failures posed significant challenges to system reliability. A single hardware or software outage could render the entire system incapable of providing services. To address this issue, redundancy techniques were introduced to enhance system robustness. Redundancy strategies have become instrumental in improving system reliability. One common approach is the use of P2P (peer-to-peer) hardware or networks, where multiple redundant hardware components are employed to mitigate random errors. While this approach offers a level of fault tolerance, it is not foolproof. For example, if all redundant hardware components are placed in the same physical location and a catastrophic event such as an earthquake occurs, the entire system may still fail. To address this limitation, the concept of geographical redundancy emerged, where redundant hardware is distributed across different cities or even countries to ensure system resilience [3]. Similarly, redundant software solutions have been proposed to address issues related to single software failures. In cases where a piece of software contains a critical bug that causes it to crash when processing specific data, deploying multiple copies of the software on separate hardware does not solve the problem, as all copies will encounter the same error. To overcome this, independent redundant software instances are used to process user services. If one software instance encounters a fatal bug, the other independent instances can continue to function normally. While redundancy techniques have made significant progress, several crucial questions remain unanswered. 
These inquiries encompass comprehending the interconnections between various redundancy methods and assessing the attainable extent of system redundancy. It is imperative to ascertain the feasibility of establishing a completely redundant system that can deliver services seamlessly under various conditions, encompassing hardware, software, and communication network failures. Moreover, there is a need to develop robust metrics for quantifying the degree of redundancy within a system and to formulate methodologies to effectively compare redundancy levels across different systems. In this paper, we aim to address these questions and provide a formal definition of redundancy and independent redundancy. We will analyze the relationships among various redundant methods and their corresponding impact factors. Additionally, we will develop methods to quantify redundancy and propose techniques to compare the redundancy degrees of different systems. By delving into these aspects, we seek to enhance the understanding of redundancy in complex systems and offer insights into designing highly reliable and resilient systems that can sustain continuous service provision under diverse conditions. Fig. 1: Components of a non-redundant system. In this configuration, software, hardware, and other elements (on the left) are interconnected on a one-to-one basis to constitute a system (on the right). The main contributions of this paper are as follows: (1) We present a novel method to quantitatively measure independent redundancy in complex systems, taking into account the interrelations between different factors affecting redundancy. This method enables a precise evaluation of a system's ability to maintain functionality even when some of its components fail. (2) We propose a method to calculate the Independent Redundancy Degree (IRD) when combining two systems, taking into account the independence of redundancy paths. This method allows us to determine the overall redundancy of a combined system. Meanwhile, this method can also be applied to assess the IRD when removing some redundant components from a system. By utilizing this approach, we gain valuable insights into the system's resilience during system combination or separation. (3) We introduce a novel method to extend the scope of singleness factors in complex systems. These singleness factors represent factors that form a single processing path and necessitate redundant paths for improved reliability. By expanding these factors in both time and space dimensions, we can identify more critical aspects of the system that require redundancy. Addressing these extended singleness factors enables us to enhance the system's fault-tolerance and reliability by providing appropriate redundant paths. The remaining sections of this paper are organized as follows. In Section II, we provide an overview of related work in the field of redundancy and fault tolerance. Section III introduces the concept of Independent Redundancy Degree (IRD) and presents our method for calculating IRD when combining two systems. In Section IV, we explore the impact factors of system redundancy. Section V discusses how to measure the independent redundancy degree of a system. In Section VI, we present the verification results and conduct corresponding analyses. Finally, Section VII concludes this paper with a summary of our contributions and potential future research directions. 
## II Related Work The exploration of system reliability and redundancy has captivated the attention of researchers and engineers across diverse domains, encompassing reliability engineering, fault tolerance, and system design. Throughout time, numerous methodologies have emerged to appraise and bolster the reliability of intricate systems. In this segment, we delve into pivotal contributions within the realm of redundancy analysis, surveying the strides taken to appraise system reliability advancements, encompassing aspects like hardware, software, and different layers of redundancy. ### _Redundancy at Hardware Levels: P2P Technology and its Scope_ Redundancy strategies implemented at the hardware level have witnessed advancements, notably with the utilization of Peer-to-Peer (P2P) technology [4]. Within the realm of hardware redundancy, P2P systems have emerged as a prominent approach. In these systems, computational tasks are distributed across peer nodes, strategically designed to reduce reliance on centralized nodes [5]. This decentralized distribution of tasks among peers aims to mitigate the potential vulnerabilities associated with a single point of failure. Each individual peer node is equipped to function as both a server and a client, ensuring that even if a particular node encounters failure, other functional peers can seamlessly take on similar tasks. The P2P paradigm extends its utility to various domains, such as load distribution among peers [6] and the facilitation of resource sharing among participants [7], encompassing activities like sharing multimedia files [8]. However, it's important to note that the existing P2P approach primarily manifests as a hardware-level paradigm. While peer nodes collectively execute tasks in a distributed manner, the underlying software infrastructure often exhibits centralized features. For instance, certain P2P systems incorporate centralized software components to coordinate the functions of other software instances or to manage task scheduling, resembling traits of centralized control. Moreover, certain software implementations in P2P systems involve identical or similar software copies, such as those seen in protocols like BitTorrent [9]. These characteristics collectively contribute to the persistence of centralized attributes within the software layer, even within hardware-level redundancy strategies. ### _Redundancy at the Software Level: Enhancing Fault Tolerance_ In the domain of software-level redundancy, conventional strategies have primarily focused on integrating duplicate components into critical sections of a system. The central objective of these methods is to mitigate the potential impact of failures by ensuring the availability of backup components. Notable techniques include N-version programming (NVP) [10] and Triple Modular Redundancy (TMR) [11], both widely adopted to enhance fault tolerance in safety-critical systems. These approaches typically entail creating multiple redundant instances of the same software component, with their collective outputs influencing decision-making. It's worth noting that approaches like NVP and TMR often incorporate a central entity responsible for task scheduling or final computation adjustment, and the number of redundant software instances remains fixed (with N representing a fixed value, such as 2 or 3). In more recent advancements, Su et al. [12] introduced an innovative perspective by proposing independent software that operates without requiring central coordination. 
Furthermore, the number of redundant software instances is not predetermined, allowing new instances to join or depart, resembling a P2P software model. This independent software paradigm diverges from traditional methods, relinquishing centralized control and thereby enhancing the system's resilience to failures. These approaches have shown significant potential in bolstering system reliability. However, they may have limitations in comprehensively assessing various dimensions of redundancy, encompassing critical components like software and communication layers, which play pivotal roles in modern intricate systems. While significant progress has been made in enhancing reliability, there often remains a need for a more holistic evaluation that considers diverse facets of redundancy to ensure a robust system architecture. ### _Diverse Redundancy Strategies Across Different Dimensions_ In the context of redundancy, various dimensions of a system's architecture come into play. Complex systems consist of multiple components, each susceptible to unique types of failures. Consequently, diverse redundancy technologies have emerged to address specific aspects of reliability enhancement. For instance, to fortify power supply redundancy, the use of dual power sources has been suggested as a solution [13]. This strategy ensures uninterrupted operation even if one power source fails. Similarly, to mitigate the risks posed by localized disasters, distributing machines across different geographical locations has been proposed [14]. By dispersing components, the system's resilience to regional failures is amplified. To combat potential service disruptions caused by a single cloud server supplier failure, experts recommend employing multiple cloud providers [15]. This strategy minimizes the impact of supplier-related outages. Furthermore, the importance of communication redundancy has been highlighted. Proposals advocate for the incorporation of redundant communication networks to ensure seamless data transmission. Despite the progress in various redundancy dimensions, there remains a gap in understanding the interconnectedness of these strategies and their collective impact on overall system reliability. The relationship between these factors lacks comprehensive analysis, and methods to identify additional such factors from existing knowledge are currently absent. As systems span multiple aspects, a deeper exploration is warranted to uncover their interdependencies and to develop effective redundancy strategies that holistically address diverse dimensions of potential failure. ### _Comparison_ These prior works have introduced a range of redundant methods to enhance system reliability, forming the foundation for the analysis in this paper. However, effective redundancy strategies must encompass a systematic approach to prevent potential single processing paths. These studies often overlook the exploration of interrelationships between different redundancy methods. Furthermore, assessing and comparing the quality of redundancy among various systems holds significance for guiding system design, a crucial aspect not extensively addressed in existing research. Contrarily, our approach distinguishes itself by elucidating the interplay between different redundancy aspects and striving to identify additional factors that impact redundancy. Moreover, we introduce the concept of independent redundancy degrees to comprehensively assess system reliability. 
By dissecting the reliability assessment into distinct dimensions, our method facilitates a more detailed analysis of the system's performance across a spectrum of failure scenarios. ## III Independent Redundancy in Systems In this section, we explore the concept of independent redundancy in systems, a critical factor in ensuring system robustness and fault-tolerance. Here, a system refers to a combination of hardware, software, and other essential components responsible for processing user tasks. The term "components" is used to emphasize their role as integral parts of the entire system. Despite the efforts to reduce errors, nearly all components in a system may be susceptible to failures, leading to potential system outages. To mitigate such risks, redundancy can be employed by incorporating multiple backup components. By doing so, even if one or some processing paths fail, the system can continue to function. Redundancy can be achieved through diverse hardware configurations or software arrangements, creating multiple redundant paths to ensure operational reliability. For instance, in a P2P network, hardware-layer redundancy can be achieved by employing different nodes, while the software layers might lack redundancy. ### _Definition of Independent Redundancy_ **Redundancy** refers to a system providing multiple paths to process a task. We can formally describe a redundant system using (1), where \(n\) (greater than 1) represents the total number of processing paths, and each path is capable of completing the task. These paths collectively form a set denoted as \(PATH\). \[PATH=\{path_{1},...,path_{n}\} \tag{1}\] , where \(path_{i}\) represents different processing paths, and \(n\) is the total number of processing paths. **Independent redundancy** entails the existence of various processing paths, each uniquely processing the task, rather than being copies or similar versions. As these paths are independent, they encounter errors autonomously. We can formally express this through the addition of the independence condition (2) to the definition of redundancy (1). Redundancy is utilized to mitigate the impact of random errors that may occur in individual components or methods, as such errors cannot always be predicted. However, if redundant components are similar or identical copies of each other, they may encounter the same error. Therefore, the concept of independent redundancy focuses on avoiding similar errors among identical copies. By ensuring that redundant components are independent, the system becomes more resilient and less susceptible to widespread failures. \[\begin{array}{l}\forall\,path_{i},path_{j}\in PATH:\\ p_{error}(path_{i})\cap p_{error}(path_{j})=0\end{array} \tag{2}\] , where \(p_{error}(path_{i})\) represents the probability of \(path_{i}\) encountering an error. The condition asserts that for any two independent processing paths \(path_{i}\) and \(path_{j}\) in \(PATH\), their error occurrences do not overlap, ensuring autonomous error behavior. Determining the independence of two processing paths can indeed be a challenging task. To simplify the assessment of independent redundancy, we shift the focus from independent processing paths to examining the independence of individual components. Although this method is not a necessary and sufficient condition for independent processing paths (referring to Section III-A1), it is easier to implement and provides valuable insights (referring to Section III-A2). 
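To make the path-level condition concrete before moving on to components, the toy sketch below reads \(p_{error}(path_{i})\) set-wise as the collection of error events that can take a path down: two paths are treated as independent when those sets do not intersect, and the system survives a failure scenario when at least one path avoids every failed event. All component and event names here are invented for illustration.

```python
# Toy reading of Eqs. (1)-(2): each path is described by the error events it is exposed to.
from itertools import combinations

paths = {
    "path_1": {"x86_node_A", "software_vendor_1", "power_grid_east"},
    "path_2": {"arm_node_B", "software_vendor_2", "power_grid_west"},
    "path_3": {"x86_node_A", "software_vendor_2", "power_grid_west"},  # shares hardware with path_1
}

def independent(p, q):
    """Condition (2), set-wise: the two paths share no error event."""
    return not (paths[p] & paths[q])

for p, q in combinations(paths, 2):
    shared = paths[p] & paths[q]
    print(p, q, "independent" if not shared else f"share {sorted(shared)}")

def survives(failed_events):
    """The system keeps serving if at least one path avoids every failed event."""
    return any(events.isdisjoint(failed_events) for events in paths.values())

print(survives({"x86_node_A"}))                              # True: path_2 is unaffected
print(survives({"software_vendor_1", "software_vendor_2"}))  # False: every path is hit
```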
This approach introduces the concept of **independent processing components**, where each component is produced independently, meaning it is designed, implemented, and verified separately. In the case of redundant hardware, for instance, achieving hardware independence involves utilizing CPUs of different architectures for different nodes and employing software from different manufacturers. This approach ensures that even if one hardware component fails, the redundancy provided by independent hardware elements allows the system to continue functioning. By diversifying the hardware components, the system becomes more resilient and less susceptible to widespread outages. Consequently, we can assess the independence of two components through the correlation relationship between them. When two modules (\(module_{i}\), \(module_{j}\)) are independent, their correlation is 0, as expressed in (3). \[corr(module_{i},module_{j})=0 \tag{3}\] This correlation measure provides valuable information about the redundancy in the system, helping us understand the level of independence between different components and processing paths. While it may not be the sole determinant of independence, it aids in quantifying the degree of independent redundancy and its impact on system performance. #### III-A1 Proof for the Challenging Nature of Determining Independence of Processing Paths (1). Different Kinds of Tasks: Consider a system with a diverse set of tasks, each with its unique requirements and processing demands. For every task, there can be numerous possible processing paths to achieve the desired outcome. Proving that two processing paths are entirely different across all possible tasks becomes infeasible due to the vast number of potential scenarios. (2). NP-Hard Problem: The problem of determining whether two processing paths are independent for all possible tasks is classified as an NP-hard problem. NP-hard problems are known for their computational complexity, and as the number of tasks and processing paths increases, the difficulty of proving independence grows exponentially. #### III-A2 Explanation for Shifting Focus to Independent Components (1). Parameters and Characteristics: Instead of attempting to prove independence for entire processing paths, we can focus on examining the parameters and characteristics of individual components involved in the system. Components, such as software binaries or design graphs of hardware, have distinct attributes that can be comparatively analyzed. (2). Easier Implementation: Assessing the independence of individual components is relatively more manageable than trying to prove the independence of entire processing paths. This approach allows for a more practical implementation and reduces the computational complexity associated with the task. (3). Valuable Insights: Although proving the independence of components does not guarantee independence of processing paths in all cases, it provides valuable insights into the system's redundancy and fault-tolerance capabilities. Ensuring independence at the component level enhances the system's resilience and contributes to its overall robustness. ### _Redundancy and Singleness_ Redundancy involves the use of multiple components or methods to accomplish a task, ensuring a backup mechanism is available in case of failure. In contrast, singleness refers to situations where a task is achieved using only a single component or method, leaving no alternative paths in case of failure. 
Various manifestations of singleness can be observed: (1). Singleness Across Different Layers: Singleness can exist at both the hardware and software layers. For hardware, having a single computer or server represents a hardware singleness scenario. Similarly, using the same or similar software across the system creates a singleness situation at the software level. (2). Singleness within Parts of the Path: In certain cases, even when multiple hardware components are utilized, the scheduling of these components might be centralized, leading to a singleness scenario within parts of the processing path. (3). Singleness throughout the Entire Path: Running a C program on a single computer exemplifies singleness across both hardware and software layers. Any failure, such as a hardware shutdown or software malfunction (e.g., a null pointer), will result in a complete system outage. All contributing factors to singleness scenarios are termed **singleness factors**. Figure 2 illustrates different singleness situations. In part "a," two paths exhibit coherence as they share a segment of the processing path (in the yellow box). In part "b," while no segment is identical, they follow a similar design trend, resulting in some coherence. Part "c" displays even less coherence, with no shared processing segments or significant design similarities. Singleness factors have the following features: (1) Singleness factors can persist even when a system is composed of different nodes (i.e., different hardware). For instance, consider a blockchain [16] comprised of different nodes. Despite having multiple nodes, the system's smart contract (software on the blockchain) may operate in a singleness Fig. 2: Different coherence of processing paths format, with only one path for processing tasks. As a consequence, any code bug occurring in the smart contract would affect all nodes simultaneously, rendering the application non-functional. In such cases, there are no redundant paths to switch to, leading to a complete failure. (2) Certain factors may exhibit redundancy at one scope while presenting singleness at another scope, such as a larger or smaller scope. The scope can refer to the space or time frame. For example, an application may utilize 100 machines to achieve hardware redundancy within the same laboratory. Despite the redundancy of multiple machines, a major natural disaster, such as a big earthquake, could damage all machines simultaneously, causing the entire application to cease functioning. In this scenario, the hardware exhibits redundancy within the laboratory scope but becomes a singleness factor when considering the broader scope of the natural disaster's impact. ## IV Impact Factors on Independence Redundancy Our aim is to achieve as high redundancy as possible. In this section, as try to analyze the factors that affect the redundancy, and how to achieve high redundancy as possible. ### _Not Possible for a Full Redundancy_ A fully redundant system is one that can consistently provide at least one processing path, regardless of the condition or failures that may occur in its peers (e.g., components going out of work unintentionally or being deliberately shut down). However, achieving a truly fully redundant system is practically unattainable due to various singleness factors, some of which are unknown or unpredictable. For instance, a hacker might discover a novel method to compromise the entire system, or an unexpected earthquake could lead to the destruction of critical hardware components. 
These unforeseen factors introduce vulnerabilities that can undermine the attainment of complete redundancy in a system. As a result, while striving for redundancy is essential for system robustness, achieving absolute full redundancy remains a challenging goal. #### Iv-A1 proof that achieving full redundancy in a system is an NP-hard problem To prove that the problem is NP-hard, we need to demonstrate a polynomial-time reduction from another known NP-hard problem to the full redundancy problem. The proof is to use reduction from Subset Sum Problem method. (1). Subset Sum Problem: The Subset Sum problem is a well-known NP-hard problem. Given a set of positive integers and a target sum, the task is to determine if there exists a subset of the integers whose sum equals the target sum. (2). Formulation of Full Redundancy Problem: We can formulate the full redundancy problem as follows: Given a system configuration consisting of components and their interconnections, we want to determine if there exists at least one processing path that remains operational under any possible failure scenario, including peer failures and shutdowns. (3). Construction of the Reduction: To prove that the full redundancy problem is NP-hard, we will construct a polynomial-time reduction from an instance of the Subset Sum problem to an instance of the full redundancy problem. (4). Reduction Construction: Given an instance of the Subset Sum problem with a set of positive integers and a target sum, we create a corresponding system configuration for the full redundancy problem. We create a set of components, each representing one integer from the Subset Sum problem. We also create a target processing path that corresponds to the target sum. (5). Completeness and Correctness: The reduction ensures that the original Subset Sum problem has a solution if and only if the constructed system configuration in the full redundancy problem can provide at least one processing path in all conditions. If there exists a subset of integers whose sum equals the target sum in the Subset Sum problem, it implies that the corresponding components in the full redundancy problem can form a processing path that remains operational under any failure scenario. Conversely, if there is no such subset sum in the Subset Sum problem, it means that there is no way to create a processing path in the full redundancy problem that can provide full redundancy in all conditions. (6). Polynomial Time Complexity: The reduction from Subset Sum to full redundancy can be performed in polynomial time. Constructing the system configuration and checking if there exists a valid processing path can be done in polynomial time with respect to the size of the input. (7). Conclusion: Since the Subset Sum problem is NP-hard and we have shown a polynomial-time reduction from Subset Sum to the full redundancy problem, the full redundancy problem is also NP-hard. In the above, we have provided a formal proof that achieving full redundancy in a system is an NP-hard problem using the method of reduction from the Subset Sum problem. This establishes the computational complexity of the full redundancy problem and highlights its infeasibility for finding efficient solutions for all possible system configurations and failure scenarios. Although a full independent redundancy cannot achieved, we can achieve a currently available full redundancy system. A **currently available full redundancy system** is a system with redundency for all known singleness factors. 
Our goal is then to find as many singleness factors as possible. Although, it is difficult to find unknown factors which may be singles in a system, we can find more singleness factors from already known singleness facotrs and take according redundant methods. The methods is to use the scope related analyses method to extend more singleness facotrs from already known facotrs. ### _Scope-Related Analysis of Singleness Factors_ The scope analysis involves examining factors in both the space and time dimensions. #### Iv-B1 Factors Related to Different Space Scores Suppose we identify that the singleness factor is caused by using a single computer. We can introduce redundancy by using several computers, but these computers might be located in a small area. We can then extend the scope by using several computers in a specified area. Each computer with its location can be regarded as an area, and these areas may be located in a small region that could be affected by a natural disaster, such as an earthquake. We can further extend the scope to different cities, and even consider different countries or planets. This illustrates that the scope can vary widely and must be carefully analyzed to ensure redundancy is effective. The space-related singleness factors encompass the concept of different scope, which can be adjusted to identify more singleness factors. When one singleness factor is identified, its scope can be expanded or reduced to explore additional singleness scenarios. We define the concepts of **scale big** and **scale small** related to scope. If we identify that the singleness factor is caused by using only one computer, one approach is to introduce several computers for redundancy. However, if these computers are located in a small space, such as in a laboratory, it may still result in a singleness scenario within this limited scope. To address this, we can distribute the machines to different locations, such as in different cities, to increase the scope and achieve greater redundancy. However, the scope cannot be extended indefinitely, as there should be a practical limit where the loss possibility becomes sufficiently small. The concept of scale big can be expressed as shown in (4), where \(SF\) represents the set of singleness factors. If factor \(f\) is identified as a singleness factor, and there exists a broader factor \(bf\) that includes \(f\) within its scope, then \(f\) belongs to the set \(SF\) of singleness factors. \[\begin{split} f\in SF\\ \exists bf\to f\in bf\text{, }bf\in SF\end{split} \tag{4}\] The concept of scale small can be expressed as shown in (5), where \(SF\) represents the set of singleness factors. If factor \(f\) is identified as a singleness factor, and there exists a narrower factor \(sf\) that is included within \(f\)'s scope, then \(sf\) also belongs to the set \(SF\) of singleness factors. For example, a computer can be considered a larger scope compared to its individual components, such as CPUs and memories. If all machines use the same CPU, they can encounter the same faults simultaneously, resulting in singleness issues at the CPU level. \[\begin{split} f\in SF\\ \exists sf\to sf\in f\text{, }sf\in SF\end{split} \tag{5}\] By exploring different scopes, we can uncover various singleness factors that might be present at different spatial levels. Understanding these scope-related singleness factors is crucial for designing effective redundancy measures and fault-tolerant systems. 
Two typical space-related singleness factors are: (1) Layer-Related Singleness Factors: This is a special scope-related factor where a module can be divided into several layers, and each layer may introduce singleness. For example, an application can be divided into software layer and hardware layer. Redundancy in the hardware layer does not automatically provide redundancy at the software layers unless the copies of each hardware are independent. (2) Dependence-Related Singleness Factors: Instead of altering the scope, some factors are dependence-related, meaning they depend on another known factor. For example, software identified as a singleness factor may depend on specific libraries, running virtual machines, or operating systems, which can also contribute to singleness. Similarly, in hardware, if a machine is identified as a singleness factor, its power supply could be considered a dependence-related factor. These factors exist in parallel and are not directly related to each other. This kind of related factor can be expressed as in (6). \[\begin{split}\text{when }f\in SF\\ \exists df\to f\text{ }\text{ {\bf dep} }df\text{, and }df\in SF\end{split} \tag{6}\] #### Iii-B2 Factors Related to Different Time Scopes Time-related singleness factors pertain to issues that can cause all redundant processing paths to stop at the same time. One such example is when the licenses of software or hardware components expire simultaneously. Formally, we can describe time-related factors as shown in (7). In this equation, \(Redun\) represents the set of redundant components from \(r_{1}\) to \(r_{n}\). If all these factors have the same fault time (\(fault^{time}\)), they become coherent rather than independent. For instance, if we identify that the currently used software may cause singleness, we can introduce several independent software to achieve redundancy. However, we must also consider whether the licenses of those software will expire simultaneously, potentially leading to simultaneous outages. In such cases, even with redundancy, all the redundant paths might be rendered unusable at the same time, jeopardizing the system's fault tolerance. This highlights the importance of considering time-related factors in redundancy design. \[\begin{split}\text{when }Redun=\{r_{1},...,r_{i},...,r_{n}\}\\ fault^{time}_{r_{1}}=...=fault^{time}_{r_{i}}=...=fault^{time}_{r_{n}} \end{split} \tag{7}\] ### _Redundancy Implementation Design Example: An Independent Redundancy Hardware-Software System_ In this section, we present an example of the design consideration of independent redundancy for a system composed of both hardware and software components. Independence design of computing hardware: To achieve independent redundancy in the hardware, we need to consider the following basic requirements: (1). Use different computing nodes with diverse types of CPUs and memories to avoid similarity in the hardware components. (2). Ensure that computing nodes are not located on a single server to prevent a single point of failure. (3). Ensure that the power supply of computing nodes does not depend on a single power supply line to avoid power-related singleness factors. (4). Place the computing nodes in different cities to prevent potential simultaneous regional outages. (5). Be cautious about the hardware's expiration date, especially in rental scenarios, to avoid all hardware becoming non-functional at the same time. This is an example of time-related factors. 
Independence design of software: Achieving independent redundancy in software involves the following considerations: (1). Use different software solutions capable of processing user tasks independently and sourced from different software companies to avoid using similar software binaries. (2). Avoid using software that relies on the same libraries to prevent potential singleness factors at the software level. (3). Avoid software that depends on the same runtime virtual machine to ensure independence in the software execution environment. (4). Prefer using different operating systems for various software components to avoid operating system-related singleness factors. (5). Be mindful of software license deadlines to prevent multiple software instances from expiring simultaneously. This is another example of time-related factors. Independence design of communication: For communication, the following steps are crucial: (1). Avoid using the same communication network to ensure redundancy. If one network faces an outage, the system can switch to an alternative network. The network failure of Guangdong Province for about several hours is an example. (2) To avoid using the same network devices for all hardware or software components. For instance, if all hardware nodes are connected to a single router, a failure or shutdown of that router could lead to a complete outage of the entire system. To prevent this singleness factor, we should employ multiple routers or network devices and distribute the connections of hardware or software components across them. (3). Use diverse types of networks, such as wired and wireless, to avoid singleness factors related to network type. (4). Be cautious about the expiration dates of network bandwidth purchases to prevent all networks from expiring simultaneously. This is an example of time-related factors. The above design considerations aim to address known singleness factors. However, it is essential to acknowledge that unknown factors (either yet to be discovered or unforeseen) may still exist, and achieving full independent redundancy may be challenging. Nevertheless, by implementing the described design, we can achieve a currently available independent redundancy system, which enhances system robustness and fault-tolerance. ## V Measurement Metric: Independence Redundancy Degree (IRD) To quantify the degree of independence redundancy in a system, we propose the Independence Redundancy Degree (IRD) as a measurement metric. The IRD takes into account different singleness factors and their corresponding redundancy methods. ### _Independence Redundancy Degree (IRD)_ The factors that cause a system to have a single processing path can vary, and each of these factors requires different methods to establish redundant processing paths. Therefore, we introduce dimensions (\(d_{i}\)) to differentiate different singleness factors. For the redundancy method, the value of each dimension (\(v^{d_{i}}\)) represents the probability that all redundant paths of this dimension experience an outage. The outage probability of each redundancy path (\(p_{outage}\)) can be determined from actual outage occurrences. If these redundancy paths are independent, the value of the dimension (\(v^{d_{i}}\)) can be expressed as shown in (8). Thus, one dimension can be described as \(v^{d_{i}}\times d_{i}\). \[v^{d_{i}}=1-\prod_{j=1}^{n}p_{outage\_j}^{d_{i}} \tag{8}\] The IRD of a system can then be represented as the combination of each dimension. 
Equation (9) illustrates the IRD of system \(k\) with \(m\) dimensions. The IRD represents the probability that there is at least one available redundant path to process a user task for each dimension, and we denote this as \(d_{i}\), with its value referred to as the live probability of this dimension. \[IRD_{k}=\sum_{i=1}^{m}v^{d_{i}}\times\overline{d_{i}} \tag{9}\] In case there is no available redundant path to process a user task (referred as outage) in dimension \(k\), we can use Equation (10), with the value in it referred to as the outage probability. The relationship between the outage probability (\(\overline{v}^{d_{i}}\)) and live probability (\(v^{d_{i}}\)) for dimension \(d_{i}\) is shown in Equation (11). \[IRD_{k}=\sum_{i=1}^{m}\overline{v}^{d_{i}}\times\overline{d_{i}} \tag{10}\] \[\overline{v}^{d_{i}}+v^{d_{i}}=1 \tag{11}\] ### _Combination of Independent Systems_ The previous analysis focused on individual independent redundancy systems. However, there are situations where it becomes necessary to combine two or more independent redundancy systems. In such cases, calculating the Independence Redundancy Degree (IRD) of the combined system becomes crucial. Combining the values of one dimension from two IRDs poses a challenge since the paths in individual redundancy systems are independent, but when combined, their paths could become not independent. For example, let's consider the two systems \(IRD_{1}\) and \(IRD_{2}\); if they place their computer hardware in the same city, the dimensions related to different cities become coherent, indicating dependency. In such cases, the combination function **cf**(0) can be employed to calculate the value of a dimension. The use of the combination function allows us to properly handle the dependencies and accurately determine the IRD of the combined system. To accommodate the combination of independent redundancy systems and handle their potential interdependencies, we introduce an additional component in the IRD calculation. This component is the combination function, denoted as **cf**(0). The **cf**(0) function has the following requirements: (1) To ensure the correctness of the **cf(0)** function, it must take sufficient parameters. For instance, for geographical redundancy factors, it should accept geographical parameters as inputs. (2) The value of the dimension is the first parameter passed to **cf**(0). In (12), it is denoted as \(n\). With the introduction of the **cf**(0) function, the format of the IRD changes from (9) to (12). In other words, (9) can be considered the simplified format of (12). The parameters used in the **cf0** function can vary depending on the specific combination of redundancy factors. For instance, when combining redundancy factors related to geographical locations, we need to determine whether they are in the same place. On the other hand, when combining two hardware redundancy factors, their similarity should be taken into account. The new expression for the IRD, considering the **cf0** function, is given by: \[IRD=\sum_{i=1}^{m}\textbf{cf}^{i}(v^{d_{i}},\{p_{1}^{i},...,p_{i}^{i},...,p_{n} ^{i}\})\times d^{i} \tag{12}\] , where \(d_{i}\) represents the dimension, \(\textbf{cf}^{i}\) is the combination function for dimension \(d_{i}\), and \(v^{d_{i}}\) is the current value of this dimension. Let's discuss the operations for independence redundancy degree, specifically the add and subtract operations. 
#### Iii-B1 Add Operation of Independence Redundancy Degree When we combine two or more systems into a larger system, the corresponding independence redundancy degree should be adjusted accordingly. This operation is known as the add operation. The add operation is performed one dimension at a time. The **cf0** function is utilized to calculate the final value of each dimension. In the equation (13), we show the process of adding two IRDs, \(IRD_{1}\) and \(IRD_{2}\), to obtain the combined IRD \(IRD_{3}\). In this process, both the values of each dimension and the parameters will be adjusted. \[\begin{split} IRD_{1}&=\sum_{i=1}^{m}\textbf{cf}^{ i}(n_{1}^{i},\{p_{1}^{i},...,p_{i}^{i},...,p_{r}^{i}\})\times d^{i}\\ IRD_{2}&=\sum_{i=1}^{m}\textbf{cf}^{i}(n_{2}^{i},\{ p^{\prime i}_{1},...,p^{\prime i}_{i},...,p^{\prime i}_{s}\})\times d^{i}\\ IRD_{3}&=IRD_{1}+IRD_{2}\\ &=\sum_{i=1}^{m}\textbf{cf}^{i}(n_{3}^{i},\{p\prime i_{1}^{i},..., p\prime i_{i}^{i},...,p\prime i_{t}^{i}\})\times d^{i}\end{split} \tag{13}\] In the equation above, the value of each dimension in the combined IRD, \(IRD_{3}\), is calculated using the combination function **cf0**(), rather than being directly added. The parameters of the **cf0** function can be a collection of the parameters from \(IRD_{1}\) and \(IRD_{2}\), and some parameters may be merged. This is why the length of \(IRD_{3}\) is denoted as \(t\), which can be less than the sum of the lengths of \(IRD_{1}\) and \(IRD_{2}\) (\(r+s\)). This process ensures that the combined IRD accounts for the interdependencies and interactions between the redundancy systems being added. The parameters used in the **cf0** function may vary depending on the specific combination of redundancy factors being considered. These parameters help identify and handle the potential dependencies between the redundancy paths, ensuring an accurate calculation of the combined IRD. #### Iii-B2 Subtraction Operation of Independence Redundancy Degree Similarly, the subtraction operation of independence redundancy degree involves removing redundancy factors from the system, and it also utilizes the **cf0** function. When we remove some redundancy factors from the system, such as for cost-saving purposes, the independent redundancy degree needs to be adjusted accordingly. This operation is known as the subtract operation. Similar to the add operation, the subtract operation is carried out one dimension at a time. The **cf0** function is used to calculate the final value of each dimension in the adjusted IRD. While space constraints prevent us from presenting the detailed equations for the subtract operation here, it follows a similar approach to the add operation. In the subtract operation, some parameters and values of dimensions may be modified or merged, depending on the specific redundancy factors being removed. The **cf0** function helps handle the interactions and dependencies between redundancy paths, ensuring an accurate calculation of the adjusted IRD after removing specific redundancy factors. The combination and subtraction operations of independence redundancy degree are essential for assessing and optimizing redundancy strategies in complex systems. These operations allow us to compare different redundant system configurations, evaluate their effectiveness, and make informed decisions about redundancy measures based on the resulting IRD values. Lastly, the question is how to compare two independent redundancy degrees. 
The comparison is based on the probability that there is at least one available redundant path for each dimension. ### _Comparing Independence Redundancy Degrees_ When we have two IRDs, it is crucial to compare which one has a higher redundancy level. For a system, if any dimension experiences an outage, the entire system may fail. Therefore, we aim to determine which IRD has the **minimum dimension**, rather than simply averaging the maximum dimension. The process of comparing \(IRD_{1}\) and \(IRD_{2}\) can be described as follows: (1) Initialize a variable \(i\) to keep track of the comparison number and set it to 1 initially. Also, initialize a variable \(max\) to record the comparison result and set it to empty initially. (2) Check whether only one of \(IRD_{1}\) and \(IRD_{2}\) lacks the \(i_{th}\) dimension. If so, set \(max\) to the IRD that does not have the \(i_{th}\) dimension. If both \(IRD_{1}\) and \(IRD_{2}\) lack the \(i_{th}\) dimension, set \(max\) to empty and proceed to step (5). (3) If both \(IRD_{1}\) and \(IRD_{2}\) have the \(i_{th}\) dimension, choose the dimension with the \(i_{th}\) minimum value. Suppose \(min_{1}\) corresponds to \(IRD_{1}\), and \(min_{2}\) corresponds to \(IRD_{2}\). If there are multiple values with the \(i_{th}\) minimum value, choose any one of them for comparison, as the other values will be used in subsequent iterations. (3a) If \(min_{1}<min_{2}\), it indicates that \(IRD_{2}\) has a higher level of independent redundancy, and consequently, set \(max\) to \(IRD_{2}\). The comparison is completed, and proceed to step (5). (3b) If \(min_{1}>min_{2}\), it implies that \(IRD_{1}\) has a higher level of independent redundancy, and thus, set \(max\) to \(IRD_{1}\). The comparison is completed, and proceed to step (5). (3c) If \(min_{1}\) and \(min_{2}\) are equal, then choose the second minimum value and perform the comparison until either (1) a different value is found, or (2) all values are equal. (4) Increment \(i\) by one and go back to step (2). (5) The comparison process is concluded, and the variable \(max\) contains the IRD with the higher level of independent redundancy. ``` 1:Initialize \(i\gets 1\) and \(max\leftarrow\) empty 2:while Not all dimensions compared do 3:if Only one of \(IRD_{1}\) and \(IRD_{2}\) lacks \(i_{th}\) dimension then 4: Set \(max\) to the IRD without the \(i_{th}\) dimension 5:elseif Both \(IRD_{1}\) and \(IRD_{2}\) lack \(i_{th}\) dimension then 6: Set \(max\) to empty 7:else 8: Find \(i_{th}\) minimum values \(min_{1}\) and \(min_{2}\) for \(IRD_{1}\) and \(IRD_{2}\), respectively 9:if\(min_{1}<min_{2}\)then 10: Set \(max\) to \(IRD_{2}\) 11:elseif\(min_{1}>min_{2}\)then 12: Set \(max\) to \(IRD_{1}\) 13:else 14: continue 15:endif 16:endif 17: Increase \(i\) by one 18:endwhile 19:End ``` **Algorithm 1** Comparison of \(IRD_{1}\) and \(IRD_{2}\) This algorithm allows us to determine which system has a higher level of independent redundancy, aiding in decision-making when designing and optimizing redundant systems. ## VI Verification In this section, we try to simulate a system with hardware, software and communication redundancy to verify results of different redundency methods. ### _Outage affected By different Dimensions_ #### Vi-A1 Environment 1) Hardware Simulation In this system, there are 30 computers distributed across 3 labs in a city. The entire system fails to function properly if all the computers are out of order. 
The system outage is influenced by three distinct layers: Layer 1: Each individual computer has a probability (\(p_{single}\), 0.5) of experiencing an outage, which could be due to reasons like hardware faults or being intentionally shut down. If all 30 computers have been affected by an outage, the probability of which is given by \(p_{single}^{30}\), then the entire system will not work. Layer 2: Similarly, each lab has a probability (\(p_{lab}\), 0.1) of not working, which could be due to reasons like lab-wide technical issues or hazards like fires. If all 3 labs are affected by non-functionality, the probability of which is given by \(p_{lab}^{3}\), then the entire system will not work. Layer 3: Beyond individual labs, the entire city also has a probability (\(p_{city}\), 0.01) of being in a state of disaster, which could involve events like earthquakes. If the city is in a disaster, the probability of which is given by \(p_{city}\), then the entire system will not work. To summarize, the system's functionality relies on the successful operation of its individual computers (Layer 1), the proper functioning of each lab (Layer 2), and the absence of a city-wide disaster (Layer 3). If any of these layers fail, the whole system will not work. 2) Software Simulation Due to cost considerations in each project, only three independent redundant software solutions are employed. The system outage is influenced by two distinct layers: Layer 1: Each individual software has a probability (\(p_{logic}\), 0.1) of experiencing an outage, which could result from reasons such as code errors or code vulnerabilities. If all three software instances are affected by an outage, with a combined probability of \(p_{logic}^{3}\), then the entire system will fail to operate. Layer 2: Additionally, each individual software has a probability (\(p_{time}\), 0.01) of outage due to time-related factors, like software license expiration. If all three software instances have expired, with a combined probability of \(p_{time}^{3}\), then the entire system will fail to function. 3) Communication Simulation The computers are distributed across 3 labs, and the system outage is influenced by three distinct communication-related layers: Layer 1: Assuming each lab has an outgoing router, each router has a probability (\(p_{router}\), 0.05) of experiencing an outage, which could result from reasons like hardware faults or intentional shutdowns. If all 3 routers are affected by an outage, with a combined probability of \(p_{router}^{3}\), then the entire system will not work. Layer 2: Since the labs are located within one city, the entire system has the potential to experience an outage if the network in the whole city is affected, with a probability of \(p_{city}\) (0.001). For instance, the outage of China Telegram in Guangdong Province serves as an example. Layer 3: Additionally, each individual lab has a probability (\(p_{time\_network}\), 0.01) of experiencing an outage due to time-related factors, such as software license expiration. If all 3 network instances have expired, with a combined probability of \(p_{time\_network}^{3}\), then the entire system will fail to function. These probabilities cannot always maintain at the same value, it may varies. To simulate this, we give a variety on the probability within the range of 0.1. #### Vi-A2 Results and Analysis We conducted four different test scenarios to explore the impact of different dimensions on system outages. 
The objective was to observe how the presence or absence of outages in specific layers affects the overall system's performance and reliability. By analyzing these scenarios, we gained insights into the contribution of individual dimensions to the system's resilience and availability. Case 1 ('All Possible Outage'): In this scenario, outages occurred in all aspects, including hardware, software, and communication layers. Case 2 ('No Hardware Outage'): This scenario simulated the absence of hardware outages. Outages occurred in the software and communication layers, but there were no hardware outages. Case 3 ('No Software Outage'): This scenario simulated the absence of software outages. Outages occurred in the hardware and communication layers, but there were no software outages. Case 4 ('No Communication Outage'): This scenario simulated the absence of communication outages. Outages occurred in the hardware and software layers, but there were no communication outages. We performed 100,000 tests, i.e., running the system 100,000 times. In each test round, each component generated a random number (\(p_{component}\), between 0 and 1). If \(p_{component}\) was less than the outage probability as described in (VI-A1), the corresponding component experienced an outage. The total number of outages for the entire system is shown in Figure 3. From Figure 3, we observed that with an increase in the number of test rounds, the number of outages also increased. Despite some fluctuations, the overall trend appeared linear. The "All Possible Outage" case had the highest outage count. For example, there were 144 system outages, whereas there were only 17, 102, and 122 outages for the "No Hardware Outage", "No Software Outage", and "No Communication Outage" cases, respectively, in the 100,000th test round. Moreover, the sum of the outage counts for the individual cases was more than twice the outage count of the "All Possible Outage" case. This was because multiple outages in different aspects (e.g., hardware and software) occurred during a single test round, resulting in only one system outage. The "No Hardware Outage" line consistently lay below the other three lines, indicating that the system outages were mainly caused by hardware issues. For instance, in the 60,000th test round, there were only 8 outages in the "No Hardware Outage" case, whereas there were 54, 69, and 81 outages in the "No Software Outage", "No Hardware Outage", and "No Communication Outage" cases, respectively. If one layer is not considered, the number of simulated outages is bigger than the actual outage counts. Thus, it is crucial to consider as many outage factors as possible. Conversely, if there are more system outages observed than expected, it may indicate the existence of unknown aspects that are not redundantly addressed. To further demonstrate the impact of different dimensions, we show the total outage counts in Figure 4. It is evident that the "No Hardware Outage" case has significantly fewer outages compared to the other cases. This case corresponds to the dimension with the least IRD, and applying the redundancy method to minimize this IRD can greatly improve the system's resilience to outages. ### _Analyzing Dimensional Weakness_ In this section, we aim to demonstrate the concept that the first dimension to be broken is the one that minimizes the Independent Redundancy Degree (IRD). To verify this, we conducted several test scenarios using different IRD instances. 
These instances, denoted as \(IRD_{1}\), \(IRD_{2}\), \(IRD_{3}\), up to \(IRD_{6}\), are shown in (14), with variations in dimension values \(\overline{d_{1}}\), \(\overline{d_{2}}\), and \(\overline{d_{3}}\). \[\begin{split} IRD_{1}&=0.001*\overline{d_{1}}+0.005* \overline{d_{2}}+0.009*\overline{d_{3}}\\ IRD_{2}&=0.002*\overline{d_{1}}+0.005*\overline{d_{2 }}+0.008*\overline{d_{3}}\\ IRD_{3}&=0.003*\overline{d_{1}}+0.005*\overline{d_{2 }}+0.007*\overline{d_{3}}\\ IRD_{4}&=0.004*\overline{d_{1}}+0.005*\overline{d_{2 }}+0.006*\overline{d_{3}}\\ IRD_{5}&=0.0049*\overline{d_{1}}+0.005*\overline{d_{2 }}+0.0051*\overline{d_{3}}\\ IRD_{6}&=0.005*\overline{d_{1}}+0.005*\overline{d_{2 }}+0.005*\overline{d_{3}}\\ \end{split} \tag{14}\] , where \(\overline{d_{1}}\), \(\overline{d_{2}}\), and \(\overline{d_{3}}\) are three dimensions, and the dimensions with the overlines indicate their values represent the outage probabilities. To simulate the system's operation, we ran it 10000 times until a dimension fails. The simulation involved generating random numbers (ranging from 0 to 1) using Python's 'random' function, representing the real outage probabilities. (1) If the real probability is within the outage range (e.g., for Fig. 4: The average outage comparison. Fig. 3: The outage comparison. an outage probability of 0.005, the real outage probability is less than 0.005), the corresponding dimension experiences an outage, and the value of 'runAccount' is logged into a separate file (the filename is related to the name of each IRD). The result data is obtained from this file. (2) Otherwise, the system continues to run normally, and 'runAccount' increases by 1. The results of the first 50 test rounds are shown in Figure 5. From this Figure, we observe that \(IRD_{6}\), \(IRD_{5}\), and \(IRD_{4}\) have the highest peaks. For instance, in the 24\({}_{th}\) test round, the system with \(IRD_{6}\) method ran successfully 1306 times, and in the 10\({}_{th}\) test round, the system with \(IRD_{5}\) method ran successfully 1065 times. In contrast, the remaining three IRDs did not exhibit high values of successful runs. From Figure 5, we can observe that as the difference between IRD values decreases, the number of successful runs increases. To further illustrate this trend, we show the average values in Figure 6. This figure clearly depicts the trend of successful probability. As the difference between dimensions decreases (even though the sum of their outage probabilities remains the same at 0.15), the successful probability increases. This suggests that we should strive to equalize the values of each dimension to improve system reliability. #### Vi-B1 Probability to Reach Enough Low Outage Probabilities In this section, our objective is to explore the impact of increasing the number of independent redundancy modules on the system's outage probability in one dimension. We aim to determine how the outage probability of the entire system changes with varying numbers of independent redundancy modules. The probabilities of normal operation (\(p_{\text{success}}\)) range from 0.9 to 0.1 in steps of 0.1. The number of independent redundancy modules (number) varies from 1 to 70. The success probability of the entire dimension (\(p_{\text{success}}^{\text{dimension}}\)) is the complement of the probability that all independent redundancy modules function incorrectly, as represented in (15). 
\[p_{\text{success}}^{\text{dimension}}=1-\prod_{i=1}^{\text{number}}\left(1-p_{ \text{success}}\right) \tag{15}\] The equation calculates the outage probability by taking the product of the probabilities of individual modules working normally (each \((1-p_{\text{success}})\) represents the probability of one module failing) and then subtracting this value from 1 to get the overall outage probability. The results are presented in Figure 7. From this figure, we observe that there are distinct trends with the increase in the number of independent redundancy modules. Initially, as the number of independent redundancy modules increases, the probability of the entire system operating successfully experiences a significant improvement. However, in the later stages, further increasing the number of independent redundancy modules does not lead to substantial gains in the probability of the entire system operating successfully. For instance, when the individual module success probability (\(p_{\text{success}}\)) is 0.5, the system success probability (\(p_{\text{success}}^{\text{system}}\)) increases to 0.992 with 6 independent redundancy modules, and further increases only slightly to 0.999 with 9 independent redundancy modules. Furthermore, even for the probability of success is only 0.1 for a single module, when there are 30 independent redundancy modules, the success probability of the system is more than 90%. Figure 7 indicates that deploying an excessive number of independent redundancy modules may not be necessary. By strategically selecting a certain number of redundancy modules, it is possible to achieve a sufficiently high probability of success while saving resources on deploying additional modules that do not significantly impact the success probability. This approach helps strike a balance between the desired probability of success and the associated cost. ## VII Conclusion This paper addresses the concept of systematic redundancy within application systems. We formally introduce the concept of Independent Redundancy Degree (IRD) as a means to quantify the redundancy levels of diverse systems, taking into account various impact factors, spanning hardware, software, and communication domains. We further delve into singleness Fig. 5: The different success probability of different IRD values Fig. 6: The different success probability of different IRD values factors, which necessitate redundancy processing paths, and propose a method to extend the scope of these factors, allowing the identification of additional singleness factors using temporal and spatial perspectives. Moreover, our study encompasses the analysis of IRD operations and introduces a methodology for comparing two IRDs. Through verification, we present the results that illustrate the effects of different dimensions and the implications of weakness dimensions. This research contributes to enhancing our understanding of redundancy design, system resilience, and fault-tolerance by providing a comprehensive framework for evaluating and quantifying redundancy in complex systems. ## Acknowledgment The authors thank the anonymous reviewers for their constructive comments, which help us to improve the quality of this paper. This work was supported in part by the National Natural Science Foundation of China under Grant No. 61772352; the Science and Technology Planning Project of Sichuan Province under Grant No. 2019YFG0400, 2018GZDZX0031, 2018GZDZX0004, 2017GZDZX0003, 2018JY0182, 19ZDYF1286.
2305.11863
Scaling laws for language encoding models in fMRI
Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language. However, most studies comparing language models to brains have used GPT-2 or similarly sized language models. Here we tested whether larger open-source models such as those from the OPT and LLaMA families are better at predicting brain responses recorded using fMRI. Mirroring scaling results from other contexts, we found that brain prediction performance scales logarithmically with model size from 125M to 30B parameter models, with ~15% increased encoding performance as measured by correlation with a held-out test set across 3 subjects. Similar logarithmic behavior was observed when scaling the size of the fMRI training set. We also characterized scaling for acoustic encoding models that use HuBERT, WavLM, and Whisper, and we found comparable improvements with model size. A noise ceiling analysis of these large, high-performance encoding models showed that performance is nearing the theoretical maximum for brain areas such as the precuneus and higher auditory cortex. These results suggest that increasing scale in both models and data will yield incredibly effective models of language processing in the brain, enabling better scientific understanding as well as applications such as decoding.
Richard Antonello, Aditya Vaidya, Alexander G. Huth
2023-05-19T17:53:03Z
http://arxiv.org/abs/2305.11863v4
# Scaling laws for language encoding models in fMRI ###### Abstract Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language. However, most studies comparing language models to brains have used GPT-2 or similarly sized language models. Here we tested whether larger open-source models such as those from the OPT and LLaMA families are better at predicting brain responses recorded using fMRI. Mirroring scaling results from other contexts, we found that brain prediction performance scales log-linearly with model size from 125M to 30B parameter models, with \(\sim\)15% increased encoding performance as measured by correlation with a held-out test set across 3 subjects. Similar log-linear behavior was observed when scaling the size of the fMRI training set. We also characterized scaling for acoustic encoding models that use HuBERT, WavLM, and Whisper, and we found comparable improvements with model size. A noise ceiling analysis of these large, high-performance encoding models showed that performance is nearing the theoretical maximum for brain areas such as the precuneus and higher auditory cortex. These results suggest that increasing scale in both models and data will yield incredibly effective models of language processing in the brain, enabling better scientific understanding as well as applications such as decoding. Large language models have come to dominate the field of AI due to incredible capabilities that range from reasoning [1] to code generation [2] to even predicting how a human brain would respond to language [3]. Rapid improvement in these abilities has largely been driven by _scale_: the most capable models today use nearly identical architectures to early transformer language models [4], but have orders of magnitude more parameters and larger training data [5]. Overall, model capabilities-often measured as zero-shot performance across a range of language tasks-tend to scale log-linearly with the number of model parameters [6; 7], suggesting that improvements will continue as model scale increases. Here we test whether these scaling "laws" hold for the task of modeling the human brain. The human brain is the quintessential language processing system, but there is still much to learn about how it processes and represents language. One paradigm used for this purpose is the _encoding model_: given measured brain responses to natural language, construct a model that predicts those responses from the natural language stimulus [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. If an encoding model is able to accurately predict brain responses across a range of new stimuli, then that model must use similar representations to the brain. High-performing encoding models can then be interpreted to gain insight into the brain's computations [20; 21; 22] or the function of different brain areas [23; 24; 11; 25]. The highest performance is currently offered by encoding models that are based on large language models such as GPT-2 XL [26]. To build an encoding model, a language model is fed the same language stimuli that the human subject is hearing or reading. The internal states at each layer of the language model are then extracted, yielding _contextual embeddings_ that capture semantic and syntactic properties of the stimuli [27]. These embeddings are then entered into a linear regression model that predicts the human brain responses, often measured using functional magnetic resonance imaging (fMRI). 
Though text-based language models are the norm, language encoding models have increasingly been trained with acoustic features derived from audio-based neural networks [28; 29; 30; 31; 32; 33; 34]. Models like HuBERT [35] are able to derive phonetic, lexical, and semantic properties by learning from unlabeled waveforms or annotated transcripts [36]. Even when trained with human-plausible amounts of training data, these models can be more effective than language models in predicting brain responses in _low-level_ speech processing areas such as the auditory cortex [31]. While earlier works examined the utility of several self-supervised audio models in brain encoding, newer models have since been released with substantially increased training data and speech recognition performance. In this paper, we study whether encoding models for fMRI benefit from scaling neural network model parameters and datasets to the same degree as other tasks. We show that using contextual embeddings from larger language models can increase the prediction performance of encoding models by 15% over smaller counterparts. Larger acoustic models improve similarly with model size, showing largest improvements in auditory cortex and in higher-level areas. Finally, encoding performance for both model types scales log-linearly with the amount of fMRI training data from each subject, demonstrating an increasing need for very large fMRI datasets. These new state-of-the-art encoding models may enable a new frontier in the study of biological language comprehension and may provide deeper insight into the mechanisms that the brain uses to reason about and employ natural language. ## 2 Methods ### Language models and speech audio models Decoder-only transformer architectures have become dominant in recent years for language modeling [37]. For semantic encoding, we used representations from two families of large decoder-only Transformer language models, OPT [38] and LLaMA [39]. From the OPT family we used the pretrained 125 million, 1.3 billion, 13 billion, 30 billion, 66 billion, and 175 billion parameter models. From the LLaMA family, we used the pretrained 30 billion and 66 billion parameter models. HuBERT and wav2vec 2.0 [35; 40] have been previously used to study auditory perception in the brain [29; 31; 33]. Both are trained to learn representations from unlabeled audio. WavLM [41] extends the HuBERT paradigm with data augmentation and also adds new data sources to increase the total training dataset size. Whisper [42] is a family of encoder-decoder models that use 680,000 hours of weakly-labeled audio - an order of magnitude larger than previous datasets - to reach state-of-the-art speech recognition performance. In this work, we used the pretrained Base, Large, and X-Large variants of HuBERT; the Base+ and Large variants of WavLM; and multilingual variants of the Tiny, Base, Small, Medium, and Large Whisper models. Table 1 shows the architecture details for all neural network models used in this work. ### MRI data We used publicly available functional magnetic resonance imaging (fMRI) data collected from 3 human subjects as they listened to 20 hours of English language podcast stories over Sensimetrics S14 headphones [43; 44]. Stories came from podcasts such as _The Math Radio Hour_, _Modern Love_, and _The Anthropocene Reviewed_. Each 10-15 minute story was played during a separate scan. Subjects were not asked to make any responses, but simply to listen attentively to the stories. 
For encoding model training, each subject listened to roughly 95 different stories, giving 20 hours of data across 20 scanning sessions, or a total of \(\sim\)33,000 datapoints for each voxel across the whole brain. For model testing, the subjects listened to two test stories 5 times each, and one test story 10 times, at a rate of 1 test story per session. These test responses were averaged across repetitions. Details of the MRI methods can be found in the original publications [43; 44], but important points are summarized here. MRI data were collected on a 3T Siemens Skyra scanner at The University of Texas at Austin Biomedical Imaging Center using a 64-channel Siemens volume coil. Functional scans were collected using a gradient echo EPI sequence with repetition time (TR) = 2.00 s, echo time (TE) = 30.8 ms, flip angle = 71\({}^{\circ}\), multi-band factor (simultaneous multi-slice) = 2, voxel size = 2.6mm x 2.6mm x 2.6mm (slice thickness = 2.6mm), matrix size = 84x84, and field of view = 220 mm. Anatomical data were collected using a T1-weighted multi-echo MP-RAGE sequence with voxel size = 1mm x 1mm x 1mm. All subjects were healthy and had normal hearing. The experimental protocol used by [43; 44] was approved by the Institutional Review Board at The University of Texas at Austin. Written informed consent was obtained from all subjects. In addition to motion correction and coregistration [43], low frequency voxel response drift was identified using a 2nd order Savitzky-Golay filter with a 120 second window and then subtracted from the signal. To avoid onset artifacts and poor detrending performance near each end of the scan, responses for training data were trimmed by removing 20 seconds (10 volumes) at the beginning and end of each scan, which removed the 10-second silent period and the first and last 10 seconds of each story. Test responses were trimmed by an additional 80 seconds (40 volumes) to account for an fMRI artifact (see Section 3.5). The mean response for each voxel was subtracted and the remaining response was scaled to have unit variance. ### Encoding model construction We used the fMRI data to estimate voxelwise brain encoding models for natural language using the intermediate hidden states of the various language and speech models discussed in Section 2.1. First, activations for each word in the stimulus text were extracted from each layer of each LM. In order to temporally align word times with TR times, we applied Lanczos interpolation together with a finite impulse response model [43]. Previous hidden state extraction methods (e.g. [23]) involved extracting the hidden state at the last token of the final word from a fresh context of fixed length of \(N\) tokens. This method requires \(N\) forward passes through the model in order to compute the hidden state for a single word. As this is impractical for models over a certain size, we improved computational efficiency here using a dynamically-sized context window. For a given story, contexts were grown until they reached 512 tokens, then reset to a new context of 256 tokens. More formally, the hidden state for token \(i\), \(H(i)\) is defined as \[H(i)=\begin{cases}\theta\left(X_{(0,i)}\right)&i\leq 512\\ \theta\left(X_{\left(256\left\lfloor\frac{i}{256}\right\rfloor-256,i\right)} \right)&i>512\end{cases}\] where \(X_{(j,k)}\) is the context of the tokenized story \(X\) from the token at index \(j\) to the token at index \(k\) and \(\theta\) is the function parameterized by the language model. 
This allowed hidden state extraction for most tokens to be completed with a single forward pass per token, rather than \(N\) forward passes as in previous methods. Differing tokenization schemes for handling whitespace across language models presented a challenge for consistent evaluation and were handled on a case-by-case basis. Unlike the analyzed language models, the audio models used are bi-directional, so we must use a fresh context to preserve the causality of the extracted features. We windowed the stimulus waveform with a sliding window of size \(16\,\mathrm{s}\) and stride \(100\,\mathrm{ms}\) before feeding it into model. At each layer, we \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{3}{c}{Language models} \\ Family & Layers & Width & Parameters \\ \hline \multirow{4}{*}{OPT [38]} & 12 & 768 & 125M \\ & 24 & 2048 & 1.3B \\ & 40 & 5120 & 13B \\ & 48 & 7168 & 30B \\ & 64 & 9216 & 66B \\ & 96 & 12288 & 175B \\ \hline \multirow{4}{*}{LLaMA [39]} & 60 & 6656 & 30B \\ & 80 & 8192 & 66B \\ \hline \multirow{4}{*}{Whisper counts include only the encoder.} & \multirow{4}{*}{ \begin{tabular}{c} \({}^{a}\)Whisper counts include only the encoder. \\ \end{tabular} } \\ & & & \\ \cline{1-1} \cline{3-4} & & & \\ \cline{1-1} \cline{3-4} & & & \\ \cline{1-1} \cline{3-4} & & & \\ \cline{1-1} \cline{3-4} & & & \\ \cline{1-1} \cline{3-4} & & & \\ \cline{1-1} \cline{3-4} & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Model architecture summary. use the hidden state of the final token as the model's representation at the for the window. As Whisper follows an encoder-decoder architecture, we only use states from the encoder, since it operates only on the waveform. We then downsample the features as before with Lanczos interpolation. Let \(f(H(\mathcal{S}))\) indicate a linearized ridge regression model that uses a temporally transformed version of the language model hidden states \(H(\mathcal{S})\) as predictors. The temporal transformation accounts for the lag in the hemodynamic response function [9; 45]. We use time delays of 2, 4, 6, and 8 seconds of the representation to generate this temporal transformation. For each subject \(s\), voxel \(v\), and language model layer \(h_{i}\), we fit a separate encoding model \(f^{v,s}_{h_{i}}\) to predict the BOLD response \(\hat{B}\) from our embedded stimulus, i.e. \(\hat{B}_{(x,v,h_{i})}=f^{v,s}_{h_{i}}(H_{i}(\mathcal{S}))\). Encoding model performance for a given layer was computed as the average voxelwise performance of that layer's hidden states across of all of cortex for all of our 3 subjects. For all figures with cortical flatmaps, we present the flatmap for one subject. Cortical flatmaps showing results for the other two subjects are shown in Section E of the supplement. ### Stacked regression A unified "optimized" encoding model combining the LLaMA language model and Whisper audio model was computed using an adaptation of the stacked regression approach from Lin et al. [46]. For every even-numbered non-embedding layer \(l\) in the Whisper model, as well as the 18th layer of the 30 billion LLaMA model, we held-out \(\sim\)20% of the training data and built an encoding model using the remaining \(\sim\)80% of the training data. This was repeated for each of 5 folds. The predictions of these encoding models on the 5 folds of held-out training data were concatenated to generate full held-out predictions of the training data, \(f^{v,s}_{h_{i}}\left(\mathbf{x}^{(i)}_{h_{i}}\right)\). 
After this cross validation procedure, we build a covariance matrix for each voxel \(v\) and subject \(s\), \(\mathbf{R}^{v,s}\), of the residuals such that \[\mathbf{R}^{v,s}_{p,q}=\sum_{i=1}^{n}\left(y^{v,s}_{i}-f^{v,s}_{h_{p}}\left(\mathbf{x}^{(i)}_{h_{p}}\right)\right)\left(y^{v,s}_{i}-f^{v,s}_{h_{q}}\left(\mathbf{x}^{(i)}_{h_{q}}\right)\right)\] where \(n\) is the total number of time points and \(y^{v,s}\) is the ground truth BOLD response for voxel \(v\) on subject \(s\). We then optimize the quadratic problem \(\min_{\mathbf{\alpha}^{v,s}}\left(\mathbf{\alpha}^{v,s}\right)^{\top}\mathbf{R}^{v,s}\mathbf{\alpha}^{v,s}\) such that \(\alpha^{v,s}_{h_{j}}>0\) and \(\sum_{j=1}^{k}\alpha^{v,s}_{h_{j}}=1\) with a quadratic program solver [47] to get a convex set of attributions \(\mathbf{\alpha}^{v,s}\) which serve as weights for each feature space in the joint encoding model. This yields the final encoding model \[\hat{y}^{v,s}=\sum_{j=1}^{k}\alpha^{v,s}_{h_{j}}f^{v,s}_{h_{j}}\left(\mathbf{x}_{j}\right)\] where \(k\) is the number of feature spaces. As a final step, we validate this stacked encoding model independently using a held-out validation set. The final encoding model uses the stacked prediction for voxels where the stacked approach is significantly better on this validation set, and uses the prediction from the 18th layer of LLaMA otherwise.

To determine which layers of the model are used to model each voxel, we computed the voxelwise attribution center-of-mass. For each of the \(\mathbf{\alpha}^{v,s}\), the center-of-mass attribution \(\mathcal{C}(\mathbf{\alpha}^{v,s})\) is computed as \[\mathcal{C}(\mathbf{\alpha}^{v,s})=\sum_{i=1}^{m}i\,\alpha^{v,s}_{h_{i}},\] where \(m\) is the total number of Whisper layers used in the stacked attribution. This allows us to summarize whether the attributions are primarily weighted on the earlier or later layers of the network for that voxel.

### Noise ceiling computation

Data from brain recordings such as fMRI are inherently noisy, so it is useful to distinguish response variance that could potentially be explained by some model from noise variance that cannot be explained. We estimated the amount of explainable variance, or _noise ceiling_, using the averaged responses from the test story with 10 repeats and the method of Schoppe et al. [48]. The maximum correlation coefficient of the ideal encoding model is estimated as \(CC_{max}=\left(\sqrt{1+\frac{1}{N}\times\frac{NP}{SP}}\right)^{-1}\) where \(N\) is the number of repeats of our test data, \(NP\) is the noise power or unexplainable variance, and \(SP\) is the signal power or the amount of variance that could in principle be explained by the ideal predictive model. Using these estimates, we can then extract a normalized correlation coefficient \(CC_{norm}=\frac{CC_{abs}}{CC_{max}}\), where \(CC_{abs}\) is the product-moment correlation coefficient of the model's predictions against the ground truth fMRI responses. In some voxels, random noise can cause \(CC_{abs}>CC_{max}\), leading to \(CC_{norm}\) estimates greater than one. To regularize \(CC_{norm}\) estimates for noisy voxels we set \(CC_{max}\) values smaller than 0.25 to 0.25. The normalized correlations \(CC_{norm}\) are only used for the noise ceiling analysis in Figure 3. All other reported correlations are uncorrected (\(CC_{abs}\)). For brain map visualizations we only show voxels with \(CC_{max}>0.35\).
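A minimal numpy sketch of this per-voxel noise-ceiling estimate is given below. It assumes the standard repeated-trial split of response variance into signal and noise power (Sahani-Linden style estimate); the array shapes, helper name, and floor value are illustrative.

```python
import numpy as np

def noise_ceiling(repeats, prediction, cc_max_floor=0.25):
    """Return CC_abs, CC_max, and CC_norm for one voxel.

    repeats:    (N, T) array of N repeated responses to the same test stimulus
    prediction: (T,)   encoding-model prediction for the same voxel
    """
    N = repeats.shape[0]
    mean_resp = repeats.mean(axis=0)
    trial_power = repeats.var(axis=1).mean()                       # mean per-trial variance
    signal_power = (N * mean_resp.var() - trial_power) / (N - 1)   # SP estimate from repeats
    noise_power = trial_power - signal_power                       # NP = unexplainable variance
    cc_max = 1.0 / np.sqrt(1.0 + noise_power / (N * max(signal_power, 1e-12)))
    cc_max = max(cc_max, cc_max_floor)                             # regularize noisy voxels
    cc_abs = np.corrcoef(prediction, mean_resp)[0, 1]
    return cc_abs, cc_max, cc_abs / cc_max
```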
### Compute specifications

The generation of the encoding models presented in this paper required significant computational resources. Ridge regression was performed using compute nodes with 128 cores (2 AMD EPYC 7763 64-core processors) and 256GB of RAM. In total, roughly 4,000 node-hours of compute were expended. Feature extraction from language and speech models was performed on specialized GPU nodes that were the same as the previously-described compute nodes but with 3 NVIDIA A100 40GB cards. Feature extraction required roughly 200 node-hours of compute on these GPU nodes.

## 3 Results

### Scaling laws for semantic encoding models

Encoding models were fit for each of three subjects using roughly 20 hours of fMRI training data. For the 125 million parameter OPT model we also fit encoding models using varying amounts of training data in order to study the effect of training data size on encoding model performance. To capture encoding model performance, we compute the average prediction performance across all voxels in the cortex of each subject.

**Figure 1a** shows the relationship between language model size, measured as number of parameters, and encoding performance, measured as percent change in average prediction performance across all voxels in cortex relative to the smallest model. For consistent comparison, we only compare between the six model sizes from the OPT family. The layer that performed best for each model size was used. The result shows approximately log-linear scaling of encoding performance with model size. For each order of magnitude increase in the number of parameters in the language model, the encoding performance of the average subject increases by roughly 4.4%. However, this log-linear relationship (\(r=0.91\)) tapers off to a plateau for models in excess of 30 billion model parameters. We hypothesize this is an effect of the increased number of layers that larger models possess. Each encoding model was fit using only a single model layer due to memory constraints. But as the models get deeper, information can be spread out more evenly across layers, so single-layer encoding models may not be able to capture the increase in total information represented by the model. The "double bump" that appears prominently in the layerwise model performance curves gives some evidence in support of this hypothesis.

**Figure 1b** shows the relationship between the number of training stories (roughly proportional to total training data) and encoding performance on OPT 125M (layer 9). Here we see a strong log-linear relationship between training data size and encoding performance. Each time the number of training stories increases by an order of magnitude, the encoding performance of the average subject increases by 122%. This strong relationship (\(r=0.989\)) gives compelling support to the usefulness of collecting "deep" datasets that focus on collecting a greater amount of data from a few subjects rather than a smaller amount of data from many subjects.

**Figure 1c** shows the encoding performance for each layer of each LM. The LLaMA models are marginally better at encoding than the OPT models, and also have a different pattern, with peak performance in relatively early layers followed by slow decay. In contrast, the OPT models have maximum performance with layers that are roughly 3/4 into the model. This mirrors results in other GPT-like models [3, 49].
This divergence from the typical pattern may warrant further study into the underlying mechanisms that define these trendlines.

### Scaling laws for speech audio encoding models

We trained encoding models of increasing sizes from three families of audio models: HuBERT, WavLM, and Whisper. Encoding models were fit using an identical procedure as with the LMs in Section 3.1 - individually for three subjects, with roughly 20 hours of training data. We repeat the analyses from Section 3.1 on the audio models to examine the importance of model size and training dataset size on encoding performance.

**Figure 1d** shows how audio model size affects encoding performance. We use the Whisper model family for this analysis, since it has the most models of different sizes. Again, the best performing layer for each size was used. As before, there is a log-linear relationship (\(r=0.991\)) between model size and encoding performance; performance for the average subject increases roughly \(32.2\%\) for every additional order of magnitude increase in model size. Though the scaling improvements are greater overall than with OPT, it should be noted that the smallest Whisper models are substantially smaller than the OPT models, and have lower baseline performance, which exaggerates the difference. Additionally, within auditory cortex, we observe that encoding performance does _not_ plateau with model size (see Section A.1), suggesting that improvements in AC are offset by reductions in performance elsewhere.

**Figure 1e** shows how additional training data improves the encoding performance of Whisper Large (636 million parameters, layer 30). As before, we fit separate encoding models on increasing amounts of training data. Additional training data for Whisper has an effect that is comparable to OPT: Encoding performance is linearly related to log-dataset size (\(r=0.988\)), and increasing the training dataset by an order of magnitude increases performance by \(144\%\).

**Figure 1f** shows the performance of each layer of every Whisper and WavLM model. For legibility, HuBERT results are omitted from this plot and are included in the supplement (Figure A.2). The upper-middle and uppermost layers of each model tend to have the best performance, aligning with previous results on acoustic encoding models [29; 31]. In contrast with WavLM, the Whisper models increase in performance with layer depth; this can likely be attributed to our choice of only using the encoder module from the network.

### Large-scale encoding models

After characterizing these scaling laws, we next visualized the performance of one of the top-performing semantic encoding models.\({}^{1}\)

Footnote 1: In keeping with the scaling results from Section 3.1, we chose to demonstrate this using the best model from the OPT family; however, it should be noted that the best model from the LLaMA family is about 5% more performant as measured by correlation. This LLaMA model is further explored in Section 3.6.

**Figure 2** shows the encoding performance of the best OPT model, which uses the 33rd layer of OPT-30B, as measured on the test story with 10 repeats. For several voxels from different areas of cortex we show the encoding model predicted timecourse and ground truth BOLD response.
Figure 1: _Scaling laws of Semantic and Speech Audio Encoding Models_ - **Figures 1a** and **1b** show log-linear scaling of semantic encoding model performance with number of parameters and number of stories. **Figure 1c** shows encoding performance for each layer of all tested models averaged across 3 subjects. **Figures 1d**, **1e**, and **1f** show analogous results for speech audio models.

We see strong prediction performance across cortex, with "classical" language regions like Broca's area and auditory cortex being well explained, as well as areas that are typically considered to be more "amodal" in nature, like prefrontal cortex. Voxelwise correlations for this subject are as high as \(r=0.82\). A similar map showing the change in encoding performance from OPT-125M (comparable to GPT models used in earlier papers) to OPT-30B is given in the supplemental material (see Figure B.1).

Figure 2: _Large-scale encoding models_ - Performance of an encoding model built using OPT-30B on 20 hours of training data from a single subject. Surrounding plots show model predictions (_red_) against the average response (_dashed black_) over 10 separate trials (_gray_) on a held-out natural language test stimulus for selected voxels (_Clockwise from bottom left_: well-predicted voxels from fusiform body area (FBA), Broca's area, precuneus, prefrontal cortex, and secondary auditory cortex). Only voxels with \(CC_{max}>0.35\) are shown.

### Noise ceiling analysis

We further investigated the degree to which encoding models can be improved past this point. To do this, we performed a noise ceiling analysis whereby for each voxel, we estimated its \(CC_{max}\) (see Section 2.5). This gave us an approximation of the degree to which an ideal encoding model could explain the response variance in each voxel. We then renormalized the correlations from Figure 2 to compute a normalized correlation coefficient \(CC_{norm}\). **Figure 3a** shows the _room for improvement_, or the difference between the correlation coefficients measured in Figure 2 and their \(CC_{max}\). Voxels are yellow if there is significant room for improvement, and purple if the model for that voxel is already close to optimal. Regions that are typically believed to contain high-level representations of language such as angular gyrus (AG) [50, 51, 52] still have the potential for substantial modeling improvement, while some areas in temporal cortex (near AC), prefrontal cortex (PFC), and the precuneus (PrCu) are nearly optimal. **Figure 3b** shows a histogram of absolute correlation coefficients (\(CC_{abs}\)), and **Figure 3c** shows the normalized correlations \(CC_{norm}\).

### Long context artifacts

Granting encoding models access to contexts as long as 512 tokens implicitly gives them access to the information that the fMRI scan has started recently. For instance, if the input context has only 64 tokens, this implies that the context is occurring at the 64th token in the story. In parallel, responses in some voxels tend to rise or fall gradually over the first minute of each scan (potentially due to underconstrained detrending at scan edges, MRI magnetization reaching steady state, or neural adaptation). The combination of these two effects can have unintended effects on the fair evaluation of these models by artificially inflating measured performance, as encoding models are adept at capturing this early slow drift. We found that long context effects exist up to roughly 100 seconds
into a story, so to mitigate this issue we simply exclude the first 100 seconds of predicted and actual responses from each test story when measuring encoding model prediction performance. Figure C.1 in the supplement gives a map of the effect of long-context artifacts on measured encoding model performance. Long-context artifacts can inflate measured performance by up to 20%, but the effects are mostly localized to areas typically associated with low-level speech processing such as early auditory cortex. This effect is most prominent for encoding models using early LM layers and speech models, and tends to not be as significant for later LM layers.

### Unifying semantic and speech encoding models with stacked regression

We used stacked regression (see Section 2.4) to augment our best semantic model with the Whisper speech model representations. **Figure 4a** shows the regions that benefit from this augmentation, blended with a flatmap showing the overall semantic encoding model performance. We observe that these benefits are highly localized to auditory cortex and mouth motor cortex. The butterfly plot in **Figure 4b** shows the effect on voxels modified by this augmentation. We see that the auditory cortex voxels that are best predicted by the semantic model are also those that are most improved by this augmentation. **Figure 4c** plots the center of mass of the attribution weights \(\mathbf{\alpha}^{v,s}\). For voxels where the attribution weights favored the later layers of the Whisper model, the voxel is plotted in a brighter hue. We see that this attribution plot demonstrates a clear progression of auditory information from primary AC to secondary AC coinciding with layer depth. **Figure 4d** shows the benefits of this stacked regression augmentation. We see that the lion's share of the improvements happen in primary AC and early secondary AC.

## 4 Discussion & conclusions

These results suggest the existence of two major effects on the capacity of encoding models to predict BOLD response given finite brain data. First, LM changes that correspond to downstream task performance improvement tend to also improve encoding performance, such as when moving from an LM trained on little data to one trained on more data. Second, increasing hidden state size while keeping other metrics fixed tends to lower encoding performance, as it worsens the conditioning of the encoding model regression problem without a corresponding benefit to model effectiveness. The conflict between these two effects has led to a scenario where the largest model is not necessarily the best for predicting BOLD responses, as we have seen for both the OPT and LLaMA LMs where encoding model performance peaks at about 30B parameters. Rather, a careful balance must be struck between model size and model efficacy in order to maximize encoding performance. Audio models, on the other hand, do not yet seem to have reached this plateau.

Figure 3: _Noise Ceiling Analysis_ - **Figure 3(a)**: A two channel flatmap showing which ROIs remain poorly explained by an encoding model built from the 33rd layer of OPT-30B. Voxels are less transparent if they have a higher idealized encoding performance (\(CC_{max}\)). Voxels are more yellow if they have high _room for improvement_, defined as the difference between the best possible encoding model and this model.
Angular gyrus and some parts of prefrontal cortex are still poorly explained, while precuneus and higher auditory cortex are close to optimal. **Figure 3(b):** A histogram of voxel correlations (\(CC_{abs}\)). **Figure 3(c):** A histogram of normalized voxel correlations (\(CC_{norm}\)). (PFC = prefrontal cortex, PrCu = precuneus, AC = auditory cortex, AG = angular gyrus)

What are the use cases for better encoding models? One promising application is the use of encoding models to supplement more classical experimentation techniques, as suggested by Jain et al. [20]. Higher encoding performance leads to more trustworthy model predictions and more accurate conclusions. Another use case of effective encoding models is language decoding, or predicting the language stimulus from the BOLD response. Recent work has shown that effective language decoding models can be built from encoding models by applying Bayesian techniques [53, 45], so it is likely that the performance of such decoders will improve along with the performance of encoding models [44, 33]. Finally, improved encoding performance could enable fine-grained control over voxel activation through stimulus generation, as demonstrated by Tuckute et al. [54].

Given our results, what can computational neuroscientists do to improve the performance of their own encoding models? One potential observation is that _deep_ datasets [55, 56, 43, 57] -- those that focus on collecting many samples from a few subjects, rather than a little data from many subjects -- are more useful for modeling brain activity. Encoding performance improvements scale well with both model size and dataset size, and large datasets will no doubt be necessary in producing useful encoding models. Another straightforward adjustment is to simply use larger, more performant LMs for building encoding models. To the authors' knowledge, no other natural language encoding model paper at the time of this writing has used models larger than GPT-2 XL, which is a 1.5B parameter model with performance far below the best 30B parameter models. This could be due to valid concerns that the amount of natural language brain data available is insufficient to train effective encoding models on such a scale. However, we found that even in low data cases, such as with as little as an hour's worth of data, encoding models built from larger models tend to outperform their smaller counterparts, as seen in Figure D.1 of the supplement. We hope this paper encourages the use of more performant encoding models in natural language computational neuroscience.

Figure 4: _Stacked Regression_ - **Figure 4a:** A flatmap shows which regions of cortex improve when augmenting a semantic encoding model built from the 18th layer of LLaMA with the layers of Whisper using stacked regression. Voxels used the stacked regression if the stacked regression performed better on a validation set. The effect is highly localized to auditory cortex. **Figure 4b:** A butterfly plot comparing the voxelwise encoding performance of the stacked regression encoding model to the baseline semantic model. **Figure 4c:** The center-of-mass of the stacked regression attributions, \(\mathcal{C}(\boldsymbol{\alpha}^{v,s})\), is visualized in auditory cortex. **Figure 4d:** The improvement in encoding performance of the stacked regression model over the baseline is visualized in auditory cortex.
## Acknowledgements

The authors acknowledge and thank the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have significantly contributed to the research results reported within this paper. This work was funded by grants from the NIDCD and NSF (1R01DC020088-001), the Burroughs-Wellcome Foundation, and a gift from Intel Inc. We thank Ruogu Lin, Leila Wehbe, and Javier Turek for their aid and thoughtful suggestions in assisting with this work.
2301.06630
Waveform uncertainty quantification and interpretation for gravitational-wave astronomy
We demonstrate how to quantify the frequency-domain amplitude and phase accuracy of waveform models, $\delta A$ and $\delta \phi$, in a form that could be marginalized over in gravitational-wave inference using techniques currently applied for quantifying calibration uncertainty. For concreteness, waveform uncertainties affecting neutron-star inspiral measurements are considered, and post-hoc error estimates from a variety of waveform models are made by comparing time-domain and frequency-domain analytic models with multiple-resolution numerical simulations. These waveform uncertainty estimates can be compared to GW170817 calibration envelopes or to Advanced LIGO and Virgo calibration goals. Signal-specific calibration and waveform uncertainties are compared to statistical fluctuations in gravitational-wave observatories, giving frequency-dependent modeling requirements for detectors such as Advanced LIGO Plus, Cosmic Explorer, or Einstein Telescope. Finally, the distribution of waveform error for the GW170817 posterior is computed from tidal models and compared to the constraints on $\delta \phi$ or $\delta A$ from GWTC-1 by Edelman et al. In general, $\delta \phi$ and $\delta A$ can also be interpreted in terms of unmodeled astrophysical energy transfer within or from the source system.
Jocelyn S. Read
2023-01-16T22:55:53Z
http://arxiv.org/abs/2301.06630v2
# Waveform uncertainty quantification and interpretation for gravitational-wave astronomy

###### Abstract

We demonstrate how to quantify the frequency-domain amplitude and phase accuracy of waveform models, \(\delta A\) and \(\delta\phi\), in a form that could be marginalized over in gravitational-wave inference using techniques currently applied for quantifying calibration uncertainty. For concreteness, waveform uncertainties affecting neutron-star inspiral measurements are considered, and post-hoc error estimates from a variety of waveform models are made by comparing time-domain and frequency-domain analytic models with multiple-resolution numerical simulations. These waveform uncertainty estimates can be compared to GW170817 calibration envelopes or to Advanced LIGO and Virgo calibration goals. Signal-specific calibration and waveform uncertainties are compared to statistical fluctuations in gravitational-wave observatories, giving frequency-dependent modeling requirements for detectors such as Advanced LIGO Plus, Cosmic Explorer, or Einstein Telescope. Finally, the distribution of waveform error for GW170817 over the parameters of the low-spin posterior is computed from tidal models and compared to the constraints on \(\delta\phi\) or \(\delta A\) from GWTC-1 by Edelman et al. In general, \(\delta\phi\) and \(\delta A\) can also be interpreted in terms of unmodeled astrophysical energy transfer within or from the source system.

## 1 Introduction

Waveform models are critical for estimating the properties of source systems in gravitational-wave astronomy. Quantifying the current levels of uncertainty in modeling the physics of waveform generation would allow marginalization over current model choices and an explicit estimate of current levels of systematic uncertainty. Waveform uncertainty estimates can also be used to guide analysis efforts by providing an indication of how accurately the waveform needs to be modeled when detecting new sources. In this work, we present a frequency-dependent characterization of waveform error in amplitude and phase, \(\delta A\) and \(\delta\phi\), and demonstrate its application to existing waveform models of neutron star inspirals with tidal contributions, as neutron-star modeling uncertainties will be potentially significant for tidal inference in coming observation runs [1, 2]. We compare multiple waveform approximants with each other and with a high-resolution numerical simulation from the CoreDB waveform library in Section 4. The model differences are compared to reference noise spectral densities, anticipating the requirements of future observations. We show a frequency-dependent sufficient condition for the effect of waveform modeling uncertainty to be smaller than statistical fluctuation, given a candidate signal amplitude and a noise power spectral density. Finally, we show two implications of this result for the interpretation of the double neutron star merger GW170817 [3]. First, we estimate the size of current model uncertainties by evaluating the differences between IMRPhenomD_NRT[4, 5, 6] and SEOBNRv4T[7, 8] waveform families over the prior range and then in the posterior region of parameter space. We compare with work by Edelman et al. [9] which constrained how the amplitude and phase of GWTC-1 signals can depart from that assumed by the waveform model families used for analysis.
Second, observed departures from the waveform models can be mapped to physical effects: unmodeled luminosity or internal energy contributions for the source system. This allows the largest median \(\delta\phi\) at \(\sim 60\,\mathrm{Hz}\) to be interpreted as a possible unmodeled energy transfer of \(\delta E=\Delta E/E\sim 0.001\) relative to the orbital binding energy of \(E\simeq-0.006M_{\odot}c^{2}\). In general, the recovered bounds on \(\delta A\) and \(\delta\phi\) limit the amount of unmodeled energy transfer compatible with the observed signals.

## 2 Background

To interpret the observations of gravitational-wave astronomy, interferometer strain data \(\mathbf{d}=\mathbf{h}+\mathbf{n}\) is assumed to be generated by an astrophysical strain signal \(\mathbf{h}\) and background noise fluctuations \(\mathbf{n}\). The likelihood of a detector measurement \(\mathbf{d}\), given a particular incident wave \(\mathbf{h}\), derives from the likelihood function for \(\mathbf{n}=\mathbf{d}-\mathbf{h}\). In practice, it will be computed from the Fourier transforms of the data and signal time series, evaluated at a set of discrete frequencies: \[p(\mathbf{d}|\mathbf{h})\propto\exp\left(-\sum_{i}2\Delta f\frac{|d_{i}-h_{i}|^{2}}{S_{n}(f_{i})}\right) \tag{1}\] where \(S_{n}(f)\) is the power spectral density of the noise [10, 11]. This motivates the definition of a noise-weighted inner product [12] in either discrete or continuous form, \[\langle\mathbf{h_{1}},\mathbf{h_{2}}\rangle=2\Delta f\sum_{i}\frac{h_{1i}^{*}h_{2i}+h_{1i}h_{2i}^{*}}{S_{n}(f_{i})}=4\Re\int_{0}^{f_{\mathrm{max}}}df\frac{\tilde{h}_{1}(f)^{*}\tilde{h}_{2}(f)}{S_{n}(f)} \tag{2}\] This inner product definition sets \(\langle\mathbf{n},\mathbf{n}\rangle=1\). The expected signal-to-noise ratio \(\varrho\) of an incident astrophysical signal \(\mathbf{h}\) is given by \(\varrho^{2}=\langle\mathbf{h},\mathbf{h}\rangle\). We can then re-write the likelihood of the data \(\mathbf{d}\) given an incident wave \(\mathbf{h}\) as \[p(\mathbf{d}|\mathbf{h})\propto\exp\left(-\left\langle\mathbf{d}-\mathbf{h},\mathbf{d}-\mathbf{h}\right\rangle\right) \tag{3}\] Two waveforms are considered "indistinguishable" in a given detector if the difference between them is smaller than the noise: \(\delta\mathbf{h}=\mathbf{h}_{1}-\mathbf{h}_{2}\) satisfies \(\langle\delta\mathbf{h},\delta\mathbf{h}\rangle<1\). Often when comparing waveform families, especially those which agree at leading post-Newtonian order, we will have \(\varrho_{1}\simeq\varrho_{2}:=\varrho\), and indistinguishability is estimated by the "mismatch" after waveform differences are minimized over relative shifts in time and phase: \[\min_{\Delta t_{c},\Delta\phi_{c}}\left[\left<\delta\mathbf{h},\delta\mathbf{h}\right>\right]\gtrsim 2\varrho^{2}\left(1-\max_{\Delta t_{c},\Delta\phi_{c}}\left[\left<\mathbf{h_{1}},\mathbf{h_{2}}\right>\right]/\sqrt{\left<\mathbf{h_{1}},\mathbf{h_{1}}\right>\left<\mathbf{h_{2}},\mathbf{h_{2}}\right>}\right), \tag{4}\] where the mismatch is in brackets on the RHS. Gravitational waveform uncertainty requirements have thus been presented in terms of the mismatch between the true waveform and the model used when inferring source properties at a given signal-to-noise ratio \(\varrho\) [13, 14]. This is an integrated quantity over all frequencies.
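For concreteness, the discrete form of this inner product and the resulting distinguishability statistic can be written directly. The sketch below is illustrative: it assumes one-sided frequency-domain strain arrays sampled on a uniform grid of spacing df, with the power spectral density on the same grid, and the function names are not from any particular library.

```python
import numpy as np

def inner_product(h1, h2, psd, df):
    """Discrete noise-weighted inner product <h1, h2> of Eq. (2)."""
    return 4.0 * df * np.real(np.sum(np.conj(h1) * h2 / psd))

def snr(h, psd, df):
    """Expected signal-to-noise ratio rho, with rho**2 = <h, h>."""
    return np.sqrt(inner_product(h, h, psd, df))

def difference_snr(h1, h2, psd, df):
    """sqrt(<dh, dh>) for dh = h1 - h2; values below 1 mark the two
    waveforms as indistinguishable in a detector with this PSD."""
    return snr(h1 - h2, psd, df)
```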
In contrast, calibration uncertainties that affect the inferred \(\mathbf{h}\) are explicitly computed as functions of frequency [15, 16, 17] and can be marginalized over for gravitational-wave inference [18, 10, 19]. To do this, calibration uncertainty is cast in terms of amplitude and phase errors in the inferred detector strain compared to the true conditions of the local spacetime \[\tilde{h}_{\rm meas}=\tilde{h}_{\rm true}(f)(1+\delta A_{\rm cal}(f))\exp(i\delta\phi_{\rm cal}(f)) \tag{5}\] which differentiate the measured \(\tilde{h}_{\rm meas}\) from the true strain \(\tilde{h}_{\rm true}(f)\) generated by an incident astrophysical wave. These corrections arise from the error budget of mapping between astrophysical and instrumentally measured strain [20, 21].

An approach similar to the calibration framework has been developed to constrain differences between the true signal and the waveform model used for inference: Edelman et al. [9] estimated the size of unmodeled signal contributions for the observations of GWTC-1 [22]. To do this, coherent deviations across all detectors are also expressed in the form \[\tilde{h}_{\rm astr}(f)=\tilde{h}_{\rm model}(f)(1+\delta A(f))\exp(i\delta\phi(f)) \tag{6}\] where we are again using \(\phi\) for the frequency-domain phase. In that work, corrections were modeled as splines for \(\delta A(f)\) and \(\delta\phi(f)\), and frequency-dependent departures from the baseline \(\mathbf{h}\) model were constrained for signals observed by the LIGO [23] and Virgo [24] observatories.

In this work, we demonstrate the frequency-dependent uncertainty in amplitude and phase \(\delta A(f)\) and \(\delta\phi(f)\) coming from existing waveform models. These uncertainties are explicitly connected to the underlying time-domain model uncertainties and the implications for the underlying physics of the source system. To do this, we follow standard procedures in the literature, but avoid common choices that assume a post-Newtonian framework with infinite coalescence frequency. We define all source characteristics in terms of weak-field gravitational-wave observables. Expressing model uncertainty in this form will allow explicit marginalization during the inference of source properties, and also allows the interpretation of waveform differences in terms of far-field source energetics.

## 3 Waveform assumptions and model implications

Waveform models come from time-domain physics of the emitting source and the response of the observatory's interferometer. Here, we review the mapping from the source properties to the signal prediction, to demonstrate how error intrinsic to the source physics can be disentangled from the reference time and phase \(t_{c}\) and \(\phi_{c}\) of a specific signal. The fundamental differences coming from varying model families are those that are independent of overall shifts in signal time and phase, and lead to a residual waveform model error that can be additionally marginalized over in inference. We adopt a description of the time-domain waveform, similar to that in [25], with the assumption that we can write the source emission as a multipolar expansion of oscillatory mode functions.
Specifically, we assume the \(h_{+}\) and \(h_{\times}\) emitted along a line of observation from a given source can be characterized by an expansion in spin-weighted spherical harmonics [26, 27]: \[h_{+}(t)-ih_{\times}(t)=\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{\ell}h_{\ell m}(t)Y_{-2}^{\ell m}(\iota,\varphi) \tag{7}\] where \(\iota\) is the inclination angle and \(\varphi\) is the azimuthal angle of the line of sight from the source. Each \(h_{\ell m}\) component is decomposed into a real time-domain amplitude and phase as \[h_{\ell m}(t)=\mathcal{A}_{\ell m}(t)e^{-i\psi_{\ell m}(t)} \tag{8}\] in the usual form for characterizing gravitational radiation in numerical simulations, described for example for the lalsimulation numerical relativity injection infrastructure [28] following numerical data formats of [29]. We assume the time-domain amplitude scales as \(\mathcal{A}(t)=\mathcal{A}_{0}(t)\left(d_{0}/d\right)\) for a source at luminosity distance \(d\) in terms of \(\mathcal{A}_{0}(t)\) at a reference distance \(d_{0}\) [30].

The astrophysical strain \(\mathbf{h}\) measured by a single detector is a projection onto the detector frame of the incident time domain polarizations \(h_{+}(t)\) and \(h_{\times}(t)\), \[h(t)=F_{+}(\alpha,\delta,\psi_{p})h_{+}(t)+F_{\times}(\alpha,\delta,\psi_{p})h_{\times}(t) \tag{9}\] with the specific detector's antenna response functions \(F_{+,\times}\) that depend on the source's sky location (right ascension \(\alpha\) and declination \(\delta\)) and a polarization orientation angle \(\psi_{p}\) of the source relative to the interferometer. In a single detector observation, the polarization angle is degenerate with phase. A sky location directly above or below an interferometer with orthogonal arms gives \(F_{+}^{2}+F_{\times}^{2}=1\). The incident \(\mathbf{h}\) can be written as the real part of a sum over the spherical harmonic mode amplitudes, \[h(t)=\sum_{\ell m}Q_{\ell m}h_{\ell m} \tag{10}\] where \(Q_{\ell m}\) captures the sky-location-dependent detector response to each mode given the source orientation. \(Q_{\ell m}\) will be constant for short-duration transients.

The leading-order quadrupole modes \(\ell,m=2,\pm 2\) are dominant for many gravitational-wave sources. We consider circular, non-precessing, near-equal-mass binaries as the sources in this demonstration, so we will restrict to this case. For an optimal sky location directly above or below the detector, with detector arms aligned with the plus polarization, the sum of the \(\ell,m=2,\pm 2\) modes gives \[h_{+}(t)-ih_{\times}(t)=\sqrt{5/16\pi}\left(\left(1+\cos^{2}\iota\right)\cos 2\varphi+2i\cos\iota\sin 2\varphi\right)h_{22}(t) \tag{11}\] which yields \(h(t)\) from \(h_{22}(t)\) with the conversion factor \(Q_{22}=\sqrt{5/4\pi}\) for an optimally oriented, face-on source. If a general sky location and inclination are considered, \(Q_{22}=\sqrt{5/4\pi}\left(F_{+}^{2}\left(1+\cos^{2}\iota\right)^{2}/4+F_{\times}^{2}\cos^{2}\iota\right)^{1/2}\). Overall, measured signal amplitude for quadrupole sources will be scaled relative to the optimal face-on and overhead configuration by the effective distance \(d_{\rm eff}=d\left(F_{+}^{2}\left(1+\cos^{2}\iota\right)^{2}/4+F_{\times}^{2}\cos^{2}\iota\right)^{-1/2}\), which combines the effects of luminosity distance, sky location, and the source's orientation angles [31]. We focus now on signals from inspiraling compact binaries, where the frequency of the signal sweeps slowly upward as the binary evolves over multiple cycles.
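As a side note, the effective-distance scaling defined above is a one-line computation once the antenna responses are known. The sketch below is illustrative and assumes the \(F_{+}\) and \(F_{\times}\) values are supplied by a separate detector-response routine.

```python
import numpy as np

def effective_distance(d_lum, f_plus, f_cross, inclination):
    """Effective distance d_eff for a quadrupole source, given the luminosity
    distance, antenna responses F_+ and F_x, and the inclination angle."""
    amp_factor = np.sqrt(f_plus**2 * (1.0 + np.cos(inclination)**2)**2 / 4.0
                         + f_cross**2 * np.cos(inclination)**2)
    return d_lum / amp_factor
```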
We are interested in modeling the uncertainty given known physical properties of the source -- for example, those characterizing the masses, spins, eccentricity, and tides -- which we call intrinsic properties. These properties will determine the amplitude of gravitational waves as a function of the emission frequency, and the gravitational-wave emission also drives a change in the emission frequency. We show how to compute \(\tilde{h}(f)\) from integrations of the instantaneous frequency \(F=\dot{\psi}/2\pi\) of the time-domain phase, which results in the integration constants \(t_{c}\) and \(\phi_{c}\) that fix the arrival time and phase of a specific source. The intrinsic properties of the source model generate the characteristic \(\dot{F}(F)\).

A relatively slowly-varying amplitude allows the use of the stationary phase approximation (SPA) in determining the frequency-domain Fourier transform for each \(m>0\) and \(f>0\) [12, 32, 25] when the two conditions \[\left|\frac{d}{dt}\ln\mathcal{A}_{\ell m}(t)\right|\ll\left|\dot{\psi}(t)\right|\qquad\text{and}\qquad|\ddot{\psi}|\ll\dot{\psi}^{2} \tag{12}\] are satisfied. Overdots denote a time derivative. Error from the SPA is expected to be smaller than windowing artifacts for gravitational-wave inspirals [32]. For the reference neutron-star binary system used later in this work, all waveforms including those from numerical simulation will satisfy LHS\(<0.1\times\)RHS for both conditions, only reaching \(0.1\) as the stars collide. This supports the use of instantaneous frequency as a characteristic of the time-domain system.

To apply the SPA, we expand the transform integral around the time where the instantaneous frequency \(F\) of the incident wave matches the frequency \(f\) of interest, specifically \(T_{\ell m}\) defined by \[\dot{\psi}_{\ell m}\left(T_{\ell m}\right)=2\pi f \tag{13}\] The resulting frequency domain waveform is \(\tilde{h}(f)=A(f)\exp(-i\phi(f))\), with \[A(f)=\sum_{\ell m}Q_{\ell m}\left(\frac{2\pi}{\ddot{\psi}_{\ell m}(T_{\ell m})}\right)^{1/2}\mathcal{A}_{\ell m}(T_{\ell m}) \tag{14a}\] \[\phi(f)=\frac{\pi}{4}+\psi_{\ell m}(T_{\ell m})-2\pi fT_{\ell m} \tag{14b}\] The SPA allows direct use of model predictions for the time-domain amplitude and phase \(\mathcal{A}_{\ell m}(t)\) and \(\psi_{\ell m}(t)\) to calculate the corresponding frequency-domain waveform.

The functions that enter into \(\tilde{h}\) are relative to an explicit coalescence time \(t_{c}\), which is traditionally defined in the \(f\to\infty\) limit following Cutler and Flanagan [12]. Since \(f\to\infty\) does not happen in all models, we here define this as the time at which the chosen waveform model reaches a specific reference coalescence frequency \(f_{c}\). The LIGO/Virgo software lalsimulation instead uses a convention where \(t_{c}=t_{\mathrm{peak}}\) is defined by the maximum amplitude of the waveform [28]. This choice will imply a reference \(f_{c}=F(t_{\mathrm{peak}})\) which varies between waveform models. For fixed \(t_{c}\) and \(\phi_{c}\), differences between waveform models will depend on the \(f_{c}\) assumed in the parameter estimation, and so will be sensitive to the definition of \(t_{c}\). However, the residual phase error as defined below will be independent of \(f_{c}\).
For each time-domain waveform model, the following time and phase observables are generated from the wave's instantaneous frequency \(F\) and its time derivative \(\dot{F}\), \[T(f)=t_{c}-\int_{f}^{f_{c}}dF\,T^{\prime}(F) \tag{15a}\] \[\psi(f)=\psi_{c}-2\pi\int_{f}^{f_{c}}dF\,F\,T^{\prime}(F) \tag{15b}\] \[=\psi_{c}-2\pi\left(f_{c}t_{c}-fT(f)-\int_{f}^{f_{c}}T(F)dF\right) \tag{15c}\] defining the function \(T^{\prime}(F):=\left(\dot{F}(F)\right)^{-1}=d\,T(F)/dF\) to streamline notation. These observables are entirely defined from the far-field wave emission, and can be interpreted as \(t_{c}-\)(time to \(f_{c}\)) and \(\psi_{c}-\)(time-domain phase accumulation remaining before \(f_{c}\)). After the instantaneous frequency is calculated, \((T-t_{c})\) and \((\psi-\psi_{c})\) can be read in directly from numerical simulation data.

Substituting \(T(f)\) and \(\psi(f)\) into Eqs. 14 yields the frequency-domain form for the quadrupole waveform in terms of the characteristic functions, as we begin to write \(\mathcal{A}(f)\) for \(\mathcal{A}(T(f))\): \[A(f)=Q(\boldsymbol{\theta}_{\mathrm{ext}})\left(T^{\prime}(f)\right)^{1/2}\mathcal{A}(f)/2 \tag{16a}\] \[\phi(f)=\frac{\pi}{4}+\psi(f)-2\pi fT(f) \tag{16b}\] \[=\frac{\pi}{4}+\psi_{c}-2\pi\left(f_{c}t_{c}-\int_{f}^{f_{c}}dF\,T(F)\right) \tag{16c}\] \[=\phi_{c}-2\pi ft_{c}+2\pi\int_{f}^{f_{c}}d\tilde{f}\int_{\tilde{f}}^{f_{c}}dF\,T^{\prime}(F) \tag{16d}\] Setting \(f=f_{c}\) in Eq. 16c defines the signal coalescence phase \(\phi_{c}=\frac{\pi}{4}+\psi_{c}-2\pi f_{c}t_{c}\). Note that \(\phi_{c}=0\) does not correspond to \(\psi_{c}=0\). The explicit second integration over frequency \(\tilde{f}\) demonstrates how \(\phi(f)\) depends on the underlying model's \(T^{\prime}(F)\) and the two integration constants \(t_{c}\) and \(\phi_{c}\) in Eq. 16c. The \(t_{c}\) and \(\phi_{c}\) that characterize an observed signal emerge as integration constants in Eq. 16d, a form that allows marginalization over reference phase \(\phi_{c}\) and time \(t_{c}\) in gravitational-wave inference [33, 10]. The remaining intrinsic parameter contribution is entirely from \(T^{\prime}(F)\).

Following the form used in calibration marginalization, we write frequency-domain signal uncertainty terms as \(\delta A(f)\) and \(\delta\phi(f)\) \[\tilde{h}_{\rm true}(f)=\tilde{h}_{\rm model}(f)(1+\delta A(f))\exp(i\delta\phi(f)) \tag{17}\] The intrinsic model dependence of \(\tilde{h}\) will be derived from the characteristic \(\mathcal{A}\) and \(T^{\prime}=1/\dot{F}\) functions of the waveform family in question. We can therefore express the impact of modeling differences on the signal in terms of these functions, specifically in the form \[\mathcal{A}_{\rm true}(f)=\mathcal{A}_{\rm model}(f)\left(1+\delta\mathcal{A}(f)\right), \tag{18a}\] \[T^{\prime}_{\rm true}(f)=T^{\prime}_{\rm model}(f)(1+\delta T^{\prime}(f)) \tag{18b}\] It is computationally useful to track differences between models through the intermediate functions \(T(f)=T_{\rm model}(f)+\delta T(f)\) and \(\psi(f)=\psi_{\rm model}(f)+\delta\psi(f)\). As the contributions to \(T(f)\) and \(\psi(f)\) are defined relative to \(f_{c}\) in Eqs. 15, the differences \(\delta T(f)\) and \(\delta\psi(f)\) will also be relative to \(f_{c}\).
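A minimal numerical sketch of Eqs. 15-16 is given below, assuming the characteristic functions \(T^{\prime}(F)=1/\dot{F}\) and \(\mathcal{A}(F)\) have already been tabulated on a common frequency grid whose last point is \(f_{c}\); the function name and argument conventions are illustrative rather than part of any waveform package.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def spa_waveform(freqs, Tprime, amp_t, Q, t_c=0.0, psi_c=0.0):
    """Frequency-domain A(f) and phi(f) from the characteristic functions.

    freqs:  monotonically increasing frequency grid whose last point is f_c
    Tprime: 1/Fdot evaluated on freqs
    amp_t:  time-domain amplitude evaluated on freqs
    """
    # Integrals from f up to f_c, built from cumulative integrals over the grid.
    I_T = cumulative_trapezoid(Tprime, freqs, initial=0.0)            # int T' dF
    I_FT = cumulative_trapezoid(freqs * Tprime, freqs, initial=0.0)   # int F T' dF
    T = t_c - (I_T[-1] - I_T)                                         # Eq. 15a
    psi = psi_c - 2.0 * np.pi * (I_FT[-1] - I_FT)                     # Eq. 15b
    A = 0.5 * Q * np.sqrt(Tprime) * amp_t                             # Eq. 16a
    phi = np.pi / 4.0 + psi - 2.0 * np.pi * freqs * T                 # Eq. 16b
    return A, phi
```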
The signal uncertainty at fixed \(t_{c}\) and \(\phi_{c}\) is \[1+\delta A(f)=\left(1+\delta T^{\prime}(f)\right)^{1/2}\left(1+\delta\mathcal{A}(f)\right) \tag{19a}\] \[\delta\phi(f)=\delta\psi(f)-2\pi f\delta T(f)=2\pi\int_{f}^{f_{c}}d\tilde{f}\int_{\tilde{f}}^{f_{c}}dF\,T^{\prime}(F)\delta T^{\prime}(F) \tag{19b}\] The historic choice of \(f_{c}\to\infty\) results in unbounded (slowly diverging) modeling uncertainties at fixed \(t_{c}\) and \(\phi_{c}\) for post-Newtonian expansions.

To determine the impact of model uncertainties on inference, it is important to consider that the integration constants \(\phi_{c}\) and \(t_{c}\) will be searched over when identifying incident waves and marginalized over when determining the intrinsic source properties [10, 15, 33]. The impact of waveform uncertainty on measurements will be reduced by this marginalization. For example, a toy modified-GR waveform model that produced only an overall constant shift \(\delta\phi(f)=\delta\phi_{0}\) across all frequencies would not change any recovered parameters other than \(\phi_{c}\). Similarly, an overall shift of the form \(\delta\phi(f)=2\pi ft_{0}\) would be entirely absorbed by marginalization over \(t_{c}\). In general, any model uncertainty described by \(\phi_{0}+2\pi ft_{0}\) will be absorbed by marginalization over shifts in time and phase during inference.

To find the residual phase error for a measurement scenario, consider a model \(\tilde{h}(f)\) with error \(\delta\phi(f)\). When inferring the properties of this model from a signal that differs by \(\delta\phi(f)\), in the absence of noise, the maximum likelihood value of \(\phi_{0}\) and \(t_{0}\) will be at the minimum of \[\left\langle h-h_{\mathrm{model}}|h-h_{\mathrm{model}}\right\rangle\propto\sum_{i}\frac{\left|A(f_{i})\right|^{2}\left|1-\exp i\left(\delta\phi(f_{i})-\phi_{0}-2\pi f_{i}t_{0}\right)\right|^{2}}{S_{n}(f_{i})} \tag{20a}\] \[\simeq\sum_{i}\frac{\left|A(f_{i})\right|^{2}}{S_{n}(f_{i})}\left|\delta\phi(f_{i})-\phi_{0}-2\pi f_{i}t_{0}\right|^{2} \tag{20b}\] where the approximation applies when residual model error is small enough that a small-angle approximation can apply across relevant frequencies. The maximum likelihood condition becomes equivalent to a weighted least squares fit of \(\delta\phi(f)\) to the linear function \(\phi_{0}+2\pi ft_{0}\), with weights \(|A(f)|^{2}/S_{n}(f)\) corresponding to the expected variance at each frequency. Subtracting off the fit leaves a residual model error \(\delta\phi_{\mathrm{res}}\). This characterizes the frequency-dependent waveform phase uncertainty that should be marginalized over in addition to the existing time and phase marginalization.

## 4 Uncertainty estimates for current models

A proper error estimate should arise from careful analysis of the range of viable analytic choices, universal relation uncertainties, numerical resolution error, neglected effects in modeling, or other choices made by the waveform modelers. The effect of a particular range of "reasonable" choices will determine the frequency-dependent error distribution of the model-characteristic functions \(T^{\prime}(F)=1/\dot{F}(F)\) and \(\mathcal{A}(F)\), as in Eqs. 18, where \(F\) is defined as instantaneous frequency from the phase evolution of the waveform model. These distributions will best be estimated directly by those creating state-of-the-art models, and could be released in concert with a waveform implementation.
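Before turning to concrete model comparisons, note that the residual-phase computation of Eqs. 20 reduces to a short weighted least-squares fit. The sketch below is illustrative, with \(\delta\phi\), the signal amplitude, and the PSD assumed to be tabulated on a common frequency grid.

```python
import numpy as np

def residual_phase(freqs, dphi, amp, psd):
    """Subtract the maximum-likelihood phi_0 + 2*pi*f*t_0 from dphi(f),
    using weights A(f)^2 / S_n(f) as in Eq. 20b."""
    w = amp**2 / psd
    X = np.column_stack([np.ones_like(freqs), 2.0 * np.pi * freqs])
    WX = X * w[:, None]
    # Weighted normal equations: (X^T W X) c = X^T W dphi
    phi0, t0 = np.linalg.solve(X.T @ WX, WX.T @ dphi)
    return dphi - (phi0 + 2.0 * np.pi * freqs * t0), phi0, t0
```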
To illustrate the size of current modeling uncertainties, however, we show an example comparing waveform models from lalsimulation [34] and simulations from the CoRe library [27]. This illustrates both the current model differences as well as a possible application of this error budget calculation to \(A(f)\) and \(\phi(f)\). We start with a high-resolution CoreDB numerical simulation waveform from Dietrich et al. 2017 BAM:0095 [35], which has masses \(m_{1}=m_{2}=1.349998\,M_{\odot}\), zero spin, and the SLy equation of state. This equation of state has \(\Lambda_{1}=\Lambda_{2}=390.1104\) at this mass, and we generate tidal waveform models with matching \(\boldsymbol{\theta}_{\mathrm{int}}\) parameters using TEOBResumS [36, 37] and SEOBNRv4T [7, 8]. We calculate instantaneous frequency with second order accurate central differences using numpy.gradient for each model. For the numerical simulation data, we fit \(f(t)\) with a B-spline using scipy.interpolate with smoothing to reduce sub-orbital oscillations in the derivative quantity. Another gradient gives \(\dot{F}\). We also show the equivalent model characteristics of the IMRPhenomP model [4] with numerical tidal contributions NRTidal [5, 6] as well as the TaylorF2 [38] waveforms with post-Newtonian tides [39] using the analytic stationary phase approximation inversion \[T(f)=\phi^{\prime}(f)/(2\pi)-t^{\prime}_{c} \tag{21a}\] \[\psi(f)=\phi(f)+2\pi f\,T(f)-\phi^{\prime}_{c} \tag{21b}\] \[\dot{F}(F)=\frac{1}{T^{\prime}(F)}=\frac{2\pi}{\phi^{\prime\prime}(F)} \tag{21c}\] \[\mathcal{A}(F)=\frac{1}{Q}\sqrt{\dot{F}(F)}\left|\tilde{h}(F)\right| \tag{21d}\] where we choose \(\phi^{\prime}_{c}\) and \(t^{\prime}_{c}\) to set the \(T\) and \(\psi\) functions to zero at our chosen \(f_{c}\). Fig. 1 shows the resulting \(\dot{F}\) and \(\mathcal{A}\) for all models.

Figure 1: The characteristic functions \(\dot{F}(F)\) and the time-domain strain amplitude \(\mathcal{A}(F)\) as a function of instantaneous frequency. These functions characterize the waveform predictions of each model independent of overall shifts in time and phase. Results are shown for a set of time-domain models, frequency-domain models, and two resolutions of numerical simulation data. All simulate a zero-spin double neutron star system with \(m_{1}=m_{2}=1.35M_{\odot}\) and the SLy equation of state, except for the binary black hole (BBH) which uses SEOBNRv4 [40] for the same masses. The difference in time-domain amplitude \(\mathcal{A}\) for TaylorF2 at these frequencies is because it includes only the leading-order amplitude term. The two light-shaded NR simulation resolutions demonstrate consistent frequency-domain amplitude and phase predictions from early tidal departure through to merger frequencies.

As will be discussed in Section 7, differences in \(\dot{F}\) can be interpreted in terms of additional energy losses as a function of frequency, for example those that arise from tidal contributions relative to the BBH model. The amplitude \(\mathcal{A}\) is connected to the gravitational-wave luminosity at a given frequency, with differences arising for example from changes in the source quadrupole moment. We note that this comparison of numerical data and semi-analytic models is independent of any waveform alignment choices such as shifts in time and phase; it is closely related to comparisons of orbital frequency derivative used in numerical simulation analyses (e.g. [41]) cast in terms of waveform characteristics.
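A minimal sketch of the gradient-plus-spline extraction described above is given below; the helper name and smoothing convention are illustrative, and the sign convention follows the mode decomposition \(h_{\ell m}=\mathcal{A}_{\ell m}e^{-i\psi_{\ell m}}\) of Eq. 8.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def characteristic_functions(t, h22, smoothing=None):
    """Instantaneous frequency F(t), its derivative Fdot, and the time-domain
    amplitude from a complex (2,2)-mode time series h22(t)."""
    amp = np.abs(h22)
    psi = -np.unwrap(np.angle(h22))            # h22 = amp * exp(-i psi)
    F = np.gradient(psi, t) / (2.0 * np.pi)    # instantaneous GW frequency
    if smoothing is not None:                  # smooth F(t) for NR data before the next gradient
        F = splev(t, splrep(t, F, s=smoothing))
    Fdot = np.gradient(F, t)
    return F, Fdot, amp
```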
Fitting these functions directly from a combination of semi-analytic and numerical-simulation information could therefore inform a numerical-relativity calibrated waveform model which is independent of hybridization choices and extends through to the post-merger phase.

For this set of models, we note that TEOBResumS terminates at its \(t_{c}\) with a lower \(f_{c}\) than other models. For the first phase comparison, we therefore choose TEOBResumS as our baseline and set \(f_{c}=f_{\rm peak,TEOB}\simeq 1787\,\rm Hz\) as a reference frequency which can be chosen to be common for all models. We compute \(T(f)\) and \(\psi(f)\) relative to the common reference frequency and regenerate the Fourier-domain phase. Although smoothing was required to reduce the differentiation noise seen in Fig. 1, the integrated \(\psi(f)\) relative to the reference frequency can be read directly from numerical data. This gives from each waveform the amplitude and phase of a Fourier-domain signal \(\tilde{h}(f)\) with a consistent \(t_{c}\) and \(\phi_{c}\), using the stationary phase approximation of Sec. 3. Once the signal-domain functions are defined, we can compute the relative amplitude error \(\delta A\) and the phase shift \(\delta\psi\) relative to the baseline TEOBResumS model that sets \(f_{c}\). The phase error resulting from waveform differences is relative to the reference frequency, and naturally goes to zero at that frequency. Results are shown in Fig. 2.

Figure 2: Phase differences of the predicted waveform \(\tilde{h}\), relative to the TEOBResumS model. Phase is measured relative to \(f_{c}=1783\,\rm Hz\) as defined for TEOBResumS, so all models have \(T(f_{c})=0\) and \(\phi(f_{c})=0\) at this frequency. All simulations and models have the same mass and tide parameters, except for BBH which sets the tides to 0.

To determine measurement impact here, we calculate the residual phase differences that must remain after marginalization over time and phase. We calculate residual phase differences by removing a weighted least squares fit of \(\delta\phi(f)\) to \(\phi_{0}+2\pi ft_{0}\) with variance \(S_{n}(f)/A^{2}(f)\), and show the resulting differences with the "APLUS" \(S_{n}\) [42] in Fig. 3. The residual phase error is not very sensitive to the specific ground-based detector noise spectrum used to fit \(\phi_{0}\) and \(t_{0}\); very similar residual phase is found using \(S_{n}(f)\) from other ground-based spectra. As a cross-check, recalling that most of the signal-to-noise accumulates through the early inspiral [43] and would "anchor" the \(t_{c}\) and \(\phi_{c}\) used to describe the models, we verified that similar residual error can also be found by subtracting a linear fit to \(\delta\phi\) in the "bucket" of ground-based detector sensitivities, between 50 and 150 Hz.

Calibration envelopes from O4 [20] give \(\delta A\) better than 11.29% and \(\delta\phi\) better than 9.18 deg or 0.160 radians from 20-2000 Hz. Comparing the results of Fig. 5, we see that waveform uncertainties for these neutron-star models will be smaller than calibration uncertainties at low frequency. Current waveform differences for \(\delta\phi\) between TEOBResumS and SEOBNRv4 may exceed calibration error above 500 Hz and between IMRPhenomD_NRTidalv2 and TEOBResumS above 1 kHz. Waveform differences from \(\delta A\) remain below calibration uncertainty past 1 kHz for tidal models that go beyond leading order.
Figure 3: Relative signal amplitude differences \(\delta A(f)\) and residual signal phase differences \(\delta\phi_{\rm res}(f)\) of waveform models relative to the reference model TEOBResumS. Residual phase is that remaining after removing a least-squares fit of \(\delta\phi(f)=\phi_{0}+2\pi ft_{0}\) weighted by the expected variance \(\propto A(f)^{2}/S_{n}(f)\) in A+; the choice of ground-based detector has minimal impact on the result. The estimated equivalent contribution of noise fluctuations in current and next-generation detectors for a signal at \(d_{\rm eff}=100\,\)Mpc is shown for reference following the discussion of Section 5.

## 5 Detector-dependent accuracy requirements for calibration and modeling

For our reference \(\delta A(f)\) and \(\delta\phi(f)\), we can compute the resulting waveform differences and estimate whether they have the potential to affect measurements in a detector with a given noise power spectral density \(S_{n}(f)\) and signal amplitude \(A(f)\). This motivates frequency-dependent modeling requirements to bring systematic uncertainties below the level of frequency-dependent statistical fluctuations implied by a future detector's power spectral density \(S_{n}(f)\) for a given signal. These criteria will apply equally well to the \(\delta A(f)\) and \(\delta\phi(f)\) of calibration envelopes, similarly applied to the amplitude of a specific signal.

We choose three reference noise PSDs in this work. First, we denote as "17.08.17" the LIGO Livingston power spectral density during the observation of GW170817 [44], as documented in [45], to set the scale of background noise in 2017, and compare with assessments of waveform error in determining GW170817's source properties. Second, we show as "APLUS" the A+ detector design of [42], as a reference for the goal sensitivities of LIGO/Virgo/KAGRA's O5 observing run in the later 2020s [46, 23, 24]. Finally, as a reference for what might be anticipated for next-generation gravitational-wave observatories taking data in the 2030s, we show results using a wide-band Cosmic Explorer noise configuration "CE" [47, 48, 49], which has comparable sensitivity to the Einstein Telescope [50].

Recall that the signal coming from the difference between two waveforms is \(\delta\mathbf{h}=\mathbf{h_{1}}-\mathbf{h_{2}}\). In general, we are interested in the magnitude of the waveform difference \(\sqrt{\left\langle\delta\mathbf{h},\delta\mathbf{h}\right\rangle}\), which is the signal-to-noise ratio of the waveform differences. For a relative difference in amplitude \(\delta A(f)\), this is given by \[\left\langle\delta\mathbf{h},\delta\mathbf{h}\right\rangle=4\int_{0}^{f_{\rm max}}df\,\frac{|\delta\tilde{h}(f)|^{2}}{S_{n}(f)}=4\int_{0}^{f_{\rm max}}df\,\frac{A^{2}(f)\delta A^{2}(f)}{S_{n}(f)} \tag{22}\] and for a frequency-dependent phase difference \(\delta\phi(f)\) we have \[\left\langle\delta\mathbf{h},\delta\mathbf{h}\right\rangle=4\int_{0}^{f_{\rm max}}df\,\frac{A^{2}(f)\left|1-\exp\left(i\delta\phi(f)\right)\right|^{2}}{S_{n}(f)} \tag{23a}\] \[=4\int_{0}^{f_{\rm max}}df\,\frac{2A^{2}(f)\left(1-\cos\delta\phi(f)\right)}{S_{n}(f)} \tag{23b}\]

Figure 4: Waveform differences of semi-analytic waveform models from phase differences relative to TEOBResumS for a signal at \(d_{\rm eff}=100\,\)Mpc. This uses the same \(\delta A(f)\) and \(\delta\phi(f)\) as Fig.
3, with each effect propagated through to \(|\delta h(f)|\), including for \(\delta\phi(f)\) the motion in and out of phase in \(A(f)\sqrt{2(1-\cos\delta\phi(f))}\) which gives the bumpy structure. A similar plot style was used in [51] to compare waveform differences to detector noise. A ratio of the noise and signal quantities squared is integrated to determine whether waveforms are 'distinguishable' by the criteria of Ref. [13].

Requirements for \(\delta\phi(f)\) and \(\delta A(f)\) are set from a particular level of background noise fluctuations \(S_{n}(f)\). The frequency-bin variance \(fS_{n}(f)\) comes from the power spectral density \(S_{n}(f)\). The characteristic strain squared of the signal in a frequency bin is \(4f^{2}|\delta\tilde{h}(f)|^{2}\); if the characteristic strain satisfies \(4f^{2}|\delta\tilde{h}(f)|^{2}<fS_{n}(f)\) at all frequencies, then the signal-to-noise of the waveform difference \(\langle\delta\mathbf{h}|\delta\mathbf{h}\rangle\) will be less than one and the waveforms are indistinguishable as defined in Section 2 (see [52, 53] for explanations of characteristic strain). We show this comparison recast in terms of the more commonly plotted amplitude spectral density \(\sqrt{S_{n}(f)}\) and the effective strain amplitude \(2\sqrt{f}|\delta\tilde{h}|\) in Figure 4 for a TEOBResumS reference signal at \(d_{\rm eff}=100\,\)Mpc. All waveform parameters are set following the reference numerical simulation of Section 4. While the IMRPhenomPv4_NRT waveform model remains nearly indistinguishable from TEOBResumS even for A+ sensitivities, other waveform models have differences with a potential impact on the likelihood. However, as explored in [14], the actual impact on the inferred parameter posterior distributions depends additionally on whether or not the model differences (or calibration errors) are orthogonal to the waveform differences introduced by a parameter variation. The result is similar to the waveform difference estimates of [51], but uses the maximum-likelihood phase and amplitude method of Section 3. In this work, we can recover a similar estimate of systematic uncertainty for state-of-the-art analytic model variants that do not share a common \(f_{\rm peak}\).

For \(\delta A(f)\), and for sufficiently small \(\delta\phi(f)\) that \(\sqrt{2(1-\cos\delta\phi)}\simeq\delta\phi\), this comparison motivates a goal for the allowable waveform and calibration errors by again rearranging the frequency-by-frequency indistinguishability criterion to \[|\delta A(f)|<\frac{\sqrt{S_{n}(f)}}{2A(f)\sqrt{f}}\qquad\qquad\mbox{and}\qquad\qquad|\delta\phi(f)|<\frac{\sqrt{S_{n}(f)}}{2A(f)\sqrt{f}} \tag{24}\] For both waveform and calibration uncertainties, we then have a goal frequency-dependent tolerance that is set by the noise spectral density and waveform amplitude. We show the frequency-dependent indistinguishable error level for \(\delta\phi(f)\) or \(\delta A(f)\) for the three reference detector \(S_{n}(f)\) curves and our reference neutron-star \(A(f)\) at \(d_{\rm eff}=100\,\)Mpc in Fig. 5. These are translated into the shaded detector bounds in Fig. 3. Figure 5 shows that waveform and calibration error of \(\sim 3\%\) in amplitude (2 deg of phase) through the tens of Hz range would be required for calibration to remain subdominant to stochastic noise for a single-detector \(d_{\rm eff}=100\) Mpc BNS with signal-to-noise ratio approximately 60 in the A+ detector.
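The tolerance of Eq. 24 is a one-line computation once \(A(f)\) and \(S_{n}(f)\) are tabulated on a common grid; the short sketch below is illustrative.

```python
import numpy as np

def indistinguishability_tolerance(freqs, amp, psd):
    """Frequency-dependent bound of Eq. 24: the largest |dA| (fractional) or
    |dphi| (radians) whose characteristic strain stays below the noise."""
    return np.sqrt(psd) / (2.0 * amp * np.sqrt(freqs))

# Example usage, for a tabulated reference signal amplitude A(f) and a detector
# PSD S_n(f) sampled on the same frequency grid `freqs`:
# tol = indistinguishability_tolerance(freqs, A_ref, Sn)
```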
In Cosmic Explorer, the same system would have a signal-to-noise ratio of \(\simeq\)1158 above 10 Hz and require waveform and calibration errors as small as \(10^{-3}\) for the impact of those uncertainties to be smaller than the statistical uncertainty.

Figure 5: The size of \(\delta\phi\) (in radians) or \(\delta A\) (as a fraction) that gives signal modifications with the same characteristic strain as that from the power spectral density for the labeled detectors, for an incident BNS signal from \(d_{\mathrm{eff}}=100\,\mathrm{Mpc}\). The model uncertainty and calibration requirements set by the power spectral density are specific to the signal under consideration; the rapid rise at 2 kHz for CE reflects the termination frequency of the inspiral model being applied here. The important region for measurement and calibration will be set by the frequency range of the desired signal information, as could be determined for example in Figure 2 of Ref. [43]. For signals of this amplitude, systematic uncertainties from calibration and waveform modeling seem likely to dominate the error budget at \(\sim 10-100\) Hz in next-generation observatories.

## 6 Range of waveform uncertainties for tidal inference

Recent discussions have raised the question of whether the analysis of gravitational-wave signals should be truncated at a specific frequency to limit systematic uncertainty in the inference of tidal properties, as those uncertainties increase due to modeling differences at high frequencies (see for example [54]). We propose instead that a more appropriate treatment of increasing waveform uncertainty is to explicitly marginalize over the high-frequency \(\delta\phi(f)\) range implied by those model differences. Marginalization procedures like those developed by Essick [19] could make this more tractable. As a first estimate, we characterize the distribution of \(\delta\phi(f)\) by generating an equivalent of the results of Fig. 3 for the IMRPhenomPv2NRT_lowSpin_prior samples in the GW170817 PE release of GWTC-1 [22], and find the distribution of \(\delta A(f)\) and \(\delta\phi(f)\) over the prior and posterior distributions of \(\theta\). Waveform amplitudes are calculated using the sampled distance, sky position and inclination, the LIGO Livingston (L1) detector antenna pattern at the signal GPS time, and the released L1 noise curve [45]. Residual \(\delta\phi\) is calculated as described in Section 3, taking the maximum-likelihood value of an overall shift in time and phase. The posterior distributions of \(\delta A(f)\) and \(\delta\phi(f)\) are shown in Figure 6. In the figure, differences from each sample are truncated at merger. Differences in modeling give a median error that exceeds statistical error above \(\sim 1\,\mathrm{kHz}\). The waveform differences shown in Fig. 6 come from a combination of different spin models, different tidal models, and differences arising from a re-summed effective-one-body waveform model vs. a frequency-domain phenomenological waveform generation. Amplitude uncertainty is not significant for GW170817. The results confirm that systematic error can be avoided by restricting analysis to frequencies below \(\sim 1\,\mathrm{kHz}\), as \(\delta\phi\) is smaller than the statistical background in that range. However, higher frequency analyses could also explicitly marginalize over this residual waveform uncertainty using a \(\delta\phi\) spline that encompasses these waveform differences.
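The residual-phase construction used for Fig. 3 and for the GW170817 comparison above reduces to a weighted linear fit once \(\delta\phi(f)\), \(A(f)\), and \(S_{n}(f)\) are available on a common frequency grid. The following is a minimal sketch of that step; the closed-form weighted least-squares fit is a simple stand-in for the maximum-likelihood time-and-phase alignment of Section 3.

```python
import numpy as np

def residual_phase(f, dphi, amp, sn):
    """Subtract the best-fit (phi0 + 2*pi*f*t0) from dphi(f), weighted by amp^2 / sn.

    The fit absorbs the arbitrary coalescence time and phase, leaving the
    physically meaningful residual phase difference plotted in Fig. 3.
    """
    w = amp ** 2 / sn                                            # least-squares weights
    basis = np.column_stack([np.ones_like(f), 2.0 * np.pi * f])  # columns for [phi0, t0]
    wb = basis * w[:, None]
    phi0, t0 = np.linalg.solve(basis.T @ wb, wb.T @ dphi)        # weighted normal equations
    return dphi - (phi0 + 2.0 * np.pi * f * t0)

# Example with a synthetic phase difference: the linear-in-f part is removed exactly.
# f = np.linspace(20.0, 1000.0, 500)
# res = residual_phase(f, 0.3 + 2e-3 * np.pi * f + 1e-8 * f**2, amp, sn)
```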
A similar distribution can be computed for the prior samples, and gives a wider distribution of both \(\delta\phi(f)\) and \(\delta A(f)\) at several hundred Hz; the 2-\(\sigma\) range exceeds 50 Deg/% between 200 Hz and 1 kHz. However, most systems in the prior bank give zero contribution above \(\sim 1\) kHz, as the stars have merged at lower frequencies due to their large size and tides. We note that the blue shaded regions are only a rough estimate of realistic uncertainties; recovered credible intervals for \(\delta A\) and \(\delta\phi\) can be seen in Fig. 12 of Edelman et al [9]. In part, differences may arise because the Edelman result includes marginalization over amplitude and phase calibration errors from the individual detectors. The one-sigma ranges of the calibration-marginalized spline-recovery result are generally broader for \(\delta\phi\) and tighter for \(\delta A\) than the statistical estimate of this work, and like calibration uncertainties they are quite flat over frequency at the 1-\(\sigma\) level. However, at the 2-\(\sigma\) level the credible intervals increase with increasing frequency in a similar way to the noise-based estimates shown here. This type of error estimate could be used to generate a spline-fit envelope for \(\delta\phi\) and \(\delta A\) for coherent waveform marginalization in proposed future event analyses. Ideally, the implementation would require a prescription for the distribution of \(\delta\phi(f)\) as it depends on \(\theta_{\rm int}\), so that marginalization can reflect increased modeling uncertainties away from well-calibrated regions of mass ratio and other parameters. It may also be better derived directly from theoretical estimates of model uncertainties as discussed in Section 3, instead of as a post-hoc estimate made by comparing waveform models.

Figure 6: Amplitude and residual phase differences between optimized versions of the waveform models IMRPhenomPv2_NRTidal_INTERP [55] and SEOBNRv4T_surrogate [56] over the posterior samples of GW170817 in the GWTC-1 release [22]. The distribution of \(\delta A(f)\) and \(\delta\phi(f)\) is described by the median value at each frequency in red, and 1-\(\sigma\) and 2-\(\sigma\) ranges around the median in grey. As a comparison, the estimated impact of noise fluctuations for the median amplitude reconstructed in the LIGO Livingston observatory is shaded in blue.

## 7 Astrophysical interpretation of \(\delta\phi\) and \(\delta A\)

Considering the size of the energy transfers required to modify the characteristic functions \(\dot{F}(f)\) and \(\mathcal{A}(F)\) will allow us to interpret constraints on \(\delta\phi(f)\) and \(\delta A(f)\) in general terms, as might be recovered from a coherent analysis like that of Edelman et al. [9]. Consider the usual energy-balance equation for the evolution of a source system, with a total system energy \(E(F)\) (for orbits: negative relative to \(F\to 0\)) that can be written as a function of the instantaneous gravitational-wave frequency \(F\). For example, two masses \(m_{1}\) and \(m_{2}\) in circular orbits emitting gravitational waves at \(F=\Omega_{\rm orb}/\pi\) for orbital angular frequency \(\Omega_{\rm orb}\) have a leading-order orbital binding energy of \[E_{\rm orb}=-\frac{m_{1}m_{2}}{m_{1}+m_{2}}\frac{c^{2}}{2}\left(\frac{G(m_{1}+ m_{2})(\pi F)}{c^{3}}\right)^{2/3} \tag{25}\] where the masses combine into a reduced mass and the term raised to the power \(2/3\) is a dimensionless frequency parameter often denoted in post-Newtonian expansions as \(x\) [57].
The gravitational-wave energy loss or luminosity \(\mathcal{L}_{\rm GW}\) from the same orbit is \[\mathcal{L}_{\rm GW}=\frac{32}{5}\frac{c^{5}}{G}\frac{m_{1}^{2}m_{2}^{2}}{ \left(m_{1}+m_{2}\right)^{4}}\left(\frac{G(m_{1}+m_{2})(\pi F)}{c^{3}}\right)^ {10/3} \tag{26}\] using the same dimensionless frequency term. Orbital evolution is driven by the energy balance at each characteristic emission frequency, which results in the characteristic \(\dot{F}\) function in terms of the energy gradient \(E^{\prime}(F)=dE(F)/dF\) and the total rate of energy loss or luminosity \(\mathcal{L}(F)\): \[\frac{dE(F)}{dt}=-\mathcal{L}(F)\qquad\qquad\qquad\mathrm{and}\qquad\qquad \qquad\dot{F}=-\frac{\mathcal{L}(F)}{E^{\prime}(F)} \tag{27a}\] The gravitational-wave luminosity is determined by the multipole decomposition of strain emitted by the source [58, 59]: \[\mathcal{L}_{\rm GW} =\frac{1}{4\pi}\int d\Omega\sum_{\ell m}Y_{-2}^{\ell m}(\Omega) \left|d\dot{h}_{\ell m}(f)\right|^{2} \tag{28}\] \[=\frac{1}{16\pi}\sum_{\ell m}d^{2}\left(\dot{\mathcal{A}}_{\ell m }^{2}+(2\pi)^{2}\mathcal{A}_{\ell m}^{2}F^{2}\right) \tag{29}\] where we use again \(F=\dot{\psi}/2\pi\) and where \(d\) the luminosity distance at which \(h\) is measured. When the SPA applies (following Eq. 12) we can neglect the \(\dot{\mathcal{A}}^{2}\) term, so gravitational-wave luminosity in each mode determines the waveform amplitude through \[\mathcal{A}^{2}=\frac{4}{\pi}\frac{\mathcal{L}_{\rm GW}}{d^{2}F^{2}} \tag{30}\] The characteristic time-domain amplitude is directly connected to the total luminosity of the sytem. The energy balance implied by a waveform model could be modified by any unmodeled physical effect which increases gravitational-wave luminosity \(\mathcal{L}_{\rm GW}(1+\delta\mathcal{L}_{\rm GW})\), such as the coherent excess quadrupole moment induced by tides on each component neutron star. Unmodeled non-gravitational-wave energy losses (including neutrino, electromagnetic,...) could also contribute to the system's evolution through an additional \(\mathcal{L}_{\rm tot}(1+\delta\mathcal{L}_{\rm MM})\) that drives the energy balance without affecting the gravitational-wave amplitude. Finally, models derived from or calibrated to numerical relativity simulations might include unphysical excess energy loss due to numerical dissipation. The true waveform would then have \(\mathcal{L}_{\rm tot}(1-\delta\mathcal{L}_{\rm num})\); this is equivalent to a negative \(\delta\mathcal{L}_{\rm MM}\) needed to generate the true model. ### Internal energy transfers: adiabatic vs. dynamic The waveform model may also neglect effects that change the system energy associated with a characteristic gravitational-wave emission frequency, such as high-order corrections to the orbital binding energy or a missing energy reservoir. For example, there may be changes in the energy stored in a component neutron star's internal modes. Consider some effect that changes total energy \(E\) as a function of \(F\), for example changing the leading order energy of Eq. 25 to \(E(1+\delta E)\). 
A total derivative with respect to \(F\) gives the correction to the energy gradient term in the energy balance equation, \(dE/dF=E^{\prime}\), as \[E^{\prime}(1+\delta E^{\prime})=E^{\prime}\left(1+\delta E+\frac{E}{E^{\prime }}\left(\delta E\right)^{\prime}\right) \tag{31}\] Drawing from the extensive discussion of potential energy transfer impacts in the literature (such as in [60, 61, 62, 8]), we interpret the first term as a correction from adiabatic effects: those where the change in \(\delta E\) with frequency is slow. In this case, noting each \(\delta\) term here represents the size of _relative_ corrections to the original system \(E^{\prime}\) and \(E\), the correction \(\delta E^{\prime}_{A}=\delta E_{A}\). For adiabatic tides, the correction \(\delta E_{A}\) is negative [61]: energy going into the stellar deformation implies a less negative total energy at a given frequency. In contrast, the second term characterizes dynamic effects, coming from a rapid energy change. For example, consider a resonant transfer of energy \(\Delta E_{D}\) from the orbit to a stellar mode, starting at a characteristic gravitational-wave frequency \(F_{0}\) over a bandwidth \(\Delta F\). Around \(F=F_{0}\), we have a change in the gradient \[E^{\prime}(1+\delta E^{\prime}) \simeq E^{\prime}-\frac{\Delta E_{D}}{\Delta F} \tag{32}\] \[= E^{\prime}\left(1-\frac{E}{E^{\prime}}\frac{\delta E_{D}}{ \Delta F}\right) \tag{33}\] where we introduce the relative energy transfer \(\delta E_{D}=\Delta E/E_{\rm orb}\). Define the energy transfer timescale \(t_{D}=\Delta F/\dot{F}\) and the adiabatic decay timescale or gravitational-wave emission timescale \(t_{\rm A}=|E/{\cal L}|\simeq 2F/3\dot{F}\). Using the energy balance relationship \(E^{\prime}=-{\cal L}/\dot{F}\), we find as has previously been shown [62] that the impact of a dynamic energy transfer compared to adiabatic energy transfer is amplified: \[\delta E^{\prime}_{D}=-\frac{E}{E^{\prime}}\frac{\delta E_{D}}{\Delta F}=\frac {E}{{\cal L}}\frac{\dot{F}}{\Delta F}\delta E_{D}\simeq\frac{3}{2}\frac{t_{A}} {t_{D}}\delta E_{D} \tag{34}\] Generally, then, internal energy transfers lead to a gradient term \[\delta E^{\prime}\sim\delta E_{A}+\frac{t_{A}}{t_{D}}\delta E_{D} \tag{35}\] where \(\delta E_{A}E_{\rm orb}\) and \(\delta E_{D}E_{\rm orb}\) characterize the amounts of energy transferred adiabatically or dynamically. ### Signal implications Assume that all unmodeled effects are very small corrections to the baseline energy and luminosity, so that their effects can be linearized. We find the impact on the characteristic functions \[{\cal A}_{\rm true} = {\cal A}_{\rm model}\left(1+\delta{\cal L}_{\rm GW}\right) \tag{36a}\] \[\dot{F}_{\rm true} = \dot{F}_{\rm model}\left(1-\delta E^{\prime}+\delta{\cal L}_{\rm GW }+\delta{\cal L}_{\rm MM}\right)\] (36b) \[T^{\prime}_{\rm true} = T^{\prime}+T^{\prime}\left(\delta E^{\prime}-\delta{\cal L}_{ \rm GW}-\delta{\cal L}_{\rm MM}\right) \tag{36c}\] This is consistent with tidally driven increases in the \(\dot{F}\) function that can be seen at moderate frequencies for all binary neutron star models relative to the binary black hole model in Fig. 1. To confirm the interpretation of these energy transfers, we work out the impact on the time-domain waveform following Eqs. 
15 and using the energy balance relation \(T^{\prime}=-E^{\prime}/{\cal L}\), \[\delta T(f) = \int_{f}^{f_{c}}dF\frac{E^{\prime}}{{\cal L}}\left(-\delta E^{ \prime}+\delta{\cal L}_{\rm GW}+\delta{\cal L}_{\rm MM}\right) \tag{37a}\] \[\delta\psi(f) = 2\pi\int_{f}^{f_{c}}dFF\frac{E^{\prime}}{{\cal L}}\left(-\delta E ^{\prime}+\delta{\cal L}_{\rm GW}+\delta{\cal L}_{\rm MM}\right) \tag{37b}\] To interpret them physically: since orbital \(E^{\prime}<0\), this shows that both the time to coalescence and the number of cycles before coalescence can be shortened by additional luminosity or by a less rapid decrease of total energy of the system with increasing \(F\). After propagation through to the signal domain, again linearizing the different corrections, we will find from Eqs. 19 the energetics implications for the frequency domain signal: \[\delta A(f) =\frac{1}{2}\left(\delta E^{\prime}-\delta\mathcal{L}_{\mathrm{GW}}- \delta\mathcal{L}_{\mathrm{MM}}\right)+\delta\mathcal{L}_{\mathrm{GW}} \tag{38a}\] \[=\frac{1}{2}\left(\delta E^{\prime}+\delta\mathcal{L}_{\mathrm{GW}}- \delta\mathcal{L}_{\mathrm{MM}}\right)\] (38b) \[\delta\phi(f) =2\pi f\delta T(f)-\delta\psi(f)\] (38c) \[=-2\pi\int_{f}^{f_{c}}d\tilde{f}\int_{\tilde{f}}^{f_{c}}dF\,\frac {E^{\prime}}{\mathcal{L}}\left(\delta E^{\prime}-\delta\mathcal{L}_{\mathrm{ GW}}-\delta\mathcal{L}_{\mathrm{MM}}\right) \tag{38d}\] Note that any energy losses that are not in gravitational waves lead to a decrease in gravitational-wave amplitude at the corresponding frequency - the orbit sweeps through more quickly and fewer cycles contribute. Phase accumulates more rapidly with any additional energy losses from the system. The relative change in \(\delta A\) vs \(\delta\phi\) depends on the type of energy transfer that is added to the source model. ### Example: dynamic energy transfer and GW170817 Applying the energetics framework above allows the interpretation of Edelman et al [9] results for GW170817 in terms of a possible augmentation of the inspiral waveform due to neutron-star energy transfers. While no confident identification of a departure from the signal model was identified, the posterior distribution of \(\delta\phi(f)\) in Figure 12 in [9] shows a 1-\(\sigma\) excess at around 60 Hz of perhaps 5 Deg or order 0.1 radians. At the same frequency, no significant \(\delta A\) change is observed. One candidate for a short-duration \(\delta\phi(f)\) at a specific inspiral frequency is dynamic energy transfer into a neutron-star mode, for example as explored in [63, 64]. As the net impact on the Fourier-domain phase comes from a difference between \(2\pi f\delta T\) and \(\delta\phi\) in Eq. 38c, a completely instantaneous energy loss only affects amplitude at the specific emission frequency. Consider instead the transfer of \(\Delta E\) at \(F_{0}\) over a small range \(\Delta F\). If the excitation does not induce any coherent quadrupole, we have a corresponding short-duration relative decrease in the signal amplitude from Eq. 34 of \[\delta A(F_{0})=\frac{1}{2}\delta E^{\prime}=\frac{1}{2}\frac{\Delta E}{t_{D}} \frac{1}{\mathcal{L}}=\frac{1}{2}\frac{t_{A}}{t_{D}}\delta E_{D} \tag{39}\] using the timescale notation of Sec. 7.1. If amplitude variation is not observed, this limits the total unmodeled energy transfer rate to be small compared to the luminosity at 60 Hz, which is \(\sim 2\times 10^{-4}M_{\odot}c^{2}/\mathrm{s}\) or \(-3\times 10^{43}\) joules/s at 60 Hz. 
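The scales quoted in this estimate follow from the leading-order expressions of Eqs. 25 and 26 and can be checked directly; the equal 1.35 \(M_{\odot}\) component masses below are an assumption for a GW170817-like system.

```python
import numpy as np

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30  # SI units

def e_orb(m1, m2, freq):
    """Leading-order orbital binding energy, Eq. (25), in joules."""
    m_tot = m1 + m2
    x = (G * m_tot * np.pi * freq / C**3) ** (2.0 / 3.0)
    return -(m1 * m2 / m_tot) * C**2 / 2.0 * x

def l_gw(m1, m2, freq):
    """Leading-order gravitational-wave luminosity, Eq. (26), in watts."""
    m_tot = m1 + m2
    x = (G * m_tot * np.pi * freq / C**3) ** (2.0 / 3.0)
    return (32.0 / 5.0) * (C**5 / G) * (m1 * m2 / m_tot**2) ** 2 * x**5

m1 = m2 = 1.35 * M_SUN      # assumed GW170817-like component masses
freq = 60.0                 # gravitational-wave frequency in Hz
E, L = e_orb(m1, m2, freq), l_gw(m1, m2, freq)
msun_c2 = M_SUN * C**2

print(f"E_orb(60 Hz) = {E / msun_c2:.4f} Msun c^2 = {E:.2e} J")       # roughly -0.006 Msun c^2
print(f"L_GW (60 Hz) = {L / msun_c2:.2e} Msun c^2/s = {L:.2e} W")     # roughly 2e-4 Msun c^2/s
t_A = abs(E / L)                                                      # adiabatic emission timescale |E / L|
print(f"sqrt(t_A * F) = {np.sqrt(t_A * freq):.0f}")                   # a few tens, consistent with the ~40 quoted
```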
At the same time, there is an increase in accumulated Fourier-domain phase \(\delta\phi\) which will be nonzero for frequencies between \(F_{0}\) and \(F_{0}+\Delta F\). We estimate the nonzero values as \[\delta\phi \simeq 2\pi\delta E^{\prime}\int_{F_{0}}^{F_{0}+\Delta F}d\tilde{F} \int_{\tilde{F}}^{F_{0}+\Delta F}dF\,T^{\prime}(F) \tag{40}\] \[\simeq \pi\delta E^{\prime}\,\frac{\left(\Delta F\right)^{2}}{\dot{F}} \simeq\pi\Delta E\frac{\dot{F}}{\mathcal{L}}t_{D}=\frac{2\pi}{3}t_{D}F\delta E _{D} \tag{41}\] where \(t_{D}F\) is the number of cycles over which the energy is transferred. A resonant energy transfer has a dynamic timescale of \(t_{D}\sim\sqrt{t_{A}/F}\)[62], so that both \(t_{A}/t_{D}\) and \(t_{D}F\sim\sqrt{t_{A}F}\). At 60 Hz, \(\sqrt{t_{A}F}\simeq 40\), and the orbital binding energy \(E\simeq-0.006M_{\odot}c^{2}\) or \(-1.1\times 10^{45}\) joules, and a energy transfer of \(\delta E=\Delta E/E\sim 0.001\) at 60 Hz corresponds to roughly \(\delta A=2\%\) and \(\delta\phi=5\) Deg, compatible with the credible intervals recovered for GW170817 [9]. We leave more realistic explorations of astrophysical interpretations to future work. ## 8 Conclusion An amplitude-phase decomposition of time-domain gravitational-wave emission enables the use of instantaneous frequency in characterizing the underlying model physics in terms of characteristic functions for the signal evolution \(\dot{F}\) and the time-domain amplitude \(\cal A\). Resulting model uncertainties can be propagated from the time-domain wave to the frequency-domain signal for comparison with gravitational-wave observations. The differences in model waveform predictions that propagate from the underlying physics can be decoupled from the time and phase constants \(t_{c}\) and \(\phi_{c}\) that characterize a specific observation by finding maximum-likelihood values. Writing waveform uncertainties in the same form as calibration uncertainties -- namely, as frequency-dependent signal amplitude and phase error terms \(\delta A(f)\) and \(\delta\phi(f)\) -- would then characterize model-dependent waveform uncertainty suitable for marginalization in gravitational-wave inference. Especially for observations of early inspiral, energy-balance arguments can aid in the interpretation of coherent waveform deviations for binary signals. We have illustrated how a common astrophysical energy transfer scenarios can be used to interpret recovered values of \(\delta\phi(f)\) and \(\delta A(f)\) compared to a waveform model used for analysis; recovered bounds on these functions limit the size of unmodeled energy transfers in the source system. ## 9 Acknowledgements Many thanks to the IGWN Conda Distribution, numpy, pandas, watpy [27], lalsimulation [34], pycbc [55], and the work of those that support shared computational infrastructure for the LIGO-Virgo-Kagra collaborations and the gravitational-wave community. I thank David Radice, Wynn Ho, Sanjay Reddy, Josh Smith, Nils Andersson and the Caltech LIGO group for discussion, and Aaron Zimmerman for detailed feedback and corrections. Read was supported during the development of these ideas by funding from NSF-1307545 and NSF-1806962, the LIGO Lab, the Carnegie Observatories, and the Nicholas and Lee Begovich Center for Gravitational-Wave Astronomy. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
2305.05816
Best-Effort Adaptation
We study a problem of best-effort adaptation motivated by several applications and considerations, which consists of determining an accurate predictor for a target domain, for which a moderate amount of labeled samples are available, while leveraging information from another domain for which substantially more labeled samples are at one's disposal. We present a new and general discrepancy-based theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights. We show how these bounds can guide the design of learning algorithms that we discuss in detail. We further show that our learning guarantees and algorithms provide improved solutions for standard domain adaptation problems, for which few labeled data or none are available from the target domain. We finally report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms, as well as comparisons with several baselines. We also discuss how our analysis can benefit the design of principled solutions for fine-tuning.
Pranjal Awasthi, Corinna Cortes, Mehryar Mohri
2023-05-10T00:09:07Z
http://arxiv.org/abs/2305.05816v1
# Best-Effort Adaptation ###### Abstract We study a problem of _best-effort adaptation_ motivated by several applications and considerations, which consists of determining an accurate predictor for a target domain, for which a moderate amount of labeled samples are available, while leveraging information from another domain for which substantially more labeled samples are at one's disposal. We present a new and general discrepancy-based theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights. We show how these bounds can guide the design of learning algorithms that we discuss in detail. We further show that our learning guarantees and algorithms provide improved solutions for standard domain adaptation problems, for which few labeled data or none are available from the target domain. We finally report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms, as well as comparisons with several baselines. We also discuss how our analysis can benefit the design of principled solutions for _fine-tuning_. Domain adaptation, Distribution shift, ML fairness. ## 1 Introduction Consider the following adaptation problem that frequently arises in applications. Suppose we have access to a fair amount of labeled data from a target domain \(\mathcal{P}\) and to a significantly larger amount of labeled data from a different domain \(\mathcal{Q}\). How can we best exploit both collections of labeled data to come up with as accurate a predictor as possible for the target domain \(\mathcal{P}\)? We will refer to this problem as the _best-effort adaptation problem_ since we seek the best method to leverage the additional labeled data from \(\mathcal{Q}\) to come up with a best predictor for \(\mathcal{P}\). One would imagine that the data from \(\mathcal{Q}\) should be helpful in improving upon the performance obtained by training only on the \(\mathcal{P}\) data, if \(\mathcal{Q}\) is not too different from \(\mathcal{P}\). The question is how to measure this difference and account for it in the learning algorithm. This best-effort problem differs from standard domain adaptation problems where typically very few or no labeled data from the target is at one's disposal. Best-effort adaptation can also be motivated by fairness considerations, such as racial disparities in automated speech recognition (Koenecke et al., 2020). A significant gap has been reported for the accuracy of speech recognition systems when tested on speakers of vernacular English versus non-vernacular English speakers. In practice, there is a substantially larger amount of labeled data available for the non-vernacular domain since it represents a larger population of English speakers. As a result, it might not be possible, with the training data in hand, to achieve an accuracy for vernacular speech similar to the one achieved for non-vernacular speech. Such a recognition system might
2306.03937
Guiding The Last Layer in Federated Learning with Pre-Trained Models
Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data. Recent works have begun to consider the effects of using pre-trained models as an initialization point for existing FL algorithms; however, these approaches ignore the vast body of efficient transfer learning literature from the centralized learning setting. Here we revisit the problem of FL from a pre-trained model considered in prior work and expand it to a set of computer vision transfer learning problems. We first observe that simply fitting a linear classification head can be efficient and effective in many cases. We then show that in the FL setting, fitting a classifier using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals, while obtaining strong performance. Finally, we demonstrate that using a two-phase approach of obtaining the classifier and then fine-tuning the model can yield rapid convergence and improved generalization in the federated setting. We demonstrate the potential our method has to reduce communication and compute costs while achieving better model performance.
Gwen Legate, Nicolas Bernier, Lucas Caccia, Edouard Oyallon, Eugene Belilovsky
2023-06-06T18:02:02Z
http://arxiv.org/abs/2306.03937v2
# Guiding The Last Layer in Federated Learning with Pre-Trained Models ###### Abstract Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data. Recent works have begun to consider the effects of using pre-trained models as an initialization point for existing FL algorithms; however, these approaches ignore the vast body of efficient transfer learning literature from the centralized learning setting. Here we revisit the problem of FL from a pre-trained model considered in prior work and expand it to a set of computer vision transfer learning problems. We first observe that simply fitting a linear classification head can be efficient and effective in many cases. We then show that in the FL setting, fitting a classifier using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals, while obtaining strong performance. Finally, we demonstrate that using a two-phase approach of obtaining the classifier and then fine-tuning the model can yield rapid convergence and improved generalization in the federated setting. We demonstrate the potential our method has to reduce communication and compute costs while achieving better model performance. Code for our experiments is available.1 Footnote 1: [https://github.com/GwenLegate/GuidingLastLayerFLPretrain](https://github.com/GwenLegate/GuidingLastLayerFLPretrain) ## 1 Introduction In recent years an increased focus on data privacy has attracted significant research interest in Federated Learning (FL), an approach in which a common global model is trained by aggregating model updates computed across a set of decentralized edge devices whose data is kept private. We desire to train a federated global model capable of equivalent performance to that of one trained on the same set of data in the centralized setting; however, FedAvg (McMahan et al., 2017) the most commonly used FL baseline, has been shown to suffer from performance degradation when the distribution of data between clients is not i.i.d. (Li et al., 2020; Acar et al., 2021; Karimireddy et al., 2020). This condition leads to a global model that performs worse than its centrally trained counterpart. Transfer learning from pre-trained models trained on sufficiently abundant and diverse data is well known to produce state-of-the-art results in tasks related to vision (He et al., 2019; Girshick et al., 2014), Natural Language Processing (NLP) (Radford et al., 2019), and other domains. Indeed, pre-training combined with fine tuning to specialize the model for a specific downstream task often leads to better generalization and faster model convergence in the centralized setting (Weiss et al., 2016; Patel et al., 2015). FL literature on the other hand, has been largely focused on models trained from scratch (McMahan et al., 2017; Karimireddy et al., 2020; Li et al., 2020) and the impact of heterogeneity on algorithmic convergence. Recently, several studies have been conducted on the effect of pre-training on the performance of standard FL algorithms, see e.g., Chen et al. (2023); Nguyen et al. (2023). Here, it was found that besides improving performance, pre-training can help to close the accuracy gap between a model trained in the federated setting and its centrally trained counterpart, particularly in the case of non-i.i.d. client data. 
Prior work on transfer learning in the federated setting has focused on treating the pre-trained model as a stable initialization for classical FL algorithms that adapt all the parameters of the model. Approaches from the transfer learning literature demonstrate that it is often more efficient to adapt only parts of the model such as just the last layers (Kornblith et al., 2019), affine parameters (Lian et al., 2022; Yazdampanah et al., 2022), or adapters Houlsby et al. (2019). These approaches frequently yield combinations of better performance, computation time and more easily avoided over-fitting. The selection is often a function of architecture, task similarity, and dataset size (Evci et al., 2022; Shysheya et al., 2023; Yazdanpanah et al., 2022). In many supervised learning tasks studied in the literature such as transfer learning from ImageNet, the representations are powerful and training only the linear classifier is sufficient for strong performance (Kornblith et al., 2019). Notably, a key driver of algorithmic development in FL is the reduction of communication cost, having largely motivated the multi-iteration design of the classical FedAVG algorithm (see e.g., McMahan et al. (2017)). Although not studied in the prior works, updating only the linear classifier can be highly efficient in the federated setting when starting from a pre-trained model. It can allow for both high performance, limited communication cost (since only the linear layer needs to be transmitted), and potentially rapid convergence due to the stability of training only the final layer. Training the linear classifier in a federated setting can lead to classical FL problems such as client drift if not treated appropriately. An example of this is illustrated in Nguyen et al. (2023, Appendix C). Our work highlights that the Nearest Class Mean (NCM), a simple classical alternative to the classification layer, allows us to obtain a powerful classifier. We highlight that NCM can be computed exactly and efficiently in the federated setting without violating privacy constraints. In what follows we will demonstrate that in many cases of interest, using NCM to tune the classification head can even outperform approaches considered in prior work with communication and computation costs an order of magnitude less. It is well known from the transfer learning literature that in some cases fine-tuning the entire model is necessary for best performance (Evci et al., 2022). We thus propose a two-stage approach based on first deriving a powerful classification head (HeadTuning stage) and subsequently performing a full fine-tuning of the model (Fine-Tune stage). Such two-stage approaches have been applied in practice in the literature and have recently been studied theoretically in the transfer learning literature (Kumar et al., 2022; Ren et al., 2023). They have been shown to give both improved performance in in-distribution and out-of-distribution settings (Kumar et al., 2022). In this work we highlight that the two-stage procedure can naturally lead to many advantages in FL setting: **(a)** the fine-tuning stage is more stable under averaging of heterogenous models, and **(b)** convergence of the fine-tuning stage is rapid (minimizing compute and communication cost). We will demonstrate that the HeadTuning stage can be very efficiently performed by using the nearest class means and using them to initialize a linear classifier (FedNCM-FT). 
Our contributions in this work are: * We provide empirical evidence that, for numerous downstream datasets, training only the classifier head proves to be an effective approach in FL settings. * We present FedNCM, a straightforward FL method that significantly reduces communication costs when used as a standalone technique or as an initialization step for headtuning which leads to improved accuracy. * We demonstrate that employing a two-stage process consisting of headtuning (e.g., via FedNCM) followed by fine-tuning results in faster convergence and higher accuracy without violating FL constraints. We further illustrate that it can address many key desiderata of FL: high accuracy, low communication, low computation, and robustness to high heterogeneity while being easier to tune in terms of hyperparameter selection. Related work Federated LearningThe most well known approach in FL is the FedAvg algorithm proposed by McMahan et al. (2017). In the random initialization setting, convergence of FedAvg and related algorithms has been widely studied for both i.i.d. (Stich, 2018; Wang and Joshi, 2018) and non i.i.d. settings (Karimireddy et al., 2020; Li et al., 2020; Fallah et al., 2020; Yu et al., 2019). A commonly cited problem in the literature is the challenge of heterogeneous or non-i.i.d. data and a variety of algorithms have been developed to tackle this (Li et al., 2020; Hsu et al., 2019; Legate et al., 2023; Karimireddy et al., 2020). Transfer LearningTransfer learning is widely used in many domains where data is scarce (Girshick et al., 2014; Alyafeai et al., 2020; Zhuang et al., 2020; Yazdanpanah et al., 2022). A number of approaches for transfer learning have been proposed including the most commonly used full model fine-tuning and last layer tuning Kornblith et al. (2019) and some more efficient methods such as selecting features Evci et al. (2022), adding affine parameters Lian et al. (2022); Yazdanpanah et al. (2022), and adapters for transformers Houlsby et al. (2019). Transfer learning and the effects of pre-training in FL have so far only been explored in limited capacity. In their recent publication, Nguyen et al. (2023) show that initializing a model with pre-trained weights consistently improves training accuracy and reduces the performance gap between homogeneous and heterogeneous client data distributions. Additionally, in the case where pre-trained data is not readily available, producing synthetic data and training the global model centrally on this has been shown to be beneficial to FL model performance (Chen et al., 2023). Nearest Class Means ClassifierThe use of the NCM algorithm in artificial intelligence has a long history. Each class is represented as a point in feature space defined by the mean feature vector of its training samples. New samples are classified by computing the distances between them and the class means and selecting the class whose mean is the nearest. In 1990, Ratcliff proposed to use NCM to mitigate catastrophic forgetting in continual learning and since then the use of NCM has been widely adopted and extended by continual learning researchers. This is due to its simplicity and minimal compute requirements to obtain a final classifier when a strong representation has already been learnt. Some of these methods include Rebuffi et al. (2017); Li and Hoiem (2017); Davari et al. (2022) who maintain a memory of exemplars used to compute an NCM classifier. 
Related to our work, recent literature in continual learning that have considered pre-trained models were shown to ignore a simple NCM baseline (Janson et al., 2022) which can outperform many of the more complicated methods proposed. In our work this NCM baseline, denoted as FedNCM for the federated setting, demonstrates similar strong performance for FL while serving as a very practical first stage of training in our proposed two-step process. ## 3 Methods ### Background and Notation In FL, distributed optimization occurs over \(K\) clients with each client \(k\in\{1,...,K\}\) having data \(\mathbf{X}_{k},\mathbf{Y}_{k}\) that contains \(n_{k}\) samples drawn from distribution \(D_{k}\). We define the total number of samples across all clients as \(n=\sum_{k=1}^{K}n_{k}\). The data \(\mathbf{X}_{k}\) at each node may be drawn from different distributions and/or may be unbalanced with some clients possessing more training samples than others. The typical objective function for federated optimization is given in Eq. 1 (Konecny et al., 2016) and aims to find the minimizer of the loss over the sum of the client data: \[\mathbf{w}^{*},\mathbf{v}^{*}\in\operatorname*{arg\,min}_{\mathbf{w},\mathbf{ v}}\sum_{k=1}^{K}\frac{n_{k}}{n}\mathcal{L}(g(f(\mathbf{w},\mathbf{X}_{k}), \mathbf{v}))\,. \tag{1}\] Here we have split the model prediction into \(f\), a base parameterized by \(\mathbf{w}\) that produces representations, and \(g\), a task head parameterized by \(\mathbf{v}\). In this work we will focus on the case where the task head is a linear model, and the loss function, \(\mathcal{L}\) represents a standard classification or regression loss. The \(\mathbf{w}\) are derived from a pre-trained model and they can be optimized or held fixed. One approach to obtain the task head while using a fixed \(\mathbf{w}\) is to optimize only \(\mathbf{v}\) in a federated manner over all the data. In the case that \(g\) is given as a linear model and we absorb the softmax into \(\mathcal{L}\) this is known as Linear Probing (LP) in the literature (Nguyen et al., 2023; Ren et al., 2023a). ``` 0:\((\mathbf{X}_{1},\mathbf{Y}_{1}),(\mathbf{X}_{2},\mathbf{Y}_{2}),\ldots,(\mathbf{X}_{K},\mathbf{Y}_{K})\) - Local datasets, \(w_{pt}\) - pre-trained model Server Executes: 1:for each client \(k\in K\) in parallel do 2:\([m_{c}^{k}]_{c\in C}\leftarrow\) LocalClientStats\((X_{k},Y_{k},\mathbf{w}_{pt})\)\(\triangleright\) Send to all clients, receive weighted class means 3:endfor 4:for each class \(c\in C\)do 5:\(\mathbf{l}_{c}\leftarrow\frac{1}{D_{c}}\sum_{k=1}^{K}m_{c}^{k}\)\(\triangleright\)\(\mathbf{l}_{c}\) can be used in NCM classifier 6:endfor Client Side: 7:functionLocalClientStats\((\mathbf{X},\mathbf{Y},\mathbf{w})\) 8:for each class \(c\in N\)do 9: Let \(\mathbf{X}_{c}=\{x_{i}\in X,y_{i}=c\}\) 10:\(m_{c}\leftarrow\sum_{x\in X_{c}}f_{w}(x)\) 11:endfor 12:return\([m_{c}]_{c\in C}\) 13:endfunction ``` **Algorithm 1** FedNCM. \(K\) is the total number of clients, \(C\) is the number of classes in the training dataset, \(D_{c}\) is the total number of samples of class \(c\) ### FedNCM Algorithm An alternative approach to derive an efficient \(g\) is through the use of NCM. We note that NCM can be derived exactly in a federated setting (which we denote as FedNCM). Outlined in Algo. 1., FedNCM allows an efficient classifier approximation for pre-trained models that addresses many of the critical concerns in the FL setting including privacy, communication, and computation time. 
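A minimal, framework-agnostic sketch of the computation in Algorithm 1 follows. Each client is assumed to have already pushed its local data through the frozen pre-trained base to obtain `features`; clients are looped over serially purely for illustration, and returning per-class counts alongside the sums is one simple way for the server to form the totals \(D_{c}\). This is a sketch of the idea, not the FLSim implementation.

```python
import numpy as np

def local_client_stats(features, labels, num_classes):
    """Client side (Algorithm 1, LocalClientStats): per-class feature sums and counts."""
    dim = features.shape[1]
    sums, counts = np.zeros((num_classes, dim)), np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        sums[c] = features[mask].sum(axis=0)
        counts[c] = mask.sum()
    return sums, counts

def fed_ncm(client_stats, num_classes):
    """Server side: aggregate weighted class sums into global class means l_c."""
    dim = client_stats[0][0].shape[1]
    total_sums, total_counts = np.zeros((num_classes, dim)), np.zeros(num_classes)
    for sums, counts in client_stats:
        total_sums += sums
        total_counts += counts
    return total_sums / np.maximum(total_counts, 1)[:, None]   # class centroids l_c

def ncm_predict(features, centroids):
    """Classify by nearest class mean (Euclidean distance to each centroid)."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def init_linear_head(centroids):
    """Use normalized centroids as the weight matrix of a linear head (bias = 0),
    which can then be fine-tuned federatedly (the FedNCM+FT setting)."""
    V = centroids / np.maximum(np.linalg.norm(centroids, axis=1, keepdims=True), 1e-12)
    return V, np.zeros(centroids.shape[0])
```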
Specifically, the server only communicates the pre-trained weights once to each of the clients and clients only communicate once with the server to send back their weighted class means. The server can then use each client's class means to compute exactly the NCM which can be used to perform classification directly using the class centroids or to initialize a linear task head for further fine-tuning. To use NCM as an initialization, consider the cross-entropy loss and \(f\circ g(\mathbf{x})=\mathbf{V}f(\mathbf{x};\mathbf{w})+\mathbf{b}\). We can set the matrix \(V\) corresponding to the class \(c\) logit with the normalized class centroid \(\mathbf{l}_{c}/\|\mathbf{l}_{c}\|\) and the bias term to 0. This allows us to initialize the task head with FedNCM and obtain further improvement through fine-tuning \(f\). ### HeadTune + FineTune FL algorithms are often unstable due to the mismatch in client objectives which can lead to large changes during local training and cause significant deviations amongst the different client models. In the setting where a pre-trained model allows us a powerful initial representation we argue that a way to improve the stability and converge quickly is by considering a two-stage procedure. In the first phase (HeadTune) we perform HeadTuning where the parameters of \(g\) are updated e.g. by learning a linear model in federated fashion or by using FedNCM. We observe that FedNCM HeadTuning requires only a single forward pass through the data; one model communication to each client and one communication of centroids back. This is a negligible cost in compute and communication with respect to any typical fine-tuning phase. In the second phase (FineTune), \(f\) and the parameters \(\mathbf{w}\) are fine tuned in a federated setting according to the FL objective function specified in Eq. 1. Taking the negligible cost of communication and compute provided by FedNCM into account, our two-phase approach can have a substantial advantage in convergence when compared to simply a fine-tuning phase (Nguyen et al., 2023; Chen et al., 2023). We also note that our two-phase strategy is compatible with any federated optimization algorithm in the literature. We now give an intuitive interpretation of the advantages provided by our method using the framework of Ren et al. (2023). Assume that the \(k\)-th worker is initialized via \(\mathbf{w}_{0}\), and trained locally with SGD for several steps until it reaches the parameter \(\mathbf{w}_{k}\). Writing \(\mathbf{w}^{*}\) the optimal parameter, via triangular inequality, we obtain the following inequality: \[\mathbb{E}_{\mathbf{X}_{k}}[\|f(\mathbf{w}_{k};\mathbf{X}_{k})-f(\mathbf{w}^{*}; \mathbf{X}_{k})\|]\leq\mathbb{E}_{\mathbf{X}_{k}}[\|f(\mathbf{w}_{0};\mathbf{X }_{k})-f(\mathbf{w}^{*};\mathbf{X}_{k})\|+\|f(\mathbf{w}_{k};\mathbf{X}_{k})-f( \mathbf{w}_{0};\mathbf{X}_{k})\|]\,. \tag{2}\] In the NTK regime, for sufficiently small step size, Ren et al. 
(2023) showed that the second term depends on the approximation quality of the head \(g_{0}\) at initialization, which is bounded (where \(\sigma\) is the sigmoid activation and \(\{\mathbf{e}_{i}\}_{i}\) the canonical basis) for some \(c>0\), by: \[\mathbb{E}_{\mathbf{X}_{k}}\|f(\mathbf{w}_{k};\mathbf{X}_{k})-f(\mathbf{w}_{0 };\mathbf{X}_{k})\|\leq c\cdot\mathbb{E}_{(\mathbf{X}_{k},\mathbf{Y}_{k})}\| \mathbf{e}_{\mathbf{Y}_{k}}-g_{\mathbf{V}}(f(\mathbf{w}_{0};\mathbf{X}_{k})) \|\,.\] This suggests in particular that a good choice of linear head \(\mathbf{V}\) will lead to a smaller right hand side term in Eq. 2, and thus reduce the distance to the optimum. Consequently, FedNCM or LP derived \(\mathbf{V}\) (compared to a random \(\mathbf{V}\)) may be expected to lead to a more rapid convergence. Thanks to the initial consensus on the classifier, we may also expect less client drift to occur, at least in the first round of training, when \(\mathbf{V}\) it intialized by HeadTuning, compared to a random initialization. ## 4 Experiments In this section we will experimentally demonstrate the advantages of our proposed FedNCM and FedNCM+FT. Additionally, we show that simple LP tuning can at times be more stable and communication efficient than undertaking the full fine tuning considered almost exclusively in prior work on FL with pre-trained models. Our experiments focus on image classification tasks and we consider a setting similar to Nguyen et al. (2023) using the CIFAR 10 dataset (Krizhevsky, 2009). We also expand our setting to include four additional standard computer vision datasets shown in Tab. 1. Following the method of Hsu et al. (2019) data is distributed between clients using a dirichlet distribution parameterized by \(\alpha=0.1\) for our primary experiments. We also set the number of clients to 100, train for 1 local epoch per round, and set client participation to 30\(\%\) for CIFAR (as in Nguyen et al. (2023)). For all other datasets we use full client participation for simplicity. Additional variations of simulation settings above are provided in Sec. 4.2 and in the Appendix. Like Nguyen et al. (2023), we use SqueezeNet (Iandola et al., 2016), and we additionally consider a ResNet18 (He et al., 2016) as the base model for fine-tuning, the results of which are presented in Appendix B. For all datasets when performing fine-tuning and evaluation we resize images to 224x224, the training input size of ImageNet. We run all experiments for three seeds using the FLSim library described in Nguyen et al. (2023). Baseline methodsWe compare our methods to the following approaches as per Nguyen et al. (2023): (a) _Random_: the model is initialized at random with no use of This setting corresponds to the standard FL paradigm of McMahan et al. (2017). (b) _LP_: Given a pre-trained model, we freeze the base and train only the linear head using standard FL optimizer for training. (c) _FT_: A pre-trained model is used to initialize the global model weights and then a standard FL optimization algorithm is applied. (d) _LP and FT Oracles_: These are equivalent baselines trained in the centralized setting that provide an upper bound to the expected performance. All of the above baseline methods as well as our FedNCM and FedNCM+FT can be combined with any core FL optimization algorithm such as FedAVG and FedAdam (Reddi et al., 2020). In our primary experiments, we focus on the high-performing FedAVG and FedAdam which have been shown to do well in these settings in prior art (Nguyen et al., 2023). 
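For reference, the label-skewed client splits described above (following Hsu et al., 2019) can be generated with a short routine like the one below; the exact sampling details inside FLSim may differ, so this is only an illustrative sketch.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split example indices across clients with Dirichlet(alpha) label skew.

    For each class, that class's examples are divided among clients according to
    a draw p ~ Dir(alpha * 1_K): small alpha gives highly heterogeneous clients,
    large alpha approaches an i.i.d. split.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        p = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(p)[:-1] * len(idx)).astype(int)   # split points into the shuffled list
        for k, part in enumerate(np.split(idx, splits)):
            client_indices[k].extend(part.tolist())
    return client_indices

# Example: 100 clients with alpha = 0.1, as in the main experiments.
# parts = dirichlet_partition(train_labels, num_clients=100, alpha=0.1)
```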
HyperparametersWe follow the approach of Nguyen et al. (2023), Reddi et al. (2020) to select the learning rate for each method on the various datasets. For CIFAR-10 and SqueezeNet experiments we take the hyperparameters already derived in Nguyen et al. (2023). Additional details of selected hyperparameters for all experiments are provided in Appendix C. \begin{table} \begin{tabular}{c c c} \hline \hline **Dataset** & **Num. Classes** & **Num. Images** \\ \hline CIFAR-10 & 10 & 50000 \\ Flowers102 & 102 & 1020 \\ Stanford Cars & 196 & 8144 \\ CUB & 200 & 5994 \\ EuroSAT-Sub & 10 & 5000 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of datasets used in our experiments. Communication and Computation BudgetWe evaluate the communication and computation costs of each proposed method. Costs are considered both in total and given a fixed budget for either communication or computation. For the communication costs, we assume that each model parameter that needs to be transmitted is transmitted via a 32-bit floating point number. This assumption allows us to compute the total expected communication between clients and server. It is important to emphasize that linear probing only requires that we send client updates for the classifier rather than the entire model as is the case in the other settings. Consequently, LP has much lower communication costs when compared to FT for any given number of rounds. Our proposed FedNCM is a one-round algorithm and therefore has even lower communication costs than any other algorithm considered. For computation time we consider the total FLOPs executed on the clients. We assume for simplicity that the backward pass of a model is \(2\times\) the forward pass. For example, in the case of LP (with data augmentation) each federated round leads to one forward communication on the base model, \(f\), and one forward and one backward (equivalent to two forward passes) on the head, \(g\). Similarly, for FedNCM the communication cost consists only one forward pass through the data. ### Efficiency of Pure HeadTuning for FL As discussed in Sec. 1 tuning the classifier head is at times at least as effective as updating the entire model in the context of transfer learning Evci et al. (2022). In prior work, this situation was briefly considered as a limited case in Nguyen et al. (2023, Appendix C.2) for CIFAR-10 and suggested that tuning just the linear head (LP) might be a weak approach in the heterogeneous setting. We first revisit this claim and expand the scope of these experiments to highlight where LP can be beneficial in terms of performance, communication costs, and compute time. Subsequently, we show another approach for approximating a good classifier, FedNCM which can be competitive with orders of magnitude less computation and communication cost. We will demonstrate how to get the best of both HeadTuning and fine-tuning in the FL setting. In Nguyen et al. (2023) the CIFAR-10 fine-tuning is done by feeding the \(32\times 32\) input image directly into a pre-trained ImageNet model. Since the architectures are adapted to the \(224\times 224\) size and trained at this scale originally, such an approach can lead to a very large distribution shift and may be sub-optimal for transfer learning. Thus we additionally compare to CIFAR-10 using the traditional approach of resizing the image to the source data (Kornblith et al., 2019, Evci et al., 2022). Tab. 
2 shows accuracy, compute, and communication cost results for pure HeadTuning Methods (FedNCM and LP) as well as full tuning approaches including our FedNCM+FT. We note that in Tab. 2, CIFAR-10-\(32\times 32\) refers to results published in Nguyen et al. (2023). We first point out the difference image input size has on the results. Overall accuracy is much higher (highest is 86% vs 63%) and the gap between FT and LP is substantially smaller when using the model's native \begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset** & **Method** & **Accuracy** & **Total Compute** & **Total Comm.** \\ \hline \multirow{4}{*}{CIFAR-10} & Random & \(67.8\pm 0.6\) & \(4.5\times 10^{8}\) F & 1.7Tb \\ & FT Pretrain & \(85.4\pm 0.4\) & \(2.5\times 10^{7}\) F & 10.7GB \\ & FoNCM+FT Pretrain & \(\mathbf{87.2\pm 0.2}\) & \(2.5\times 10^{7}\) F & 10.7GB \\ & LP Pretrain & \(82.5\pm 0.2\) & \(7.5\times 10^{7}\) F & 149.5 GB \\ & FeNCM & \(64.8\pm 0.1\) & \(\mathbf{1\times F}\) & **319 Mb** \\ \hline \multirow{4}{*}{CIFAR-10 \(\times 32\)} & RandomNevipen et al. (2023) & \(34.2\) & \(4.5\times 10^{8}\) F & 1.7TB \\ & FT Pretrain Nguyen et al. (2023) & \(\mathbf{59.1}\) & \(7.5\times 10^{7}\) F & 149.3 GB \\ & LP Pretrain Nguyen et al. (2023) & \(44.7\) & \(2.5\times 10^{7}\) F & 10.7GB \\ & FeNCM & \(44.9\) & \(\mathbf{1\times F}\) & **319 Mb** \\ \hline \multirow{4}{*}{Flowers-102} & Random & \(33.2\pm 0.7\) & \(3.7\times 10^{7}\) F & 1.7TB \\ & FT Pretrain & \(64.5\pm 1.0\) & \(3.15\times 10^{6}\) F & 149.3 GB \\ & FoNCM+FT Pretrain & \(\mathbf{74.9\pm 0.2}\) & \(3.15\times 10^{6}\) F & 149.3 GB \\ & - & LP Pretrain & \(74.1\pm 1.2\) & \(1.05\times 10^{6}\) F & 10.7GB \\ & - & FFNNCM & \(71.8\pm 0.03\) & **1579 Mb** \\ \hline \multirow{4}{*}{CUB} & Random & \(15.0\pm 0.7\) & \(2.2\times 10^{8}\) F & 1.7TB \\ & FT Pretrain & \(52.0\pm 0.9\) & \(1.9\times 10^{7}\) F & 149.3 GB \\ & FoNCM+FT Pretrain & \(\mathbf{55.0\pm 0.3}\) & \(1.9\times 10^{7}\) F & 149.3 GB \\ & - & LP Pretrain & \(50.0\pm 0.3\) & \(6.3\times 10^{6}\) F & 10.7GB \\ & FoNCM & \(37.9\pm 0.2\) & \(\mathbf{1\times F}\) & **319 MB** \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy, total computation and total communication costs of pure HeadTuning methods (below dashed lines) and their counterparts. We observe pure headtuning approaches, FedNCM and LP can be a powerful approach especially under compute and communication constraints. F is one forward pass of a single sample. input size, it shows an absolute improvement of only \(4.6\%\) vs \(18.4\%\). For both sizes of CIFAR-10 and on CUB, FedNCM can substantially exceed random performance while maintaining a highly competitive compute and communication budget. Experiments on the Flowers102 dataset show that FedNCM can already far exceed the difficult-to-train FT setting and furthermore, LP alone exceeds both FedNCM and FT. Our two-phase method of FedNCM+FT outperforms all other methods in terms of accuracy. In what follows we will show how FedNCM+FT also allows high efficiency given a specific, potentially limited compute and computational budget. When considering the results, we note that CIFAR-10 contains the same object categories as the original ImageNet dataset but the Flowers102 and CUB datasets, represent more realistic transfer learning tasks and under these conditions we observe the true effectiveness of HeadTuning methods such as FedNCM and LP. 
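The communication accounting behind these comparisons is straightforward arithmetic under the stated 32-bit assumption. The sketch below uses hypothetical parameter counts, client numbers, and round counts that would need to be replaced by the actual SqueezeNet or ResNet18 sizes and training schedules.

```python
BYTES_PER_PARAM = 4  # 32-bit floats, as assumed in the text

def comm_cost_bytes(params_sent_per_round, clients_per_round, num_rounds):
    """Total bytes exchanged: each participating client both receives and sends
    the trained portion of the model every round."""
    return 2 * params_sent_per_round * BYTES_PER_PARAM * clients_per_round * num_rounds

# Hypothetical sizes; substitute the real base/head parameter counts and schedules.
base_params, head_params, n_clients = 1_200_000, 10_000, 100

ft_cost = comm_cost_bytes(base_params + head_params, clients_per_round=30, num_rounds=500)
lp_cost = comm_cost_bytes(head_params, clients_per_round=30, num_rounds=500)
# FedNCM is one-shot: the pre-trained base goes down once, centroids (roughly head-sized) come back up.
fedncm_cost = (base_params + head_params) * BYTES_PER_PARAM * n_clients

print(f"FT: {ft_cost/1e9:.2f} GB   LP: {lp_cost/1e9:.3f} GB   FedNCM: {fedncm_cost/1e9:.3f} GB")
```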
Figure 1: A comparison of the accuracy between models initialized with pre-trained weights and trained on different downstream tasks. We show as well the communication and compute costs, where HeadTuning methods and FedNCM+FT shine. ### FedNCM then FineTune We now study in more detail the two-phase approach described in Sec. 3.3. Fig. 1 shows the comparison of our baselines and FedNCM+FT with FedAVG. We show both accuracies versus rounds as well as accuracy given a communication and computation cost budget. Firstly, we observe that when going beyond CIFAR-10, LP can converge rather quickly and sometimes to the same accuracy as FT, which supports its consideration in federated learning scenarios and shows the importance of HeadTuning. Secondly, we can see the clear advantage of FedNCM+FT, due to the phase one FedNCM initialization, it is able to achieve a strong starting accuracy, in phase two, it converges with a better accuracy than FT with the same computation budget. The rapid convergence allows FedNCM+FT to be highly efficient under most communication budgets compared to other methods as shown by the second column of Tab. 2. Indeed for all the datasets FedNCM+FT is always optimal early on. For three of the datasets (Flowers, CUB, Cars) it exceeds LP over any communication budget. For CIFAR-10 and Eurosat LP can overtake it after the early stage, however, FedNCM+FT remains competitive and ultimately reaches higher performance. Similar trends are observed for computation time. On EuroSAT data we see an initial decrease from the FedNCM but than quickly recovers and obtains the highest performance. We note overall as compared to FT the performance improvement can be drastic when considering the trade-off of accuracy as a function of communication and compute available. We also remark that the variance of LP and FedNCM+FT is lower across runs than the FT and Random counterparts. We note that the Random baseline, typically requires longer training than others to reach the convergence criteria, thus for the purpose of our visualization we do not show the fully converged random baseline, which always requires many more communication rounds than the other approaches; however, the full curves are included in Appendix D. We now focus on demonstrating other advantages of FedNCM+FT, in particular robustness to larger number of clients, insensitivity to hyperparameters, and compatibility with multiple FL algorithms and architectures. **Choice of FL Algorithm** So far we have focused on FedAvg, since our method is compatible with any FL optimizer we further analyze FedNCM+FT for the case of FedAdam which obtained some of the higher performances in Nguyen et al. (2023). We observe that improved FL optimizers can complement FedNCM+FT which can systematically exceed FT. Even with this improved method, FedNCM which does not require an optimizer, continues to exceed the performance of FT on Flowers102. This result suggests the consideration of the FL optimization algorithm (Nguyen et al., 2023) is not always the most critical aspect for optimal performance. **Hyperparameter Tuning** FL algorithms are known to be challenging for hyperparameter selection (Reddi et al., 2020) and this can affect their practical application. We first note that FedNCM does not have any hyperparameters which already provides a large advantage. In Fig. 4, we observe the final performance for a grid search over a range of server and client learning rates for FedAdam using both FT and FedNCM+FT. 
We observe that FedNCM+FT not only has higher performance but it is also more stable over the entire hyperparameter grid on Flowers dataset, and outperforms for all settings on CIFAR-10. **Heterogeneity**Nguyen et al. (2023) points out that starting from a pre-trained model can reduce the effect of system heterogeneity. This is evaluated by comparing a specific Dirichlet distribution (\(\alpha=0.1\)) used to partition data into a non-i.i.d. partitioning. Although the effect of heterogeneity is reduced we observe that in highly heterogeneous settings we still see substantial degradation in FT as shown in Fig. 2. Here we consider for CIFAR-10 the nearly i.i.d. \(\alpha=100\), \(\alpha=0.1\) as considered in Nguyen et al. (2023), and a very heterogeneous \(\alpha=0.01\). Firstly, we observe that FedNCM+FT can provide benefits in the i.i.d. setting. As heterogeneity degrades the naive FT setting sees a large \begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & Algorithm & FedNCM & FedNCM + FT & FT+Pretrain \\ \hline \multirow{2}{*}{CIFAR-10} & FedAvg & \(64.8\pm 0.1\) & \(87.2\pm 0.2\) & \(85.4\pm 0.4\) \\ & FedDAM & \(64.8\pm 0.1\) & \(\mathbf{89.4\pm 1.1}\) & \(88.2\pm 0.2\) \\ Flowers102 & FedAvg & \(71.8\pm 0.03\) & \(\mathbf{74.9\pm 0.2}\) & \(64.5\pm 1.0\) \\ & FedDAM & \(71.8\pm 0.03\) & \(\mathbf{76.7\pm 0.2}\) & \(66.6\pm 1.0\) \\ \hline \hline \end{tabular} \end{table} Table 3: Model performance with different methods for a variety of FL algorithms, for FedAvg and FedADAM. FedNCM+FT outperforms in all cases. absolute and relative drop in performance. On the other hand, FedNCM+FT as well as LP are able to degrade more gracefully. **Varying the Local Epoch** The number of local epochs can drastically affect FL algorithms, typically a larger amount of local computation between rounds is desired to minimize communication. However; this can often come at a cost of degraded performance. We observe in Fig. 3 as in Nguyen et al. (2023) that FT can be relatively robust in some cases (CIFAR-10) to increasing local epochs. However, we also observe for some datasets that it can degrade, while LP and FedNCM+FT are less likely to degrade. Overall FedNCM+FT continues to outperform for larger local epochs. **Increasing clients** Tab. 5 shows that as we increase the number of clients we observe that the degradation of FedNCM+FT is less severe than both LP and FT, suggesting it is stable under a large number of workers being averaged. As discussed in Sec. 3.3 it is expected in the same round that a representation would shift less from a starting point, and therefore since the starting point is the same for all Figure 4: Hyperparameter grids for FedAdam for CIFAR-10 FT, FedNCMFT (left) and Flowers (right). We observe CIFAR-10 FedNCM-FT tends to do better or equal for all hyperparameters compared to FT. For Flowers it is much easier to tune, achieving strong values over a wide range, a noticeable advantage in FL Figure 5: We increase the number of clients on CIFAR-10. FedNCM+FT degrades more gracefully than FT and LP. Figure 3: We vary the number of local epochs. FedNCM+FT always outperforms FT and nearly always LP in this challenging setting. Figure 2: We vary the heterogeneity (Dirichlet-\(\alpha\)) for CIFAR-10 and Flowers102. Methods with HeadTuning: LP and FedNCM+FT are more robust, with the substantial advantage of FedNCM + FT increasing in challenging higher heterogeneity. clients, we expect the client drift within a round to be less given a fixed update budget. 
## 5 Conclusion and Limitations We have highlighted the importance of the last layers in federated learning from pre-trained models. We then used this observation to derive two highly efficient methods, FedNCM and FedNCM+FT, whose advantages in terms of performance, communication, computation, and robustness to heterogeneity were demonstrated. A limitation of our work is that it focuses on image data and models, as these are the primary data and models studied in prior work, particularly in the context of transfer learning. ## 6 Acknowledgements This research was partially funded by NSERC Discovery Grant RGPIN-2021-04104 and an FRQNT New Scholar grant. We acknowledge resources provided by Compute Canada and Calcul Quebec.
2302.00711
Generating Linear, Semidefinite, and Second-order Cone Optimization Problems for Numerical Experiments
The numerical performance of algorithms can be studied using test sets or procedures that generate such problems. This paper proposes various methods for generating linear, semidefinite, and second-order cone optimization problems. Specifically, we are interested in problem instances requiring a known optimal solution, a known optimal partition, a specific interior solution, or all these together. In the proposed problem generators, different characteristics of optimization problems, including dimension, size, condition number, degeneracy, optimal partition, and sparsity, can be chosen to facilitate comprehensive computational experiments. We also develop procedures to generate instances with a maximally complementary optimal solution with predetermined optimal partition to generate challenging semidefinite and second-order cone optimization problems. Generated instances enable us to evaluate efficient interior-point methods for conic optimization problems.
Mohammadhossein Mohammadisiahroudi, Ramin Fakhimi, Brandon Augustino, Tamás Terlaky
2023-02-01T19:08:15Z
http://arxiv.org/abs/2302.00711v1
Generating Linear, Semidefinite, and Second-order Cone Optimization Problems for Numerical Experiments ###### Abstract The numerical performance of algorithms can be studied using test sets or procedures that generate such problems. This paper proposes various methods for generating linear, semidefinite, and second-order cone optimization problems. Specifically, we are interested in problem instances requiring a known optimal solution, a known optimal partition, a specific interior solution, or all these together. In the proposed problem generators, different characteristics of optimization problems, including dimension, size, condition number, degeneracy, optimal partition, and sparsity, can be chosen to facilitate comprehensive computational experiments. We also develop procedures to generate instances with a maximally complementary optimal solution with predetermined optimal partition to generate challenging semidefinite and second-order cone optimization problems. Generated instances enable us to evaluate efficient interior-point methods for conic optimization problems. Keywords: Problem Generator; Conic Optimization; Linear Optimization; Semidefinite Optimization; Second-order Cone Optimization ## 1 Introduction Optimization is just one of many fields in which the empirical analysis of algorithms is heavily reliant on the quality of the provided test instances. Scholars assess the strengths and weaknesses of algorithms based on these test problems, which must be unbiased, representative, and diverse in their measurable features or characteristics. However, many benchmark test problems do not possess these desired qualities, as they are often based on a limited set of real-world problems or have been reused from earlier studies that by now may be obsolete [2]. An alternative approach is using random test problem generators for experimentation in optimization. While their design must be carefully considered, one advantage of simple random generation approaches is their ability to produce problems that possess predictable characteristics. As a result, scientists have advocated for using highly parameterized generators to produce appropriately controlled data for experimentation [11]. As one of the first attempts in this area, the properties of randomly generated feasible polyhedra were investigated by Todd [24]. Pilcher and Rardin [17] proposed a generator for pure integer optimization problems with a known partial polytope by introducing random cuts. Yet, this methodology is restricted to traveling salesman problems and does not explicitly consider the solution of the relaxation or structural features. Because they lack the ability to vary features of interest, the scope of these generators for experimentation is limited to specific problem domains. At times, it can be challenging to develop instance generators in a way that allows properties of interest to be suitably varied. While specific characteristics, such as the density of a graph, can usually be directly controlled through the generation process, other attributes can be harder to predefine or control explicitly. Many measurable features of the same problem instance can be highly correlated, either due to interacting bounds or simply as a consequence of the random generation process. Instances with less likely feature combinations can be attained through an iterative local search, which successively modifies an instance until it possesses the desired properties. 
While these instance-space search techniques are more computationally intensive than parameterized generators, they provide a reliable method for producing instances with specific target characteristics [2]. The most prevalent search techniques for this application are evolutionary algorithms. Chakraborty and Choudhury [5] and Cotta and Moscato [8] applied this approach to perform statistical average- and worst-case analysis of algorithm performance. More recently, exploration in this direction has focused on improving the spectrum of instance hardness and diversity of measured features [20]. The success of these techniques in combinatorial optimization opens up questions on the use of similar approaches for linear optimization (LO) and mixed-integer optimization, adopting a more comprehensive range of search algorithms for obtaining difficult-to-design instances, and considering how to best construct the search space for efficient performance. To develop instance generation techniques for LO test problems with controllable properties, Bowly et al. [2] presented a comparison of a naive random generator with a highly parameterized generator, showing which feature values can be effectively controlled by each method. They also investigated iterative search approaches to find instances that are difficult to design or rarely produced by the generator. These approaches allow practitioners to explore areas of interest in the space of linear optimization problems (LOPs), where challenging instances have previously been found. This would be impossible using static test sets or naive random generation methods, which provide limited feature control. Further, large-scale linear optimization problems are prevalent in economics, industry, logistics, statistics, quantum physics, and other fields. As is the case with any real-world application, the aim is to obtain high-quality solutions efficiently, a task for which high-performance computing systems and parallel algorithms are required. Thus, the development of new parallel algorithms for generating LOPs and the revision of current algorithms are considered by Sokolinsky and Sokolinskaya [21]. Developing new algorithms for solving large-scale LOPs necessitates testing them on benchmark and random problems. At times, it is sensible to construct linear and integer optimization instance generators specified for special purposes. The NETGEN generator [12] and its successor MNETGEN produce parameterized multicommodity flow, transport, and assignment problems. The parameters used are thus appropriate to the underlying network, not the feasible set. One of the well-known benchmark repositories of LOPs is Netlib-LP [10]. Yet, when debugging LO solvers, generating random LOPs with specific characteristics (such as, e.g., the sparsity, condition number of the coefficient matrix, or a known optimal partition) is often necessary. Charnes et al. [6] suggested one of the first methods for generating random LOPs with known solutions. This method allows one to generate test problems of arbitrary size with a wide range of numerical characteristics. The main idea of the method is as follows; take as a basis a LOP with a known solution, and then randomly modify it so that the solution does not change. The key drawback of this approach is that fixing the optimal solution in advance significantly restricts the random nature of the resulting LOP. 
Arthur and Trendewey [1] described the GENGUB generator, which constructs random LOPs with a known solution and given characteristics, such as the problem size, the density of the coefficient matrix, the number of binding inequalities, or the degeneracy status. A distinctive feature of GENGUB is the ability to introduce generalized upper bound constraints, defined to be a (sub)set of constraints in which each variable appears at most once (i.e., has at most one nonzero coefficient). This method has similar drawbacks to the generator found in [6]: by fixing the optimal solution ex ante, the random nature of the resulting LOP is significantly restricted. Castillo et al. [4] suggest a method for generating random LOPs with a preselected solution type: bounded or unbounded, unique or multiple. Each structure is generated using random vectors with integer components, whose range can be treated as given. Next, an objective function that satisfies the required conditions, i.e., leads to a solution of the desired type, is obtained. This LO problem generator is mainly used for educational purposes rather than for testing new LO algorithms. Sokolinsky and Sokolinskaya [21] proposed the random LOP generator FRaGenLP (Feasible Random Generator of LP), which is implemented as a parallel program for cluster computing systems. Calamai et al. [3] described a new technique for generating convex, strictly concave, and indefinite (bilinear or not) quadratic optimization problems. In the semidefinite optimization literature, scholars have been interested in generating challenging problems. They pursued various directions for characterizing what constitutes hardness in SDO problems, e.g., not having a strictly complementary solution [16], or a solution with a nonzero duality gap [22]. Wei and Wolkowicz [25] proposed a procedure to generate SDO problems without a strictly complementary solution. We build on these ideas to develop highly parameterized generators. ### Contributions This paper reviews and proposes several procedures to generate random LOPs, semidefinite optimization problems (SDOPs), and second-order cone optimization problems (SOCOPs) with a specified optimal solution, a specified interior solution, or both. We also develop SDOP and SOCOP generators with specific maximally complementary solutions to predetermine the optimal partition. Generating SDOPs and SOCOPs with a specific interior solution ensures that strong duality holds for the generated problems and that the set of optimal solutions is bounded. Access to predefined interior solutions will enable researchers to analyze the performance of optimization algorithms, such as feasible Interior Point Methods (IPMs), with respect to various initial interior solutions. Generating problems with known optimal solutions ensures that the generated problem has a bounded optimum and helps to analyze the algorithm with respect to the characteristics of the optimal solution. These procedures will serve to further scholars' ability to examine their algorithms by altering different features of the input data, such as dimension, sparsity, condition number, solution size (which plays an essential role in the performance of Infeasible IPMs), and many others, in addition to predefined properties of the optimal solution. Another possible application of the proposed procedures is the average-case complexity analysis of algorithms. The rest of the paper is organized as follows. 
In Section 2, we give a brief review of LO theory before considering several LOP generators that can generate instances with specific optimal solutions, specific interior solutions, or both. We then develop similar generators for SDO and SOCO in Sections 3 and 4, respectively. A discussion on the implementation of the proposed instance generators is provided in Section 5, and Section 6 concludes the paper. ## 2 Linear Optimization In this section, we provide a gentle review of Linear Optimization theory before presenting three different algorithms for randomly generating Linear Optimization test problems. ### Linear Optimization Problems In LOPs, we seek to minimize the inner product of two \(n\)-dimensional vectors \[c^{\top}x=\sum_{i=1}^{n}c_{i}\cdot x_{i},\] for a constant vector \(c\in\mathbb{R}^{n}\) and variable vector \(x\in\mathbb{R}^{n}\). In this minimization, variable \(x\) must satisfy linear constraints of the form \[Ax=b,\] for a given matrix \(A\in\mathbb{R}^{m\times n}\) and vector \(b\in\mathbb{R}^{m}\). Moreover, we require that \(x\) be elementwise nonnegative, which we denote by \(x\geq 0\). We are therefore interested in randomly generating LOPs of the form \[z_{LO}^{P}=\min_{x}\left\{c^{\top}x:Ax=b,x\geq 0\right\},\] (LOP-P) and refer to (LOP-P) as the _primal problem_. Given the primal problem (LOP-P), we are also interested in a second problem known as the _dual problem_ of (LOP-P), which we write in standard form as follows, \[z_{LO}^{D}=\max_{(y,s)}\left\{b^{\top}y:A^{\top}y+s=c,s\geq 0,y\in\mathbb{R}^{m }\right\},\] (LOP-D) where \(s=c-A^{\top}y\) is the dual slack variable. We say that \(x\) and \((y,s)\) are _feasible solutions_ whenever they satisfy the constraints of the primal and dual problems, respectively. The set of primal-dual feasible solutions is thus defined as \[\mathcal{PD}_{LO}=\left\{(x,y,s)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\times \mathbb{R}^{n}:Ax=b,A^{\top}y+s=c,(x,s)\geq 0\right\}.\] Similarly, the set of all _feasible interior solutions_ is given by \[\mathcal{PD}^{0}_{LO}=\left\{(x,y,s)\in\mathcal{PD}_{LO}:(x,s)>0\right\}.\] A crucial property of linear optimization is _weak duality_; any \((y,s)\) that is feasible for (LOP-D), provides a lower bound \(b^{\top}y\) on the value of \(c^{\top}x\) for any \(x\) feasible for (LOP-P), i.e.: \[b^{\top}y\leq c^{\top}x,\] for any \((x,y,s)\in\mathcal{PD}_{LO}\). Conversely, any \(x\) that is feasible for (LOP-P) provides an upper bound \(c^{\top}x\) on \(b^{\top}y\) for any \(y\) that is feasible for (LOP-D), and we refer to the nonnegative quantity \(c^{\top}x-b^{\top}y=x^{\top}s\) as the _duality gap_. Whenever \((x,y,s)\in\mathcal{PD}\) with \(c^{\top}x=b^{\top}y\), or equivalently \(x^{\top}s=0\), then \(x\) is optimal for (LOP-P) and \((y,s)\) is optimal for (LOP-D). In this case, _strong duality_ holds for LOPs, i.e., if both the primal and dual problems have feasible solutions, then both have optimal solution with equal objective value. Under strong duality, all optimal solutions, if there exist any, belong to the set \(\mathcal{PD}^{*}_{LO}\), defined as \[\mathcal{PD}^{*}_{LO}=\left\{(x,y,s)\in\mathcal{PD}_{LO}:x^{\top}s=0\right\}.\] Let \([n]\) denote the set \(\{1,2,\ldots,n\}\). Following Roos et al. 
[18], LOPs admit an optimal partition \(\mathcal{N}\cup\mathcal{B}=[n]\), and \(\mathcal{B}\cap\mathcal{N}=\emptyset\), where \[\mathcal{B} =\{i:\exists(x^{*},y^{*},s^{*})\in\mathcal{PD}^{*}_{LO}\text{ with }x^{*}_{i}>0\},\] \[\mathcal{N} =\{i:\exists(x^{*},y^{*},s^{*})\in\mathcal{PD}^{*}_{LO}\text{ with }s^{*}_{i}>0\}.\] If \((x^{*},y^{*},s^{*})\in\mathcal{PD}^{*}_{LO}\) with \(x^{*}_{i}>0\) for all \(i\in\mathcal{B}\), and \(s^{*}_{i}>0\) for all \(i\in\mathcal{N}\), then we have \(x^{*}+s^{*}>0\) and the optimal solution pair \((x^{*},y^{*},s^{*})\) is called strictly complementary. In this section, we use \((\mathcal{B},\mathcal{N})\) to denote the optimal partition, and \((B,N)\) the index set partition in the algorithms. After presenting each algorithm, we clarify when the predefined partition \((B,N)\) is equal to the optimal partition \((\mathcal{B},\mathcal{N})\). ### Instance Generators for LOPs In the rest of this section, we review three main generators which produce LO instances given either a predefined (or randomly chosen) interior solution, a predefined (or randomly chosen) optimal solution (maybe strictly complementary or not), or both. Each LOP generator allows the user to control the characteristics of parameters \((A,b,c)\), including but not limited to their condition number, sparsity, and norm. Further, users can alter the optimal solution's features to examine their algorithm's performance. In the following algorithms, the term "generate" should be interpreted freely. It may refer to generating the respective data randomly, or the connotation could be that the data is constructed with some specific purpose, e.g., to obtain matrices with some specific structure such as sparsity or conditioning. #### 2.2.1 LOPs with a Predefined Interior Solution To study the performance of IPMs applied to LOPs, it is often helpful to have instances with specific interior solutions, and a common approach to generating LOPs with a desired interior solution is presented as Algorithm 1. ``` 1:Choose dimensions \(m<n\) 2:Choose or generate \((x^{0},s^{0})\) such that \(x^{0}_{i}>0\) and \(s^{0}_{i}>0\) for all \(i\in[n]\) 3:Generate \(A\in\mathbb{R}^{m\times n}\) 4:Generate \(y^{0}\in\mathbb{R}^{m}\) 5:Calculate \(b=Ax^{0}\) and \(c=A^{\top}y^{0}+s^{0}\) 6:Return LOP \((A,b,c)\) with interior solution \((x^{0},y^{0},s^{0})\) ``` **Algorithm 1** Generating a LOP with a specific interior solution **Remark 1**: _Suppose we want the interior solution \((x^{0},s^{0})\) to have a duality gap of \(x^{0^{\top}}s^{0}=n\mu\) for some scalar \(\mu>0\). Then, in Step 1 of Algorithm 1, we generate \(x^{0}_{i}>0\) and calculate \(s^{0}_{i}=\frac{\mu}{x^{0}_{i}}\) for \(i\in[n]\)._ The above remark makes an observation relevant to IPMs, as in the context of IPMs, the constant \(\mu\), referred to as the central path parameter, plays a crucial role. IPMs begin with some initial interior solution \((x^{0},s^{0})\in\mathcal{PD}^{0}\) with \[\frac{x^{0^{\top}}s^{0}}{n}=\mu^{0}>0,\] and subsequently, reduce \(\mu\) in each iteration as the algorithm progresses toward a solution to the LOP with desired complementarity gap. In line with our discussion on LO duality, it is easy to see that when \(\mu\to 0\), we approach an optimal solution to the primal-dual pair (LOP-P)-(LOP-D). 
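To make this concrete, here is a minimal NumPy sketch of Algorithm 1 that also applies the choice from Remark 1, so that the returned interior point has duality gap \(x^{0^{\top}}s^{0}=n\mu\). The function name, the random distributions, and the example dimensions are our own illustrative choices; any scheme for generating \(A\), \(x^{0}>0\), and \(y^{0}\) fits the same template.

```
import numpy as np

def generate_lop_with_interior(m, n, mu=1.0, seed=0):
    """Sketch of Algorithm 1: build (A, b, c) so that (x0, y0, s0) is a
    strictly feasible primal-dual pair with duality gap x0^T s0 = n * mu."""
    assert m < n
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(0.5, 2.0, size=n)   # x0 > 0
    s0 = mu / x0                         # s0 > 0 and x0^T s0 = n * mu (Remark 1)
    A = rng.standard_normal((m, n))      # full row rank with probability one
    y0 = rng.standard_normal(m)
    b = A @ x0                           # primal feasibility by construction
    c = A.T @ y0 + s0                    # dual feasibility by construction
    return A, b, c, (x0, y0, s0)

A, b, c, (x0, y0, s0) = generate_lop_with_interior(m=5, n=12, mu=2.0)
print(np.allclose(A @ x0, b), np.allclose(A.T @ y0 + s0, c), x0 @ s0)  # True True 24.0
```

Since \(b\) and \(c\) are computed from the chosen point, feasibility of \((x^{0},y^{0},s^{0})\) holds by construction, irrespective of how \(A\) is generated.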
**Remark 2**: _Algorithm 1 facilitates the generation of a coefficient matrix \(A\) with any desired properties, e.g., sparsity, structure, or being ill-conditioned._ **Remark 3**: _Several conditions are needed to generate a full row rank coefficient matrix \(A\) with probability one randomly [see e.g., 7]._ #### 2.2.2 LOPs with a Predefined Optimal Solution A prevailing approach for generating LOPs with a known optimal solution is described in Algorithm 2. ``` 1:Choose dimensions \(m<n\) 2:Partition the index set \([n]\) to \(B\) and \(N\) with \(B\cap N=\emptyset\) and \(B\cup N=[n]\) 3:Generate \(x^{*}\) such that \(x^{*}_{i}>0\) for \(i\in B\) and \(x^{*}_{i}=0\) for \(i\in N\) 4:Generate \(s^{*}\) such that \(s^{*}_{i}>0\) for \(i\in N\) and \(s^{*}_{i}=0\) for \(i\in B\) 5:Generate \(A\in\mathbb{R}^{m\times n}\) 6:Generate \(y^{*}\in\mathbb{R}^{m}\) 7:Calculate \(b=Ax^{*}\) and \(c=A^{\top}y^{*}+s^{*}\) 8:Return LOP \((A,b,c)\) with optimal solution \((x^{*},y^{*},s^{*})\) ``` **Algorithm 2** Generating a LOP with a specific optimal solution **Remark 4**.: Since the generated optimal solution \((x^{*},y^{*},s^{*})\) by Algorithm 2 is strictly complementary, the optimal partition \((\mathcal{B},\mathcal{N})\) is equal to \((B,N)\). **Remark 5**.: Partition \((B,N)\) may be generated randomly or to satisfy some desired properties, such as primal or dual degeneracy, or both, or having a unique optimal basis solution. **Remark 6**.: Let \(A=[A_{B}\ A_{N}]\). If \(|B|=m\) and \(A_{B}\) is nonsingular, then \(x^{*}\) and \(s^{*}\) yield the unique optimal basis solution. **Remark 7**.: If we modify Algorithm 2 by generating \(x^{*}\) such that \(x_{i}^{*}\geq 0\) for \(i\in B\) and \(x_{i}^{*}=0\) for \(i\in N\), and \(s^{*}\) such that \(s_{i}^{*}\geq 0\) for \(i\in N\) and \(s_{i}^{*}=0\) for \(i\in B\), then \(B\) and \(N\) do not necessarily give the optimal partition. While \(x^{*}\) and \(s^{*}\) are complementary solutions, they are not necessarily strictly complementary. #### 2.2.3 LOPs with Predefined Optimal and Interior Solutions Charnes et al. [6] discuss procedures to generate problems with a specific optimal _or_ interior solution. Here, we develop a novel procedure to generate a LOP with a specific optimal solution \((x^{*},y^{*},s^{*})\)_and_ a specific interior solution \((x^{0},y^{0},s^{0})\), as presented in Algorithm 3. The general idea is first to use Algorithm 2 to generate a problem with optimal solution \((x^{*},y^{*},s^{*})\) before extending the problem by adding a variable and a constraint to make the interior point \((x^{0},y^{0},s^{0})\) feasible for the new problem. Using this scheme, we can produce LOPs for any general predefined optimal and interior solutions, where the only additional condition is \[(x^{0}-x^{*})^{\top}(s^{0}-s^{*})=0. \tag{1}\] The condition stipulated by equation (1) is a natural property; for any feasible solution pairs, it follows that \((x^{*}-x^{0})\in\mathrm{Lin}^{\perp}(A)\) and \((s^{*}-s^{0})\in\mathrm{Lin}(A)\), where \(\mathrm{Lin}(A)\) denotes the lineality space of \(A\). In other words, the difference of the predefined solutions \(x^{0}-x^{*}\) and \(s^{0}-s^{*}\)_must_ be orthogonal, and steps 4 and 5 of Algorithm 3 ensure this property holds. Theorem 2.2 asserts that the claimed properties of \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) are indeed correct. Before presenting and proving Theorem 2.2, we need to verify the orthogonality properties of the generated solution. 
Lemma 2.1: _For \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) generated by Algorithm 3, then we have_ \[(x^{0}-x^{*})^{\top}(s^{0}-s^{*})=0.\] Proof.: By construction, we have \[(x^{0}-x^{*})^{\top}(s^{0}-s^{*})= (x^{0}_{B})^{\top}s^{0}_{B}+(x^{*}_{B})^{\top}s^{*}_{B}-(x^{0}_{B})^ {\top}s^{*}_{B}-(x^{*}_{B})^{\top}s^{0}_{B}(x^{0}_{N})^{\top}s^{0}_{N}\] \[+(x^{*}_{N})^{\top}s^{*}_{N}-(x^{0}_{N})^{\top}s^{*}_{N}-(x^{*}_{N })^{\top}s^{0}_{N}(x^{0}_{n+1})^{\top}s^{0}_{n+1}\] \[+(x^{*}_{n+1})^{\top}s^{*}_{N}-(x^{0}_{n+1})^{\top}s^{*}_{n+1}-(x^ {*}_{n+1})^{\top}s^{0}_{n+1}\] \[= -\delta+(x^{0}_{n+1})^{\top}s^{0}_{n+1}-(x^{0}_{n+1})^{\top}s^{*} _{n+1}=0.\] The proof is complete. Using Lemma 2.1, the following theorem shows that the generated problem satisfies the desired properties. **Theorem 2.2**.: _Let \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) be generated by Algorithm 3. Then,_ \[x^{*}\geq 0,\ s^{*}\geq 0,\ x^{0}>0,\ s^{0} >0, \tag{2a}\] \[(x^{*})^{\top}s^{*} =0,\] (2b) \[Ax^{*} =b,\] (2c) \[A^{\top}y^{*}+s^{*} =c,\] (2d) \[Ax^{0} =b,\] (2e) \[A^{\top}y^{0}+s^{0} =c. \tag{2f}\] _That is, \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) and are, respectively, interior and optimal solutions of the generated LOP \((A,b,c)\)._ Proof.: Observe that (2a) holds by construction. Equality (2b) refers to compelmentarity of \((x^{*},y^{*},s^{*})\), which holds due to the fact that \[x^{*\top}s^{*}=\hat{x}_{B}^{\top}0+0^{\top}\hat{s}_{N}+0s^{*}_{n+1}=0.\] To see that equation (2c) holds, i.e., the optimal solution satisfies primal feasibility, observe that \[Ax^{*}=\begin{pmatrix}\hat{A}_{B}&\hat{A}_{N}&\hat{a}_{n+1}\\ d_{B}^{\top}&d_{N}^{\top}&d_{n+1}\end{pmatrix}\begin{pmatrix}\hat{x}_{B}\\ 0\\ 0\end{pmatrix}=\begin{pmatrix}\hat{A}_{B}\hat{x}_{B}\\ d_{B}^{\top}\hat{x}_{B}\end{pmatrix}=\begin{pmatrix}\hat{b}\\ d_{B}^{\top}\hat{x}_{B}\end{pmatrix}=b.\] Similarly, dual feasibility is satisfied by the optimal solution, since \[A^{\top}y^{*}+s^{*}=\begin{pmatrix}\hat{A}_{B}^{\top}&d_{B}\\ \hat{A}_{N}^{\top}&d_{N}\\ \hat{a}_{n+1}^{\top}&d_{n+1}\end{pmatrix}\begin{pmatrix}\hat{y}\\ 0\end{pmatrix}+\begin{pmatrix}0\\ \hat{s}_{N}\\ s^{*}_{n+1}\end{pmatrix}=\begin{pmatrix}\hat{c}\\ \hat{a}_{n+1}^{\top}\hat{y}+s^{*}_{n+1}\end{pmatrix}=c.\] That is, equation (2d) holds. The interior solution \((x^{0},y^{0},s^{0})\) is primal feasible since \[Ax^{0} =\begin{pmatrix}\hat{A}_{B}&\hat{A}_{N}&\hat{a}_{n+1}\\ d_{B}^{\top}&d_{N}^{\top}&d_{n+1}\end{pmatrix}\begin{pmatrix}x_{B}^{0}\\ x_{N}^{0}\\ x_{n+1}^{0}\end{pmatrix}\] \[=\begin{pmatrix}\hat{A}_{B}x_{B}^{0}+\hat{A}_{N}x_{N}^{0}+\hat{a} _{n+1}x_{n+1}^{0}\\ d_{B}^{\top}x_{B}^{0}+d_{N}^{\top}x_{N}^{0}+d_{n+1}x_{n+1}^{0}\end{pmatrix}\] \[=\begin{pmatrix}\hat{A}_{B}x_{B}^{0}+\hat{A}_{N}x_{N}^{0}+(\hat{A} _{B}(\hat{x}_{B}-x_{B}^{0})-\hat{A}_{N}x_{N}^{0})\\ d_{B}^{\top}x_{B}^{0}+d_{N}^{\top}x_{N}^{0}+(d_{B}^{\top}(\hat{x}_{B}-x_{B}^{0} )-d_{N}^{\top}x_{N}^{0})\end{pmatrix}\] \[=\begin{pmatrix}\hat{A}_{B}\hat{x}_{B}\\ d_{B}^{\top}\hat{x}_{B}\end{pmatrix}=\begin{pmatrix}\hat{b}\\ d_{B}^{\top}\hat{x}_{B}\end{pmatrix}=b,\] which proves (2e). 
We can also certify the dual feasibility of the interior solution: \[A^{\top}y^{0}+s^{0} =\begin{pmatrix}\hat{A}_{B}^{\top}&d_{B}\\ \hat{A}_{N}^{\top}&d_{N}\\ \hat{a}_{n+1}^{\top}&d_{n+1}\end{pmatrix}\begin{pmatrix}y_{1:m}^{0}\\ y_{m+1}^{0}\\ y_{m+1}^{0}\end{pmatrix}+\begin{pmatrix}s_{B}^{0}\\ s_{N}^{0}\\ s_{n+1}^{0}\end{pmatrix}=\begin{pmatrix}\hat{A}_{B}^{\top}y_{1:m}^{0}+d_{B}y_{m +1}^{0}+s_{B}^{0}\\ \hat{A}_{N}^{\top}y_{1:m}^{0}+d_{N}y_{m+1}^{0}+s_{N}^{0}\\ \hat{a}_{n+1}^{\top}y_{1:m}^{0}+d_{n+1}y_{m+1}^{0}+s_{n+1}^{0}\end{pmatrix}\] \[=\begin{pmatrix}\hat{A}_{B}^{\top}y_{1:m}^{0}+(\hat{A}_{B}^{\top} (\hat{y}-y_{1:m}^{0})-s_{B}^{0})+s_{B}^{0}\\ \hat{A}_{n}^{\top}y_{1:m}^{0}+(\hat{A}_{N}^{\top}(\hat{y}-y_{1:m}^{0})+s_{N}^ {*}-s_{N}^{0})+s_{N}^{0}\\ \alpha\end{pmatrix}=\begin{pmatrix}\hat{A}_{B}^{\top}\hat{y}\\ \hat{A}_{N}^{\top}\hat{y}+s_{N}^{*}\\ \alpha\end{pmatrix}\] \[=\begin{pmatrix}\hat{c}\\ \hat{a}_{n+1}^{\top}\hat{y}+s_{n+1}^{*}\end{pmatrix}=c,\] where \(\alpha=\hat{a}_{n+1}^{\top}y_{1:m}^{0}+d_{n+1}y_{m+1}^{0}+s_{n+1}^{0}\). Finally, to prove that equation (2f) holds as well, we still need to show that \(\alpha=\hat{a}_{n+1}^{\top}\hat{y}+s_{n+1}^{*}\). By straightforward calculation, we have \[\alpha =\frac{1}{x_{n+1}^{0}}(\hat{A}_{B}(\hat{x}_{B}-x_{B}^{0})-\hat{A} _{N}x_{N}^{0})^{\top}y_{1:m}^{0}+\frac{y_{m+1}^{0}}{x_{n+1}^{0}}(\hat{d}_{B}^{ \top}(\hat{x}_{B}-x_{B}^{0})-\hat{d}_{N}^{\top}\hat{x}_{N}^{0})+s_{n+1}^{0}\] \[=\frac{1}{x_{n+1}^{0}}\Big{(}(\hat{A}_{B}(\hat{x}_{B}-x_{B}^{0})- \hat{A}_{N}x_{N}^{0})^{\top}y_{1:m}^{0}+(\hat{A}_{B}^{\top}(\hat{y}-y_{1:m}^{0} )-s_{B}^{0})^{\top}(\hat{x}_{B}-x_{B}^{0})-(\hat{A}_{N}^{\top}(\hat{y}-y_{1:m}^ {0})+s_{N}^{*}-s_{N}^{0})^{\top}x_{N}^{0}\Big{)}+s_{n+1}^{0}\] \[=\frac{1}{x_{n+1}^{0}}\Big{(}y_{1:m}^{0}\hat{A}_{B}\hat{x}_{B}-y_ {1:m}^{0}\hat{A}_{B}B_{B}^{0}-y_{1:m}^{0}\hat{A}_{N}x_{N}^{0}+\hat{x}_{B}^{ \top}\hat{A}_{B}^{\top}\hat{y}-\hat{x}_{B}^{\top}\hat{A}_{B}^{\top}y_{1:m}^{0} -\hat{x}_{B}^{\top}s_{B}^{0}-x_{B}^{0}\hat{A}_{B}^{\top}\hat{y}+x_{B}^{0}\hat{A }_{B}^{\top}\hat{A}_{B}^{\top}\hat{B}_{B}^{0}\] \[\quad-x_{N}^{0}\hat{A}_{N}^{\top}\hat{y}+x_{N}^{0}\hat{A}_{N}^{ \top}y_{1:m}^{0}-x_{N}^{0}\hat{A}_{N}^{\top}s_{N}^{*}+x_{N}^{0}\hat{s}_{N}^{0} \Big{)}+s_{n+1}^{0}\] \[=\frac{1}{x_{n+1}^{0}}\Big{(}\hat{x}_{B}^{\top}\hat{A}_{B}^{\top} \hat{y}-{x_{B}^{0}}^{\top}\hat{A}_{B}^{\top}\hat{y}-x_{N}^{0}\hat{A}_{N}^{ \top}\hat{y}-\hat{x}_{B}^{\top}\hat{y}_{B}+x_{B}^{0}\hat{y}_{B}-x_{N}^{0}\hat{ \top}s_{N}^{*}+x_{N}^{0}\hat{\top}s_{N}^{0}\Big{)}+s_{n+1}^{0}\] \[=\frac{(\hat{x}_{B}^{\top}\hat{A}_{B}^{\top}-x_{B}^{0}\hat{A}_{B}^{ \top}-x_{N}^{0}\hat{A}_{N}^{\top})}{x_{n+1}^{0}}=\hat{a}_{n+1}^{\top}\hat{y}+s_ {n+1}^{*}.\] The proof is complete. **Remark 8**.: Since the generated optimal solution \((x^{*},y^{*},s^{*})\) by Algorithm 3 is strictly complementary, the optimal partition \((\mathcal{B},\mathcal{N})\) is equal to \((B,N)\). If we modify Algorithm 3 such that \(x_{i}^{*}\geq 0\) for \(i\in B\) and \(s_{i}^{*}\geq 0\) for \(i\in N\), then \(B\) and \(N\) do not necessarily give the optimal partition. While \(x^{*}\) and \(s^{*}\) are complementary solutions, they are not necessarily strictly complementary. **Remark 9**.: We can simplify Algorithm 3 by setting \[x_{B}^{0}=\hat{x}_{B},s_{N}^{0}=\hat{s}_{N},s_{n+1}^{*}=s_{n+1}^{0},and\ y_{1:m}^{0 }=\hat{y}.\] It is straightforward to verify that Condition 1 is satisfied for these choices. 
An even simpler case arises if we choose \[x_{N}^{0}=e,s_{B}^{0}=e,\text{ and }y_{m+1}^{0}=x_{n+1}^{0}=s_{n+1}^{0}=1.\] In the next section, we extend these problem generators to generate SDO problems. ## 3 Semidefinite Optimization Now, we turn our attention to SDO. Just as in the previous section, we begin by reviewing the problem setting and important properties before presenting the instance generators for this class of optimization problems. ### Semidefinite Optimization Problems In _semidefinite optimization_, one seeks to minimize the inner product of two \(n\times n\) symmetric matrices: \[C\bullet X=\operatorname{tr}\left(CX\right)=\sum_{i=1}^{n}\sum_{j=1}^{n}C_{ij} X_{ij},\] for some symmetric constant matrix \(C\in\mathcal{S}^{n}\) and matrix variable \(X\in\mathcal{S}^{n}\). Note that \(\mathcal{S}^{n}\) denotes the space of \(n\times n\) symmetric matrices, and we write \(\mathcal{S}^{n}_{+}\) (\(\mathcal{S}^{n}_{++}\)) to represent the cone of symmetric positive semidefinite (symmetric positive definite) matrices. Similar to the LOP studied in the previous section, variable \(X\) must satisfy linear constraints of the form \[A_{i}\bullet X=b_{i},\quad\forall i\in[m],\] where \(A_{1},\ldots,A_{m}\in\mathcal{S}^{n}\) are given symmetric matrices and \(b\in\mathbb{R}^{m}\). Given that \(C\bullet X\) is a linear function of \(X\), stopping here would simply yield a LOP in which the variables are given by the entries of the matrix \(X\). Rather, we add a nonlinear (albeit convex) constraint, which stipulates that \(X\) must be a positive semidefinite matrix, which we write \(X\succeq 0\). More generally, the notation \(U\succeq V\) indicates that \(U-V\) is symmetric positive semidefinite, and is equivalent to stating \(U-V\in\mathcal{S}^{n}_{+}\). Likewise, when the inequality is strict, i.e., \(U\succ V\), it follows that \(U-V\in\mathcal{S}^{n}_{++}\), so \(U-V\) is symmetric positive definite. From the above discussion, it is straightforward to observe that SDO is a generalization of LO, in which we replace the element-wise nonnegativity constraint \(x\geq 0\) found in (LOP-P) by a conic inequality with respect to the cone \(\mathcal{S}^{n}_{+}\). Accordingly, in this section we are interested in generating problems of the form \[z_{SDO}^{P}=\inf_{X}\left\{C\bullet X:A_{i}\bullet X=b_{i},\ \forall i\in[m],X\succeq 0 \right\},\] (SDOP-P) which has an associated dual problem \[z_{SDO}^{D}=\sup_{(y,S)}\left\{b^{\top}y:\sum_{i=1}^{m}y_{i}A_{i}+S=C,\ S\succeq 0,y \in\mathbb{R}^{m}\right\},\] (SDOP-D) where \(S=C-\sum_{i=1}^{m}y_{i}A_{i}\) is the slack matrix of the dual problem. Without loss of generality, we may assume that the matrices \(A_{1},\ldots,A_{m}\) are linearly independent. If \(X\) and \((y,S)\) satisfy the primal and dual constraints, respectively, we say that they are feasible solutions, denoting the feasible sets of (SDOP-P) and (SDOP-D) by: \[\mathcal{P}_{SDO} =\left\{X\in\mathcal{S}^{n}:A_{i}\bullet X=b_{i},\ i\in[m],X \succeq 0\right\}\] \[\mathcal{D}_{SDO} =\left\{(y,S)\in\mathbb{R}^{m}\times\mathcal{S}^{n}\ :\sum_{i=1}^{m}y_{i}A_{i}+S=C,S\succeq 0 \right\}.\] Accordingly, the sets of feasible interior solutions are given by \[\mathcal{P}_{SDO}^{0} =\left\{X\in\mathcal{P}_{SDO}:X\succ 0\right\},\] \[\mathcal{D}_{SDO}^{0} =\left\{(y,S)\in\mathcal{P}_{SDO}:S\succ 0\right\}.\] For ease of notation, we adopt the syntax \(\mathcal{PD}_{SDO}=\mathcal{P}_{SDO}\times\mathcal{D}_{SDO}\) and \(\mathcal{PD}_{SDO}^{0}=\mathcal{P}_{SDO}^{0}\times\mathcal{D}_{SDO}^{0}\). 
Just as in the case of LO, when IPMs are applied to SDOPs, it is standard to assume the existence of a strictly feasible primal-dual pair \(X\) and \((y,S)\) with \((X,S)\succ 0\). From the existence of a strictly feasible initial solution \((X^{0},S^{0})\succ 0\), it follows that the Interior Point Condition (IPC) is satisfied [9], guaranteeing that the primal and dual optimal sets \[\mathcal{P}_{SDO}^{*} =\left\{X\in\mathcal{P}_{SDO}:C\bullet X=z_{SDO}^{P}\right\},\] \[\mathcal{D}_{SDO}^{*} =\left\{(y,S)\in\mathcal{D}_{SDO}:b^{\top}y=z_{SDO}^{D}\right\},\] are nonempty and bounded, that an optimal primal-dual pair with zero duality gap exists, i.e., strong duality holds. That is, for optimal solutions \((X^{*},y^{*},S^{*})\in\mathcal{PD}_{SDO}^{*}\), where \(\mathcal{PD}_{SDO}^{*}=\mathcal{P}_{SDO}^{*}\times\mathcal{D}_{SDO}^{*}\), we have \[C\bullet X^{*}-b^{\top}y^{*}=X^{*}\bullet S^{*}=0,\] which implies \(X^{*}S^{*}=S^{*}X^{*}=0\) as \(X^{*}\) and \(S^{*}\) are symmetric positive semidefinite matrices. ### Instance Generators for SDOPs Similar to our work on LO, we propose three generators that produce SDO instances with a predefined interior solution, optimal solution, and both. Each generator is designed such that the user can control the characteristics of parameters such as condition number, sparsity, matrix structure, and size. Additionally, users can modify the features of optimal solutions to evaluate the performance of their algorithms. #### 3.2.1 SDOPs with a Predefined Interior Solution To study the performance of IPMs applied to SDO, it is helpful to have instances with a specific interior solution. Generally, some users may need to generate problems with an interior solution to ensure that Strong Duality, i.e., zero duality gap at optimality, holds. Along this line, we adapt Algorithm 1 to generate SDO instances with known interior solutions, as given in Algorithm 4. ``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) 2:Generate \((X^{0},S^{0})\) such that \(X^{0}\succ 0\) and \(S^{0}\succ 0\) 3:Generate \(A_{i}\in\mathcal{S}^{n}\) for \(i\in[m]\) 4:Generate \(y^{0}\in\mathbb{R}^{m}\) 5:Calculate \(b_{i}=A_{i}\bullet X^{0}\) for \(i\in[m]\) and \(C=\sum_{i=1}^{m}y_{i}^{0}A_{i}+S^{0}\) 6:Return SDOP \((A_{1},\ldots,A_{m},b,C)\) with interior solution \((X^{0},y^{0},S^{0})\) ``` **Algorithm 4** Generating SDO problems with a specific interior solution Compared to Algorithm 1, the task of generating \(X^{0}\) and \(S^{0}\) in a general manner such that \(X^{0}S^{0}=\mu I\) for \(\mu>0\) is more computationally involved; we would first have to generate \(X^{0}\succ 0\) randomly, and subsequently calculate \(S^{0}\) as \(S^{0}=\mu(X^{0})^{-1}\). However, we can easily generate \(X^{0}\) and \(S^{0}\) for a specified value of \(\mu\) if we make additional assumptions regarding their structure (e.g., we can assume they are diagonal). We can also generate the matrices \(A_{1},\ldots,A_{m}\) to have desired properties such as sparsity, conditioning, or to satisfy some norm bound. Several approaches for generating random positive semidefinite are discussed in Appendix A. #### 3.2.2 SDOPs with a Predefined Block-diagonal Optimal Solution Algorithm 5 can be seen as a generalization of Algorithm 2 to SDO problems, in which the generated optimal solution explicitly has a block-diagonal structure corresponding to the optimal partition. Before presenting the instance generator, we review the notation of the optimal partition in the context of SDO. 
We are interested in problems whose optimal solution \((X^{*},y^{*},S^{*})\) exhibits zero duality gap, i.e., \(X^{*}S^{*}=0\). Thus, the spectral decomposition of an optimal pair \(X^{*}\) and \(S^{*}\) takes the form \[X^{*}=Q\Sigma Q^{\top}\text{ and }S^{*}=Q\Lambda Q^{\top},\] where \(Q\) is orthonormal, and the matrices \(\Sigma\) and \(\Lambda\) are diagonal, containing eigenvalues of \(X^{*}\) and \(S^{*}\), respectively. Letting \(\sigma_{i}=\Sigma_{i,i}\) and \(\lambda_{i}=\Lambda_{i,i}\), it follows that \(X^{*}S^{*}=0\) holds if and only if \(\sigma_{i}\lambda_{i}=0\) for all \(i\in[n]\). A primal-dual optimal solution \((X^{*},y^{*},S^{*})\in\mathcal{PD}_{SDO}^{*}\) is called maximally complementary if \(X^{*}\in\mathrm{ri}(\mathcal{P}_{SDO}^{*})\) and \((y^{*},S^{*})\in\mathrm{ri}(\mathcal{D}_{SDO}^{*})\). A maximally complementary optimal solution \((X^{*},y^{*},S^{*})\) is called strictly complementary if \(X^{*}+S^{*}\succ 0\). Let \(\mathcal{B}\coloneqq\mathcal{R}(X^{*})\) and \(\mathcal{N}\coloneqq\mathcal{R}(S^{*})\), where \((X^{*},y^{*},S^{*})\) is a maximally complementary optimal solution and \(\mathcal{R}(.)\) denotes the range space. We define \(n_{\mathcal{B}}\coloneqq\dim(\mathcal{B})\) and \(n_{\mathcal{N}}\coloneqq\dim(\mathcal{N})\). Then, we have \(\mathcal{R}(X)\subseteq B\) and \(\mathcal{R}(S)\subseteq\mathcal{N}\) for all \((X,y,S)\in\mathcal{PD}_{SDO}^{*}\). By the complementarity condition, the subspaces \(\mathcal{B}\) and \(\mathcal{N}\) are orthogonal, and this implies that \(n_{\mathcal{B}}+n_{\mathcal{N}}\leq n\), and in case of strict complementarity, \(n_{\mathcal{B}}+n_{\mathcal{N}}=n\). Otherwise, a subspace \(\mathcal{T}\) exists, which is the orthogonal complement to \(\mathcal{B}+\mathcal{N}\). Similarly, we have \(n_{\mathcal{T}}\coloneqq\dim(\mathcal{T})\), and so \(n_{\mathcal{B}}+n_{\mathcal{N}}+n_{\mathcal{T}}=n\)[15]. The partition \((\mathcal{B},\mathcal{N},\mathcal{T})\) of \(\mathbb{R}^{n}\) is called the optimal partition of an SDO problem. In LOPs, we know that \(\mathcal{T}\) is empty, but in general SDOPs \(\mathcal{T}\) can be non-empty [9]. In Algorithm 5, we generate SDOPs with optimal solutions which exhibit a block-diagonal structure using a partition \((B,N,T)\), which may be different from the optimal partition \((\mathcal{B},\mathcal{N},\mathcal{T})\) of the generated problem. ``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) 2:Choose \(n_{B},n_{N}\in[n]\) where \(n_{B}+n_{N}\leq n\) 3:Generate positive definite matrix \(X_{B}\in\mathcal{S}_{++}^{n_{B}}\) 4:Generate positive definite matrix \(S_{N}\in\mathcal{S}_{++}^{n_{N}}\) 5:Build\({}^{1}\)\(X^{*}=\begin{pmatrix}X_{B}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\) and \(S^{*}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&S_{N}\end{pmatrix}\) 6:Generate \(A_{i}\in\mathcal{S}^{n}\) for \(i\in[m]\) 7:Generate \(y^{*}\in\mathbb{R}^{m}\) 8:Calculate \(b_{i}=A_{i}\bullet X^{*}\) for \(i\in[m]\) and \(C=\sum_{i=1}^{m}y_{i}^{*}A_{i}+S^{*}\) 9:Return SDOP \((A_{1},\ldots,A_{m},b,C)\) with optimal solution \((X^{*},y^{*},S^{*})\) ``` **Algorithm 5** Generating SDO problems with a specific optimal solution **Remark 10**.: The sets \((B,N,T)\) generated in Algorithm 5 are not necessarily the optimal partition \((\mathcal{B},\mathcal{N},\mathcal{T})\) for the generated SDO problem \((A_{1},\ldots,A_{m},b,C)\). 
In general, we only have \[B\subseteq\mathcal{B},N\subseteq\mathcal{N},\text{ and }\mathcal{T}\subseteq T.\] **Remark 11**.: If an SDOP with a strictly complementary optimal solution is required, then we set \(n_{N}=n-n_{B}\). In this case the optimal partition is predefined as \(\mathcal{B}=B\), \(\mathcal{N}=N\), and \(\mathcal{T}=\emptyset\). One can easily verify that the solution \((X^{*},y^{*},S^{*})\) generated by Algorithm 5 is feasible for the SDO problem \((A_{1},\ldots,A_{m},b,C)\), and optimal since \(X^{*}S^{*}=0\). In addition, matrices \(A_{i}\) and \(C\) can be generated in a way to exhibit a particular sparsity, condition number or norm, and we can also control primal and/or dual degeneracy. #### 3.2.3 SDOPs with Predefined Block-diagonal Optimal and Interior Solutions By generating SDO problems with specific interior and optimal solutions, we can study the performance of various solution approaches. For example, one can analyze how efficiently feasible IPMs reduce the complementarity starting from a predefined interior solution to an optimal solution, or alternatively examine how robust performance is to the provided starting point or changes in the characteristics of the optimal solutions or partition. To accomplish this, we propose several algorithms in this paper providing an optimal solution or a maximally complementary solution. This section is focused on the case in which the user is interested in predefining an interior solution and an optimal solution, which need not necessarily be maximally complementary. Accordingly, Algorithm 6 generalizes Algorithm 3 to SDO for the case in which the generated optimal solution has a block-diagonal structure. We similarly seek to generate an optimal solution \((X^{*},y^{*},S^{*})\) and interior solution \((X^{0},y^{0},S^{0})\) as generally as possible, but we need to impose some additional requirements. Letting \(\mathcal{L}=\text{span}\{A_{1},\ldots,A_{m}\}\), we have \(X^{0}-X^{*}\in\mathcal{L}^{\perp}\) and \(S^{0}-S^{*}\in\mathcal{L}\), and hence, the generated solutions are required to satisfy the orthogonality condition \[(X^{0}-X^{*})\bullet(S^{0}-S^{*})=0. \tag{3}\] In Algorithm 6, steps 7 and 8 are designed to ensure the generated solutions \((X^{*},y^{*},S^{*})\) and \((X^{0},y^{0},S^{0})\) indeed satisfy orthogonality. 
``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) 2:Choose \(n_{B},n_{N}\in[n]\) where \(n_{B}+n_{N}\leq n\) 3:Generate SDO problem \((\hat{A}_{1},\ldots,\hat{A}_{m},\hat{b},\hat{C})\) with optimal solution \((\hat{X},\hat{y},\hat{s})\) using Algorithm 5 4:Generate \(X^{0}_{B}\succ 0\), \(X^{0}_{T}\succ 0\),\(X^{0}_{N}\succ 0\),\(X^{0}_{n+1}>0\) randomly 5:Build \(X^{*}_{(n+1)\times(n+1)}=\begin{pmatrix}\hat{X}&0\\ 0&0\end{pmatrix}\) and \(X^{0}_{(n+1)\times(n+1)}=\begin{pmatrix}X^{0}_{B}&0&0&0\\ 0&X^{0}_{T}&0&0\\ 0&0&X^{0}_{N}&0\\ 0&0&0&X^{0}_{n+1}\end{pmatrix}\) 6:Generate \(S^{0}_{T}\succ 0\), \(S^{0}_{B}\succ 0\), \(S^{0}_{N}\succ 0\) randomly 7:Calculate \(\delta=(X^{0}_{B}-\hat{X}_{B})\bullet S^{0}_{B}+X^{0}_{T}\bullet S^{0}_{T}+X^ {0}_{N}\bullet(S^{0}_{N}-\hat{S}_{N})\) 8:Generate \(S^{0}_{n+1}>(\frac{-\delta}{X^{0}_{n+1}})^{+}\) and calculate \(\hat{S}_{n+1}=\frac{\delta}{X^{0}_{n+1}}+S^{0}_{n+1}\) 9:Build \(S^{*}_{(n+1)\times(n+1)}=\begin{pmatrix}\hat{S}&0\\ 0&\hat{S}_{n+1}\end{pmatrix}\) and \(S^{0}_{(n+1)\times(n+1)}=\begin{pmatrix}S^{0}_{B}&0&0&0\\ 0&S^{0}_{T}&0&0\\ 0&0&S^{0}_{N}&0\\ 0&0&0&S^{0}_{n+1}\end{pmatrix}\) 10:Generate \(y^{0}\in\mathbb{R}_{m+1}\) randomly such that \(y^{0}_{m+1}\neq 0\) 11:Build \(y^{*}=\begin{pmatrix}\hat{y}\\ 0\end{pmatrix}\in\mathbb{R}_{m+1}\) 12:Calculate \(\alpha_{i}=\frac{1}{X^{0}_{n+1}}(\hat{A}_{i_{B}}\bullet(X_{B}-X^{0}_{B}))-( \hat{A}_{i_{N}}\bullet X^{0}_{N})-(\hat{A}_{i_{T}}\bullet X^{0}_{T}))\) for \(i\in[m]\) 13:Build \(A_{i}=\begin{pmatrix}\hat{A}_{i}&0\\ 0&\alpha_{i}\end{pmatrix}\) for \(i\in[m]\) 14:Build \(A_{n+1}=\sum_{i=1}^{m}\frac{\hat{y}_{i}-y^{0}_{i}}{y^{0}_{m+1}}A_{i}+\frac{1} {y^{0}_{m+1}}\begin{pmatrix}-S^{0}_{B}&0&0&0\\ 0&-S^{0}_{T}&0&0\\ 0&0&\hat{S}_{N}-S^{0}_{N}&0\\ 0&0&0&\hat{S}_{n+1}-S^{0}_{n+1}\end{pmatrix}\) 15:Calculate \(\theta=\hat{S}_{n+1}+\sum_{i=1}^{m}\hat{y}_{i}\alpha_{i}\) and build \(C=\begin{pmatrix}\hat{C}&0\\ 0&\theta\end{pmatrix}\) 16:Calculate \(\beta=A_{n+1}\bullet X^{*}\) and build \(b=\begin{pmatrix}\hat{b}\\ \beta\end{pmatrix}\) 17:Return SDOP \((A_{1},\ldots,A_{m},b,C)\) with optimal solution \((X^{*},y^{*},S^{*})\) and interior solution \((X^{0},y^{0},S^{0})\) ``` **Algorithm 6** Generating SDO problems with specific interior and optimal solutions Before proving the correctness of Algorithm 6, the next result certifies the orthogonality properties of the generated solution. **Lemma 3.1**.: _For any \((X^{0},y^{0},S^{0})\) and \((X^{*},y^{*},S^{*})\) generated by Algorithm 6, we have_ \[(X^{0}-X^{*})\bullet(S^{0}-S^{*})=0.\] Proof.: Similar to the proof of Lemma 2.1, it can be proved by substitution and using Steps 7 and 8. Using Lemma 3.1, the following theorem shows that the generated problem satisfies the desired properties. **Theorem 3.2**.: _Let \((X^{0},y^{0},S^{0})\) and \((X^{*},y^{*},S^{*})\) be solutions generated by Algorithm 6. Then,_ \[X^{*}\succeq 0,\ S^{*}\succeq 0,\ X^{0}\succ 0,\ S^{0} \succ 0, \tag{4a}\] \[X^{*}\bullet S^{*} =0,\] (4b) \[A_{i}\bullet X^{*} =b_{i},\] (4c) \[\sum_{i=1}^{n}y_{i}^{*}A_{i}^{\top}+S^{*} =C,\] (4d) \[A_{i}\bullet X^{0} =b_{i},\] (4e) \[\sum_{i=1}^{n}y_{i}^{0}A_{i}^{\top}+S^{0} =C. \tag{4f}\] Proof.: Just as in the case of LO, all parts of Theorem 3.2 are easy to verify based on the steps of Algorithm 6, save for equation (4e) for \(i=n+1\). Following the proof of Theorem 2.2, the claimed result follows from the definition of \(\alpha\) and equation (3). 
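As a concrete illustration of the block-diagonal constructions in this subsection, the following minimal NumPy sketch follows Algorithm 5; the same pattern of first fixing the solution and then computing \(b\) and \(C\) underlies Algorithms 4 and 6. The helper names and the simple \(MM^{\top}+kI\) recipe for the positive definite blocks are our own illustrative choices.

```
import numpy as np

def random_spd(k, rng):
    """One simple recipe for a random symmetric positive definite k x k block."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

def generate_sdo_with_optimal(m, n, n_B, n_N, seed=0):
    """Sketch of Algorithm 5: X* and S* are block-diagonal, complementary
    (X* S* = 0), and feasible for the returned data (A_1, ..., A_m, b, C)."""
    assert n_B + n_N <= n and m < n * (n + 1) // 2
    rng = np.random.default_rng(seed)
    X_star = np.zeros((n, n))
    S_star = np.zeros((n, n))
    X_star[:n_B, :n_B] = random_spd(n_B, rng)          # X_B block
    S_star[n - n_N:, n - n_N:] = random_spd(n_N, rng)  # S_N block
    A = []
    for _ in range(m):
        M = rng.standard_normal((n, n))
        A.append((M + M.T) / 2)                        # symmetric A_i
    y_star = rng.standard_normal(m)
    b = np.array([np.trace(Ai @ X_star) for Ai in A])  # b_i = A_i . X*
    C = sum(yi * Ai for yi, Ai in zip(y_star, A)) + S_star
    return A, b, C, (X_star, y_star, S_star)

A, b, C, (X, y, S) = generate_sdo_with_optimal(m=3, n=6, n_B=2, n_N=3)
print(np.allclose(X @ S, 0), np.allclose([np.trace(Ai @ X) for Ai in A], b))
```

By construction \(X^{*}S^{*}=0\) and the triple \((X^{*},y^{*},S^{*})\) is primal-dual feasible; as Remark 10 notes, however, the chosen \((B,N,T)\) need not coincide with the optimal partition of the generated instance.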
Similar to generating SDOPs with an optimal solution, we can also generate problems with both a specific strictly complementary optimal solution and a specific interior solution. **Remark 12**.: One special case is when \(n_{B}+n_{N}=n\), \(T=\emptyset\), and \[X_{B}^{0}=\hat{X}_{B},X_{N}^{0}=I,X_{T}^{0}=I,S_{B}^{0}=I,S_{T}^{0}=I,S_{N}^{0 }=\hat{S}_{N},X_{n+1}^{0}=1,S_{n+1}^{0}=1.\] #### 3.2.4 SDOPs with Predefined Optimal Solution (General Structure) We are also interested in the situation where the desired optimal solution does not exhibit a block structure. Some methods can exploit the structural properties of the optimal solution, for example, when it exhibits a block-diagonal structure or is sparse. In order to generate an optimal solution that possesses certain desired qualities, we use the inverse process of eigenvalue decomposition. First, we generate diagonal matrices \(\Sigma\) and \(\Lambda\), whose diagonal elements are the eigenvalues of \(X^{*}\) and \(S^{*}\), respectively. Then, \(X^{*}\) and \(S^{*}\) can be calculated by masking these diagonal matrices using a randomly generated orthonormal matrix \(Q\), and techniques for generating orthonormal matrices are discussed in Appendix B. The overall scheme is formalized below in Algorithm 7. ``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) 2:Choose \(n_{B},n_{N}\in[n]\) where \(n_{B}+n_{N}\leq n\) 3:Generate \(\sigma_{i}>0\) for \(i\in[n_{B}]\) and \(\lambda_{i}>0\) for \(i\in[n_{N}]\) 4:Generate orthonormal matrix Q 5:Build \(X^{*}=Q\begin{pmatrix}\text{diag}(\sigma)&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}Q^{\top}\) and \(S^{*}=Q\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&\text{diag}(\lambda)\end{pmatrix}Q^{\top}\) 6:Generate \(A_{i}\in\mathcal{S}^{n}\) randomly for \(i\in[m]\) 7:Generate \(y^{*}\in\mathbb{R}^{m}\) randomly 8:Calculate \(b_{i}=A_{i}\bullet X^{*}\) for \(i\in[m]\) and \(C=\sum_{i=1}^{m}y_{i}^{*}A_{i}+S^{*}\) 9:Return SDOP \((A_{1},\ldots,A_{m},b,C)\) with optimal solution \((X^{*},y^{*},S^{*})\) ``` **Algorithm 7** Generating SDO problems with a specific optimal solution While Algorithm 7 generates a more general optimal solution than Algorithm 5, it is computationally more demanding due to several matrix multiplications and generating an orthonormal matrix. In Appendix B, some procedures to generate an orthogonal matrix are discussed. One can easily verify that \((X^{*},y^{*},S^{*})\) is optimal since \(X^{*}S^{*}=Q\Sigma\Lambda Q^{\top}=0\). However, similar to Remark 10, the generated optimal solution may not be the maximally complementary solution for the SDOP \((A_{1},\ldots,A_{m},b,C)\). Thus, the optimal partition \((\mathcal{B},\mathcal{N},\mathcal{T})\) of the generated SDOP may be different from \((B,N,T)\) such that \(\dim(\mathcal{B})\geq n_{B}\) and \(\dim(\mathcal{N})\geq n_{N}\), i.e. the set of indices such that \(\sigma_{i}=\lambda_{i}=0\) may be bigger than the partition \(T\). The next section discusses how we can generate SDOPs with predefined optimal partition. #### 3.2.5 SDOPs with a Predefined Maximally Complementary Solution (General Structure) The SDOP generated by Algorithm 7 may have an optimal partition that differs from the input partition, since the specified optimal solution may not be maximally complementary. In this section, we develop a procedure to generate SDOPs with a specific optimal partition, and by extension, a specific maximally complementary solution. 
``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) 2:Choose \(n_{B},n_{N}\in[n]\) where \(n_{B}+n_{N}\leq n\) 3:Generate \(\sigma_{i}>0\) for \(i\in[n_{B}]\) and \(\lambda_{i}>0\) for \(i\in[n_{N}]\) 4:Generate orthonormal matrix Q 5:Build \(X^{*}=Q\begin{pmatrix}\text{diag}(\sigma)&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}Q^{\top}\) and \(S^{*}=Q\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&\text{diag}(\lambda)\end{pmatrix}Q^{\top}\) 6:Generate \(A_{1}=Q\Gamma Q^{\top}\) such that \(\Gamma=\text{diag}(\gamma)\) where \(\gamma_{B}=0\), \(\gamma_{T}>0\), and \(\gamma_{N}\in\mathbb{R}^{n_{N}}\) 7:Generate \(A_{i}\in\mathcal{S}^{n}\)\(i\in[m]\) such that \(A_{i}Q_{B}\) are linearly independent for \(i\in[m]\) 8:Generate \(y^{*}\in\mathbb{R}^{m}\) 9:Calculate \(b_{i}=A_{i}\bullet X^{*}\) for \(i\in[m]\) and \(C=\sum_{i=1}^{m}y_{i}^{*}A_{i}+S^{*}\) 10:Return SDOP \((A_{1},\ldots,A_{m},b,C)\) with maximally complementary solution \((X^{*},y^{*},S^{*})\) ``` **Algorithm 8** Generating SDO problems with a specific maximally complementary solution As we can see, there is less freedom in generating an SDOP using Algorithm 8 when compared to Algorithm 7. This can be attributed to the fact that the matrix \(A_{1}\) is specified to ensure that the specified optimal solution is maximally complementary, and we can not alter its characteristics directly. The next theorem proves the correctness of the generator. **Theorem 3.3**.: _For the generated problem \((A_{1},\ldots,A_{m},b,C)\) by Algorithm 8, the solution \((X^{*},y^{*},S^{*})\) is a maximally complementary optimal solution._ Proof.: The result follows from a proof by contradiction, which is adapted from [25]. Suppose that \((X^{*},y^{*},S^{*})\) is not maximally complementary, and \((\tilde{X},\tilde{y},\tilde{S})\) is a maximally complementary solution. Since \(\tilde{X}S^{*}=0\), we have \[\mathcal{R}(X^{*})\subseteq\mathcal{R}(\tilde{X})\subseteq\mathcal{R}(S^{*}) ^{\perp}.\] Therefore, we can write \[\tilde{X}=Q\begin{pmatrix}D_{B}&0&0\\ 0&D_{T}&0\\ 0&0&0\end{pmatrix}Q^{\top}.\] Since both \(\tilde{X}\) and \(X^{*}\) are feasible, it follows \[0=A_{1}\bullet(\tilde{X}-X^{*})=A_{1}\bullet Q\begin{pmatrix}D_{ B}-\Lambda_{B}&0&0\\ 0&D_{T}&0\\ 0&0&0\end{pmatrix}Q^{\top} =\Gamma\bullet\begin{pmatrix}D_{B}-\Lambda_{B}&0&0\\ 0&D_{T}&0\\ 0&0&0\end{pmatrix}\] \[=\Gamma_{T}\bullet D_{T}.\] Given that \(\Gamma_{T}>0\), it follows that \(D_{T}=0\), which implies \(\mathcal{R}(X^{*})=\mathcal{R}(\tilde{X})\). Next, we need to show that \(\mathcal{R}(S^{*})=\mathcal{R}(\tilde{S})\). Again, from dual feasibility, we have \[\sum_{i=1}^{m}A_{i}(y_{i}^{*}-\tilde{y}_{i})=-(S^{*}-\tilde{S}).\] By the orthogonality of \(Q_{B}\) and \(S^{*}-\tilde{S}\), one can observe \[\sum_{i=1}^{m}A_{i}Q_{B}(y_{i}^{*}-\tilde{y}_{i})=-(S^{*}-\tilde{S})Q_{B}=0.\] Since the matrices \(A_{i}Q_{B}\) are linearly independent for \(i\in[m]\), it follows \(y_{i}^{*}=\tilde{y}_{i}\) and \(S^{*}=\tilde{S}\) and thus \((X^{*},y^{*},S^{*})\) is maximally complementary. Therefore, we have arrived at a contradiction, and the proof is complete. **Corollary 3.4**.: _For the SDOP generated by Algorithm 8, the optimal partition \((\mathcal{B},\mathcal{N},\mathcal{T})\) is equal to \((B,N,T)\)._ **Remark 13**.: Let \(\Pi_{i}=A_{i}Q_{B}\) for \(i\in[m]\). One can generate matrix \(\Pi_{i}\) for \(i\in\{2,\ldots,m\}\) so that the set of matrices \(\Pi_{i}\) for \(i\in[m]\) are linearly independent, and calculate \(A_{i}=\Pi_{i}Q_{B}^{\top}\). 
Consequently, the matrices \(A_{i}Q_{B}\) will be linearly independent with probability 1. The framework we have described is correct when \(B\neq\emptyset\) and \(N\neq\emptyset.\) For the cases \(B=\emptyset\) and/or \(N=\emptyset,\) one can construct a simple procedure, such as the one presented in Algorithm 9, to generate problems with predetermined optimal partition. ``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) 2:Choose \(n_{N}\in[n]\) 3:Generate \(\lambda_{i}>0\) for \(i\in[n_{N}]\) 4:Generate orthonormal matrix Q 5:Build \(X^{*}=0\) and \(S^{*}=Q\begin{pmatrix}0&0\\ 0&\operatorname{diag}(\lambda)\end{pmatrix}Q^{\top}\) 6:Generate \(A_{i}=Q\Gamma_{i}Q^{\top}\) such that \(\Gamma_{i}=\operatorname{diag}(\gamma_{i})\) where \(\gamma_{T\,i}=0,\) and \(\gamma_{N\,i}\in\mathbb{R}^{n_{N}}\) for \(i\in[m]\) 7:Generate \(y^{*}\in\mathbb{R}^{m}\) 8:Let \(b=0\) and calculate \(C=\sum_{i=1}^{m}y_{i}^{*}A_{i}+S^{*}\) 9:Return SDO problem \((A_{1},\ldots,A_{m},b,C)\) with optimal solution \((X^{*},y^{*},S^{*})\) ``` **Algorithm 9** Generating SDO problems with a specific maximally complementary solution when \(B=\emptyset\) #### 3.2.6 SDOPs with Predefined Optimal and Interior Solutions (General Structure) We can also generalize Algorithm 7 to provide SDOPs with interior solutions, and the resulting scheme is presented in Algorithm 10. Here, both the generated optimal and interior solutions have general structure by using inverse of eigenvalue decomposition and at a high level the overall scheme can be viewed as a combination of Algorithms 6 and 7. ``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) (the dimensions of generated SDOP: \(m+1,n+1\)) 2:Choose \(n_{B},n_{N}\in[n]\) where \(n_{B}+n_{N}\leq n\) 3:Define sets \[B=\{1,\ldots,n_{B}\},T=\{n_{B}+1,\ldots,n-n_{N}\},\text{ and }N=\{n-n_{N}+1,\ldots,n\}\] 4:Generate \(\sigma_{i}>0\) for \(i\in B\) and build \(\Sigma_{B}=\text{diag}(\sigma)\) 5:Generate \(\lambda_{i}>0\) for \(i\in N\) and build \(\Lambda_{N}=\text{diag}(\lambda)\) 6:Generate orthonormal matrix \(\hat{Q}_{n\times n}\) 7:Build \(\hat{X}=\hat{Q}\begin{pmatrix}\Sigma_{B}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\hat{Q}^{\top}\) and \(\hat{S}=\hat{Q}\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&\Lambda_{N}\end{pmatrix}\hat{Q}^{\top}\) 8:Generate \(\hat{y}\in\mathbb{R}^{m}\) and \(\hat{A}_{i}\in\mathcal{S}^{n}\) for \(i\in[m]\) 9:Calculate \(\hat{b}_{i}=\hat{A}_{i}\bullet X^{*}\) for \(i\in[m]\) and \(\hat{C}=\sum_{i=1}^{m}\hat{y}_{i}\hat{A}_{i}+\hat{S}\) 10:Build \(Q_{(n+1)\times(n+1)}=\begin{pmatrix}\hat{Q}&0\\ 0&1\end{pmatrix}\) 11:Generate positive diagonal matrix \(\Sigma_{B}^{0}\), \(\Sigma_{T}^{0}\), \(\Sigma_{N}^{0}\), and number \(\sigma_{n+1}^{0}>0\) 12:Build \(X^{*}=Q\begin{pmatrix}\Sigma_{B}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}Q^{\top}\) and \(X^{0}=Q\begin{pmatrix}\Sigma_{B}^{0}&0&0&0\\ 0&\Sigma_{T}^{0}&0&0\\ 0&0&\Sigma_{N}^{0}&0\\ 0&0&0&\sigma_{n+1}^{0}\end{pmatrix}Q^{\top}\) 13:Generate positive diagonal matrix \(\Lambda_{B}^{0}\), \(\Lambda_{T}^{0}\), and \(\Lambda_{N}^{0}\) 14:Calculate \(\delta=\sum_{i\in B}(\sigma_{i}-\sigma_{i}^{0})\lambda_{i}^{0}+\sum_{i\in T} \sigma_{i}^{0}\lambda_{i}^{0}+\sum_{i\in N}\sigma_{i}^{0}(\lambda_{i}^{0}- \lambda_{i})\) 15:Generate \(\lambda_{n+1}^{0}>(\frac{-\delta}{\sigma_{n+1}^{0}})^{+}\), and calculate \(\lambda_{n+1}=\frac{\delta}{\sigma_{n+1}^{0}}+\lambda_{n+1}^{0}\) 16:Build \(S^{*}=Q\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&\Lambda_{N}&0\\ 0&0&0&\lambda_{n+1}\end{pmatrix}Q^{\top}\) and 
\(S^{0}=Q\begin{pmatrix}\Lambda_{B}^{0}&0&0&0\\ 0&\Lambda_{T}^{0}&0&0\\ 0&0&\Lambda_{N}^{0}&0\\ 0&0&0&\lambda_{n+1}^{0}\end{pmatrix}Q^{\top}\) 17:Generate \(y^{0}\in\mathbb{R}^{m+1}\) randomly such that \(y^{0}_{m+1}\neq 0\) and build \(y^{*}_{(m+1)}=\begin{pmatrix}\hat{y}\\ 0\end{pmatrix}\) 18:Calculate \(\alpha_{i}=\frac{1}{\sigma_{n+1}^{0}}\operatorname{tr}\left(\hat{A}_{i}\hat{ Q}\begin{pmatrix}\Sigma_{B}-\Sigma_{B}^{0}&0&0\\ 0&-\Sigma_{T}^{0}&0\\ 0&0&-\Sigma_{N}^{0}\end{pmatrix}\hat{Q}^{\top}\right)\) for \(i\in[m]\) 19:Build \(A_{i}=\begin{pmatrix}\hat{A}_{i}&0\\ 0&\alpha_{i}\end{pmatrix}\) for \(i\in[m]\) 20:\(A_{m+1}=\frac{1}{y^{0}_{m+1}}\begin{pmatrix}\sum_{i=1}^{m}(\hat{y}_{i}-y^{0}_{ i})\hat{A}_{i}+Q\begin{pmatrix}-\Lambda_{B}^{0}&0&0&0\\ 0&-\Lambda_{T}^{0}&0&0\\ 0&0&\Lambda_{T}-\Lambda_{T}^{0}&0\\ 0&0&0&\Lambda_{n+1}-\Lambda_{n+1}^{0}\end{pmatrix}Q^{\top}\) 21:Calculate \(C=\begin{pmatrix}\hat{C}&0\\ 0&\sum_{i=1}^{m}\hat{y}_{i}\alpha_{i}+\lambda_{m+1}\end{pmatrix}\) 22:Calculate \(b_{i}=\hat{b}_{i}\) for \(i\in[m]\) and \(b_{m+1}=\operatorname{tr}\left(A_{m+1}X^{*}\right)\) 23:Return SDOP \((A_{i},b,C)\) with optimal solution \((X^{*},y^{*},S^{*})\) and interior solution \((X^{0},y^{0},S^{0})\) ``` **Algorithm 10** Generating SDO problems with specific interior and optimal solutions For the generated solutions \((X^{0},y^{0},S^{0})\) and \((X^{*},y^{*},S^{*})\), Steps 14 and 15 ensure that \[\sum_{i\in B}(\sigma_{i}^{0}-\sigma_{i})\lambda_{i}^{0}+\sum_{i\in T }\sigma_{i}^{0}\lambda_{i}^{0}+\sum_{i\in N}\sigma_{i}^{0}(\lambda_{i}^{0}- \lambda_{i})+\sigma_{n+1}^{0}(\lambda_{n+1}^{0}-\lambda_{n+1})=0, \tag{5}\] Consequently, the orthogonality condition (3) is satisfied. Similar to the block-diagonal case, Theorem 3.5 establishes that the generated SDO problem and its optimal and interior solutions are correct. **Theorem 3.5**.: _Let \((X^{0},y^{0},S^{0})\) and \((X^{*},y^{*},S^{*})\) be solutions generated by Algorithm 10. Then,_ \[X^{*}\succeq 0,\ S^{*}\succeq 0,\ X^{0}\succ 0,\ S^{0} \succ 0,\] \[X^{*}\bullet S^{*} =0,\] \[A_{i}\bullet X^{*} =b_{i},\] \[\sum_{i=1}^{n}y_{i}^{*}A_{i}^{\top}+S^{*} =C,\] \[A_{i}\bullet X^{0} =b_{i},\] \[\sum_{i=1}^{n}y_{i}^{0}A_{i}^{\top}+S^{0} =C.\] Proof.: Analogous to the proof of Theorem 3.2 based on the steps of Algorithm 10 and Condition (3). #### 3.2.7 SDOPs with Predefined Interior and Maximally Complementary Solutions (General Structure) To have a predetermined optimal partition, we develop Algorithm 11 to generate SDOPs with specific interior and maximally complementary solutions as follows. 
**Algorithm 11** Generating SDO problems with specific interior and maximally complementary solutions ``` 1:Choose dimensions \(m,n\) with \(m<\frac{n(n+1)}{2}\) (the dimensions of generated SDOP: \(m+1,n+1\)) 2:Choose \(n_{B},n_{N}\in[n]\) where \(n_{B}+n_{N}\leq n\) 3:Define sets \[B=\{1,\ldots,n_{B}\},T=\{n_{B}+1,\ldots,n-n_{N}\},\text{ and }N=\{n-n_{N}+1,\ldots,n\}\] 4:Generate \(\sigma_{i}>0\) for \(i\in B\) and build \(\Sigma_{B}=\text{diag}(\sigma)\) 5:Generate \(\lambda_{i}>0\) for \(i\in N\) and build \(\Lambda_{N}=\text{diag}(\lambda)\) 6:Generate orthonormal matrix \(\hat{Q}_{n\times n}\) 7:Build \(\hat{X}=\hat{Q}\begin{pmatrix}\Sigma_{B}&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\hat{Q}^{\top}\) and \(\hat{S}=\hat{Q}\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&\Lambda_{N}\end{pmatrix}\hat{Q}^{\top}\) 8:Generate \(A_{1}=Q\Gamma Q^{\top}\) such that \(\Gamma=\text{diag}(\gamma)\) where \(\gamma_{B}=0\), \(\gamma_{T}>0\), and \(\gamma_{T}\in\mathbb{R}^{q}\) 9:Generate \(\hat{y}\in\mathbb{R}^{m}\) and \(\hat{A}_{i}\in\mathcal{S}^{n}\) for \(i\in\{2,\ldots,m\}\) 10:Calculate \(\hat{b}_{i}=\text{tr}\left(\hat{A}_{i}X^{*}\right)\) for \(i\in[m]\) and \(\hat{C}=\sum_{i=1}^{m}\hat{y}_{i}\hat{A}_{i}+\hat{S}\) 11:Build \(Q_{(n+1)\times(n+1)}=\begin{pmatrix}\hat{Q}&0\\ 0&1\end{pmatrix}\) 12:Generate positive diagonal matrix \(\Sigma_{B}^{0}\), \(\Sigma_{T}^{0}\), \(\Sigma_{N}^{0}\), and number \(\sigma_{n+1}^{0}>0\) 13:Build \(X^{*}=Q\begin{pmatrix}\Sigma_{B}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}Q^{\top}\) and \(X^{0}=Q\begin{pmatrix}\Sigma_{B}^{0}&0&0&0\\ 0&\Sigma_{T}^{0}&0&0\\ 0&0&\Sigma_{N}^{0}&0\\ 0&0&0&\sigma_{n+1}^{0}\end{pmatrix}Q^{\top}\) 14:Generate positive diagonal matrix \(\Lambda_{B}^{0}\), \(\Lambda_{T}^{0}\), and \(\Lambda_{N}^{0}\) 15:Calculate \(\delta=\sum_{i\in B}(\sigma_{i}-\sigma_{i}^{0})\lambda_{i}^{0}+\sum_{i\in T} \sigma_{i}^{0}\lambda_{i}^{0}+\sum_{i\in N}\sigma_{i}^{0}(\lambda_{i}^{0}- \lambda_{i})\) 16:Generate \(\lambda_{n+1}^{0}>\left(\frac{-\delta}{\sigma_{n+1}^{0}}\right)^{+}\), and calculate \(\lambda_{n+1}=\frac{\delta}{\sigma_{n+1}^{0}}+\lambda_{n+1}^{0}\) 17:Build \(S^{*}=Q\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&\Lambda_{N}&0\\ 0&0&0&\lambda_{n+1}\end{pmatrix}Q^{\top}\) and \(S^{0}=Q\begin{pmatrix}\Lambda_{B}^{0}&0&0&0\\ 0&\Lambda_{T}^{0}&0&0\\ 0&0&\Lambda_{N}^{0}&0\\ 0&0&0&\lambda_{n+1}^{0}\end{pmatrix}Q^{\top}\) 18:Generate \(y^{0}\in\mathbb{R}^{m+1}\) such that \(y^{0}_{m+1}\neq 0\) and build \(y^{*}_{(m+1)}=\begin{pmatrix}\hat{y}\\ 0\end{pmatrix}\) 19:Calculate \(\alpha_{i}=\frac{1}{\sigma_{n+1}^{0}}\text{tr}\left(\hat{A}_{i}\hat{Q} \begin{pmatrix}\Sigma_{B}-\Sigma_{B}^{0}&0&0\\ 0&-\Sigma_{T}^{0}&0\\ 0&0&-\Sigma_{N}^{0}\end{pmatrix}\hat{Q}^{\top}\right)\) for \(i\in[m]\) 20:Build \(A_{i}=\begin{pmatrix}\hat{A}_{i}&0\\ 0&\alpha_{i}\end{pmatrix}\) for \(i\in[m]\) 21:\(A_{m+1}=\frac{1}{y^{0}_{m+1}}\begin{pmatrix}\sum_{i=1}^{m}(\hat{y}_{i}-y^{0}_{i })\hat{A}_{i}+Q\begin{pmatrix}-\Lambda_{B}^{0}&0&0&0\\ 0&-\Lambda_{T}^{0}&0&0\\ 0&0&\Lambda_{T}-\Lambda_{T}^{0}&0\\ 0&0&0&\Lambda_{n+1}-\Lambda_{n+1}^{0}\end{pmatrix}Q^{\top}\) 22:Calculate \(C=\begin{pmatrix}\hat{C}&0\\ 0&\sum_{i=1}^{m}\hat{y}_{i}\alpha_{i}+\lambda_{m+1}\end{pmatrix}\) 23:Calculate \(b_{i}=\hat{b}_{i}\) for \(i\in[m]\) and \(b_{m+1}=\text{tr}\left(A_{m+1}X^{*}\right)\) 24:Return SDOP \((A_{1},\ldots,A_{m},b,C)\) with optimal solution \((X^{*},y^{*},S^{*})\) and interior solution \((X^{0},y^{0},S^{0})\) **Theorem 3.6**.: _For Algorithm 11, solution \((X^{*},y^{*},S^{*})\) is the maximally complementary 
optimal solution of the generated problem \((A_{1},\ldots,A_{m},b,C)\)._ Proof.: The proof closely follows the proof of Theorem 3.3; the only difference being that we expanded the matrices \(A_{i}\) by adding a row and column. For constructing matrix \(A_{1}\), the added eigenvalue \(\gamma_{n+1}=\alpha_{1}\) and \(Q_{n+1}=(0,0,0,\ldots,0,1)^{\top}\) belong to partition \(N\) where we do not have any restriction. Thus, adapting the proof of Theorem 3.3 to this theorem is straightforward. Among all proposed SDOP generators, Algorithm 11 provides the most sophisticated SDOPs, with maximally complementary and interior solutions in a general manner and gives opportunities for altering characteristics of an optimal solution, optimal partition, matrices \(A_{i}\), \(C\), and vector \(b\) to study the performance of solution methods in a detailed and sophisticated analysis. However, this algorithm requires much more complicated computation than the other proposed generators. ## 4 Second-Order Cone Optimization Before concluding, we adapt our techniques for LO and SDO to linear optimization problems over second order (or _Lorentz_) cones. ### Second Order Cone Optimization Problems A second-order cone is defined as follows \[\left\{(x_{1},x_{2},\ldots,x_{n})\in\mathbb{R}^{n}:x_{1}^{2}-\sum_{i=2}^{n}x_{ i}^{2}\geq 0,x_{1}\geq 0\right\}.\] Observe that the above definition implies that \((x_{1},x_{2},\ldots,x_{n})\) is a second-order cone if and only if the matrix \[\begin{pmatrix}x_{1}&x_{2:n}^{\top}\\ x_{2:n}&x_{1}I_{n-1}\end{pmatrix}\] is positive semidefinite, where \(x_{2:n}^{\top}\equiv(x_{2},x_{3},\ldots,x_{n})\) and \(I_{n-1}\) is the identity matrix of order \(n-1\). Accordingly, a primal or dual second-order cone optimization problem (SOCOP) may be interpreted as a special case of SDO [19]. In SOCO problems, we seek to minimize a linear objective function over a feasible region which is defined by the intersection of an affine space and the Cartesian product of \(p\) second-order cones of dimension \(n_{i}\), which is defined as \[\mathbb{L}^{n}=\mathcal{L}^{n_{1}}\times\cdots\times\mathcal{L}^{n_{p}},\quad n =\sum_{i=1}^{p}n_{i},\] where \[\mathcal{L}^{n_{i}}=\{x^{i}=(x_{1}^{i},\ldots,x_{n_{i}}^{i})^{\top}\in\mathbb{ R}^{n_{i}}:x_{1}^{i}\geq\|x_{2:n_{i}}^{i}\|\},\quad i\in[p].\] It is clear that LOPs are a special case of SOCOPs, where \(n_{i}=1\) for \(i\in[p]\). The primal and dual SOCO problems in standard form are represented as \[z_{SOCO}^{P} =\inf_{x}\{c^{\top}x:Ax=b\,\ x\in\mathbb{L}^{n}\},\] \[z_{SOCO}^{D} =\sup_{(y,s)}\{b^{\top}y:A^{\top}y+s=c\,\ s\in\mathbb{L}^{n}\},\] where \(b\in\mathbb{R}^{m}\), \(A=(A_{1},\ldots,A_{p})\), \(x=(x^{1};\ldots;x^{p})\), \(s=(s^{1};\ldots;s^{p})\), and \(c=(c^{1};\ldots;c^{p})\), in which \(A_{i}\in\mathbb{R}^{m\times n_{i}}\), \(s^{i}\in\mathbb{R}^{n_{i}}\), and \(c^{i}\in\mathbb{R}^{n_{i}}\) for \(i\in[p]\). The set of primal and dual feasible solutions is defined as \[\mathcal{PD}_{SOCO}=\{(x,y,s)\in\mathbb{L}^{n}\times\mathbb{R}^{m}\times \mathbb{L}^{n}:Ax=b,A^{\top}y+s=c\}.\] Let \[\mathcal{L}_{+}^{n_{i}}=\{x^{i}\in\mathcal{L}^{n_{i}}:x_{1}^{i}>\|x_{2:n_{i}}^ {i}\|\},\quad i\in[p],\] then we can define the set of primal and dual interior feasible solutions as \[\mathcal{PD}_{SOCO}^{0}=\{(x,y,s)\in\mathbb{L}_{+}^{n}\times \mathbb{R}^{m}\times\mathbb{L}_{+}^{n}:Ax=b,A^{\top}y+s=c\}.\] Just as in LO and SDO, it is standard practice to assume the existence of an interior feasible primal-dual solution. 
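To make the cone definitions above concrete, the following NumPy sketch (illustrative only; the function names are ours and not part of any package referenced in this paper) checks membership of a vector in a single second-order cone both directly and via the equivalent positive semidefiniteness of the associated arrow-shaped matrix.

```python
import numpy as np

def in_second_order_cone(x, tol=1e-9):
    """Check x_1 >= ||x_{2:n}|| directly."""
    return x[0] >= np.linalg.norm(x[1:]) - tol

def arrow_matrix(x):
    """Build the matrix [[x_1, x_{2:n}^T], [x_{2:n}, x_1 I_{n-1}]]."""
    n = x.shape[0]
    A = x[0] * np.eye(n)
    A[0, 1:] = x[1:]
    A[1:, 0] = x[1:]
    return A

def in_cone_via_sdo(x, tol=1e-9):
    """Membership via the smallest eigenvalue of the arrow matrix (PSD check)."""
    return np.linalg.eigvalsh(arrow_matrix(x)).min() >= -tol

rng = np.random.default_rng(0)
v = rng.standard_normal(5)
v[0] = np.linalg.norm(v[1:]) + 1.0   # strictly interior point of L^5
assert in_second_order_cone(v) and in_cone_via_sdo(v)
```

Both checks agree up to numerical tolerance, which is precisely the equivalence that allows a SOCOP to be interpreted as a special case of SDO.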
With the existence of a strictly feasible solution, it follows that the Interior Point Condition (IPC) is satisfied [14], guaranteeing that \(z_{SOCO}^{P}=z_{SOCO}^{D}\) and the primal-dual optimal set \[\mathcal{PD}_{SOCO}^{*}=\left\{(x,y,s)\in\mathcal{PD}_{SOCO}\ :\ c^{\top}x=z_{SOCO}^{P}=b^{T}y=z_{SOCO}^{D}\right\},\] is nonempty and bounded. Therefore, there exists an optimal primal-dual pair with zero duality gap. That is, for optimal solutions \(x^{*}\) and \((y^{*},s^{*})\), we have \[x^{*}\circ s^{*}=(x^{1}\circ s^{1},\ldots,x^{p}\circ s^{p})=0, \tag{7}\] where the Jordan product "\(\circ\)" is defined as \[x^{i}\circ s^{i}=\begin{pmatrix}(x^{i})^{\top}s^{i}\\ x_{1}^{i}s_{2:n_{i}}^{i}+s_{1}^{i}x_{2:n_{i}}^{i}\end{pmatrix}. \tag{8}\] An optimal solution \((x^{*},y^{*},s^{*})\) is called maximally complementary if \(x^{*}\in\mathrm{ri}(\mathcal{P}_{SOCO}^{*})\) and \((y^{*};s^{*})\in\mathrm{ri}(\mathcal{D}_{SOCO}^{*})\). Further, \((x^{*},y^{*},s^{*})\) is called strictly complementary if \[x^{*}+s^{*}\in\mathbb{L}_{+}^{n}.\] ### Instance Generators for SOCOPs Motivated by our work on LOP and SDOP generators, we are further interested in applying these ideas to generate SOCO problems. Since SOCO can be interpreted as a special case of SDO, Sampourmahani et al. [19] studied mappings between SDOPs and SOCOPs and their optimal partitions. It is straightforward to develop SOCOP generators using the proposed SDOP generators augmented with the appropriate mapping. However, that route is not efficient, and we alternatively propose several SOCOP generators without using their SDO representation. #### 4.2.1 SOCOPs with a Predefined Interior Solution Generating SOCOPs with an interior solution also ensures that the problem has an optimal solution with zero duality gap. Algorithm 12 is a modification of Algorithms 1 and 4 to generate SOCOPs with specific interior solutions. ``` 1:Choose dimensions \(m<n\) 2:Choose \(n_{1},\ldots,n_{p}\) such that \(n=n_{1}+\cdots+n_{p}\) 3:Generate \((x^{0},s^{0})\) such that \(x^{0}\in\mathbb{L}_{+}^{n}\) and \(s^{0}\in\mathbb{L}_{+}^{n}\) 4:Generate \(A\in\mathbb{R}^{m\times n}\) 5:Generate \(y^{0}\in\mathbb{R}^{m}\) 6:Calculate \(b=Ax^{0}\) and \(c=A^{\top}y^{0}+s^{0}\) 7:Return SOCOP \((A,b,c)\) with interior solution \((x^{0},y^{0},s^{0})\) ``` **Algorithm 12** Generating SOCO problems with a specific interior solution To have an interior solution \(x^{0}\), we must generate \((x^{0})^{i}\) for \(i=1,\ldots,p\), such that \(((x^{0})_{2}^{i},\ldots,(x^{0})_{n_{i}}^{i})\in\mathbb{R}^{n_{i}-1}\) and \((x^{0})_{1}^{i}>\|(x^{0})_{2:n_{i}}^{i}\|\). One way to generate such a solution is to generate \((x^{0})^{i}\in\mathbb{R}^{n_{i}}\), and update it using the rule \[(x^{0})_{1}^{i}=\|(x^{0})_{2:n_{i}}^{i}\|+\|(x^{0})_{1}^{i}\|.\] Similar to LO, if the matrix \(A\) is generated randomly, then the probability of that all rows of \(A\) are linearly independent is one. In addition, the user can generate a desired matrix \(A\) with specific characteristics such as sparsity, condition number, and norm. #### 4.2.2 SOCOPs with a Predefined Optimal Solution For SOCOPs, the optimal partition is a bit more complicated than for LO and SDO. 
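Before describing this partition in detail, note that Algorithm 12 is simple enough to sketch in a few lines of NumPy. The fragment below is a minimal illustration under assumed, illustrative naming conventions (it is not the interface of the accompanying package); it generates one strictly interior point per cone using the update rule given above and then constructs \(b\) and \(c\) accordingly.

```python
import numpy as np

def generate_socop_with_interior(m, cone_dims, seed=0):
    """Minimal sketch of Algorithm 12: return (A, b, c) together with an
    interior primal-dual point (x0, y0, s0).  All names are illustrative."""
    rng = np.random.default_rng(seed)
    n = sum(cone_dims)

    def interior_point():
        # One strictly interior point per second-order cone (w.p. 1),
        # using the update rule x_1 <- ||x_{2:n_i}|| + |x_1|.
        blocks = []
        for ni in cone_dims:
            v = rng.standard_normal(ni)
            v[0] = np.linalg.norm(v[1:]) + abs(v[0])
            blocks.append(v)
        return np.concatenate(blocks)

    x0, s0 = interior_point(), interior_point()
    A = rng.standard_normal((m, n))   # rows linearly independent w.p. 1
    y0 = rng.standard_normal(m)
    b = A @ x0                        # primal feasibility of x0
    c = A.T @ y0 + s0                 # dual feasibility of (y0, s0)
    return A, b, c, (x0, y0, s0)

A, b, c, (x0, y0, s0) = generate_socop_with_interior(m=3, cone_dims=[3, 4, 5])
```

With such an interior solution fixed, the remaining degrees of freedom in \(A\) can be used to control sparsity or conditioning, exactly as discussed above. We now return to the optimal partition for SOCO.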
The index set \([p]\) is partitioned into the sets \((\mathcal{B},\mathcal{N},\mathcal{R},\mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3})\) defined as \[\mathcal{B} \coloneqq\{i:x_{1}^{i}>\|x_{2:n_{i}}^{i}\|_{2},\text{ for some }x\in\mathcal{P}_{SOCO}^{*}\},\] \[\mathcal{N} \coloneqq\{i:s_{1}^{i}>\|s_{2:n_{i}}^{i}\|_{2},\text{ for some }(y,s)\in\mathcal{D}_{SOCO}^{*}\},\] \[\mathcal{R} \coloneqq\{i:x_{1}^{i}=\|x_{2:n_{i}}^{i}\|_{2}>0,\ s_{1}^{i}=\|s_{2:n_{i}}^{i}\|_{2}>0,\text{ for some }(x,y,s)\in\mathcal{P}_{SOCO}^{*}\times\mathcal{D}_{SOCO}^{*}\},\] \[\mathcal{T}_{1} \coloneqq\{i:x^{i}=s^{i}=0,\text{ for all }(x,y,s)\in\mathcal{P}_{SOCO}^{*}\times\mathcal{D}_{SOCO}^{*}\},\] \[\mathcal{T}_{2} \coloneqq\{i:s^{i}=0,\text{ for all }(y,s)\in\mathcal{D}_{SOCO}^{*},\ x_{1}^{i}=\|x_{2:n_{i}}^{i}\|_{2}>0,\text{ for some }x\in\mathcal{P}_{SOCO}^{*}\},\] \[\mathcal{T}_{3} \coloneqq\{i:x^{i}=0,\text{ for all }x\in\mathcal{P}_{SOCO}^{*},\ s_{1}^{i}=\|s_{2:n_{i}}^{i}\|_{2}>0,\text{ for some }(y,s)\in\mathcal{D}_{SOCO}^{*}\}.\] For further discussion regarding the optimal partition in SOCOPs, we refer the reader to [23]. From here, we can develop Algorithm 13, which is a generalization of Algorithm 2, for generating random SOCOPs with specific optimal solutions. **Remark 14**.: Algorithm 13 provides a SOCOP with an optimal solution, and that optimal solution may not be maximally complementary. Thus, the optimal partition \((\mathcal{B},\mathcal{N},\mathcal{R},\mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3})\) of the generated problem may differ from \((B,N,R,T_{1},T_{2},T_{3})\), and we only have \[B\subseteq\mathcal{B},\ N\subseteq\mathcal{N},\ R\subseteq\mathcal{R},\ \mathcal{T}_{1}\subseteq T_{1},\ \mathcal{T}_{2}\subseteq T_{2}\cup T_{1},\text{ and }\mathcal{T}_{3}\subseteq T_{3}\cup T_{1}.\] Similar to the LOP and SDOP generators, one can generate a random \(A\) with specific characteristics. The norm and properties of \((x^{*},y^{*},s^{*})\) are directly controllable. Also, the norm of \((b,c)\) can be predetermined by scaling \((x^{*},y^{*},s^{*})\) appropriately and carefully, since determining the norm of all parameters and the optimal solution simultaneously is possible if the equations in line 11 of Algorithm 13 hold. It is easy to see that \((x^{*},y^{*},s^{*})\) is optimal, since \(x^{*}\circ s^{*}=0\), and it is feasible by construction. In the next section, we discuss how to generate a SOCOP with a maximally complementary solution. #### 4.2.3 SOCOPs with a Predefined Maximally Complementary Solution Since the optimal partition can affect the performance of algorithms for solving SOCOPs, similar to SDO, we are interested in generating problems with predetermined optimal partitions. To do so, we adapt our instance generator for SDOPs with a maximally complementary solution to SOCO in Algorithm 14. Let \(A_{i,j}^{p}\) be the element in row \(i\) and column \(j\) of the part (columns) of \(A\) that corresponds to cone \(p\). We also use a superscript to indicate the partition, e.g., \(A^{B}\) denotes the columns of \(A\) that correspond to partition \(B\). 
``` 1:Choose dimensions \(m<n\) 2:Choose \(n_{1},\ldots,n_{p}\) such that \(n=n_{1}+\cdots+n_{p}\) 3:Partition the index set \([p]\) to \((B,N,R,T_{1},T_{2},T_{3})\) such that \[|T_{2}|+1<m\leq|B|+|R|+|T_{2}|\] 4:For \(i\in B\), \((s^{*})^{i}=0\) and generate \((x^{*})^{i}\in\mathbb{R}^{n_{i}}\) such \((x^{*})^{i}_{1}>\|(x^{*})^{i}_{2:n_{i}}\|\) 5:For \(i\in N\), \((x^{*})^{i}=0\) and generate \((s^{*})^{i}\in\mathbb{R}^{n_{i}}\) such \((s^{*})^{i}_{1}>\|(s^{*})^{i}_{2:n_{i}}\|\) 6:For \(i\in T_{1}\), \((s^{*})^{i}=0\) and \((x^{*})^{i}=0\) 7:For \(i\in T_{2}\), \((s^{*})^{i}=0\) and generate \((x^{*})^{i}\in\mathbb{R}^{n_{i}}\) such \((x^{*})^{i}_{1}=\|(x^{*})^{i}_{2:n_{i}}\|>0\) 8:For \(i\in T_{3}\), \((x^{*})^{i}=0\) and generate \((s^{*})^{i}\in\mathbb{R}^{n_{i}}\) such \((s^{*})^{i}_{1}=\|(s^{*})^{i}_{2:n_{i}}\|>0\) 9:For \(i\in R\), generate \((x^{*})^{i}_{2:n_{i}}\in\mathbb{R}^{n_{i}-1}\) and \(\delta\in\mathbb{R}\) and build \[(x^{*})^{i}=\begin{pmatrix}\|(x^{*})^{i}_{2:n_{i}}\|\\ (x^{*})^{i}_{2:n_{i}}\end{pmatrix}\text{, and }(s^{*})^{i}=\delta\begin{pmatrix}\|(x^{*}) ^{i}_{2:n_{i}}\|\\ -(x^{*})^{i}_{2:n_{i}}\end{pmatrix}\] 10:Generate \(y^{*}\in\mathbb{R}^{m}\) 11:Generate \(A\in\mathbb{R}^{m\times n}\) such that * First row: \[A^{p}_{1,1}>0,A^{p}_{1,j} =0\text{ for }j=2,\ldots,n_{p},\text{ and }p\in T_{1}\cup T_{3}\] \[A^{p}_{1,j} =0\text{ for }j=1,\ldots,n_{p},\text{ and }p\in B\cup R\cup T_{2}\] \[A^{p}_{1,j} \in\mathbb{R}\text{ for }j=1,\ldots,n_{p},\text{ and }p\in N\] * Row 2 to \(|T_{2}|+1\): \[A^{p}_{p,1:n_{p}}=\begin{bmatrix}-1&\frac{(x^{*}_{2:n_{p}})^{\top}}{\|x^{*}_{2:n _{p}}\|}\end{bmatrix},A^{p}_{k,1:n_{k}}=0\text{ for all }k\neq q,\text{ and }p\in T_{2}\] * The other rows should be generated such that \(\text{rank}([A^{B},A^{R},A^{T_{2}}])=m\). 12:Calculate \(b=Ax^{*}\) and \(c=A^{\top}y^{*}+s^{*}\) 13:Return SOCOP \((A,b,c)\) with maximally complementary solution \((x^{*},y^{*},s^{*})\) ``` **Algorithm 14** Generating SOCOPs with a specific maximally complementary solution Compared to Algorithm 13, Algorithm 14 imposes more restrictions on how \(A\) is generated. **Theorem 4.1**.: _For any SOCOP \((A,b,c)\) generated by Algorithm 14, the generated optimal solution \((x^{*},y^{*},s^{*})\) is maximally complementary._ Proof.: One can verify that \(b_{1}=0\), and the first row of \(A\) enforces any optimal solution \(\bar{x}\) to satisfy \[\bar{x}^{p}=0\text{ for all }p\in T_{1}\cup T_{3}.\] From constraint 2 to \(|T_{2}|+1\), we add a constraint for each cone \(p\) in partition \(T_{2}\) in which coefficients are zero for all variables except for variables in cone \(p\). Since the corresponding right-hand side is zero and the coefficients are the normal vector to the cone \(p\) at the point \(x^{*}\), all feasible solutions must lie on the ray which is on the boundary of the cone \(p\) and passing through the point \(x^{*}\). Thus, for any optimal solution \(\bar{x}\), we have \[\bar{x}_{1}^{p}=\|x_{2:n_{p}}^{p}\|\text{ for all }p\in T_{2}.\] Up to this point, we have shown that \(x^{*}\in ri(\mathcal{P}^{*})\), and the last part of the proof is to establish that the dual problem has a unique optimal solution \((y^{*},s^{*})\). To prove it, let assume that it has another optimal solution \((\bar{y},\bar{s})\), and \(X^{*}=\operatorname{diag}(x^{*})\). Then, we have \[X^{*}A^{\top}(\overline{y}-y^{*})=-X^{*}(\overline{s}-s^{*})=0.\] Since \(A\) generated in a way that the rank of \(X^{*}A^{\top}\) is \(m\), we have \(\overline{y}=y^{*}\). 
We can conclude that \((x^{*},y^{*},s^{*})\) is a maximally complementary solution for the generated problem. **Corollary 4.2**.: _For any SOCOP generated by Algorithm 14, the optimal partition \((\mathcal{B},\mathcal{N},\mathcal{R},\mathcal{T}_{1},\mathcal{T}_{2}, \mathcal{T}_{3})\) is equal to \((B,N,R,T_{1},T_{2},T_{3})\)._ As expected, generating SOCOPs with predefined optimal partitions restricts how matrix \(A\) is generated. However, some components of \(A\) are not restricted and enable the user to control the properties of \(A\). This is discussed next. #### 4.2.4 SOCOPs with Optimal and Interior Solutions We can extend Algorithm 13 to provide both specific interior and optimal solutions by adding one row and column to the matrix \(A\). We aim to generate optimal and interior solutions in a general manner, but we need to enforce the orthogonality condition: \[(x^{0}-x^{*})^{\top}(s^{0}-s^{*})=0. \tag{9}\] Note that this is a natural requirement; similar to LOPs, we have \((x^{*}-x^{0})\in\operatorname{Lin}^{\perp}(A)\) and \((s^{*}-s^{0})\in\operatorname{Lin}(A)\). Theorem 4.4 shows that the claimed properties of \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) are indeed correct. Before presenting Theorem 4.4, we need to verify the orthogonality properties of the generated solution. **Lemma 4.3**.: _For any \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) generated by Algorithm 15, we have_ \[(x^{0}-x^{*})^{\top}(s^{0}-s^{*})=0.\] Proof.: Similar to the proof of Lemma 2.1, Steps 9 and 11 of Algorithm 15 ensure that the orthogonality condition holds. Using Lemma 4.3, the following theorem shows that the generated problem satisfies the desired properties. **Theorem 4.4**.: _Let \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) be generated by Algorithm 15. Then,_ \[x^{*}\circ s^{*} =0,\] \[Ax^{*} =b,\] \[A^{\top}y^{*}+s^{*} =c,\] \[Ax^{0} =b,\] \[A^{\top}y^{0}+s^{0} =c,\] \[x^{*}\in\mathbb{L}^{n+1},\ s^{*}\in\mathbb{L}^{n+1},\ x^{0}\in \mathbb{L}^{n+1}_{+},\ s^{0}\in\mathbb{L}^{n+1}_{+}.\] _That is, \((x^{0},y^{0},s^{0})\) and \((x^{*},y^{*},s^{*})\) are interior and optimal solutions, respectively, for the generated SOCOP \((A,b,c)\)._ Proof.: The proof is similar to the proof of Theorem 2.2. Compared to the SDOP generators, the SOCOP generators are computationally simpler since they do not require generating random orthonormal or positive semidefinite matrices. Let \(t_{r}\) be the number of arithmetic operations required to generate a number randomly. To generate orthonormal or positive semidefinite matrices, we need to use a decomposition method, which requires \(\mathcal{O}(n^{3})\) arithmetic operations, as discussed in the appendix. In the general case, the LOP and SOCOP generators require \(\mathcal{O}(n^{2}t_{r})\) arithmetic operations, while the SDOP generators require \(\mathcal{O}(n^{3}t_{r})\) arithmetic operations. It should be mentioned that if we want to generate a random matrix \(A\) in LOPs and SOCOPs with specific condition numbers, then we need to use decomposition methods, and the complexity of the LOP and SOCOP generators increases to \(\mathcal{O}(n^{3}t_{r})\) arithmetic operations. #### 4.2.5 SOCOPs with Predefined Interior and Maximally Complementary Solutions To generate a SOCOP with both interior and maximally complementary solutions, we can use Algorithm 15 and, in its first step, use Algorithm 14, which provides a SOCOP with a maximally complementary solution. 
The only difference is that we should choose the partition such that the last cone is in partition \(N\). By this modification, the added column in Step 6 of Algorithm 15 will be in partition \(N\), which satisfies all the restrictions needed to keep \((x^{*},y^{*},s^{*})\) maximally complementary. In this way, we can generate a SOCOP with an interior solution and a predetermined optimal partition. ## 5 Implementation All of the generators discussed above are implemented in a Python package, which is available open source at [https://github.com/qcol-lu/qipm](https://github.com/qcol-lu/qipm). This package gives the option of prescribing the norms of vectors and the condition numbers and sparsity of the matrices. In addition, several versions of interior point methods, such as feasible/infeasible, exact/inexact, and long-step/short-step/predictor-corrector, are implemented and available for experimentation. There is also an option to choose the solver of the Newton system; one may choose classical or quantum linear system algorithms. ## 6 Conclusion We develop and implement several random instance generators for LO, SDO, and SOCO problems with specific optimal and/or interior solutions. Because of their high level of controllability, these generators enable users to vary different features of the problem, such as sparsity and condition number, and thus to study the performance of different algorithms systematically. In addition, we propose SDOP and SOCOP generators with a predefined optimal partition, which can be used to generate computationally challenging instances. The proposed generators can also be used to study the average performance of algorithms for solving LO, SDO, and SOCO problems under different probability distributions for the input data and the optimal and interior solutions. Future research directions include extending the construction of these generators to other classes of conic, polynomial, and nonlinear optimization problems. A useful direction for extending the proposed generators is to generate hard problems that are challenging for algorithms and solvers, e.g., LOPs that are primal or dual unbounded. For SDO and SOCO, it is worth exploring the development of generators that produce instances with a zero duality gap but an optimal solution that is not attained, or instances with a nonzero duality gap. ## 7 Acknowledgement This work is supported by the Defense Advanced Research Projects Agency as part of the project W911NF2010022: _The Quantum Computing Revolution and Optimization: Challenges and Opportunities_.
2309.02274
A Comparison of Residual-based Methods on Fault Detection
An important initial step in fault detection for complex industrial systems is gaining an understanding of their health condition. Subsequently, continuous monitoring of this health condition becomes crucial to observe its evolution, track changes over time, and isolate faults. As faults are typically rare occurrences, it is essential to perform this monitoring in an unsupervised manner. Various approaches have been proposed not only to detect faults in an unsupervised manner but also to distinguish between different potential fault types. In this study, we perform a comprehensive comparison between two residual-based approaches: autoencoders, and the input-output models that establish a mapping between operating conditions and sensor readings. We explore the sensor-wise residuals and aggregated residuals for the entire system in both methods. The performance evaluation focuses on three tasks: health indicator construction, fault detection, and health indicator interpretation. To perform the comparison, we utilize the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dynamical model, specifically a subset of the turbofan engine dataset containing three different fault types. All models are trained exclusively on healthy data. Fault detection is achieved by applying a threshold that is determined based on the healthy condition. The detection results reveal that both models are capable of detecting faults with an average delay of around 20 cycles and maintain a low false positive rate. While the fault detection performance is similar for both models, the input-output model provides better interpretability regarding potential fault types and the possible faulty components.
Chi-Ching Hsu, Gaetan Frusque, Olga Fink
2023-09-05T14:39:27Z
http://arxiv.org/abs/2309.02274v1
# A Comparison of Residual-based Methods on Fault Detection ###### Abstract An important initial step in fault detection for complex industrial systems is gaining an understanding of their health condition. Subsequently, continuous monitoring of this health condition becomes crucial to observe its evolution, track changes over time, and isolate faults. As faults are typically rare occurrences, it is essential to perform this monitoring in an unsupervised manner. Various approaches have been proposed not only to detect faults in an unsupervised manner but also to distinguish between different potential fault types. In this study, we perform a comprehensive comparison between two residual-based approaches: autoencoders, and the input-output models that establish a mapping between operating conditions and sensor readings. We explore the sensor-wise residuals and aggregated residuals for the entire system in both methods. The performance evaluation focuses on three tasks: health indicator construction, fault detection, and health indicator interpretation. To perform the comparison, we utilize the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dynamical model, specifically a subset of the turbofan engine dataset containing three different fault types. All models are trained exclusively on healthy data. Fault detection is achieved by applying a threshold that is determined based on the healthy condition. The detection results reveal that both models are capable of detecting faults with an average delay of around 20 cycles and maintain a low false positive rate. While the fault detection performance is similar for both models, the input-output model provides better interpretability regarding potential fault types and the possible faulty components. ## 1 Introduction Determining the health state of complex industrial systems, such as turbofan engines, under different operating conditions has become feasible due to the abundance of condition monitoring data collected by diverse sensors. A health state is usually described by a health indicator or a condition indicator, which is a value that reflects system health conditions and health status in a predictable way as a system degrades (Lei et al., 2018). In complex systems, inferring these indicators and monitoring their evolution over time provide a more comprehensive understanding of the system's health and enable effective condition monitoring. Typically, a distinction is made between condition indicators and health indicators. A condition indicator refers to a specific feature within system data that exhibits predictable changes as the system undergoes degradation or operates in different operational modes (Fink et al., 2020). It encompasses any feature that proves valuable in differentiating normal operation from faulty or any deviation from normal operation. Health indicators, in contrast, integrate multiple condition indicators into a single value, providing the end user with a comprehensive health status of the component. Different approaches have been proposed to extract and learn the condition and health indicators of a system. These approaches can be categorised into three main categories: feature-based, one-class classification-based (OCC-based), and residual-based methods. Feature-based methods primarily focus on condition indicators. 
These methods identify relevant features that exhibit predictable changes as the system deteriorates, and they detect early-stage faults by directly applying the threshold method to the feature values. For instance, the relative root mean square (RMS) value of the acceleration signals from bearings serves as an indicator of wear evolution (Pan et al., 2020). While this approach is straightforward, it requires expert knowledge and can be sensitive to varying operating conditions (Saufi et al., 2019). While the feature-based methods for extracting condition indicators focus on expert-based determination of one or several features that capture the condition evolution of different components, OCC-based methods focus on learning a global indicator that represents the health state of the systems (Michau, Hu, Palme, & Fink, 2020). OCC-based methods are particularly suitable for setups in which faulty samples are missing during training. They are trained on data from one class (usually healthy data). While OCC outputs can provide binary health information (healthy or unhealthy), measuring the distance to the healthy data can effectively infer the evolution of the degradation. This distance can be interpreted as a health indicator, which can also be derived for subsystems by considering a subset of condition monitoring signals related to the specific subsystem. It can then be utilized to monitor the evolution of health conditions, detect anomalies, or distinguish between different severity levels of faults (Michau, Palme, & Fink, 2017). The third direction encompasses residual-based methods, which extract health indicators based on the residuals. The residuals are the differences between the measured values and the predicted outputs, serving as indicators of any deviation from the healthy training dataset (Arias Chao, Kulkarni, Goebel, & Fink, 2019). These methods can be categorised into two main types: autoencoders and input-output models. Autoencoders are trained to reconstruct their own inputs, whereas input-output models establish mappings between operating conditions and sensor readings. For example, in the case of a turbofan engine, operating conditions are used as inputs to the full authority digital electronic control (FADEC) to derive monitored signals as health indicators (Rausch, Goebel, Eklund, & Brunell, 2007). Both input-output models and autoencoders are typically trained solely on healthy data and, as a result, learn the healthy data distribution. Consequently, when presented with anomalous samples stemming from a different data distribution, they generate significant residuals. In residual-based methods, there are various approaches to calculating residuals, particularly for aggregating the residuals of multivariate condition monitoring signals. One of the most commonly used methods for aggregating residuals is to compute their sum, offering a comprehensive representation of the overall global health condition (Guo, Yu, Duan, Gao, & Zhang, 2022). Another approach to utilizing the residuals is to bypass their aggregation and instead use them individually. By analyzing the residuals individually, it becomes possible to identify the specific signals most affected by each fault type (Reddy, Sarkar, Venugopalan, & Giering, 2016; Michau et al., 2020). This approach enables fault segmentation and fault diagnostics, as different faults tend to impact distinct sets of signals. 
However, since residual-based models are trained solely on healthy data and residuals are calculated based on the distance to the training data distribution, they are unable to differentiate between a fault and a new operating condition. In other words, high residuals may not necessarily indicate deteriorating health conditions of a system but rather the presence of a novel operating condition. This presents a significant challenge in accurately inferring the health state or conducting further downstream tasks, such as fault detection and fault segmentation. While several residual-based approaches have been applied to different case studies (Arias Chao et al., 2019; Lovberg, 2021; Darrah, Lovberg, Frank, Biswas, & Quinones-Gruiero, 2022), to the best of our knowledge, their performances have not been compared. In this study, we compare two residual-based methods: autoencoders and input-output models. We use a simulated turbofan dataset with three different faulty engine components exhibiting degradation behavior. We evaluate their performance by first constructing the health condition using two types of residuals as health indicators. Subsequently, we perform fault detection and interpret the constructed health indicators. ## 2 Method We present in this section the overall proposed testing framework, as summarised in Figure 1. First, in Section 2.1 we present two strategies for calculating residuals, enabling us to identify instances when the data distribution deviates from the healthy distribution. Second, we describe how the residuals can be used to construct health indicators in Section 2.2. We show in Section 2.3 how we infer the fault initiation from the constructed health indicators. ### Residual Calculating Models #### Autoencoder Model (AE Model) One commonly used residual-based model is the autoencoder. It aims to encode inputs into a latent space with the encoder \(E_{\theta_{e}}(\bullet)\) while preserving important information, and then decode them back to their original form using the decoder \(D_{\theta_{d}}(\bullet)\). Here, \(\theta_{e}\) and \(\theta_{d}\) represent the model parameters of the encoder and decoder, respectively. In our case, we consider a multivariate dataset containing several sensors \(\mathbf{z}_{t}\in\mathbb{R}^{N_{z}}\), where \(t\) is the time index and \(N_{z}\) is the number of sensors. To learn the distribution of the healthy samples, the autoencoder is trained exclusively on samples captured during the early stages of the system's lifecycle, denoted as \(t\in\{1,...,T_{H}\}\). In this period, we assume that the system is in a healthy state. We denote by \(\mathbf{r}^{\rm AE}\) the residual of the AE model, which represents the difference between the input signal and its reconstruction. Mathematically, it can be written as: \[\mathbf{r}_{t}^{\rm AE}=\mathbf{z}_{t}-D_{\theta_{d}}(E_{\theta_{e}}(\mathbf{z}_{t})). \tag{1}\] By training the autoencoder, we aim to find the parameters \(\theta_{e}\) and \(\theta_{d}\) that minimise the residuals in terms of mean square error. With \(||\bullet||_{F}\) the Frobenius norm, the optimisation problem of the autoencoder model can be written as: \[\operatorname*{argmin}_{\theta_{e},\theta_{d}}\quad\frac{1}{T_{H}}\sum_{t=1}^{T_{H}}||\ \mathbf{r}_{t}^{\rm AE}\ ||_{F}. \tag{2}\] #### Operating-conditions-based Model (OC Model) In addition to the autoencoders, we also evaluate an input-output method that maps the operating conditions to the sensor readings (Lovberg, 2021; Darrah et al., 2022). 
We refer to this model as the operating-conditions-based model (OC Model). The OC model is based on operating condition descriptors that characterize the state of a system. For instance, in an industrial bearing, these descriptors include rotating speed and static loading. For a turbofan engine, the considered state descriptors include altitude, flight Mach number, throttle-resolver angle, and the total temperature at the engine fan inlet. The multivariate time series \(\mathbf{z}_{t}\) can be subdivided into operating condition descriptors \(\mathbf{w}_{t}\in\mathbb{R}^{N_{w}}\) and sensor readings \(\mathbf{x}_{t}\in\mathbb{R}^{N_{x}}\). Here, \(N_{w}\) and \(N_{x}\) represent the number of operating condition descriptors and sensor readings, respectively, and \(N_{z}=N_{w}+N_{x}\). The OC model, denoted as \(M_{\theta_{m}}(\bullet)\), aims to establish a mapping between the operating conditions and sensor readings by learning the functional relationship between the two. The OC model is expected to be more robust to variations in operating conditions compared to the autoencoders, where the operating condition descriptors are part of the reconstructed signals. We define the residual of the OC model as the difference between the real and estimated sensor readings: \[\mathbf{r}_{t}^{\mathrm{OC}}=\mathbf{x}_{t}-M_{\theta_{m}}(\mathbf{w}_{t}). \tag{3}\] The OC model is trained by finding the parameters \(\theta_{m}\) that minimise the residuals. The corresponding optimisation problem can be written as follows: \[\operatorname*{argmin}_{\theta_{m}}\quad\frac{1}{T_{H}}\sum_{t=1}^{T_{H}}|| \ \mathbf{r}_{t}^{\mathrm{OC}}\ ||_{F}. \tag{4}\] ### Health Indicators In this work, we consider the residuals defined in Equation 1 and Equation 3 as a basis for computing the health indicators. We assume that the training dataset is representative of all the operating conditions. Consequently, changes in operating conditions will not be detected as anomalies, and an increase in the magnitude of the residuals will be associated with faulty system conditions. Figure 1: Overall architecture of the testing framework. The framework includes residual calculating models, health indicator construction, and a fault detection algorithm. We assess the performance of each health indicator based on detection performance, data visualization, and the interpretive capability associated with the machine’s condition. We first consider two aggregated health indicators, denoted as \(\mathbf{h}^{\text{A-AE}}\) and \(\mathbf{h}^{\text{A-OC}}\), which represent the norm of the residuals for the AE and OC models, respectively. These indicators combine the residual information from each sensor and can be written at any time \(t\) as follows: \[h_{t}^{\text{A-AE}}= ||\ \mathbf{r}_{t}^{\text{AE}}\ ||_{F}, \tag{6}\] \[h_{t}^{\text{A-OC}}= ||\ \mathbf{r}_{t}^{\text{OC}}\ ||_{F}. \tag{7}\] We also propose two sensor-wise multivariate health indicators, denoted as \(\mathbf{h}^{\text{S-AE}}\) and \(\mathbf{h}^{\text{S-OC}}\). These indicators correspond to the absolute residuals of the AE and OC models, respectively. By considering sensor-wise information, we aim to have indicators that are easier to interpret and more precise for the fault detection task. 
Using the absolute value operator \(|\ \bullet\ |\), the health indicators \(\mathbf{h}^{\text{S-AE}}\) and \(\mathbf{h}^{\text{S-OC}}\) can be written at any time \(t\) and for sensor \(i\) as follows: \[h_{it}^{\text{S-AE}}= |\ r_{it}^{\text{AE}}\ |, \tag{8}\] \[h_{it}^{\text{S-OC}}= |\ r_{it}^{\text{OC}}\ |. \tag{9}\] ### Fault Detection We propose to use a fault detection algorithm based on a threshold determined by the reconstruction performance of the models on the healthy validation dataset. Considering any of the previously presented health indicators \(\mathbf{h}\), we define the mean \(\mu_{i}\) and standard deviation \(\sigma_{i}\) characterising the healthy condition for sensor \(i\) as follows: \[\mu_{i}=\frac{1}{T_{H}}\sum_{t=1}^{T_{H}}h_{it}, \tag{10}\] \[\sigma_{i}^{2}=\frac{1}{T_{H}}\sum_{t=1}^{T_{H}}{(h_{it}-\mu_{i})}^{2}. \tag{11}\] Note that for an aggregated health indicator, there is only one set of statistics \(\mu\) and \(\sigma\) that needs to be computed. Thus, we can define the threshold \(\tau_{i}\) for sensor \(i\) as: \[\tau_{i}=\mu_{i}+3\sigma_{i}. \tag{12}\] We also divide the time index into \(C\) cycles. A cycle is denoted as \(n_{c}\), and it corresponds to a series of time indices \(t\in\{T_{c},T_{c}+1,...,T_{c+1}-1\}\), that is, a segment of the time samples. A cycle can correspond to a full rotation of a bearing or the flight duration of a turbofan engine. The average health indicator during cycle \(n_{c}\) for sensor \(i\) is denoted as \(\bar{h}_{i}(n_{c})\) and is calculated as follows: \[\bar{h}_{i}(n_{c})=\frac{1}{T_{c+1}-T_{c}}\sum_{t=T_{c}}^{T_{c+1}}h_{it} \tag{13}\] To avoid false alarms, we introduce the waiting cycle number \(N_{\text{wait}}\). The fault is detected and the alarm is raised only when, for at least one sensor \(i\), the corresponding averaged health indicator \(\bar{h}_{i}(n_{c})\) is larger than the threshold \(\tau_{i}\) for \(N_{\text{wait}}\) consecutive cycles. For convenience, we denote \(n_{0}\) as the cycle where the fault is detected and the alarm is raised. ## 3 Case Study The dataset we used to evaluate the effectiveness of our proposed approach is the N-CMAPSS dataset (Arias Chao et al., 2021). This dataset was synthetically generated from the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dynamical model. The N-CMAPSS dataset incorporates real flight operating conditions recorded from commercial jets as inputs to the simulation model. It consists of 14 sensor readings \(\mathbf{x}\), which are shown in Figure 2 with six main components: fan, low pressure compressor (LPC), high pressure compressor (HPC), low pressure turbine (LPT), high pressure turbine (HPT), and burner (Frederick, DeCastro, & Litt, 2007; Arias Chao et al., 2021). Faults are artificially introduced in simulation during the flights. In addition to the sensor readings, the dataset also provides four operating condition descriptors \(\mathbf{w}\) which describe the state of the flight. Figure 3 presents an instance of a unit for one flight cycle, illustrating these four descriptors. While certain research studies incorporate two additional auxiliary descriptors, namely flight class and positional variable (Lovberg, 2021; Darrah et al., 2022), we have chosen to exclusively utilize the original four operating condition descriptors. The descriptions of the sensor readings \(\mathbf{x}\) and the operating condition descriptors \(\mathbf{w}\) are presented in Table 1. 
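Before turning to the data, it may help to see how Equations (10)-(13) and the waiting-cycle rule of Section 2.3 translate into code. The following NumPy sketch is our own minimal rendering (not the authors' implementation); for brevity, it estimates the healthy statistics from cycle-averaged indicators rather than from the raw time samples.

```python
import numpy as np

def detect_fault(h_bar, healthy_cycles, n_wait=3):
    """h_bar: array of shape (C, N_sensors) with the cycle-averaged health
    indicators of one unit.  Returns the alarm cycle n_0, or None."""
    healthy = h_bar[:healthy_cycles]          # cycles assumed to be healthy
    mu = healthy.mean(axis=0)
    sigma = healthy.std(axis=0)
    tau = mu + 3.0 * sigma                    # per-sensor threshold, Eq. (12)

    exceeded = h_bar > tau                    # boolean, shape (C, N_sensors)
    consecutive = np.zeros(h_bar.shape[1], dtype=int)
    for cycle, row in enumerate(exceeded):
        consecutive = np.where(row, consecutive + 1, 0)
        if (consecutive >= n_wait).any():     # N_wait consecutive exceedances
            return cycle                      # alarm cycle n_0
    return None
```

For an aggregated health indicator, the same logic applies with a single column.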
The entire dataset is partitioned into multiple sub-datasets, each comprising run-to-failure trajectories of several units affected by distinct fault types. In this work, we focus on sub-datasets DS04, DS05, and DS07. These sub-datasets are chosen because their units are impacted by fault types that affect only a single component, rendering them well-suited for evaluating fault segmentation performance. Figure 2: C-MAPSS model schematic representation with sensor position within the engine, adapted from (Arias Chao et al., 2021) Other subsets contain units affected by fault types that involve multiple components. Specifically, the fault component for DS04 is the fan, for DS05 it is the HPC, and for DS07 it is the LPT. Each sub-dataset contains 10 turbofan engines with the same fault type. ### Pre-processing All sensors undergo a downsampling process by a factor of 10 to reduce data size and computational costs. Each sensor reading and each descriptor are standardised to have a zero mean and unit standard deviation. This standardisation process is carried out on the training set, and the resulting parameters are then applied to the test and validation sets. The training, validation, and test setup is explained in Section 3.3. For this study, we solely consider the cruising phase of the flight, as it exhibits a more stable behavior in comparison to the take-off or landing phases. The cruising phase is defined as the period when the normalised flight altitude exceeds 0.85. The normalised flight altitude is calculated by dividing all altitude values by the highest altitude within the cycle. The fault detection waiting cycle \(N_{\text{wait}}\) is fixed at 3 cycles. ### Applied Neural Network Architectures For the OC model, we use two 128-neuron layers. For the AE model, we consider three hidden layers with 128, 8, and 128 neurons, respectively. All activation functions used in the models are rectified linear units (ReLU), except for the final layer of both models, which employs a linear activation function. ### Training Setup The training was performed for 70 epochs with a batch size of 64, an early-stopping patience of 10 epochs, and the Adam optimizer (Kingma and Ba, 2014) with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and a learning rate of 0.001. We arbitrarily selected the first 16 cycles of each unit to be the healthy data for training. The models are trained using healthy data from all 30 units from DS04, DS05, and DS07. The remaining cycles are then assigned to the test set for evaluation. Within the training set, we randomly select 15% as a validation set for deciding an early-stopping training epoch. For each setting, we train the models 5 times with the validation set randomly split, and the results are presented as the average over the 5 realisations. ### Evaluation Metrics To assess the fault detection results of a single engine or unit \(u\), we consider the detection delay \(d_{u}\), computed as the difference between the cycle that raises the alarm \(n_{0}\) and the ground-truth fault cycle \(n_{\mathrm{true}}\). It is written as: \[d_{u}=n_{0}-n_{\mathrm{true}} \tag{14}\] In case the detection delay is negative (\(d_{u}<0\)), it corresponds to a false positive alarm. An effective detection algorithm should avoid generating false positive alarms, as they lead to the unnecessary consumption of resources. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \# & Symbol & Description & Units \\ \hline & \multicolumn{3}{c|}{sensor readings \(\mathbf{x}\)} \\ \hline 1 & T24 & Total temperature at LPC outlet & \({}^{\circ}\)R \\ 2 & T30 & Total temperature at HPC outlet & \({}^{\circ}\)R \\ 3 & T48 & Total temperature at HPT outlet & \({}^{\circ}\)R \\ 4 & T50 & Total temperature at LPT outlet & \({}^{\circ}\)R \\ 5 & P15 & Total pressure in bypass-duct & psia \\ 6 & P2 & Total pressure at fan inlet & psia \\ 7 & P21 & Total pressure at fan outlet & psia \\ 8 & P24 & Total pressure at LPC outlet & psia \\ 9 & Ps30 & Static pressure at HPC outlet & psia \\ 10 & P40 & Total pressure at burner outlet & psia \\ 11 & P50 & Total pressure at LPT outlet & psia \\ 12 & Nf & Physical fan speed & rpm \\ 13 & Nc & Physical core speed & rpm \\ 14 & Wf & Fuel flow & pps \\ \hline & \multicolumn{3}{c|}{operating condition descriptors \(\mathbf{w}\)} \\ \hline 15 & alt & altitude & ft \\ 16 & XM & Mach number & - \\ 17 & TRA & throttle-resolver angle & \% \\ 18 & T2 & total temperature at the fan inlet & \({}^{\circ}\)R \\ \hline \end{tabular} \end{table} Table 1: Sensor readings \(\mathbf{x}\) and operating condition descriptors \(\mathbf{w}\) in the N-CMAPSS dataset with their descriptions and corresponding units Figure 3: Example of operating condition descriptors \(\mathbf{w}\) for the first cycle of unit 1 in DS04 including altitude, flight Mach number, throttle-resolver angle, and total temperature at the fan inlet, downsampled by a factor of 10 As a second metric to evaluate the fault detection algorithm, we propose the false positive rate (FPR). FPR corresponds to the number of units with a negative detection delay relative to all units. Additionally, the silhouette score (Rousseeuw, 1987) is applied to evaluate the clustering results for fault segmentation. The score measures the similarity of a sample to its own cluster and to other clusters. It is calculated for a sample \(k\) using the mean intra-cluster distance \(d_{\rm intra,k}\) and the mean nearest-cluster distance \(d_{\rm nearest,k}\) and is defined as follows: \[s_{k}=\frac{d_{\rm nearest,k}-d_{\rm intra,k}}{\max(d_{\rm intra,k},d_{\rm nearest,k})}. \tag{15}\] The silhouette score is 1 if the clusters are well separated, 0 if they overlap, and -1 if at least one cluster is similar to the others. We take the mean of the scores over all samples. ## 4 Results ### Health Indicators The aggregated health indicators \(\mathbf{h}^{\rm A}\) obtained from both models, \(\mathbf{h}^{\rm A-OC}\) and \(\mathbf{h}^{\rm A-AE}\), for DS07 unit 7 are shown at the top of Figure 4. This unit is randomly selected for visualization purposes. The fault occurs at cycle 24 and is detected when the fault detection algorithm is applied to \(\mathbf{h}^{\rm A-OC}\) at cycle 40 and to \(\mathbf{h}^{\rm A-AE}\) at cycle 52. Both health indicators remain constant for approximately 10 to 15 cycles even after the fault occurs. This could be because the fault initially starts with mild severity, but as time progresses, it deteriorates and the health indicator increases at a faster rate. The bottom of Figure 4 displays both OC model health indicators \(\mathbf{h}^{\rm A-OC}\) and \(\mathbf{h}^{\rm S-OC}\). The sensor-wise residuals \(\mathbf{h}^{\rm S-OC}\) exhibit different degradation rates for different sensors. Furthermore, some trajectories show exponential behavior, increasing faster than the aggregated \(\mathbf{h}^{\rm A-OC}\) health indicator. 
This indicates that specific sensors exhibit the fault behavior before others. ### Fault Detection Performance We evaluate the fault detection performance using the proposed fault detection algorithm on the proposed health indicators. The detection delay \(d_{u}\) of each unit, the average detection delay of each model, and FPR are provided in Table 2. On average, the aggregated health indicators \(\mathbf{h}^{\rm A}\) raise an alarm 24.2 cycles and 33.4 cycles after fault initiation for the OC and AE models, respectively. In this case, the FPR is null, indicating that these indicators are robust against false alarms. However, no faults are detected for unit 3 of the DS04 dataset. The sensor-wise health indicators \(\mathbf{h}^{\rm S}\) raise alarms at earlier cycles, with an average detection at 15.5 cycles for the OC model and 17.3 cycles for the AE model. The detection occurs earlier than in the aggregated health indicators, primarily due to specific sensors that exhibit faulty behavior first. However, the sensor-wise health indicators are more sensitive to false alarms, as the FPR is not null. ### Sensor-wise Health Indicator Visualization We visualize the normalised sensor-wise health indicator of each unit in low-dimensional space using the first two principal components (PC1 and PC2) from Principal Component Analysis (PCA) in Figure 6. The visualization is performed at 10 cycles after the fault is detected (\(n_{0}+10\)). The value of 10 cycles is chosen to strike a balance, avoiding reaching the end-of-life while ensuring that the fault behavior is exhibited by multiple sensors, rather than just one. The colors in the visualization are assigned based on the ground-truth fault type, which is not available in reality. In the left figure representing the OC model, units with different fault types form distinct clusters. However, in the case of the AE model, the faults are mixed, and the clusters do not align with specific fault types. As an alternative to the sensor-wise health indicator, we also propose considering the visualization of the output from the embedding latent space layer of the AE model, referred to as AE-Embedding. However, in this case, units with similar fault types do not form coherent clusters. The silhouette scores with different numbers of cycles after the fault is detected are plotted in Figure 5 when using both sensor-wise health indicators \(\mathbf{h}^{\rm S-OC}\) and \(\mathbf{h}^{\rm S-AE}\) as inputs. Using the OC model, the silhouette score is consistently higher than when using the AE model for all cycles, indicating better clustering results. This superiority is also evident in Figure 6, where different fault types form distinct clusters. The score exhibits a decreasing trend with an increase Figure 4: Health indicators calculated from aggregated residuals \(\mathbf{h}^{\rm A}\) obtained from the OC and AE models: \(\mathbf{h}^{\rm A-OC}\) and \(\mathbf{h}^{\rm A-AE}\) for DS07 unit 7 (top). Health indicators, aggregated and sensor-wise residuals: \(\mathbf{h}^{\rm A-OC}\) and \(\mathbf{h}^{\rm S-OC}\) obtained from the OC model for the same unit (bottom). The vertical black line indicates the fault initiation at cycle 24. in cycles, suggesting that over time, the fault evolves into a higher severity state, leading to the degradation impacting all measurements. Consequently, the clusters begin to overlap, making differentiation more challenging. 
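For reference, the low-dimensional projection and the silhouette evaluation described in this subsection can be reproduced with standard scikit-learn routines. The sketch below is illustrative only; the variables holding the per-unit sensor-wise health indicators and the ground-truth fault labels are hypothetical.

```python
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

def project_and_score(H, labels):
    """H: (n_units, n_sensors) normalised sensor-wise health indicators taken
    n_c cycles after fault detection; labels: fault component per unit."""
    pcs = PCA(n_components=2).fit_transform(H)   # PC1 and PC2 for plotting
    score = silhouette_score(H, labels)          # mean silhouette over samples
    return pcs, score
```

A score close to 1 corresponds to the well-separated clusters observed for the OC model, while overlapping clusters pull the score towards 0.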
### Sensor-wise Health Indicator Interpretation The sensor-wise health indicators \(\mathbf{h}^{\mathrm{S}}\) are displayed in Figure 7 at 10 cycles after fault detection. The residuals are normalised for each unit. In the case of the OC model, a larger number of sensors have high residuals when a fan fault occurs, compared to HPC or LPT faults. Since the fan is the first component in the engine system, most downstream components are affected when a fault arises in this particular component. In DS05 with an HPC fault, high residuals are observed in the temperature measurements S2-T30 (temperature at the HPC outlet), S3-T48 (temperature at the HPT outlet), and S4-T50 (temperature at the LPT outlet). Conversely, in DS07 with an LPT fault, high residuals are concentrated only in S3-T48 and S4-T50. It is worth noting that sensor S3-T48, which measures total temperature at the HPT outlet, also serves as the inlet to the LPT. Together with S4-T50, these two sensors effectively measure the input and output temperature of the faulty component, which is the LPT. To differentiate between the HPC fault in DS05 and the LPT fault in DS07, the key sensor is S2-T30 which measures the total temperature at the HPC outlet, which is the fault component of DS05. The sensor S2-T30 exhibits a high residual only in the case of the HPC fault in DS05 but not in the LPT fault in DS07. For the AE model, interpretation becomes more challenging as multiple sensors have high residuals simultaneously, and there is a higher degree of variation between engines. This highlights the advantage of the OC model, as it provides more refined residuals that are easier to relate to the physical system. The OC model offers a clearer and more straightforward understanding of the deviations from normal behavior. In Figure 8, we depict the sensors that have detected faults at different cycle numbers using the sensor-wise health indicator of the OC model \(\mathbf{h}^{\mathrm{S-OC}}\). We have not provided the same figure for the AE model due to excessive variation in sensor activation, which complicates interpretation. This figure is relevant to understanding the evolution of triggered sensors. A triggered sensor is a sensor that has a sensor-wise residual higher than the pre-defined threshold as discussed in Section 2.3. Darker colors indicate earlier sensor-wise fault detection. At 10 cycles after the first fault detection, the triggering pattern resembles that shown in Figure 7, wherein health indicators with high values are triggered. With this figure, it becomes possible to track the evolution of faulty components over time. For instance, in the case of an HPC fault, initially, sensors S2-T30, S3-T48, and S4-T50 are predominantly triggered. As the fault progresses, deviations are observed in S13-\(\mathrm{Nc}\) (physical core speed) and S14-\(\mathrm{Wf}\) (fuel flow), indicating that the fault begins to impact the burner and the shaft. Towards the end-of-life, sensors S8-P24 (pressure at LPC outlet), S9-Ps30 (pressure at HPC outlet), and S10-P40 (pressure at burner outlet) situated around the burner and HPC demonstrate further deterioration. 
Table 2: Overview of the fault detection delays \(d_{u}\) using OC and AE models, average over five realisations. “-” means no fault is detected. In total, there are 30 different units, 10 units from each sub-dataset. *: FPR is 0% but the fault is not detected in one unit Figure 5: Silhouette scores of \(\mathbf{h}^{\mathrm{S-OC}}\) and \(\mathbf{h}^{\mathrm{S-AE}}\) calculated from 0 to 34 cycles after fault detection. The higher silhouette scores suggest more distinct clusters. Figure 8: Sensors that are triggered using \(\mathbf{h}^{\rm S-OC}\) with \(n_{c}=10,20,30,40\) cycles after the first fault detection. Darker colors indicate earlier sensor triggering. The darkest color signifies triggering within 10 cycles after fault detection. The lightest color, labeled as "No", indicates no triggering up to \(n_{0}+40\) cycles. Figure 6: Visualization of the clustering results using the normalised sensor-wise health indicator with \(n_{c}\) set to 10 cycles after fault detection, color-coded by fault component. Figure 7: Normalised sensor-wise health indicator \(\mathbf{h}^{\rm S}\) values calculated 10 cycles after the fault detection. The upper figure depicts results using the OC model \(\mathbf{h}^{\rm S-OC}\), while the lower figure represents the AE model \(\mathbf{h}^{\rm S-AE}\). In the AE model, sensors 1 to 14 represent the sensor readings \(\mathbf{x}\), while sensors 15 to 18 correspond to operating condition descriptors \(\mathbf{w}\). This figure can also be used to differentiate between the HPC fault and the LPT fault. While the sensor S2-T30 is triggered in both faults, it exhibits an earlier trigger in the HPC fault compared to the LPT fault because the sensor S2-T30 measures the total temperature at the HPC outlet. In the case of the HPC fault, the impact on the sensor S2-T30 is direct and immediate. However, in the LPT fault, the impact on the S2-T30 reading becomes noticeable only after a longer duration, typically 20 or 30 cycles after fault detection. ## 5 Conclusion In this study, we performed a comparative analysis of two residual-based models, specifically the AE and OC model, for the purpose of fault detection. We constructed two types of health indicators from residuals: a univariate aggregated health indicator and multivariate sensor-wise health indicators. Our framework was applied to three sub-datasets from N-CMAPSS, each presenting different fault types in different engine components. The results demonstrated that the sensor-wise health indicator outperformed the aggregated health indicator in terms of fault detection performance. Furthermore, the health indicators obtained using the OC model exhibited superior fault separation capabilities. It effectively highlighted the triggered sensors and could be directly linked to specific fault components. This research highlights an alternative residual model that surpasses the commonly used AE models in both fault detection and segmentation. It not only demonstrates superior performance but also provides more meaningful health indicators. 
As a future direction, it is essential to evaluate the proposed approaches on other systems that exhibit different fault evolution behavior. Additionally, it is important to consider systems with higher variability in terms of operating conditions and their impact on fault evolution. By testing the approaches across a diverse range of systems, we can ensure their effectiveness and applicability in various real-world scenarios. Furthermore, the prediction of remaining useful life could be achieved using a two-stage approach, where the prediction begins only after the fault is detected. Another potential avenue involves evaluating the evolution of fault patterns by analyzing factors like the sequence of sensor triggers. Lastly, it would be beneficial to explore different fault detection architectures, such as recurrent neural networks and variational autoencoders. ## Acknowledgment This work is part of a project that is financially supported by the Swiss Federal Office of Energy.
2303.00438
A Framework for Neurosymbolic Robot Action Planning using Large Language Models
Symbolic task planning is a widely used approach to enforce robot autonomy due to its ease of understanding and deployment in robot architectures. However, techniques for symbolic task planning are difficult to scale in real-world, human-robot collaboration scenarios because of the poor performance in complex planning domains or when frequent re-planning is needed. We present a framework, Teriyaki, specifically aimed at bridging the gap between symbolic task planning and machine learning approaches. The rationale is training Large Language Models (LLMs), namely GPT-3, into a neurosymbolic task planner compatible with the Planning Domain Definition Language (PDDL), and then leveraging its generative capabilities to overcome a number of limitations inherent to symbolic task planners. Potential benefits include (i) a better scalability in so far as the planning domain complexity increases, since LLMs' response time linearly scales with the combined length of the input and the output, and (ii) the ability to synthesize a plan action-by-action instead of end-to-end, making each action available for execution as soon as it is generated instead of waiting for the whole plan to be available, which in turn enables concurrent planning and execution. Recently, significant efforts have been devoted by the research community to evaluate the cognitive capabilities of LLMs, with alternate successes. Instead, with Teriyaki we aim to provide an overall planning performance comparable to traditional planners in specific planning domains, while leveraging LLMs capabilities to build a look-ahead predictive planning model. Preliminary results in selected domains show that our method can: (i) solve 95.5% of problems in a test data set of 1,000 samples; (ii) produce plans up to 13.5% shorter than a traditional symbolic planner; (iii) reduce average overall waiting times for a plan availability by up to 61.4%
Alessio Capitanelli, Fulvio Mastrogiovanni
2023-03-01T11:54:22Z
http://arxiv.org/abs/2303.00438v3
# A Framework to Generate Neurosymbolic PDDL-compliant Planners ###### Abstract The problem of integrating high-level task planning in the execution loop of a real-world robot architecture remains challenging, as the planning times of traditional symbolic planners explode combinatorially with the number of symbols to plan upon. In this paper, we present Teriyaki, a framework for training Large Language Models (LLMs), and in particular the now well-known GPT-3 model, into neurosymbolic planners compatible with the Planning Domain Definition Language (PDDL). Unlike symbolic approaches, LLMs require a training process. However, their response time scales with the combined length of the input and the output. Hence, LLM-based planners can potentially provide significant performance gains on complex planning problems as the technology matures and becomes more accessible. In this preliminary work, which to our knowledge is the first using LLMs for planning in robotics, we (i) outline a methodology for training LLMs as PDDL solvers, (ii) generate PDDL-compliant planners for two challenging PDDL domains, and (iii) test the planning times and the plan quality associated with the obtained planners, while also comparing them to a state-of-the-art PDDL planner, namely Probe. Results confirm the viability of the approach, with Teriyaki-based planners being able to solve 95.5% of problems in a test data set of 1000 samples, and even generating plans up to 13.5% shorter on average than the employed traditional planner, depending on the domain. ## I Introduction Today, we are witnessing the rise of robot-based applications in real-world scenarios characterized by the need for increased autonomy and robustness, especially in environments typically shared with humans, which could act as simple beneficiaries of robot behavior or even as teammates, actually partaking in robot actions. For example, the so-called Industry 4.0 paradigm is believed to reshape the nature of many manufacturing operations [1], and it requires deep collaboration between robots, which may be characterized by a continuum range in autonomy, and human workers, who can oversee the robot work and intervene if and when necessary. In scenarios like these, the robot must not only be robust to the interactions with the human operators, but also to unforeseen events very likely to happen when the work to be done is poorly structured and humans are involved. This paradigm shift requires robots to be able to validate and re-plan their actions continuously in response to changing conditions and on the basis of human behavior. This requirement makes planning performance of critical importance. In this paper, we explore as a possible solution, the training of Large Language Models (LLMs) [2], namely OpenAI's GPT-3 [3], into neurosymbolic [4] planners. The computational complexity of LLMs is known to scale linearly with the combined length of the prompt and the completion, as the network iteratively predicts the next symbol in a sequence. In contrast, traditional symbolic planners tends towards planning times' combinatorial explosion as the number of symbols to plan upon increases. Therefore, it is legitimate to argue that LLMs could provide a performance advantage in complex planning domains, allowing for their use in real-world applications requiring frequent in-the-loop re-planning steps. 
Moreover, the model could be trained to receive input and return output in the Planning Domain Definition Language (PDDL) [5], therefore maintaining full compatibility with existing software frameworks for high-level planning such as ROSPlan [6]. Unlike traditional symbolic planners, this approach requires training, but it is trivial to generate large datasets for training using traditional solvers. Herewith, we introduce Teriyaki, a framework to train and use GPT-3 models to solve PDDL problems for a given domain. The framework is evaluated and tested on two planning domains useful in human-robot collaboration tasks, specifically for the manipulation of articulated objects by robots, a challenging scenario previously described in the literature [7][8][9], and it is released for public use1. The scenario has been considered because of its inherent challenges for symbolic planners, as the problem of the manipulation of an articulated object scales very poorly in relation to the number of its links, joints, and the allowed angle configurations. Footnote 1: Teriyaki: [https://github.com/alessiocpt/teriyaki](https://github.com/alessiocpt/teriyaki) The training process has been performed on a large data set of 9000 _problem-plan_ pairs generated automatically and solved by an existing, state-of-the-art, traditional PDDL planner, namely Probe [10]. During training, data related to validation and planning accuracy have been collected to investigate their evolution with the growing number of training samples and assess whether transfer learning between similar planning domains is possible. The resulting models have then been rigorously tested on 1000 pairs not previously used for training, and evaluated in terms of planning accuracy, plan length, and planning times. Results show near identical planning accuracy between the state-of-the-art, traditional PDDL planner and the solvers generated with Teriyaki, with the latter even outperforming the former in terms of shorter plan length by up to 13.5% in one of the domains. The planning times of solvers generated with Teriyaki and traditional planners cannot be fairly compared due to the different computing architectures on which they run, yet our results confirm that Teriyaki solvers scale with the input and output length rather than with the problem and domain complexity, and provide a baseline for further research in the field. To summarize, the main contributions of this paper are the following: * we provide the first attempt to perform high-level task planning in robotics using LLMs; * we designed, developed, and released to the community Teriyaki, a framework to train and use GPT-3 to solve PDDL problems for a given domain; * we performed a preliminary yet detailed analysis of the performance of the solvers generated with Teriyaki, proving that they are effective, reliable, and in some metrics, better than their traditional counterparts. Despite the positive results, we want to highlight that solvers generated with Teriyaki should not be considered at the moment as a complete alternative to traditional symbolic planners. Instead, we recommend to treat them as a proof-of-concept of the planning capabilities of LLMs, and possibly in the near future as a useful tool to optimize planning performance when planning must take place in the execution loop of complex robot behaviors. 
## II Neurosymbolic Approaches and Large Language Models Neurosymbolic systems assume that (i) their knowledge is grounded in vector representations useful for efficient learning from data, and (ii) symbols _become available_ as a result of querying and extracting knowledge from the trained network [11]. For a long time, such systems have been theorized to provide an alternative to the problem of combinatorial reasoning by learning. However, it is only recently that they started to gain traction, mainly for Natural Language Processing applications [12]. Most notably, GPT-3 is a famous LLM released by the company OpenAI2 that achieved astonishing results in generating human-level complex text in the form of structured sentences [3]. Other popular models are LaMDA [13], PALM [14], Megatron-Turing NLG [15] and BLOOM [16]. A few of the most notable applications of these models are text summary generation and completion, sentiment analysis, and, of particular interest to our application, code generation [17]. Footnote 2: Web: [https://openai.com/blog/gpt-3-apps](https://openai.com/blog/gpt-3-apps) LLMs are one of the most promising applications of _Deep Generative Models_[18], that is, a category of unsupervised Deep Learning algorithms capable of capturing the inner probabilistic distribution that generates a class of data, which can then be used to generate similar data. LLMs in particular are capable of estimating the probability of a sentence as a product of each symbol's probability given its preceding symbols. For instance, given a few words as a prompt, they can easily suggest the most likely way to complete the sentence. Usually, such symbols are referred to as _tokens_ and represent short sequences of characters. Common ways to implement LLMs are Recurrent Neural Networks [19], LSTM networks [20] and, most recently, Transformers [21], of which GPT-3 is the most notable example. Transformers are models based on an encoder-decoder architecture, and adopt a _self-attention_ mechanism, allowing them to weigh each part of the input data differently. On the one hand, while there are no guarantees about the long-term logical coherence of the replies of a Transformer, especially for long sequences of symbols, they have been proven to be capable of producing plausible and coherent lists of instructions, from cooking recipes to functioning code. On the other hand, they have already proved to have limited logic capabilities, especially in zero-shots to few-shots attempts. Referring to the nomenclature in [3], for zero-shot attempts, we refer to the case when logical questions are asked directly to the system, whereas one-shot and few-shot approaches provide a very limited amount of examples to the system as part of the request. In all such cases, requests are submitted to the system in natural language without further training for fine-tuning. The work in [22] proposed a benchmark for LLM-based planning and reasoning in a few-shots scenario. The authors employed PDDL to generate a number of logical problems never seen before by the model, to translate them into natural language requests, and finally to evaluate and compare the performance of popular LLMs under several metrics. The best performer could solve only \(25\) out of \(500\) problems (that is, 5%). Nevertheless, we argue that there are obvious similarities between the capability of LLMs to learn the probability of a symbol following another one and the heuristics that are usually implemented in traditional solvers for symbolic planning. 
We also argue that, if an LLM can learn to solve a specific class of problems, at the moment this appears to be possible only through an appropriate _fine-tuning_ process, that is, a further training process of the base model with specific examples related to the planning domain at hand. Some scholars might argue that an easier solution would be to adopt a fully connectivist approach, that is, to rely on machine learning techniques capable of learning policies to sequence actions, as it is entailed by such approaches as deep reinforcement learning. Yet, purely connectivist approaches are often very hard to train and lack the explainability, transparency, and ease of comprehension by a human observer that symbolic approaches provide. Moreover, to this day, PDDL is still the standard language for high-level planning in robotics, as demonstrated by the success of ROSPlan as the default planning module in many software architectures for robots. So far, attempts to (i) bridge the gap between symbolic and purely connectivist approaches [23], and (ii) introduce neurosymbolic systems capable of symbolic learning and reasoning [24] have been limited. Nevertheless, we believe that a neurosymbolic approach would be a realistic solution from a methodological and an engineering perspective, bringing together the reasoning capabilities of symbolic planning and the generalization capabilities offered by machine learning, and thus providing the best combination of usability, performance, robustness, and possibly compatibility with legacy systems. ## III Teriyaki ### _Planning Domains, PDDL Version, and Planner_ We have selected two PDDL domains modeling manipulation actions on an articulated object. Both of them use the so-called _absolute_ representation as described in [7], meaning that angles between pairwise links of the object are expressed with respect to an absolute, external reference frame. The choice is motivated by the fact that domains using absolute representation require _conditional effects_ in the planning domain, that is, actions can have implicit effects on joint angles not directly affected by the action itself. If managed by a traditional, symbolic planner this requires propagating the (conditional) effects of each action to modify the value of state variables not directly affected by the action. Likewise, we argue that in the case of LLMs like GPT-3 the generative process may be stressed because it should be harder to maintain coherence in the plans given the limited memory of the model. The main difference between the two selected domains is that one uses macros whereas the second does not, and hence we will refer to them as MACRO and NO-MACRO, respectively. Macros are compound actions that bundle together several elementary actions, for example, a grasp-rotate-release action instead of three _atomic_ ones. Their use is an effective way to reduce the planning times of traditional planners at the cost of not being able to achieve an optimal plan, but they are also supposed to facilitate the generative process of GPT-3, since the use of macros shortens the resulting plan. As the two domains are fundamentally similar but lead to plans of different lengths, they are ideal candidates to test how Teriyaki-based solvers scale with the output length. As both domains taken into consideration here used conditional effects, we used PDDL 2.1 and a compatible PDDL planner, namely Probe [10], one of the planners used in [7]. 
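To make the notion of conditional effects concrete, the following toy sketch (ours; the action and predicate names are hypothetical, loosely modelled on the joint-angle predicates that appear later in Listing 1) shows how applying a single ground action to a state must also propagate a "when" effect to a joint that the action does not manipulate directly, which is exactly the bookkeeping that a symbolic planner, or a generative model emitting a plan token by token, has to keep coherent.

```python
# Hypothetical ground action: rotating joint2 also shifts the absolute angle
# of the downstream joint3 through a conditional ("when") effect.
action = {
    "pre": {("angle_joint", "angle300", "joint2"), ("in-centre", "joint2")},
    "add": {("angle_joint", "angle315", "joint2")},
    "del": {("angle_joint", "angle300", "joint2")},
    "when": [  # (condition, add, delete) triples
        ({("angle_joint", "angle285", "joint3")},
         {("angle_joint", "angle300", "joint3")},
         {("angle_joint", "angle285", "joint3")}),
    ],
}

def apply_action(state, act):
    """Apply a ground action to a state (set of ground atoms), propagating
    conditional effects; conditions are evaluated on the old state."""
    assert act["pre"] <= state, "preconditions not satisfied"
    new_state = (state - act["del"]) | act["add"]
    for cond, add, delete in act["when"]:
        if cond <= state:
            new_state = (new_state - delete) | add
    return new_state
```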
It must be noted that the choice of Probe definitely has an impact on the quality of the results, and in the future, different planners, or even plans generated by a multitude of planners, might be used instead as part of the training set for better results, that is, to avoid conditioning the training process on a specific planner. ### _Data Set Generation and Composition_ As we anticipated, in this work we leverage the availability of a specific LLM, namely GPT-3. In principle, though, our results should not be limited to the specific features of GPT-3. In this case, the training can be performed by providing a structured file where each line is a _prompt-completion_ pair. We assumed that the GPT-3 model can implicitly learn by example the rules usually defined in a PDDL domain file, such as the allowed (modeled) actions, and their respective preconditions and effects. Hence we conduct training using only _problem-plan_ pairs, whereas the problem is used as a prompt, and the plan is used as completion. We remark here that only part of the PDDL problem is used as a prompt, as many of the predicates are actually _static_, that is, they are used to encode general properties of the problem, such as the relative order of joint angle configurations, an example being: a joint at 45 _deg_ can be rotated clockwise to 60 _deg_, and counter-clockwise to 30 _deg_ in 15 _deg_ increments. These predicates remain the same in all problems and are never modified by action effects. Therefore, we assume that they can also be implicitly learned by the network in the same way as the domain. We thus removed them from the prompt to reduce its length, as prompt length has an impact on response time and the total prompt plus completion length of each query cannot exceed 2048 tokens, or circa 8000 characters, allowing us to generate longer plans without truncation. It must be noted that GPT-3 is a closed-source model and commercial product, thus at the moment it can only be accessed as a cloud-based service through OpenAI's API. The GPT-3 documentation suggests using the largest and most powerful version of the model, called davinci, for a conditional generation. Conditional generation is one of the possible applications of LLMs, and consists in generating original text based on input, for example, writing an advertisement based on a Wikipedia article. Our task clearly falls into this definition. For a conditional generation, the GPT-3 documentation recommends training the system using at least 500 training samples and aiming for 8000. One of the advantages of Teriyaki is that we can easily generate large training data sets using randomly generated problems and a PDDL-based planner, thus we generated 9000 problems, of which 8000 were reserved for training and 1000 for validation. It must be noted that out of the 9000 planning attempts performed by the planner 124 failed, so in reality the validation set has been reduced to 876 samples. We later added another 1000 problem-plan pairs as a test set, in this case, we ensured that all the chosen samples could be solved by Probe. The next step requires validating the completeness of the plans generated by Probe in order to ensure that we train our system only on correct data. To do so we use the widely known VAL validation suite [25], which requires plans to be formatted in the International Planning Competition (IPC) standard3. As Probe does return plans compliant with this standard, we developed custom code to reformat the plan files to be compatible with VAL. 
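Assembling the fine-tuning files is mechanical once validated problem-plan pairs are available. The sketch below (ours, not part of the released framework) writes such pairs as the prompt-completion records used in the next subsection (see Listing 1), with the termination markers appended to each field.

```python
import json
from pathlib import Path

PROMPT_END = "\n\n####\n\n"   # appended to every prompt
COMPLETION_END = " END"       # appended to every plan; later used as stop sequence

def write_jsonl(pairs, path):
    """pairs: iterable of (problem_core, ipc_plan) strings, where problem_core
    contains only the dynamic :init/:goal predicates of the PDDL problem and
    ipc_plan is the corresponding VAL-validated plan in IPC format."""
    with Path(path).open("w") as f:
        for problem_core, ipc_plan in pairs:
            record = {"prompt": problem_core + PROMPT_END,
                      "completion": ipc_plan + COMPLETION_END}
            f.write(json.dumps(record) + "\n")
```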
This is also necessary to ensure that Teriyaki will be trained on IPC-standard plans, and will thus later reply to our queries in the same way, allowing us to easily benchmark its planning accuracy. Running VAL over the data set resulted in all plans passing the test, meaning that they reach the desired goal, even though it must be recalled that Probe previously failed to generate a plan _at all_ for 124/9000 problems, that is, about \(1.37\%\) of the total. Footnote 3: Web: [https://ipc2023.github.io/](https://ipc2023.github.io/) Finally, we compiled the files for the training and validation sets, adding the standard termination sequences to each prompt and completion, namely \n\n####\n\n and END respectively; then we validated the files using the OpenAI custom validation utility. As the name suggests, the termination sequences signal to the network the end of the message. This is especially important for the termination sequence of the completion, as we can use it later as a stopping condition when we query the trained model. A sample line is provided in Listing 1, edited for the sake of brevity and clarity.
```
{"prompt": "(:init (angle_joint angle315 joint1)
            (angle_joint angle300 joint2)
            (angle_joint angle285 joint3)
            (in-centre joint2) (free9left) (free9right))
           (:goal (and (angle_joint angle0 joint1)
            (angle_joint angle300 joint2)
            (angle_joint angle285 joint3))) \n\n####\n\n",
 "completion": "0.00100: (link-to-central-grasp ...)
                0.00300: (increase_angle_first_child_45 ...)
                0.00500: (release-links ...)
                0.00700: (link-to-central-grasp ...)
                0.00900: (decrease_angle_first_child_45 ...) END"}
```
Listing 1: An example of a training sample. ### _Training_ As far as the training process is concerned, we decided to first run a complete 8000-sample training on the MACRO domain, as plans in its data set are shorter on average, and thus _supposedly_ easier for GPT-3 to learn. When we train an LLM, the cost function rewards linguistic coherence rather than planning accuracy. This means that we are hypothesizing _linguistic coherence_ as a proxy for _logical coherence_, and therefore we must assume that validation accuracy during the training process differs from the planning accuracy of the resulting model, where planning accuracy is defined as the percentage of plans that are (i) formally correct and (ii) reach the desired goal state. In order to measure how planning accuracy increases with the size of the provided training set, we decided to perform the training process in steps. At each step, we provided enough samples to double the total size of the training set. Starting from the minimum 500 samples, we then trained the system to 1000, 2000, 4000, and 8000 total samples, and saved a snapshot of the trained model at each step. As anticipated above, the base model chosen for the training is davinci. Regarding the hyper-parameters, the number of training epochs was set to 2, while the batch size and the learning rate multiplier were left at their default values. We did provide the optional validation set and enabled the compute_classification_metrics option to obtain training statistics for further analysis. The total training cost of this procedure on a single planning domain at the time of writing is around $250. ### _Curriculum Learning_ For the NO-MACRO domain, we used the same methodology, but we decided to generate two candidate models. The first model is trained starting from the davinci model as before, whereas the second is trained from the MACRO model obtained previously. 
The hypothesis is that as domains share many concepts and the MACRO model has already been primed for them, the second candidate should reach a higher planning accuracy with a smaller amount of training samples. Results shown in Section IV-B confirmed this hypothesis, so we interrupted training when the model trained from the MACRO model reached comparable results to its parent model and discarded the one trained from davinci. Regarding the training data set, the only difference with the example provided in Listing 1 is that the prompt part is preceded by the \(\backslash\)n--NO-MACRO tag. This tag was introduced to test whether the model could be used to solve problems for both the MACRO and the NO-MACRO domains, by simply adding the tag in our queries to the system. Unfortunately, the NO-MACRO model loses the ability to solve problems in the MACRO domain, suggesting that in order to generate models that can solve multiple domains, _training should be performed including examples from all the domains of interest_. ### _Usage and Testing_ After the training phase, it is possible to query the system through an API call by providing as a prompt the PDDL predicates describing the initial and goal states for the robot and the articulated object, as one would do with a traditional PDDL-compatible planner. There are a number of parameters that can be configured, and that may impact the overall quality of the resulting plan. * temperature is the most important as it controls how _deterministic_ the reply (and therefore the plan) will be. Keeping it at \(0\) corresponds to an argmax operation on the next token probability and ensures that we always get the most likely result. While in a regular text generation, some creativity is desirable, we strongly suggest keeping it at \(0\) for plan generation, especially when robot actions are planned in the context of a human-robot collaboration scenario. * presence_penalty and frequency_penalty can assume values between \(-2\) and \(2\), and can be used to reward or penalize repetitions in the generated text. We have observed that setting the former to \(2\) seems to improve planning accuracy by \(1-2\%\) and _vice versa_, but at the moment we have not investigated enough this effect, so we decided to set the default value to \(0\) for both in our tests. * best_of allows us to generate multiple completions and select the best among them, but it had no apparent effect on planning accuracy in our preliminary tests. * max_tokens controls the maximum response length, thus we recommend setting it as high as possible in order to minimize the chance of the plan's truncation. Since the average prompt length depends on the planning domain at hand, this value should be assessed case by case. In our case, \(1900\) appears to be the highest value for robust operation. ## IV Results ### _Relation between Token and Planning Accuracy with an Increasing Number of Training Samples_ In Section III-C, we anticipated that we use linguistic coherence as a proxy for logical coherence. Figure 1 compares the evolution of the validation token accuracy and the planning accuracy for the MACRO model, against the number of examples used to train the model. On the one hand, the validation token accuracy measures the accuracy at predicting the next token, that is, approximately the next 4 characters of the response, in our case the plan, on the validation set. 
On the other hand, we define planning accuracy as the percentage of plans in the 876 validation set that is both formally correct and reaches the desired goal state, that is, it passes the VAL validation utility test. For the former, data are retrieved from the classification metrics reported by GPT-3 itself after training, where this information is reported every 9 training steps. For the latter, we used the snapshots of the model taken after training with 500, 1000, 2000, 4000, and 8000 examples. Such snapshots are used to plan against all 876 problems in the validation set in the conditions described in Section III-E. VAL is run on the obtained plans and finally, the validation planning accuracy can be computed. In Figure 1, the evolution of planning accuracy is represented by the orange bars to highlight the fact that the value is measured at each snapshot. It must also be noted that elapsed_examples does not correspond to the number of unique examples in the training set, but it is scaled by a factor of 2 because we used two epochs for training, thus each example was used twice. In Figure 1 it is evident that GPT-3 reaches a very high validation token accuracy already after the first 500 samples, as expected by a model well-known for its few-shots learning capabilities. Nevertheless, planning accuracy rises at a much slower pace, as even a single mistaken token can break a plan that is otherwise _linguistically_ coherent. A very common mistake in plans generated by the model trained with only 500 samples is that the model ignored the conditional effects of actions. As we correctly hypothesized, conditional effects can be quite problematic as they imply propagating their effect to keep the semantic coherence of the plan. In this case, the actions are correct and reasonably parameterized but they do not meet the necessary preconditions as they ignore that a given joint is in a state different from the one expected by the model, due to the _indirect_ effects of a previous action. Eventually, the model reaches a very high \(95\%\) planning accuracy on the validation set after training over \(8000\) unique samples. Despite this, the model does not seem to behave in overfitting to the problems used for training, as shown by the results on the test set presented in Section IV-C. During this experiment, we measured also the number of planning attempts that failed because of their excessive length. While this number was always small compared to the validation set size, the number of failures decreased from \(24\) in the first snapshot, down to \(1\) in the final model, further suggesting that the model becomes better at generating shorter plans and avoiding loopholes. ### _Curriculum Learning_ As anticipated in Section III-D, the Teriyaki solver for the NO-MACRO planning domain has been chosen from two candidates, namely, one trained on top of the davinci models, as described in Section III-C for the MACRO solver, and the other was trained from the MACRO solver itself. As we trained the two models, we kept track of the planning accuracy as described in Section III-D. Table I reports the results and summarizes the characteristics of the snapshots taken into consideration. After 1000 samples, the model trained on top of the davinci model reached a planning accuracy of 32.5%, while the one exploiting transfer learning reached 95.2%. Because of this result, we immediately dropped the former model, while we further trained the latter to 2000 samples. 
At this stage the NO-MACRO model reached a validation planning accuracy of 98.8%, exceeding the 95% validation planning accuracy achieved by the MACRO model after 8000 samples. Because of this result, we decided to not proceed with further training. It is also interesting to note that the NO-MACRO solver trained on top of the davinci model still reached a higher validation planning accuracy after 1000 samples than the equivalent snapshot of the MACRO solver, which stopped at 27%, as shown in Figure 1. This result seems to suggest that against our initial assumption, the NO-MACRO model is easier to learn for GPT-3 than the MACRO one. This result must be explored more in-depth, but it could be related to the number and quality of the actions in the planning domain. Fig. 1: Evolution of the token validation accuracy and the planning accuracy with an increasing number of training examples for the MACRO model. The blue line represents the evolution of the validation token accuracy during learning as reported by GPT-3 classification metrics. The orange bars represent the planning accuracy, that is, the percentage of plans generated by Teriyaki from the validation data set that are both formally correct and reach the desired goal state, as measured at specific snapshots. ### _Comparison of Solvers_ Finally, we tested the performance of both the MACRO and NO-MACRO models in terms of planning accuracy, plan length, and planning times on a test set of \(1000\) problem-plan pairs not previously used for training and validation, and we compared the results to the performance of the traditional, state-of-the-art planner Probe. Before proceeding, we want to remark that, due to the significantly different computing architectures used to run Teriyaki and Probe, a fair comparison between them in terms of planning time is not possible. Therefore, results about planning times in this Section are only meant to provide a baseline for future work and to show how planning times scale differently for each Teriyaki planner in the two different planning domains taken into consideration in this work. Probe ran in a Ubuntu 20.04.4 Windows Linux Subsystem (WSL) environment, deployed on a machine equipped with an \(8\)-core Ryzen 3700X 3600 Mhz CPU and 16GB@3600 Mhz RAM. We recorded planning times as reported by the planner itself together with each generated plan. As far as planning accuracy is concerned, all plans generated were valid, but we considered the instances in which the planner failed to generate a plan as failures. Regarding the Teriyaki models, we prompted them using the settings described in Section III-E, then verified the validity of the obtained plans using the same VAL validation tool employed at the data set generation phase. As the OpenAI API does not provide a method to log the exact execution time of a call, planning times of Teriyaki solvers have been measured by recording the time it took each API call to return a response to the client. For this reason, it must be noted that it is impossible to discern how long each call has been queued by the system before execution. We assessed that the effect of queuing is not negligible as running tests after the official release of ChatGPT, another popular GPT-3 derived product by OpenAI, led to longer planning times than previously recorded, possibly due to the increased traffic to the servers. In order to minimize this effect, the tests presented here were performed during the weekend, preferably in the morning CET time. 
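For reference, a query to a fine-tuned Teriyaki model with the settings of Section III-E can be issued and timed roughly as follows. This is a hedged sketch based on the legacy OpenAI completions endpoint available at the time of the experiments; the model name is a placeholder, and the measured duration inevitably includes any server-side queuing discussed above.

```python
import time
import openai  # legacy (pre-1.0) OpenAI Python client

PROMPT_END, STOP = "\n\n####\n\n", "END"

def plan_and_time(problem_core, model="davinci:ft-your-org-macro"):  # placeholder
    """Query the fine-tuned solver and measure wall-clock response time."""
    start = time.time()
    resp = openai.Completion.create(
        model=model,
        prompt=problem_core + PROMPT_END,  # dynamic :init/:goal predicates only
        temperature=0,                     # deterministic, most-likely plan
        presence_penalty=0,
        frequency_penalty=0,
        max_tokens=1900,                   # avoid truncating long plans
        stop=STOP,
    )
    elapsed = time.time() - start
    return resp["choices"][0]["text"], elapsed
```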
To further strengthen our results, in Table II we include the date and the starting and finishing time of each test session for reference. For all solvers and models, plan length has been computed simply as the number of actions in the plan. Table III compares the MACRO and NO-MACRO Teriyaki models against Probe in the respective domains, in terms of accuracy (in percentage), average plan length, and maximum and average planning times, as well as the standard deviation of the planning times (in seconds). Probe is approximately \(4.25\times\) faster in the MACRO domain and approximately \(5.25\times\) faster in the NO-MACRO domain, yet Teriyaki solvers still offer decent performance. Despite being trained on plans generated by Probe itself, Teriyaki models are capable of solving problems that Probe failed to process, and even produce _shorter_ plans. The difference in plan length is only \(1.5\%\) for the MACRO domain, but it rises to \(13.5\%\) in the NO-MACRO domain, which in general leads to plans almost twice as long. This seems to suggest that the training procedure rewards shorter completions, and that the effect might be stronger the longer the expected completion gets. This is also apparent in the quality of the plans: whereas in the former case plans are almost identical and only differ - sometimes - in using a macro instead of the corresponding composing actions, in the latter case plans seem more _original_. Nevertheless, this phenomenon requires further investigation, as it might be partially influenced by the fact that the NO-MACRO model has been trained on top of the MACRO model and thus has _more experience_ with planning. Figure 2 allows for a better assessment of the timing performance of the proposed models and compares it to that of Probe.

Fig. 2: Comparison of the Teriyaki MACRO and NO-MACRO models' planning times against Probe in their respective planning domains.

The Figure confirms that Teriyaki models do actually scale with the combined length of the input and the output. The planning times of the NO-MACRO model are approximately twice those of the MACRO model, as expected considering that the plans of the former are about twice as long as those of the latter. Also, the box plots in the Figure associated with Teriyaki models have a very distinct shape when compared to those of Probe on both domains, which hints at an almost Gaussian distribution of planning times. This is coherent with the fact that the plans, which are generated from randomly initialized problems, can assume any length. ## V Conclusions In this paper, we present Teriyaki, a framework to generate neurosymbolic PDDL-compliant planners based on GPT-3, a Transformer-based Large Language Model. Our method relies on a practical and inexpensive procedure for the generation of a training data set. Training can leverage high-performance computing machinery in the cloud, and the resulting model can be deployed to any software architecture for robots using standard PDDL syntax and interfaces. Results on transfer learning demonstrate that the approach can fairly scale up in terms of the number of potentially supported planning domains, even though this remains one of the points to further investigate. Most importantly, we have proven our method to be a viable approach to planning, with planning accuracy on par with that of a traditional, state-of-the-art PDDL-compliant planner, and an average plan length of up to 13.5% shorter. 
Unfortunately, planning time, which was our initial motivation to develop Teriyaki, remains its greatest shortfall, at least for problem-plan pairs small enough to fit into 2048 tokens. Nevertheless, there are a number of LLMs and Transformer-based learning architectures that are being made available at the writing of this paper, and that offer greater customization options. Their _power_, as represented by the number of the embedded internal parameters, is increasing exponentially. This leads us to consider the use of generative, linguistic models as a viable approach to high-level task planning in robotics, and as a promising approach worth considering for future investigation.
2310.16362
Neural Potential Field for Obstacle-Aware Local Motion Planning
Model predictive control (MPC) may provide local motion planning for mobile robotic platforms. The challenging aspect is the analytic representation of collision cost for the case when both the obstacle map and robot footprint are arbitrary. We propose a Neural Potential Field: a neural network model that returns a differentiable collision cost based on robot pose, obstacle map, and robot footprint. The differentiability of our model allows its usage within the MPC solver. It is computationally hard to solve problems with a very high number of parameters. Therefore, our architecture includes neural image encoders, which transform obstacle maps and robot footprints into embeddings, which reduce problem dimensionality by two orders of magnitude. The reference data for network training are generated based on algorithmic calculation of a signed distance function. Comparative experiments showed that the proposed approach is comparable with existing local planners: it provides trajectories with outperforming smoothness, comparable path length, and safe distance from obstacles. Experiment on Husky UGV mobile robot showed that our approach allows real-time and safe local planning. The code for our approach is presented at https://github.com/cog-isa/NPField together with demo video.
Muhammad Alhaddad, Konstantin Mironov, Aleksey Staroverov, Aleksandr Panov
2023-10-25T05:00:21Z
http://arxiv.org/abs/2310.16362v1
# Neural Potential Field for Obstacle-Aware Local Motion Planning ###### Abstract Model predictive control (MPC) may provide local motion planning for mobile robotic platforms. The challenging aspect is the analytic representation of collision cost for the case when both the obstacle map and robot footprint are arbitrary. We propose a Neural Potential Field: a neural network model that returns a differentiable collision cost based on robot pose, obstacle map, and robot footprint. The differentiability of our model allows its usage within the MPC solver. It is computationally hard to solve problems with a very high number of parameters. Therefore, our architecture includes neural image encoders, which transform obstacle maps and robot footprints into embeddings, which reduce problem dimensionality by two orders of magnitude. The reference data for network training are generated based on algorithmic calculation of a signed distance function. Comparative experiments showed that the proposed approach is comparable with existing local planners: it provides trajectories with outperforming smoothness, comparable path length, and safe distance from obstacles. Experiment on Husky UGV mobile robot showed that our approach allows real-time and safe local planning. The code for our approach is presented at [https://github.com/cog-isa/NPField](https://github.com/cog-isa/NPField) together with demo video. ## I Introduction Obstacle-aware motion planning is essential for autonomous mobile robots. Various methods may solve this task, including numerical optimization, especially nonlinear Model Predictive Control (MPC) [1, 2, 3, 4, 5, 6, 7]. Optimization allows the planner to transform a rough global path into a smooth trajectory, taking into account obstacles and kinodynamic constraints of the robot. Obstacle avoidance may be inserted into trajectory optimization either as a set of constraints (e.g., [1]) or as a penalty term in the cost function (e.g., [4, 8]). The second approach allow for more flexible trajectory planning via finding a balance between safety and following the reference path; in some cases it may even converge from initial guess that intersect obstacles [9]. However, obstacle representation for this second case is more challenging. On the one hand, collision avoidance in constraint-based optimization consists of detecting the fact of collision. This may be done by projecting the robot's footprint onto the obstacle map. On the other hand, if we use penalty-based optimization, we should define a differentiable penalty function. The penalty function forms a repulsive Artificial Potential Field (APF); its gradient directs toward the safer solution [10]. This allows the controller to find a proper balance between the safety of the trajectory and its similarity to the reference path. Therefore, the function which forms the repulsive APF should be differentiable. The _values_ of the repulsive APF may be easily computed based on the signed distance function (SDF) from the robot to the nearest obstacle point on the map. However, SDF is computed by specific _algorithms_. It is not a _differentiable function_ for the general case. It is easy to define it analytically when two requirements are satisfied: first, the robot is pointwise or circular, and second, the obstacles have known simple geometric shapes. If both the robot footprint and obstacle map are arbitrary, finding accurate and differentiable approximation of the SDF is hard. Simplified versions are used e.g. in [8, 11]. 
We propose a _Neural Potential Field_ (NPField) - a neural network for calculating artificial potential. Our idea is conceptually inspired by the NeRF (Neural Radiance Field) model [12], which takes the position and orientation of the camera as an input and returns image intensity as an output. Our model takes the position and orientation of the robot together with the obstacle map and robot footprint as input and returns the value of the repulsive potential as an output. We aim not to obtain these _values_ themselves but to use the trained model within the optimization loop. There are several works where neural networks provide discrete costmaps whose values are used for search-based [13] or sampling-based planning [14, 15]. Our goal instead is to provide a _continuously differentiable function_ whose gradient is helpful for optimization.

Fig. 1: Common scheme of the proposed approach. Our controller (bottom half of the figure) consists of a parameter definition module and an MPC solver, which optimizes the trajectory for a defined set of parameters. Our common neural architecture (NPField, top half of the figure) consists of image encoders and a neural potential function (NPFunction). We train NPFunction to predict the obstacle-repulsive potential for a given robot pose and given embeddings of the obstacle map and the robot footprint. The trained neural potential function is used for trajectory optimization within the MPC solver. Map and footprint encoders are removed from the optimization loop to decrease the dimensionality of the MPC problem. They are used for data preparation, as both map and footprint are considered constant within the prediction horizon. More precise schemes of this architecture are given in Figures 2 (Controller) and 3 (Network).

The key conceptual scheme of our approach is shown in Fig. 1. MPC solvers are sensitive to the number of input parameters: a high number of parameters leads to a drastic increase in computational expenses. Map images should not be sent to the solver as they include many cells. We use image encoding within our architecture to reduce the number of parameters by replacing maps and footprints with their more compact embeddings. The top part of Fig. 1 presents our neural network architecture, while the bottom part shows the data flow of our controller. The neural network consists of two main parts: an encoder block and the Neural Potential Function (NPFunction) - a subnetwork that calculates the potential for a single robot configuration. Our controller consists of two high-level modules. The first module includes the algorithms which define the parameters of the MPC problem based on actual sensor data. Such a definition is made once per iteration of the control loop. The second block includes a numerical solver for the control problem, which iteratively optimizes the trajectory based on the pre-defined parameters. NPField is trained as a single architecture and then divided into two parts. Image encoders are inserted into the parameter definition block, while NPFunction is integrated into the numerical MPC solver. For each controller step, encoders are called once, while NPFunction is called and differentiated multiple times within the optimization procedure. ### _Contribution_ This work mainly contributes in the following aspects: * Novel architecture for an MPC local planner, where a neural model estimates the collision cost. * Novel neural architecture for calculating an APF based on the obstacle map, robot pose, and footprint.
* An approach for generating the dataset for training the neural model. The last subsection of the next section provides a discussion on the place of our approach among the others. ### _Structure_ The rest of the paper is organized as follows. Section II discusses the related works. In section III, we introduce the architecture of our local planner. In section IV, we narrow down to our neural model and describe its architecture and learning. Section V discusses the experiments. Section VI is a conclusion section. ## II Related Works In this section we fist discuss common approaches to motion planning, then narrow down to collision avoidance in optimization based local planners. After that we discuss existing works, which use neural models within the MPC solvers. Finally we specify the differences of our approach compared to similar works. ### _Planning_ Planning task may be solved by various methods, which could be categorized into the following main groups (see review [16]): search-based planning (most of these methods are based on A* graph search algorithm [17], which is an extension of Dijkstra method [18]), sampling-based planning (most of these methods are based either on Rapidly-exploring Random Trees [19] or on Probabilistic RoadMaps [20]), motion primitives and trajectory optimization. We consider optimization-based planning in this work. Depending on the statement, we can define two groups of planning tasks - global path planning (define a reference of intermediate robot configurations based on given initial and destination configuration) and local motion planning (define a smooth trajectory based on a given part of the global plan taking in mind kinodynamic constraints). The artificial potential field was initially proposed [10] for global planning: the robot's path is obtained as a trajectory of the gradient descent in the potential field from the starting point towards the destination point. This planning approach can easily stuck in the local minimum. Therefore, it is less popular than A*, RRT, or PRM. However, it is still useful [21, 22, 23]. To avoid stucking in local minima, trajectory optimization is often done locally together with global planners [1, 7]. Global planner generates a rough suboptimal path, which is then optimized. Trajectory optimization may be considered in two statements [1] - holistic computation (made offline before the motion; no strict real-time constraints) and model predictive control (sequential online optimization of near parts of the path during the motion). In the first case, there are no strict limits for the calculation time as well as for the length and complexity of the trajectory. There are some specifc approaches for this case, such as CHOMP [24], STOMP [25], and TrajOpt [8]. CHOMP consider collision avoidance as constraints and therefore require collision-free initial guess. It may work with row representation of obstacles such as Occupancy Grid or Voxel Map for 3D planning tasks. TrajOpt consider collision avoidance as a penalty and may converge from initial guess, which include collision, however, it require obstacles to be represented as polytops. In the second case, the calculation time is limited according to the replanning rate of the system. ### _Obstacle models for trajectory optimization_ Collision detection itself is considered in many works, e.g., [26, 27, 28]; we are now interested in analytical models suitable for trajectory optimization. The safe path may be guaranteed using convex approximations of the free space [1, 11]. 
The disadvantage of such an approximation is that the free space outside the approximated region is prohibited. Alternatively, obstacles may be approximated instead of free space [4, 5, 6, 7]. Interception of the trajectory with the borderlines of the approximated regions may be modeled within the MPC solver. The approaches above require modeling either free space or obstacles as simple geometric shapes, such as points [4], circles [1, 5], polygons [6, 7], or polylines [29]. The question of how to obtain this representation from the common obstacle map is often not considered. Also, some approaches can be used only with discrete-time process models [7]. In the case when it is impossible or too complicated to provide differentiable collision models, one can use less stable techniques based on numerical gradients [8], stochastic gradients [25] or gradient-free sampling-based optimization (Model Predictive Path Integral [30, 31]). We consider another option, where a neural model approximates the repulsive potential. A number of works exist [32] on learning Control Barrier Functions for ensuring the safety of mobile systems such as drones [33] and cars [34] within the controller. The work [35] provides differentiable collision distance estimation for a 2D manipulator based on a graph neural network. [9] use the loss function of the network as a collision penalty: the trajectory is optimized during the network training for the fixed obstacle map. ### _Neural Models within MPC Optimization_ Integration of neural models into the MPC control loop was considered in a number of tasks. The challenging aspect here is the high computational cost of deep neural models. Accurate deep models by [36, 37] were not real-time and were presented in simulation as a proof of concept. Real-time inference may be achieved by significantly reducing model capacity [37]. Approaches [38, 39, 40] insert lightweight network into realtime MPC control loop. [41] achieve use of the deep neural model within real-time Acados MPC solver [42] by introducing ML-CasADI framework [43]. Experiments by [41] showed that direct insertion of the neural model into MPC-solver is effective for the networks with up to 50,000 parameters. Most works above use neural networks for approximating the model of process dynamics. On the contrary, we are interested in approximating obstacle-related cost terms and control barrier functions [32, 33, 34]. In [44], a neural network was used to update a cost function for manipulator visual servoing. Its architecture includes a neural encoder for camera images and a cost-update network for quadratic programming. There is also a set of works where a neural network was applied for choosing weighting factors for various terms of the cost function, e.g., [45, 46]. ### _Place of our approach among the others_ We propose a neural model for estimating repulsive potential, which has the following properties. 1) It provides obstacle avoidance for mobile robotic platforms. 2) It is differentiable. 3) It exploits the capacity of deep neural models with more than 50,000 parameters. 4) It reproduces obstacle maps with complicated, non-convex structures and allows for optimization of long trajectories within this map. 5) It is integrated into the MPC controller for online solutions. 6) It includes map encoding, which provides dimensionality reduction and the ability to work with different maps. The following works seem to be the closest to ours concerning these properties. [9] satisfies 1), 2), and 4). 
However, it learns a trajectory for the single map offline. [44] satisfies 3), 5), and 6) however, it is intended for the different tasks. [33, 34] satisfy 1), 2), 3), and 5), however, they process vision data instead of obstacle map and work with simple-shaped obstacles. Other works satisfy less number of properties. Unique property 7) of our approach is encoding the footprint of the mobile robot. It allows one to use a single model for mobile robots with different shapes. Note that in this work, we only prove a concept of footprint encoding: our training set consists of samples with two various footprints, and we show that the networks learn their collision model. We consider the deeper study of footprint learning (including footprint generalization) to be a part of the future work. ## III Control approach In this section, we discuss a model predictive controller, which is used for local planning. Neural networks are considered to be black boxes, which take inputs and provide outputs within the control architecture. Their internal content is discussed in a further section. We first describe the formal statement of a local optimization problem and then discuss the controller that solves this statement. ### _MPC Statement_ Local trajectory optimization may be formulated as a nonlinear model predictive control problem with continuous dynamics and discrete control: \[\{\mathbf{x}_{opt}[i],\mathbf{u}_{opt}[i]\}_{i=k}^{k+p}=\arg\min\limits_{i=k} ^{k+p}J(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]),\] (1a) s.t. \[\frac{dx_{1}[i]}{dt} =f_{1}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]), \tag{1b}\] \[\frac{dx_{2}[i]}{dt} =f_{2}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]),\] \[\ldots\] \[\frac{dx_{\mu}[i]}{dt} =f_{\mu}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]),\] \[h_{1}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]) \leq 0,\] (1c) \[h_{2}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]) \leq 0,\] \[\ldots\] \[h_{\chi}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{p}[i]) \leq 0.\] Here \(p\) is prediction horizon, \(\mathbf{x}[i]\) is \(\mu\)-size state vector (at the beginning of step \([i]\)), \(\mathbf{u}[i]\) is \(\nu\)-size control vector (constant within step \(i\)) \(\mathbf{p}[i]\) is \(\kappa\)-vector of process parameters (relevant for the step \(i\)). (1a) specify the cost function \(J\): a sum of functions \(J[i]\), which are calculated for each node. (1b) define continuous dynamics of the process. (1c) is a set of inequality constraints which must be satisfied within the whole process. Optimization procedure aims to find the reference of \(\{\mathbf{x}_{opt}[i],\mathbf{u}_{opt}[i]\}_{i=1}^{k+p}\) that provide minimum \(J\). Note that in this work we use this statement for defining local trajectory of the robot (i.e. \(\mathbf{x}_{opt}[i]_{i=k}^{k+m}\)). Control of trajectory execution may be either provided by other control method or achieved by direct execution \(u_{opt}\). The view of the equation (1b) depend on the construction of the mobile robot. In this work we consider two different models: a differential drive model (relevant for our real-robot experiments) and a bicycle model (used in numerical experiments). Differential drive model is specified as follows: \[\frac{dx}{dt} =v\cos\theta, \tag{2}\] \[\frac{dy}{dt} =v\sin\theta,\] \[\frac{dv}{dt} =a,\] \[\frac{d\theta}{dt} =\omega.\] State vector \(\mathbf{x}=(x,y,v,\theta)^{T}\) include cartesian position of the robot \(x,y\), its linear velocity \(v\), and its orientation \(\theta\). 
Control vector \(\mathbf{u}=(a,\omega)^{T}\) include linear acceleration \(a\) and angular velocity \(\omega\). For bicycle model \(\mathbf{u}=(a,\delta)^{T}\) where \(\delta\) is steering angle.. \[\frac{dx}{dt} =v\cos\theta, \tag{3}\] \[\frac{dy}{dt} =v\sin\theta,\] \[\frac{dv}{dt} =a,\] \[\frac{d\theta}{dt} =\frac{v}{L}\tan\delta.\] For the optimal control problem of this model, the following cost function is introduced: \[J[i]=J_{s}(\mathbf{x}[i],\mathbf{u}[i],\mathbf{x}_{t}[i])+J_{o}(\mathbf{x}[i],\mathbf{p}_{o}[i]). \tag{4}\] \(J_{s}\) term enforce the trajectory to follow the reference values \(\mathbf{x}_{r}\) from the global plan, while \(J_{o}\) term push the trajectory farther from obstacles. \(\mathbf{p}_{o}[i]\) is a vector of obstacle-related parameters. Whole parameter vector for the system (1) is \(\mathbf{p}[i]=((\mathbf{x}_{r}[i])^{T},(\mathbf{p}_{o}[i])^{T})^{T}\). In our approach, the neural network is applied to compute \(J_{o}\) while \(J_{s}\) is calculated as follows: \[J_{s}[i]=\sum_{j=1}^{\mu}w_{xj}(x_{j}[i]-x_{j(ref)}[i])^{2}+\sum_{k=1}^{\nu}w _{uk}u_{k}^{2}[i] \tag{5}\] Here \(w_{xj},w_{uk}\) are the weights of the respective terms, \(x_{j(ref)}[i]\) is a reference value of the respective state (taken from the global plan). Constraint-based trajectory optimizers like CIAO [1, 11] use equation (1c) to provide collision avoidance. Contrary, we express collision avoidance in (1a) and use (1c) only for box constraints of the separate variables. ### _Control architecture_ An architecture of our controller is shown in Fig. 2, which is a more detailed version of Fig. 1. Solution of the problem (1) is obtained iteratively using Sequential Quadratic Programming (_optimization loop_ in Fig. 2). MPC controller uses the solution to update the trajectory online (_control loop_ in Fig. 2). At the timestep \(k\) it optimizes the trajectory for the next \(p\) steps, and then the optimized control inputs are sent to the robot for the next \(m\) steps (i.e. \(m\) is the control horizon). After that optimization is repeated for the steps from \(k+m\) to \(k+m+p\). MPC-solver is intended to provide solution of the problem (1) with (5) as (1a) and (2) as (1b). During optimization, it communicates with integrated NPFunction, which provides values and gradients of \(J_{obst}\). The objective of the neural network is to project the robot's footprint, obstacle map, and robot poses onto a differentiable obstacle-repulsive potential surface. Consequently, for each coordinate within the range of the map, the neural network outputs a corresponding potential value. To ensure computational feasibility, we partitioned the neural network into two blocks: a map and footprint encoder, and a final coordinate potential predictor. Encoders compress high-dimensional maps into a compact representation, thereby enabling the computation of Jacobian and Hessian matrices of the control problem within the solver. Encoders work outside the optimization loop: provided embeddings \(E_{map}\) and \(E_{fp}\) are sent to the solver as obstacle-related problem parameters \(\mathbf{p}_{o}\). This means that we assume the local map and robot footprint to be fixed within the prediction horizon. While the robot is following the global plan, the local map slides according to its current position and actual sensor data. ## IV Neural Potential Field In this section we discuss our neural model for calculating obstacle repulsive potential \(J_{o}\). First, we briefly describe Fig. 
2, which shows the proposed controller architecture: image encoders work outside the optimization loop and only need to be run once before each optimization procedure, while NPFunction works within the optimization loop, where the solver uses its gradients to find a safer trajectory. We then present the architecture of our network and introduce our strategy for generating the training set.

### _Network architecture_

The proposed neural architecture is shown in Fig. 3. It consists of three primary components: a ResNet Encoder, a Spatial Transformer, and a ResNet Decoder. ResNet blocks are used to extract local features from the obstacle map, which contains many corners and narrow passages. The Spatial Transformer utilizes the self-attention mechanism [47] to establish global relations among these features, assessing the significance of one feature in relation to others. Consequently, we employ the positional embedding technique from Visual Transformers [48]. Lastly, the ResNet Decoder processes the transformed feature maps to generate the final output. To mitigate the model's tendency to truncate critical details of the obstacle map necessary for navigation, we incorporated a map reconstruction loss based on cross-entropy. For the prediction of potential values, we employed the mean squared error (MSE) loss.

### _Training data_

One unit of the training set includes \(\{I_{map},I_{fp},x,y,\theta,J_{o}\}\), where \(I_{map}\) and \(I_{fp}\) are 2D images of the obstacle map and the robot footprint. The dataset should include samples of robot positions from a variety of maps. The maps are cropped from the MovingAI planning dataset [49]. For each map, we generate a set of random robot poses and calculate reference values for them using the following algorithm.

1. The obstacle map is transformed into a costmap: first, the signed distance function (SDF) is calculated algorithmically for each cell of the map; it is equal to the distance from the cell to the nearest obstacle border and is positive for free-space cells and negative for obstacle cells. Second, the repulsive potential is calculated for each cell: \(J_{o}=w_{1}(\pi/2+\arctan(w_{2}-w_{2}SDF))\). This is a sigmoid-like function, which is low far from obstacles, saturates to its maximum value inside obstacles, and has its maximum derivative near the obstacle border.
2. The collision potential is calculated for each random pose of the robot within the submap. For this purpose the robot's footprint is projected onto the map according to the pose, and the maximum potential among the footprint-covered cells is chosen as the collision potential. As an alternative, we tried to compute the collision potential as an integral of the potential over the footprint; this trial did not lead to a useful potential function.

A pivotal aspect of the training process was the dataset sampling strategy. Using uniformly random sampling across the map led to the network overfitting to large potential values and disregarding narrow passages. This is because of the walls, which are statistically overwhelming compared to free space but irrelevant for navigation, since we explicitly avoid planning through obstacles. To address this, we modified the sampling strategy such that 80% of the points are sampled with intermediate potential values. Figure 4 shows the distribution of point samples in a map. The areas with a high density of points surround and lie close to obstacles, while in the areas with a low density of points obstacles have little effect on the robot's path.
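The reference-potential computation just described can be reproduced in a few lines of NumPy/SciPy. The sketch below is a simplified reconstruction, not the authors' data-generation code: the signed distance is obtained from two Euclidean distance transforms, the cell-wise potential uses the arctan expression with the weights \(w_{1}=15\), \(w_{2}=10\) reported later in the implementation section, and the collision potential of a pose is the maximum cell potential under the footprint. The rectangular footprint and the toy map are invented purely for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(occ):
    """Signed distance in cells: positive in free space, negative inside obstacles.
    occ is a 2D array with 1 for obstacle cells and 0 for free cells."""
    dist_free = distance_transform_edt(occ == 0)  # distance of free cells to obstacles
    dist_occ = distance_transform_edt(occ == 1)   # distance of obstacle cells to free space
    return dist_free - dist_occ

def cell_potential(sdf, w1=15.0, w2=10.0):
    """Repulsive potential J_o = w1 * (pi/2 + arctan(w2 - w2 * SDF)) for every cell."""
    return w1 * (np.pi / 2.0 + np.arctan(w2 - w2 * sdf))

def pose_potential(pot, pose, footprint_pts, cell_size=1.0):
    """Collision potential of a pose = max potential over footprint-covered cells.
    footprint_pts are (x, y) points in the robot frame; pose = (x, y, theta) in cell units."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    gx = x + c * footprint_pts[:, 0] - s * footprint_pts[:, 1]
    gy = y + s * footprint_pts[:, 0] + c * footprint_pts[:, 1]
    rows = np.clip(np.round(gy / cell_size).astype(int), 0, pot.shape[0] - 1)
    cols = np.clip(np.round(gx / cell_size).astype(int), 0, pot.shape[1] - 1)
    return pot[rows, cols].max()

if __name__ == "__main__":
    occ = np.zeros((64, 64), dtype=int)
    occ[:, 30:34] = 1                         # a wall spanning columns 30..33
    pot = cell_potential(signed_distance(occ))
    # hypothetical rectangular footprint sampled on a grid of body-frame points
    xs, ys = np.meshgrid(np.linspace(-4, 4, 9), np.linspace(-2, 2, 5))
    fp = np.stack([xs.ravel(), ys.ravel()], axis=1)
    print(pose_potential(pot, (20.0, 20.0, 0.3), fp))  # far from the wall -> small potential
    print(pose_potential(pot, (29.0, 20.0, 0.3), fp))  # overlapping the wall -> large potential
```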
## V Implementation

We consider a nonlinear MPC problem statement, which may be solved via Interior Point (IP) methods or Sequential Quadratic Programming (SQP). Modern frameworks provide the possibility of real-time execution of these methods. IPOPT [50] and ForcesPro [51] implement IP, while ACADO [52, 53] and Acados [42] implement SQP. These frameworks rely on the lower-level CasADi framework [54] for algorithmic differentiation. We implement our MPC solver with the Acados framework, which is the newest one and provides the fastest execution. Using a deep neural network within the Acados solver requires a specific integration tool. Two libraries are relevant for this task: ML-CasADi [43] and L4CasADi [55].

Fig. 3: Proposed architecture of the neural network. The green component represents the robot footprint and map encoder, which generates embeddings that are consistent for all coordinates within the map. The red component signifies the final coordinate potential predictor, which contains an order of magnitude fewer parameters.

Fig. 4: Example of the distribution of samples in a map.

Both libraries provide a CasADi description of PyTorch [56] neural models. However, the first method was proposed and used for replacing complex models with local Taylor approximations to enable real-time optimization procedures, while the second method provides a complete mathematical description of the PyTorch model as a CasADi formula. For our PyTorch model, which describes the neural potential field of the obstacles surrounding the path of the robot, L4CasADi is more suitable because a description of the whole model is needed, not only one at a linearization point. ML-CasADi provides a lightweight local approximation of the complex neural model; this approximation limits the usefulness of ML-CasADi for functions with complicated landscapes. The novel L4CasADi framework does not use such approximations and provides integration of deep neural models into real-time CasADi-based optimization. We use L4CasADi to provide optimization over NPFunction. To our knowledge, our work is the first one that exploits L4CasADi for neural cost terms instead of a neural dynamics model. Our local planner works together with the Theta* [57] global planner, which generates global plans as polylines. Note that Theta* uses a simplified version of the robot footprint (a circle with a diameter equal to the robot width), as it fails to provide a safe path with the complete footprint model. This simplified model does not guarantee the safety of the global plan; therefore the safety of the trajectory is provided by our local planner. We consider obstacle maps with a 256x256 resolution, where each pixel corresponds to 2x2 centimeters of the real environment (i.e. the size of the map is 5.12x5.12 meters). We collected a dataset based on the MovingAI [49] city maps. It includes 4,000,000 samples taken from 200 maps with 2 footprints. Both footprints correspond to a real Husky UGV mobile manipulator with a UR5 robotic arm: the first one has a folded arm, the second one an outstretched arm. 10,000 random poses of the robot were generated for each map with each footprint. The weighting coefficients of the reference potential were set to \(w_{1}=15\) and \(w_{2}=10\), while the prediction horizon was set to \(p=30\). Dataset generation took 40 hours on an Intel Core i5-10400F CPU. Our neural network consists of 5 million parameters, with 500,000 allocated to the ResNet encoders. Encoders project each (256x256) map and robot footprint into (1x4352) embeddings.
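To illustrate why the network is split into encoders (run once per control step) and a small potential head (queried inside the optimization loop), the PyTorch sketch below mimics that structure with stand-in modules and obtains the gradient of the potential with respect to the robot pose via autograd. The layer sizes and modules are placeholders, not the actual ResNet/Transformer blocks, and in the real controller this function and its derivatives are exposed to the Acados solver through L4CasADi rather than through PyTorch autograd.

```python
import torch
import torch.nn as nn

EMB_MAP, EMB_FP, EMB_POSE = 4352, 4352, 32  # embedding sizes mentioned in the text (assumed per input here)

class TinyEncoder(nn.Module):
    """Stand-in for the ResNet map/footprint encoders (run once per control step)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=4, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(16 * 8 * 8, out_dim))
    def forward(self, img):
        return self.net(img)

class PotentialHead(nn.Module):
    """Stand-in for NPFunction: maps (map emb, footprint emb, pose features) to J_o."""
    def __init__(self):
        super().__init__()
        self.pose_embed = nn.Linear(4, EMB_POSE)
        self.mlp = nn.Sequential(nn.Linear(EMB_MAP + EMB_FP + EMB_POSE, 256),
                                 nn.ReLU(), nn.Linear(256, 1))
    def forward(self, e_map, e_fp, pose):
        x, y, th = pose[:, 0], pose[:, 1], pose[:, 2]
        feats = self.pose_embed(torch.stack([x, y, torch.sin(th), torch.cos(th)], dim=1))
        return self.mlp(torch.cat([e_map, e_fp, feats], dim=1))

if __name__ == "__main__":
    enc_map, enc_fp, head = TinyEncoder(EMB_MAP), TinyEncoder(EMB_FP), PotentialHead()
    grid = torch.rand(1, 1, 256, 256)      # local obstacle map
    fp = torch.rand(1, 1, 256, 256)        # footprint image
    with torch.no_grad():                  # encoders run outside the optimization loop
        e_map, e_fp = enc_map(grid), enc_fp(fp)
    pose = torch.tensor([[1.0, 2.0, 0.5]], requires_grad=True)
    j_o = head(e_map, e_fp, pose).sum()
    grad_pose, = torch.autograd.grad(j_o, pose)  # the gradient a solver would consume
    print(float(j_o), grad_pose)
```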
The robot's pose, represented as \((X,Y,\sin\theta,\cos\theta)\), is transformed into a (1,32) embedding. The model was trained over a span of 24 hours on a server equipped with a single Nvidia Tesla V100 card with 32GB of memory.

## VI Experiments

In this section we first present a numerical comparison of our approach with other planning methods. Then we discuss the effects of varying some hyperparameters of our method. Finally, we show the experiments on a real robot.

### _Comparative studies_

All experiments reported in this subsection were 1) conducted with the bicycle model of process dynamics, and 2) conducted on maps from the MovingAI dataset [49] that were not used for network training.

#### VI-A1 Illustrative example and comparison with trajectory optimizers

An example of a trajectory generated with our planner is shown in Fig. 5 on the left. A global plan in the form of a polyline is turned into a smooth and safe trajectory. Initially, the robot turns away from the obstacle and deviates from the global path, then smoothly returns to it, reaching the goal position. As a proof of concept for footprint encoding, we provide the following experiment. Consider a global plan where the robot first moves towards a flat wall, then turns and moves parallel to the wall. In this case, a robot with a folded arm turns a little later than the one with an outstretched arm. Such behavior may be seen in Fig. 5, bottom: the yellow curve relates to the outstretched arm, while the green curve relates to the folded arm. This behavior shows that the model learns different properties of the two footprints, which is useful for safer trajectory planning. We compared NPField trajectories with the CIAO [1] trajectory optimizer, which is based on a convex approximation of the free space around the robot. The CIAO-generated trajectory is shown in Fig. 5 on the right. It may be seen that it keeps the robot near obstacles, nearly touching their edges. When testing on more diverse scenarios, CIAO could not find a feasible path in nearly half of the cases. This may be connected with the fact that CIAO implements collision avoidance as a set of inequality constraints, which are not differentiated during optimization: it only checks for collisions and does not balance safety against path deviation in the cost function.

Fig. 5: Example scenario for local planning. Left: NPField trajectory. Right: CIAO trajectory. Bottom: trajectory curves for different footprints.

#### VI-A2 Comparison on BenchMR

We compare our algorithm with the baselines on 20 scenarios using the BenchMR [58] framework. The tasks include moving through narrow passages similar to those shown in Fig. 6. We compare standard metrics: planning time, path length, smoothness, and angle-over-length (for all of them, a lower value is better). We also introduce our custom metric, "safety distance" (the minimum value of the SDF). The results are given in Table I. We compare our stack (Theta* + NPField) with state-of-the-art planners: RRT [19], RRT* [59], Informed RRT [60], SBL [61], and RRT with GRIPS [62] smoothing. We do not provide the results for PRM [20], PRM* [59], BIT* [63], KPIECE1 [64], Theta* with CIAO [1] optimization, and Theta* with CHOMP [8] optimization, as they were able to generate a successful plan for less than half of the tasks. This result is particularly important for Theta* + CIAO and Theta* + CHOMP, as they are optimization-based planners similar to our approach and use the same global plans.
However, they could not handle the considered scenarios due to collisions (CHOMP) or failure to find a result (CIAO). The results in the table show that our stack is generally comparable to the other planners. It provides nearly the shortest path length, the best smoothness, a good AOL, and a good safety distance. Its computation time has the same order of magnitude as the fastest methods. We cannot point to an approach that is definitely better than ours (RRT with GRIPS is fast and safe but provides less smooth trajectories). Performance measurements were made on an Intel Core i5-10400F CPU. Note that the Acados solver needs to warm up before real-time use: the first execution of the optimization procedure may take about one second; after that the solver works faster. One optimization takes 60-70 ms, where data encoding takes around 10 ms, while the Acados solution takes the remaining 50-60 ms.

### _Ablation studies_

We also compare several versions of our algorithm on the same set of scenarios. These versions differ from each other in the weights of the potential function used to calculate the potentials in the training dataset, the features used for training the model, or the distribution of the sampling points in the training maps. First, we consider the situation when the reference collision potential (see subsection IV.B) is calculated as an integral value over the footprint instead of choosing the maximum value. We had a hypothesis that such an approach could lead to better learning of the relative geometry of the object. However, it did not lead to a useful model: in our experiments the network systematically produced incorrect results (an example is given in Fig. 7). Similar incorrect results were observed for an alternative choice of the weighting coefficient \(w_{2}\) for calculating the training values of the repulsive potential. The idea was to make the potential landscape more gentle and provide better optimization from an incorrect initial guess. In practice it led to poor learning of the map properties; see the example result for \(w_{2}=1\) in Fig. 8. Our third ablation experiment was connected with varying the resolution of the obstacle map and robot footprint. We consider the situation when one cell of the grid corresponds to 10x10 cm instead of 2x2 cm. These two resolutions correspond to the practical resolutions of the global map and the local map, respectively: the global map of the environment is stored in the memory of the robot, while the local map includes actual data from the sensors. The size of the submap is 50x50 pixels (i.e. 5x5 meters). This size allows us to reduce the complexity of the neural network: the total number of parameters is 1.4M instead of 5M, and the size of the map embedding is 1161 instead of 4352. This architecture still allows correct solution of the planning task; examples are provided in Fig. 9. Surprisingly, reducing the model complexity did not affect the performance of the solver: it still takes around 70 ms to make an optimization.

Fig. 6: Example comparison of NPField (blue curve) and CHOMP (green curve). CHOMP converges to the shortest path, ignoring obstacle danger.

Fig. 7: Example of a path with the integral potential.

### _Real Robot Experiments_

We deploy our approach on a real Husky UGV mobile manipulator as a ROS module for MPC local planning and control, which works with the Theta* global planner. The testing scenario includes hat transportation through a twisty corridor. The manipulator is holding the hat in an outstretched configuration (see Fig. 10).
The Acados optimizer runs on an Intel Core i5-10400F CPU and communicates with the robot in real time as a remote ROS node with a control horizon equal to one step. In this scenario, the more complicated, concave footprint has to be handled. Scenario execution may be seen in the accompanying video (see [https://github.com/cog-isa/NPField](https://github.com/cog-isa/NPField)).

## VII Conclusions

We propose a novel approach to local trajectory planning, where a Model Predictive Controller uses a neural model to estimate collision danger as a differentiable function. Our NPField neural architecture consists of encoders and NPFunction blocks. The encoders provide a compact representation of the obstacle map and robot footprint; this compact representation is sent to the MPC solver as a vector of problem parameters. NPFunction is integrated into the optimization loop, and its gradients are used for trajectory correction. We implement our controller using the Acados MPC framework and the L4CasADi tool for integrating deep neural models into the MPC loop. Our approach allows a robot with a complicated footprint to successfully navigate among obstacles in real time. The planning stack Theta* + NPField showed results comparable with sample-based planners on the BenchMR testing framework. The code for our approach is presented at [https://github.com/cog-isa/NPField](https://github.com/cog-isa/NPField). We consider our work a starting point for further research on neural potential estimation for kinodynamic planning for various robotic systems in various environments. Trajectory planning on more complex maps (e.g. elevation maps) is a promising topic for future research. Another important aspect is further research on footprint encoding, which may be useful for planning the trajectories of robotic systems with changing footprints (e.g. mobile manipulators under whole-body control).
2303.01726
Tight bounds for the sensitivity of CDAWGs with left-end edits
Compact directed acyclic word graphs (CDAWGs) [Blumer et al. 1987] are a fundamental data structure on strings with applications in text pattern searching, data compression, and pattern discovery. Intuitively, the CDAWG of a string $T$ is obtained by merging isomorphic subtrees of the suffix tree [Weiner 1973] of the same string $T$, thus CDAWGs are a compact indexing structure. In this paper, we investigate the sensitivity of CDAWGs when a single character edit operation (insertion, deletion, or substitution) is performed at the left-end of the input string $T$, namely, we are interested in the worst-case increase in the size of the CDAWG after a left-end edit operation. We prove that if $e$ is the number of edges of the CDAWG for string $T$, then the number of new edges added to the CDAWG after a left-end edit operation on $T$ does not exceed $e$. Further, we present a matching lower bound on the sensitivity of CDAWGs for left-end insertions, and almost matching lower bounds for left-end deletions and substitutions. We then generalize our lower-bound instance for left-end insertions to leftward online construction of the CDAWG, and show that it requires $\Omega(n^2)$ time for some string of length $n$.
Hiroto Fujimaru, Yuto Nakashima, Shunsuke Inenaga
2023-03-03T06:11:37Z
http://arxiv.org/abs/2303.01726v3
# On Sensitivity of ###### Abstract _Compact directed acyclic word graphs_ (_CDAWGs_) [Blumer et al. 1987] are a fundamental data structure on strings with applications in text pattern searching, data compression, and pattern discovery. Intuitively, the CDAWG of a string \(T\) is obtained by merging isomorphic subtrees of the suffix tree [Weiner 1973] of the same string \(T\), thus CDAWGs are a compact indexing structure. In this paper, we investigate the sensitivity of CDAWGs when a single character edit operation (insertion, deletion, or substitution) is performed at the left-end of the input string \(T\), namely, we are interested in the worst-case increase in the size of the CDAWG after a left-end edit operation. We prove that if \(\mathsf{e}\) is the number of edges of the CDAWG for string \(T\), then the number of new edges added to the CDAWG after a left-end edit operation on \(T\) is less than \(\mathsf{e}\). Further, we present almost matching lower bounds on the sensitivity of CDAWGs for all cases of insertion, deletion, and substitution. ## 1 Introduction _Compact directed acyclic word graphs_ (_CDAWGs_) [4] are a fundamental data structure on strings that have applications in fields including text pattern searching [6, 8], data compression [2, 13], and pattern discovery [14]. Intuitively, the CDAWG of a string \(T\), denoted \(\mathsf{CDAWG}(T)\), is obtained by merging isomorphic subtrees of the suffix tree [15] of the same string \(T\). Thus the size of the CDAWG is always smaller than that of the suffix tree. It is well known that the nodes of \(\mathsf{CDAWG}(T)\) correspond to _maximal repeats_ in \(T\), and the number \(\mathsf{e}\) of right-extensions of maximal repeats in \(T\), which is equal to the number of edges of \(\mathsf{CDAWG}(T)\), has been used as one of repetitiveness measures of strings. Namely, when \(\mathsf{e}\) is small, then the string contains a lot of repetitive substrings hence being well compressible. Indeed, it is known that \(\mathsf{e}\) can be as small as \(\Theta(\log n)\) with highly repetitive strings [11]. Further, one can obtain a _grammar-based compression_ of size \(O(\mathsf{e})\) via the CDAWG of the input string \(T\)[2]. Some relations between \(\mathsf{e}\) and the number \(\mathsf{r}\) of equal-letter runs in the _Burrows-Wheeler transform_ (_BWT_) [5] have also been investigated [3]. Recently, Akagi et al. [1] proposed the notion of _sensitivity_ of string repetitiveness measures and string compressors, including the aforementioned \(\mathsf{e}\) and \(\mathsf{r}\), the smallest _string attractor_ size \(\gamma\)[9], the _substring complexity_\(\delta\)[10], and the Lempel-Ziv parse size \(\mathsf{z}\)[16]. The sensitivity of a repetitiveness measure \(\mathsf{c}\) asks how much the measure size increases when a single-character edit operation is performed on the input string, and thus the sensitivity allows one to evaluate the robustness of the measure/compressor against errors/edits. This paper investigates the sensitivity of CDAWGs when a single character edit operation (insertion, deletion, or substitution) is performed at the left-end of the input string \(T\), namely, we are interested in the worst-case increase in the size of the CDAWG after an left-end edit operation. We prove that if \(\mathsf{e}\) is the number of edges of the CDAWG for string \(T\), then the number of new edges which are added to the CDAWG after an left-edit operation on \(T\) is always less than \(\mathsf{e}\). 
Further, we present almost matching lower bounds on the sensitivity of CDAWGs for any left-end insertion, deletion, and substitution (see Table 1 for a summary of these results). We generalize our lower-bound instances for left-end insertion to _leftward online construction_ of the CDAWG, and show that it requires \(\Omega(n^{2})\) time. This contrasts with the case of _rightward online CDAWG construction_ for which a linear-time algorithm exists [8]. A full version of this paper can be found in [7]. #### 1.0.1 Related work. Akagi et al. [1] presented lower bounds when a new character is deleted (resp. substituted) in the middle of the string, with a series of strings for which the size \(\mathsf{e}\) of the CDAWG additively increases by \(\mathsf{e}-4\) (resp. \(\mathsf{e}-2\)). They also showed a lower bound when a new character is inserted at the _right-end_ of the string, showing a series of strings for which the size of the CDAWG additively increases by \(\mathsf{e}-2\). While an additive \(\mathsf{e}+O(1)\) upper bound for the case of right-end insertion readily follows from the _rightward_ online construction of CDAWGs [8], no non-trivial upper bounds for the other edit operations, including our case of left-end edit operations, are known. Our \(\Omega(n^{2})\) lower-bound for leftward online construction of the CDAWG extends the quadratic lower-bound for maintaining the CDAWG in the sliding window model [12] (remark that fixing the right-end of the sliding window is equivalent to our leftward online construction). \begin{table} \begin{tabular}{|c||c|c|} \hline edit operation & upper bound & lower bound \\ \hline left-end insertion (\(T\Rightarrow aT\)) & \(\mathsf{e}-1\) & \(\mathsf{e}-2\) \\ \hline left-end deletion (\(T\Rightarrow T[2..|T|]\)) & \(\mathsf{e}-2\) & \(\mathsf{e}-4\) \\ \hline left-end substitution (\(T=aS\Rightarrow bS=T^{\prime}\)) & \(\mathsf{e}\) & \(\mathsf{e}-3\) \\ \hline \end{tabular} \end{table} Table 1: Our results: additive sensitivity of CDAWGs with left-end edit operations. ## 2 Preliminaries Let \(\Sigma\) be an _alphabet_ of size \(\sigma\). An element of \(\Sigma^{*}\) is called a _string_. For a string \(T\in\Sigma^{*}\), the length of \(T\) is denoted by \(|T|\). The _empty string_, denoted by \(\varepsilon\), is the string of length \(0\). Let \(\Sigma^{+}=\Sigma^{*}\setminus\{\varepsilon\}\). If \(T=uvw\), then \(u\), \(v\), and \(w\) are called a _prefix_, _substring_, and _suffix_ of \(T\), respectively. The sets of prefixes, substrings, and suffixes of string \(T\) are denoted by \(\mathsf{Prefix}(T)\), \(\mathsf{Substr}(T)\), and \(\mathsf{Suffix}(T)\), respectively. For a string \(T\) of length \(n\), \(T[i]\) denotes the \(i\)th character of \(T\) for \(1\leq i\leq n\), and \(T[i..j]=T[i]\cdots T[j]\) denotes the substring of \(T\) that begins at position \(i\) and ends at position \(j\) on \(T\) for \(1\leq i\leq j\leq n\). For two strings \(u\) and \(T\), let \(\mathsf{BegPos}(u,T)=\{i\mid T[i..i+|u|-1]=u\}\) and \(\mathsf{EndPos}(u,T)=\{i\mid T[i-|u|+1..i]=u\}\) denote the sets of beginning positions and the set of ending positions of \(u\) in \(T\), respectively. For any substrings \(u,v\in\mathsf{Substr}(T)\) of a string \(T\), we write \(u\equiv^{\mathrm{L}}_{T}v\) iff \(\mathsf{EndPos}(u,T)=\mathsf{EndPos}(v,T)\). Let \([\cdot]^{\mathrm{L}}_{T}\) denote the equivalence class of strings under \(\equiv^{\mathrm{L}}_{T}\). 
For \(x\in\mathsf{Substr}(T)\), let \(\mathsf{long}([x]^{\mathrm{L}}_{T})\) denote the longest member of \([x]^{\mathrm{L}}_{T}\). Let \(\mathsf{LeftM}(T)=\{\mathsf{long}([x]^{\mathrm{L}}_{T})\mid x\in\mathsf{ Substr}(T)\}\). Any element \(u\in\mathsf{LeftM}(T)\) is said to be _left-maximal_ in \(T\), since there are two distinct characters \(c,d\in\Sigma\) such that \(cu,du\in\mathsf{Substr}(T)\), or \(u\in\mathsf{Prefix}(T)\). For any non-longest element \(y\in[x]^{\mathrm{L}}_{T}\setminus\{\mathsf{long}([x]^{\mathrm{L}}_{T})\}\) there exists a unique non-empty string \(\alpha\) such that \(\alpha y=\mathsf{long}([x]^{\mathrm{L}}_{T})\), i.e. any occurrence of \(y\) in \(T\) is immediately preceded by \(\alpha\). Similarly, we write \(u\equiv^{\mathrm{R}}_{T}v\) iff \(\mathsf{BegPos}(u,T)=\mathsf{BegPos}(v,T)\). Let \([\cdot]^{\mathrm{R}}_{T}\) denote the equivalence class of strings under \(\equiv^{\mathrm{R}}_{T}\). For \(x\in\mathsf{Substr}(T)\), let \(\mathsf{long}([x]^{\mathrm{R}}_{T})\) denote the longest member of \([x]^{\mathrm{R}}_{T}\). Let \(\mathsf{RightM}(T)=\{\mathsf{long}([x]^{\mathrm{R}}_{T})\mid x\in\mathsf{ Substr}(T)\}\). Any element \(u\in\mathsf{RightM}(T)\) is said to be _right-maximal_ in \(T\), since there are two distinct characters \(c,d\in\Sigma\) such that \(uc,ud\in\mathsf{Substr}(T)\), or \(u\in\mathsf{Suffix}(T)\). For any non-longest element \(y\in[x]^{\mathrm{R}}_{T}\setminus\{\mathsf{long}([x]^{\mathrm{R}}_{T})\}\) there exists a unique non-empty string \(\beta\) such that \(y\beta=\mathsf{long}([x]^{\mathrm{R}}_{T})\), i.e. any occurrence of \(y\) in \(T\) is immediately followed by \(\beta\). Let \(\mathsf{M}(T)=\mathsf{LeftM}(T)\cap\mathsf{RightM}(T)\). Any element of \(\mathsf{M}(T)\) is said to be _maximal_ in \(T\). The _compact directed acyclic word graph_ (_CDAWG_) of a string \(T\), denoted \(\mathsf{CDAWG}(T)=(\mathsf{V},\mathsf{E})\), is an edge-labeled DAG such that \[\mathsf{V}_{T} = \{[x]^{\mathrm{L}}_{T}\mid x\in\mathsf{RightM}(T)\},\] \[\mathsf{E}_{T} = \{([x]^{\mathrm{L}}_{T},\beta,[x\beta]^{\mathrm{L}}_{T})\mid\beta \in\Sigma^{+},x,x\beta\in\mathsf{RightM}(T),y\beta\in[x\beta]^{\mathrm{L}}_{T}\text { for any }y\in[x]^{\mathrm{L}}_{T}\}.\] See Fig. 1 for a concrete example of CDAWGs. Intuitively, the strings in \(\mathsf{RightM}(T)\) correspond to the nodes of the suffix tree [15] of \(T\), and the operator \([\cdot]^{\mathrm{L}}_{T}\) merges the isomorphic subtrees of the suffix tree. Recall that the nodes of the suffix tree for \(T\) correspond to the right-maximal substrings of \(T\). Since \(\mathsf{long}([x]^{\mathrm{L}}_{T})\) is a maximal substring of \(T\) for any \(x\in\mathsf{RightM}(T)\), we have the following fact: **Fact 1**.: There is a one-to-one correspondence between the elements of \(\mathsf{M}(T)\) and the nodes of \(\mathsf{CDAWG}(T)\). We can regard each element of \(\mathsf{M}(T)\) as a node of \(\mathsf{CDAWG}(T)\) by Fact 1. We thus sometimes identify \(\mathsf{V}_{T}\) with \(\mathsf{M}(T)\) for convenience. For any \(x\in\mathsf{M}(T)\), \(\mathsf{d}_{T}(x)\) denotes the out-degree of the node \(x\) in \(\mathsf{CDAWG}(T)\). A non-empty substring \(x\) of string \(T\) is called a _maximal repeat_ in \(T\) if \(x\) is maximal in \(T\) and \(|\mathsf{BegPos}(x,T)|=|\mathsf{EndPos}(x,T)|\geq 2\). 
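These definitions can be checked mechanically on small strings. The following brute-force sketch is only a sanity check (quadratic in the number of substrings, not an efficient construction): it tests left- and right-maximality directly from the characterizations above and lists the maximal substrings of \(T\) together with their out-degrees \(\mathsf{d}_T(x)\), counted here as the number of distinct characters \(c\) with \(xc\in\mathsf{Substr}(T)\), which matches the out-degree for these examples. The convention that the empty string is a node only when \(T\) contains at least two distinct letters mirrors the unary case discussed below. Run on the string of Fig. 1, it reproduces the node set \(\{\varepsilon,ab,(ab)^{2},(ab)^{3},T\}\).

```python
def right_ext(T, x):
    """Distinct characters that immediately follow occurrences of x in T."""
    return {T[i + len(x)] for i in range(len(T) - len(x) + 1)
            if T[i:i + len(x)] == x and i + len(x) < len(T)}

def left_ext(T, x):
    """Distinct characters that immediately precede occurrences of x in T."""
    return {T[i - 1] for i in range(1, len(T) - len(x) + 1) if T[i:i + len(x)] == x}

def maximal_substrings(T):
    """M(T): substrings that are left-maximal (prefix of T or >= 2 distinct preceding
    characters) and right-maximal (suffix of T or >= 2 distinct following characters);
    the empty string is included when T is not unary."""
    M = {""} if len(set(T)) >= 2 else set()
    for i in range(len(T)):
        for j in range(i + 1, len(T) + 1):
            x = T[i:j]
            if (T.startswith(x) or len(left_ext(T, x)) >= 2) and \
               (T.endswith(x) or len(right_ext(T, x)) >= 2):
                M.add(x)
    return M

if __name__ == "__main__":
    T = "ab" * 4 + "c" + "ab" * 3   # the string of Fig. 1
    for x in sorted(maximal_substrings(T), key=len):
        print(repr(x), "out-degree", len(right_ext(T, x)))
```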
We remark that the set of maximal repeats in \(T\) coincides with \(\mathsf{M}(T)\setminus\{\varepsilon,T\}\), namely the longest elements of all internal nodes of \(\mathsf{CDAWG}(T)\) are maximal repeats in \(T\), and they are the only maximal repeats in \(T\). The _size_ of \(\mathsf{CDAWG}(T)=(\mathsf{V}_{T},\mathsf{E}_{T})\) for a string \(T\) of length \(n\) is the number \(\mathsf{e}(T)=|\mathsf{E}_{T}|\) of edges in \(\mathsf{CDAWG}(T)\), which is also referred to as the number of right-extensions of maximal repeats in \(T\). Using this measure \(\mathsf{e}\), we define the worst-case additive _sensitivity_ of the CDAWG with left-end edit operations (resp. insertion, deletion, and substitution) by: \[\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n) = \max_{T\in\Sigma^{n},a\in\Sigma}\{\mathsf{e}(aT)-\mathsf{e}(T)\},\] \[\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n) = \max_{T\in\Sigma^{n}}\{\mathsf{e}(T[2..n])-\mathsf{e}(T)\},\] \[\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n) = \max_{T\in\Sigma^{n},a\in\Sigma\setminus\{T[1]\}}\{\mathsf{e}(aT [2..n])-\mathsf{e}(T)\}.\] For the sensitivity of CDAWGs, we first briefly describe the special case where both the original string \(T\) and an edited string \(T^{\prime}\) are unary. Let \(T=a^{n}\). Clearly, every \(a^{i}\) with \(1\leq i<n\) is a maximal substring of \(T\) and it is only followed by \(a\). Thus \(\mathsf{e}(T)=n-1\). In case of insertion, i.e. \(T^{\prime}=aT=a^{n+1}\), we similarly have \(\mathsf{e}(T^{\prime})=n\). Thus \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)=1\) for unary strings. Symmetrically, we have \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)=-1\) in the case of deletion with \(T^{\prime}=a^{n-1}\). There is no substitution when \(\sigma=1\). In what follows, we focus on the case where \(\sigma\geq 2\). ## 3 Sensitivity of CDAWGs with left-end insertions We consider the worst-case additive sensitivity \(\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n)\) of \(\mathsf{CDAWG}(T)\) when a new character \(a\) is prepended to input string \(T\) of length \(n\), i.e. \(T^{\prime}=aT\). ### Upper bound for \(\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n)\) on CDAWGs We divide the value \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)\) into two components \(\mathsf{f}_{\mathrm{Ins}}(T)\) and \(\mathsf{g}_{\mathrm{Ins}}(T)\) s.t. Figure 1: Illustration for \(\mathsf{CDAWG}(T)\) of string \(T=(ab)^{4}c(ab)^{3}\). Every substring of \(T\) can be spelled out from a distinct path from the source \(\varepsilon\). There is a one-to-one correspondence between the maximal substrings in \(\mathsf{M}(T)=\{\varepsilon,ab,(ab)^{2},(ab)^{3},(ab)^{4}c(ab)^{3}\}\) and the nodes of \(\mathsf{CDAWG}(T)\). The number of right-extensions of \(\mathsf{CDAWG}(T)\) is the number \(\mathsf{e}(T)\) of edges, which is 9 in this example. * \(\mathsf{f}_{\mathrm{Ins}}(T)\) is the total out-degrees of new nodes that appear in \(\mathsf{CDAWG}(aT)\); * \(\mathsf{g}_{\mathrm{Ins}}(T)\) is the total number of new out-going edges of nodes that already exist in \(\mathsf{CDAWG}(T)\). Clearly \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)\leq\mathsf{f}_{\mathrm{Ins}}(T)+\mathsf{ g}_{\mathrm{Ins}}(T)\). We first consider the above two components separately, and then we merge them to obtain the desired upper bound. #### 3.1.1 \(\mathsf{f}_{\mathrm{Ins}}(T)\): total out-degrees of new nodes. Suppose \(u\) is a new node for \(\mathsf{CDAWG}(aT)\), where \(u\notin\mathsf{M}(T)\) and \(u\in\mathsf{M}(aT)\). This implies that there is a new occurrence of \(u\) in \(aT\) as a prefix. Let \(u=ax\). 
The following is our key lemma: **Lemma 1**.: _If \(ax\notin\mathsf{M}(T)\) and \(ax\in\mathsf{M}(aT)\) (i.e. \(ax\) is a new node in \(\mathsf{CDAWG}(aT)\)), then \(x\in\mathsf{M}(T)\). Also, \(\mathsf{d}_{aT}(ax)\leq\mathsf{d}_{T}(x)\)._ Proof.: Since \(ax\in\mathsf{Prefix}(aT)\), \(x\in\mathsf{Prefix}(T)\). Thus \(x\) is left-maximal in \(T\). Assume on the contrary that \(x\) is not right-maximal in \(T\). Then there exists a non-empty string \(\beta\in\Sigma^{+}\) such that \(x\beta=\mathsf{long}([x]_{T}^{\mathsf{R}})\), which means that any occurrence of \(x\) in \(T\) is immediately followed by \(\beta\). Thus \(ax\) is also immediately followed by \(\beta\) in \(aT\), however, this contradicts the precondition that \(ax\in\mathsf{M}(aT)\). Thus \(x\) is right-maximal in \(T\). It immediately follows from \(\mathsf{EndPos}(ax,aT)\subseteq\mathsf{EndPos}(x,T)\) that \(\mathsf{d}_{aT}(ax)\leq\mathsf{d}_{T}(x)\). It follows from Lemma 1 that the out-degree of each new node in \(\mathsf{CDAWG}(aT)\) does not exceed the maximum out-degree of \(\mathsf{CDAWG}(T)\). Also, there is an injective mapping from a new node \(ax\) in \(\mathsf{CDAWG}(aT)\) to an existing node \(x\) in \(\mathsf{CDAWG}(T)\) by Lemma 1. Thus \(\mathsf{f}_{\mathrm{Ins}}(T)\leq\mathsf{e}(T)\) for any string \(T\). In the sequel, we give a tighter bound \(\mathsf{f}_{\mathrm{Ins}}(T)\leq\mathsf{e}(T)-1\). For this purpose, we pick up the case where \(x=\varepsilon\), assume that \(ax=a\) becomes a new node in \(\mathsf{CDAWG}(aT)\), and compare the out-degree of the source \(\varepsilon\) of \(\mathsf{CDAWG}(T)\) and the out-degree of the new node \(a\) in \(\mathsf{CDAWG}(aT)\). We consider the cases with \(\sigma=2\) and with \(\sigma\geq 3\) separately: **Lemma 2**.: _Let \(\sigma=2\). If_ 1. \(a\notin\mathsf{M}(T)\)_,_ 2. \(a\in\mathsf{M}(aT)\)_, and_ 3. _there exists a string_ \(x\in\mathsf{M}(T)\setminus\{\varepsilon,T\}\) _such that_ \(ax\notin\mathsf{M}(T)\) _and_ \(ax\in\mathsf{M}(aT)\)_,_ \(\mathsf{d}_{aT}(a)<\mathsf{d}_{T}(\varepsilon)\)_._ Proof.: Let \(\Sigma=\{a,b\}\). We can exclude the case where \(T=b^{n}\) due to the following reason: Since \(ab^{i}\) for each \(1\leq i<n\) is not maximal in \(aT=ab^{n}\), no new nodes are created in \(\mathsf{CDAWG}(ab^{n})\) (only a new edge labeled \(ab^{n}\) from the source to the sink is created). From now on, consider the case where \(T\) contains both \(a\) and \(b\). This means that \(\mathsf{d}_{T}(\varepsilon)=\sigma=2\). Since \(a\in\mathsf{M}(aT)\), \(a\) is a node of \(\mathsf{CDAWG}(aT)\). Assume on the contrary that \(\mathsf{d}_{aT}(a)=\mathsf{d}_{T}(\varepsilon)\). We then have \(\mathsf{d}_{aT}(a)=2\), which means \(aa,ab\in\mathsf{Substr}(aT)\). There are two cases depending on the first character of \(T\): * If \(T[1]=a\), then let \(T=aw\). Then, since \(aT=aaw\), we have \(ab\in\mathsf{Substr}(T)\). Since \(a\notin\mathsf{M}(T)\) (the first precondition), \(b\) is the only character that immediately follows \(a\) in \(T\), meaning that \(aa\notin\mathsf{Substr}(T)\). Recall that the new node \(ax\) must be a prefix of \(aT=aaw\). Since \(x\neq\varepsilon\) (the third precondition), \(|ax|\geq 2\), and thus \(aa\) is a prefix of \(ax\). However, since \(aa\notin\mathsf{Substr}(T)\), \(aa\) occurs in \(aT\) exactly once as a prefix and thus \(ax\) occurs exactly once in \(aT\). This contradicts the third precondition that \(ax\) is a new node in \(\mathsf{CDAWG}(aT)\). 
* If \(T[1]=b\), then we have that \(ab\notin\mathsf{Substr}(T)\) by similar arguments as above. Thus \(T\) must be of form \(b^{m}a^{n-m}\) with \(1\leq m<n\). Moreover, since \(a\notin\mathsf{M}(T)\) and \(a\in\mathsf{M}(aT)\) (the first and second preconditions), we have \(T=b^{n-1}a\). Then, for the edited string \(aT=ab^{n-1}a\), any new internal node \(ax\) in \(\mathsf{CDAWG}(aT)\) must be in form \(ab^{i}\) with \(1\leq i<n\). However, each \(ax=ab^{i}\) occurs in \(aT\) exactly once, meaning that \(\mathsf{long}([ab^{i}]_{aT}^{\mathrm{R}})=aT\). This contradicts the third precondition that \(ax\) is a new node in \(\mathsf{CDAWG}(aT)\). Consequently, \(\mathsf{d}_{aT}(a)<\mathsf{d}_{T}(\varepsilon)\). **Lemma 3**.: _Let \(\sigma\geq 3\). If \(a\notin\mathsf{M}(T)\) and \(a\in\mathsf{M}(aT)\), then \(\mathsf{d}_{aT}(a)<\mathsf{d}_{T}(\varepsilon)\)._ Proof.: By similar arguments to the proof for Lemma 2, we have that \(T\) contains at least three distinct characters, one of which is \(a\). Thus \(\mathsf{d}_{T}(\varepsilon)=\sigma\geq 3\). Assume on the contrary that \(\mathsf{d}_{aT}(a)=\mathsf{d}_{T}(\varepsilon)=\sigma\geq 3\). Since \(a\notin\mathsf{M}(T)\) (i.e. \(a\) is not maximal in \(T\)), we have the two following cases: * If \(a\) is not left-maximal in \(T\), then \(T[1]\neq a\) and there is a unique character \(b\) (\(\neq a\)) that immediately precedes \(a\) in \(T\), meaning that \(aa\notin\mathsf{Substr}(T)\). Since \(T[1]\neq a\), we also have \(aa\notin\mathsf{Substr}(aT)\). Thus \(\mathsf{d}_{aT}(a)<\sigma=\mathsf{d}_{T}(\varepsilon)\), a contradiction. * If \(a\) is not right-maximal in \(T\), then there is a unique character \(b\) that immediately follows \(a\) in \(T\). The occurrence of \(a\) as a prefix of \(aT\) is followed by \(T[1]\), and thus the number \(\mathsf{d}_{aT}(a)\) of distinct characters following \(a\) in \(aT\) is at most \(2<\sigma=\mathsf{d}_{T}(\varepsilon)\), a contradiction. Consequently, \(\mathsf{d}_{aT}(a)<\mathsf{d}_{T}(\varepsilon)\). By Lemmas 2 and 3, even if there appear new nodes \(ax\) in \(\mathsf{CDAWG}(aT)\) corresponding to all existing nodes \(x\) in \(\mathsf{CDAWG}(T)\), we have a credit \(\mathsf{d}_{T}(\varepsilon)-\mathsf{d}_{aT}(a)\geq 1\) in most cases. The only exception is when \(\sigma=2\) and \(\mathsf{M}(T)=\{\varepsilon,T\}\). However, in this specific case \(\mathsf{CDAWG}(T)\) consists only of the two nodes (source and sink), namely \(\mathsf{e}(T)=2\). Conversely, we have that the above arguments hold for any \(\mathsf{e}(T)\geq 3\), which leads to the following: **Lemma 4**.: _For any string \(T\) with \(\mathsf{e}(T)\geq 3\), \(\mathsf{f}_{\mathrm{Ins}}(T)\leq\mathsf{e}(T)-1\)._ #### 3.1.2 \(\mathsf{g}_{\mathrm{Ins}}(T)\): number of new branches from existing nodes. The following lemma states that the out-degrees of most existing nodes of \(\mathsf{CDAWG}(T)\) do not change in \(\mathsf{CDAWG}(aT)\), except for a single unique node that can obtain a single new out-going edge in \(\mathsf{CDAWG}(aT)\): **Lemma 5**.: _For any \(y\in\mathsf{Substr}(T)\) such that \(y\in\mathsf{M}(T)\) and \(y\in\mathsf{M}(aT)\), \(\mathsf{d}_{aT}(y)\in\{\mathsf{d}_{T}(y),\mathsf{d}_{T}(y)+1\}\). Also, there exists at most one substring \(y\) with \(\mathsf{d}_{aT}(y)=\mathsf{d}_{T}(y)+1\). Consequently \(\mathsf{g}_{\mathrm{Ins}}(T)\leq 1\)._ Proof.: Since \(y\in\mathsf{M}(T)\) and \(y\in\mathsf{M}(aT)\), \(y\) is a node in both \(\mathsf{CDAWG}(T)\) and \(\mathsf{CDAWG}(aT)\). 
Then we have that: \[\mathsf{d}_{aT}(y)=\begin{cases}\mathsf{d}_{T}(y)+1&\text{if $y\in\mathsf{ Prefix}(aT)$ and $yb$ occurs in $aT$ only as a prefix,}\\ \mathsf{d}_{T}(y)&\text{otherwise,}\end{cases}\] where \(b\) is the character that immediately follows the occurrence of \(y\) as a prefix of \(aT\), namely \(b=T[|y|]\). Assume on the contrary that there exist two distinct substrings \(x,y\in\mathsf{M}(T)\cap\mathsf{M}(aT)\) such that \(\mathsf{d}_{aT}(x)=\mathsf{d}_{T}(x)+1\) and \(\mathsf{d}_{aT}(y)=\mathsf{d}_{T}(y)+1\). Since both \(x\) and \(y\) must be distinct prefixes of \(aT\), we can assume w.l.o.g. that \(|x|<|y|\), which means that \(x\) is a proper prefix of \(y\). Thus the occurrence of \(x\) as a prefix of \(aT\) is immediately followed by the character \(c=y[|x|+1]\). We recall that \(y\) occurs in \(T\) since \(y\in\mathsf{M}(T)\). Therefore there is an occurrence of \(x\) in \(T\) that is immediately followed by \(c\), which leads to \(\mathsf{d}_{aT}(x)=\mathsf{d}_{T}(x)\), a contradiction. #### 3.1.3 Putting all together. Due to Lemma 4 and Lemma 5, we have an upper bound \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)\leq\mathsf{f}_{\mathrm{Ins}}(T)+\mathsf{ g}_{\mathrm{Ins}}(T)\leq\mathsf{e}(T)-1+1=\mathsf{e}(T)\) for \(\sigma\geq 2\). We remark that the equality holds only if both of the following conditions are satisfied: 1. For any \(x\in\mathsf{M}(T)\setminus\{\varepsilon\}\), \(ax\notin\mathsf{M}(T)\), \(ax\in\mathsf{M}(aT)\), and \(\mathsf{d}_{aT}(ax)=\mathsf{d}_{T}(x)\); 2. There exists a unique string \(x\in\mathsf{Substr}(T)\) such that \(\mathsf{d}_{aT}(x)=\mathsf{d}_{T}(x)+1\). However, in the next lemma, we show that no strings \(x\) can satisfy both Conditions (a) and (b) simultaneously: **Lemma 6**.: _If \(ax\notin\mathsf{M}(T)\) and \(ax\in\mathsf{M}(aT)\), then \(\mathsf{d}_{aT}(x)=\mathsf{d}_{T}(x)\)._ Proof.: Assume on the contrary that \(\mathsf{d}_{aT}(x)\neq\mathsf{d}_{T}(x)\). By Lemma 5 we have that \(\mathsf{d}_{aT}(x)=\mathsf{d}_{T}(x)+1\). Then, it also follows from the proof of Lemma 5 that \(x\) is a prefix of \(aT\) and the character \(b=T[|x|]\) that immediately follows the prefix occurrence of \(x\) in \(aT\) differs from any other characters that immediately follow the occurrences of \(x\) in \(T\). Namely, we have \(b\notin\Sigma^{\prime}=\{T[i+1]\mid i\in\mathsf{EndPos}(x,T)\}\). Moreover, by Lemma 1, \(ax\) is also a prefix of \(aT\). This means that \(x\) is a prefix of \(ax\), and hence \(ax=xb\), which means that \(x=a^{|x|}\) and \(a=b\). Because \(\sigma\geq 2\), \(T\neq x\). Since \(ax\in\mathsf{M}(aT)\) and \(x\neq T\), \(ax\) (\(=xb\)) occurs in \(T\). This means that \(b=c\) for some \(c\in\Sigma^{\prime}\), a contradiction. Thus, \(\mathsf{d}_{aT}(x)=\mathsf{d}_{T}(x)\). We have \(\mathsf{e}(T)\geq 3\) only if \(|T|\geq 3\). By wrapping up Lemma 4, Lemma 5, and Lemma 6, we obtain the main result of this subsection: **Theorem 1**.: _For any \(n\geq 3\) and \(\mathsf{e}\geq 3\), \(\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n)\leq\mathsf{e}-1\)._ ### Lower bound for \(\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n)\) on CDAWGs The next lower bound for \(\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n)\) holds (see Appendix A.1 for a proof). **Theorem 2**.: _There exists a family of strings \(T\) such that \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)=\mathsf{e}(T)-2\), where \(T^{\prime}=bT\) with \(b\in\Sigma\). 
Therefore \(\mathsf{AS}_{\mathrm{LeftIns}}(\mathsf{e},n)\geq\mathsf{e}-2\)._ ## 4 Sensitivity of CDAWGs with left-end deletions In this section we investigate the worst-case additive sensitivity \(\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n)\) of \(\mathsf{CDAWG}(T)\) when \(T[1]\) is deleted from the original input string \(T\) of length \(n\). ### Upper bound for \(\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n)\) on CDAWGs Let \(a=T[1]\) be the first character of string \(T\). Let \(T=aS\) and \(T^{\prime}=S\), and we consider left-end deletion \(aS\Rightarrow S\). Since deleting the left-end character from \(T\) never increases the right-contexts of any substring in \(S\), it suffices for us to consider \(\mathsf{f}_{\mathrm{Del}}(T)=\mathsf{f}_{\mathrm{Del}}(aS)\), the total out-degrees of new nodes that appear in \(\mathsf{CDAWG}(T^{\prime})=\mathsf{CDAWG}(S)\), namely \(\mathsf{e}(S)-\mathsf{e}(aS)\leq\mathsf{f}_{\mathrm{Del}}(aS)\). Let \(x\) be a new node in \(\mathsf{CDAWG}(S)\). We have the following: **Lemma 7**.: _If \(x\notin\mathsf{M}(aS)\) and \(x\in\mathsf{M}(S)\), then \(x\in\mathsf{Prefix}(S)\) and \(ax\in\mathsf{M}(aS)\). Also, \(\mathsf{d}_{S}(x)=\mathsf{d}_{aS}(ax)\)._ Proof.: Since \(x\notin\mathsf{M}(aS)\), \(x\) is either not left-maximal or not right-maximal in \(aS\). If \(x\) is not right-maximal in \(aS\), then \(x\) is also not right-maximal in \(S\), hence \(x\notin\mathsf{M}(S)\). However, this contradicts the precondition \(x\in\mathsf{M}(S)\). Thus \(x\) is not left-maximal in \(aS\). Then, there exists a non-empty unique string \(\alpha\in\Sigma^{+}\) such that \(\alpha x=\mathsf{long}([x]^{\mathrm{L}}_{aS})\), which means that any occurrence of \(x\) in \(aS\) is immediately preceded by \(\alpha\). Assume on the contrary that \(x\notin\mathsf{Prefix}(S)\). Since \(x\in\mathsf{M}(S)\), \(x=\mathsf{long}([x]^{\mathrm{L}}_{S})=\mathsf{long}([x]^{\mathrm{L}}_{aS})\), however, this contradicts that \(\alpha\) is a non-empty string. Thus \(x\in\mathsf{Prefix}(S)\), and hence \(ax\in\mathsf{Prefix}(aS)\). Since \(ax\in\mathsf{Prefix}(aS)\) and \(x\) is right-maximal in \(aS\), \(ax\) is a maximal string of \(aS\). Thus \(ax\in\mathsf{M}(aS)\). Since \(x\) is not left-maximal in \(aS\) and since \(ax\in\mathsf{Prefix}(aS)\), \(\mathsf{EndPos}(ax,aS)=\mathsf{EndPos}(x,aS)=\mathsf{EndPos}(x,S)\). This leads to \(\mathsf{d}_{aS}(ax)=\mathsf{d}_{S}(x)\). By Lemma 7, the out-degree of each new node in \(\mathsf{CDAWG}(S)\) does not exceed the maximum out-degree of \(\mathsf{CDAWG}(aS)\). Also by Lemma 7, there is an injective mapping from a new node \(x\) in \(\mathsf{CDAWG}(S)\) to an existing node \(ax\) in \(\mathsf{M}(aS)\setminus\{\varepsilon\}\). Since \(\mathsf{d}_{aS}(\varepsilon)=\sigma\geq 2\), it holds that \(\mathsf{e}(S)\leq 2(\mathsf{e}(T)-\sigma)+\sigma\leq 2\mathsf{e}(aS)-2\), that is: **Theorem 3**.: _For any \(n\), \(\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n)\leq\mathsf{e}-2\)._ ### Lower bound for \(\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n)\) on CDAWGs The next lower bound for \(\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n)\) holds (see Appendix A.2 for a proof). **Theorem 4**.: _There exists a family of strings \(T\) such that \(\mathsf{e}(S)-\mathsf{e}(T)=\mathsf{e}(T)-4\), where \(T=aS\) with \(a\in\Sigma\). 
Therefore \(\mathsf{AS}_{\mathrm{LeftDel}}(\mathsf{e},n)\geq\mathsf{e}-4\)._ ## 5 Sensitivity of CDAWGs with left-end substitutions We consider the worst-case additive sensitivity \(\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n)\) of \(\mathsf{CDAWG}(T)\) when \(T[1]\) is substituted by a new character \(b\neq T[1]\), i.e. \(T^{\prime}=bT[2..n]\). ### Upper bound for \(\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n)\) on CDAWGs Similarly to the case of insertions, we separate \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)\) into the two following components \(\mathsf{f}_{\mathrm{Sub}}(T)\) and \(\mathsf{g}_{\mathrm{Sub}}(T)\) such that * \(\mathsf{f}_{\mathrm{Sub}}(T)\) is the total out-degrees of new nodes that appear in \(\mathsf{CDAWG}(T^{\prime})\); * \(\mathsf{g}_{\mathrm{Sub}}(T)\) is the total number of new out-going edges of nodes that already exist in \(\mathsf{CDAWG}(T)\). We regard a substitution as a sequence of a deletion and an insertion, i.e. two consecutive edit operations such that \(aS\)\((=T)\Rightarrow S\Rightarrow bS\)\((=bT[2..n]=T^{\prime})\). #### 5.1.1 \(\mathsf{f}_{\mathrm{Sub}}(T)\): total out-degrees of new nodes. Let \(u\) be a new node in \(\mathsf{CDAWG}(bS)\) that does not exist in \(\mathsf{CDAWG}(aS)\), namely \(u\in\mathsf{M}(bS)\) and \(u\notin\mathsf{M}(aS)\). We categorize each new node \(u\) to the two following types \(u_{1}\) and \(u_{2}\) as: 1. \(u_{1}\in\mathsf{M}(T)\) so that \(u_{1}\) is generated by deletion \(aS\Rightarrow S\); 2. \(u_{2}\notin\mathsf{M}(T)\) so that \(u_{2}\) is generated by insertion \(S\Rightarrow bS\). Node \(u_{1}\) is a new node that appears in \(\mathsf{CDAWG}(S)\). Thus, it follows from Lemma 7 that node \(au_{1}\) exists in \(\mathsf{CDAWG}(aS)\). Since \(u_{2}\) is not a node in \(\mathsf{CDAWG}(S)\), it follows from Lemma 1 that \(u_{2}=bx\) and \(x\) is a node in \(\mathsf{CDAWG}(S)\). Based on the this observation, we will show that there is an injective mapping from the new nodes in \(\mathsf{CDAWG}(bS)=\mathsf{CDAWG}(T^{\prime})\) to the existing nodes in \(\mathsf{CDAWG}(aS)=\mathsf{CDAWG}(T)\). In so doing, we must resolve the two non-injective situations where: 1. a new node \(bx\) is generated by insertion \(S\Rightarrow bS\), where \(x\) is generated by deletion \(aS\Rightarrow S\) and \(x\) remains as a node in \(\mathsf{CDAWG}(bS)\); 2. a new node \(bax\) generated by insertion \(S\Rightarrow bS\), where \(x\) is generated by deletion \(aS\Rightarrow S\) and \(x\) remains as a node in \(\mathsf{CDAWG}(bS)\). Suppose (on the contrary) that Case (i) happens. Then, a new node \(x\) is generated from an existing node \(ax\), and \(bx\) is generated from \(x\). Therefore, two new nodes could be generated from on existed node \(ax\in\mathsf{M}(aS)\). However, the next lemma shows that this situation (Case (i)) does not occur unless \(x=S\): **Lemma 8**.: _If \(x\neq S\), \(x\notin\mathsf{M}(aS)\), \(x\in\mathsf{M}(S)\), and \(x\in\mathsf{M}(bS)\), then \(bx\notin\mathsf{M}(bS)\)._ Proof.: Since \(x\notin M(aS)\) and \(x\in M(S)\), \(x\in\mathsf{Prefix}(S)\) by Lemma 7. Since \(x\in\mathsf{M}(S)\) and \(ax\in\mathsf{Prefix}(aS)\), \(ax\equiv^{\mathrm{L}}_{aS}x\) and \(ax=\mathsf{long}([x]^{\mathrm{L}}_{aS})\). This means that \(bx\) occurs exactly once in \(bS\) as a proper prefix. Thus, \(bx\notin\mathsf{RightM}(bS)\) which leads to \(bx\notin\mathsf{M}(bS)\). As for Lemma 8, the situation (Case (i)) can occur if \(x=S\). 
However, if \(x=S\), then \(S\in\mathsf{M}(bS)\) which implies that \(S\) occurs in \(bS\) as prefix \(bS[1..(n-1)]\). Thus, \(S=b^{n}\), \(T=aS=ab^{n}\) and \(T^{\prime}=bS=b^{n+1}\). It is clear that \(\mathsf{e}(aS)=\mathsf{e}(bS)=n+1\). Therefore the size of the CDAWG does not change when \(x=S\). Now we turn our attention to Case (ii) and assume (on the contrary) that it happens. Then, two new nodes \(bax\) and \(x\) could be generated from a single existing node \(ax\). According to the following lemma, however, this situation cannot occur: **Lemma 9**.: _If \(ax\in\mathsf{M}(aS)\), \(x\notin\mathsf{M}(aS)\), \(bax\notin\mathsf{M}(aS)\), \(x\in\mathsf{M}(S)\), and \(bax\notin\mathsf{M}(S)\), then \(bax\notin\mathsf{M}(bS)\)._ Proof.: Assume on the contrary that \(bax\in\mathsf{M}(bS)\). Since \(x\notin M(aS)\) and \(x\in M(S)\), \(x\in\mathsf{Prefix}(S)\) by Lemma 7. Also, since \(bax\notin M(S)\) and \(bax\in M(bS)\), \(ax\in\mathsf{Prefix}(S)\) by Lemma 1. This means that \(x\in\mathsf{Prefix}(ax)\) and \(x=a^{|x|}\). Since \(ax=a^{|x|+1}\) is a maximal substring of \(aS\), \(x\) is also a maximal substring of \(aS\). Thus \(x\in\mathsf{M}(aS)\), however, this contradicts the precondition that \(x\notin\mathsf{M}(aS)\). Thus \(bax\notin\mathsf{M}(bS)\). As a result, there is an injective mapping from the new nodes \(u_{1}\) (resp. \(u_{2}=bx\)) in \(\mathsf{CDAWG}(bS)\) to the existing nodes \(au_{1}\) (resp. \(x\)) in \(\mathsf{CDAWG}(aS)\) by Lemmas 1, 7, 8, and 9. It also follows from these lemmas that the out-degree of each new node in \(\mathsf{CDAWG}(bS)\) does not exceed the maximum out-degree of \(\mathsf{CDAWG}(aS)\). Finally, we consider the source \(\varepsilon\). By Lemmas 2, 3, and 7, if \(b\in\mathsf{M}(bS)\), \(b\notin\mathsf{M}(aS)\), and \(\mathsf{e}(aS)\geq 3\), then \(\mathsf{d}_{bS}(b)\leq\mathsf{d}_{aS}(\varepsilon)\). Thus we have: **Lemma 10**.: _For any string \(T\) with \(\mathsf{e}(T)\geq 3\), \(\mathsf{f}_{\mathrm{sub}}(T)\leq\mathsf{e}(T)-1\)._ #### 5.1.2 \(\mathsf{g}_{\mathrm{Sub}}(T)\): number of new branches from existing nodes. Since left-end deletions do not create new branches from existing nodes (recall Section 4), it is immediate from Lemma 5 that: **Lemma 11**.: _For any string \(T\), \(\mathsf{g}_{\mathrm{Sub}}(T)\leq 1\)._ #### 5.1.3 Wrapping up. Our main result of this section follows from Lemmas 10 and 11: **Theorem 5**.: _For any \(n\geq 4\) and \(\mathsf{e}\geq 3\), \(\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n)\leq\mathsf{e}\)._ ### Lower bound for \(\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n)\) on CDAWGs The next lower bound for \(\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n)\) holds (see Appendix A.3 for a proof). **Theorem 6**.: _There exists a family of strings \(T\) such that \(\mathsf{e}(T^{\prime})-\mathsf{e}(T)=\mathsf{e}(T)-3\), where \(T^{\prime}=bT[2..n]\) with \(b\in\Sigma\setminus\{T[1]\}\). Therefore \(\mathsf{AS}_{\mathrm{LeftSub}}(\mathsf{e},n)\geq\mathsf{e}-3\)._ ## 6 Quadratic-time bound for leftward online construction The leftward online construction problem for the CDAWG is, given a string \(T\) of length \(n\), to maintain \(\mathsf{CDAWG}(T[i..n])\) for decreasing \(i=n,\ldots,1\). 
By extending our lower bound on the sensitivity with left-end insertions/deletions, a quadratic bound for this online CDAWG construction follows: **Theorem 7**.: _There exists a family of strings \(T_{m}\) for which the total work for building \(\mathsf{CDAWG}(T_{m}[i..n])\) for decreasing \(i=n,\ldots,1\) is \(\Omega(n^{2})\), where \(n=|T_{m}|\)._ _Proof._ Consider string \(T_{m}=(ab)^{2m}cab(ab)^{2m}\$\), where \(a,b,c,\$\in\Sigma\). For \(0\leq k\leq m\), let \(T_{k,m}\) denote a series of suffixes of \(T_{m}\) such that \(T_{k,m}=(ab)^{m+k}cab(ab)^{2m}\$\). Notice \(T_{m,m}=T_{m}\), \(m=\Theta(n)\) with \(n=|T_{m,m}|\), and \(T_{k,m}=T_{m}[2(m-k)+1..n]\). Now, we consider building \(\mathsf{CDAWG}(T_{m}[i..n])\) for decreasing \(i=n,\ldots,1\), and suppose we have already built \(\mathsf{CDAWG}(T_{k,m})\). For this string \(T_{k,m}\), we have that \(\mathsf{M}(T_{k,m})=\{\varepsilon,ab,(ab)^{2},\ldots,(ab)^{2m},T_{k,m}\}\). For any node \(v\) of \(\mathsf{CDAWG}(T_{k,m})=(\mathsf{V}_{T_{k,m}},\mathsf{E}_{T_{k,m}})\), let \(\mathsf{d}_{T_{k,m}}(v)\) denote the out-degree of \(v\). Then, since \(\mathsf{d}_{T_{k,m}}(\varepsilon)=4\), \(\mathsf{d}_{T_{k,m}}((ab)^{i})=3\) for every \(1\leq i\leq m+k\), \(\mathsf{d}_{T_{k,m}}((ab)^{j})=2\) for every \(m+k+1\leq j\leq 2m\), and \(\mathsf{d}_{T_{k,m}}(T_{k,m})=0\). Therefore \(\mathsf{e}(T_{k,m})=5m+k+4\). Let us now prepend character \(b\) to \(T_{k,m}\) and obtain \(T_{k+1,m}=bT_{k,m}=b(ab)^{m+k}c(ab)^{2m}\$\). It is clear that \(bT_{k,m}=T_{m,m}[2(m-k)..n]\). We have that \[\mathsf{M}(bT_{k,m}) = \{\varepsilon,ab,(ab)^{2},...,(ab)^{2m},b,bab,b(ab)^{2},...,b(ab) ^{m+k},bT_{k,m}\}\] \[= (\mathsf{M}(T_{k,m})\setminus\{T_{k,m}\})\cup\{b,bab,b(ab)^{2},...,b(ab)^{m+k}\}\cup\{bT_{k,m}\},\] and that \(\mathsf{d}_{bT_{k,m}}(\varepsilon)=4\), \(\mathsf{d}_{bT_{k,m}}(b)=3\), \(\mathsf{d}_{bT_{k,m}}((ab)^{i})=\mathsf{d}_{bT_{k,m}}(b(ab)^{i})=3\) for every \(1\leq i\leq m+k\), \(\mathsf{d}_{bT_{k,m}}(b(ab)^{j})=2\) for every \(m+k+1\leq j\leq 2m\), and \(\mathsf{d}_{bT_{k,m}}(bT_{k,m})=0\). Thus \(\mathsf{e}(bT_{k,m})=8m+4k+7\). Therefore, building \(\mathsf{CDAWG}(T_{k+1,m})\) from \(\mathsf{CDAWG}(T_{k,m})\) requires to _add_\(|\mathsf{e}(T_{k+1,m})-\mathsf{e}(T_{k,m})|=3m+3k+3=\Omega(m)\) new edges (see the first step of Fig. 2 for illustration). Let us move on to the next step, where we prepend character \(a\) to \(bT_{k,m}\) and obtain \(T_{k+1,m}=abT_{k,m}=ab(ab)^{m+k}c(ab)^{2m}\$\). Note that \(abT_{k,m}=T_{k+1,m}=T_{m}[2(m-k)-1..n]\), and \(\mathsf{M}(T_{k+1,m})=\{\varepsilon,ab,(ab)^{2},...,(ab)^{2m},T_{k+1,m}\}\). We also have \(\mathsf{d}_{T_{k+1,m}}(\varepsilon)=4\), \(\mathsf{d}_{T_{k+1,m}}((ab)^{i})=3\) for every \(1\leq i\leq m+k+1\), \(\mathsf{d}_{T_{k+1,m}}((ab)^{j})=2\) for every \(m+k+2\leq j\leq 2m\), and \(\mathsf{d}_{T_{k+1,m}}(T_{k+1,m})=0\). This leads to \(\mathsf{e}(T_{k+1,m})=5m+k+5\). Therefore, building \(\mathsf{CDAWG}(T_{k+1,m})\) from \(\mathsf{CDAWG}(bT_{k,m})\) requires to _remove_\(|\mathsf{e}(T_{k+1,m})-\mathsf{e}(bT_{k,m})|=3m+3k+2=\Omega(m)\) existing edges (see the second step of Fig. 2 for illustration). This process of adding and removing \(\Omega(m)\) edges in every two steps repeats when we update \(\mathsf{CDAWG}(T_{k,m})\) to \(\mathsf{CDAWG}(T_{k+1,m})\) for every increasing \(k=1,\ldots,m-1\). Since \(m=\Theta(n)\), the total work for building \(\mathsf{CDAWG}(T_{m}[i..n])\) for decreasing \(i=n,\ldots,1\) is \(\Omega(m^{2})=\Omega(n^{2})\). 
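The edge counts used in the proof above can be cross-checked numerically. The sketch below recomputes \(\mathsf{e}(T_{k,m})\) and \(\mathsf{e}(bT_{k,m})\) by brute force from the maximality characterizations of Section 2 (so it is feasible only for small \(m\)) and compares them with the closed forms \(5m+k+4\) and \(8m+4k+7\) for \(0\leq k<m\), the range of \(k\) actually used in the update sequence of the proof; it is a sanity-check script, not part of the paper.

```python
def right_ext(T, x):
    """Distinct characters that immediately follow occurrences of x in T."""
    return {T[i + len(x)] for i in range(len(T) - len(x) + 1)
            if T[i:i + len(x)] == x and i + len(x) < len(T)}

def left_ext(T, x):
    return {T[i - 1] for i in range(1, len(T) - len(x) + 1) if T[i:i + len(x)] == x}

def e_cdawg(T):
    """e(T): sum of out-degrees over the maximal substrings of T (brute force)."""
    nodes = {""} if len(set(T)) >= 2 else set()
    for i in range(len(T)):
        for j in range(i + 1, len(T) + 1):
            x = T[i:j]
            if (T.startswith(x) or len(left_ext(T, x)) >= 2) and \
               (T.endswith(x) or len(right_ext(T, x)) >= 2):
                nodes.add(x)
    return sum(len(right_ext(T, x)) for x in nodes)

if __name__ == "__main__":
    for m in range(1, 4):
        for k in range(m):   # the proof prepends b to T_{k,m} only for k < m
            T_km = "ab" * (m + k) + "c" + "ab" + "ab" * (2 * m) + "$"
            assert e_cdawg(T_km) == 5 * m + k + 4
            assert e_cdawg("b" + T_km) == 8 * m + 4 * k + 7
    print("edge counts match the closed forms for m <= 3, k < m")
```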
**Remark 1**.: The linear-time algorithm of [8] for _rightward_ online CDAWG construction maintains a slightly modified version of the CDAWG, which becomes isomorphic to our CDAWG when a terminal symbol \(\$\) is appended to the string. Still, our lower bound instance from Theorem 7 shows that \(\$\) does not help improve the time complexity of _leftward_ online CDAWG construction. ## 7 Conclusions and further work This paper investigated the worst-case additive sensitivity of the size of CDAWGs when a single-character edit operation is performed on the left-end of the input string. We proved that the number of new edges that appear after a left-end edit operation is at most the number of existing edges (upper bound). We also showed that there are almost matching lower bounds for all cases of left-end insertions, deletions, and substitutions. An apparent future work is to close the small gap between our upper and lower bounds, which is at most by an additive factor of 3 (recall Table 1). Another intriguing open question is the sensitivity of CDAWGs when an edit operation can be performed at an arbitrary position in the string. Our left-end sensitivity results should partly contribute to the general case, since maximal repeats that touch the edited position can be analyzed in a similar way. What remains is how to deal with maximal repeats which contain the edited position. ## Acknowledgements The authors thank Mitsuru Funakoshi for discussions. This work was supported by KAKENHI grant numbers 21K17705 and 22H03551. Figure 2: Illustration for the CDAWGs of strings \(T_{k,m}=(ab)^{3}cab(ab)^{4}\$\), \(bT_{k,m}=b(ab)^{3}cab(ab)^{4}\$\), and \(T_{k+1,m}=(ab)^{4}cab(ab)^{4}\$\) with \(k=1,m=2\).
2310.19051
A Survey of Methods for Estimating Hurst Exponent of Time Sequence
The Hurst exponent is a significant indicator for characterizing the self-similarity and long-term memory properties of time sequences. It has wide applications in physics, technologies, engineering, mathematics, statistics, economics, psychology and so on. Currently, available methods for estimating the Hurst exponent of time sequences can be divided into different categories: time-domain methods and spectrum-domain methods based on the representation of time sequence, linear regression methods and Bayesian methods based on parameter estimation methods. Although various methods are discussed in literature, there are still some deficiencies: the descriptions of the estimation algorithms are just mathematics-oriented and the pseudo-codes are missing; the effectiveness and accuracy of the estimation algorithms are not clear; the classification of estimation methods is not considered and there is a lack of guidance for selecting the estimation methods. In this work, the emphasis is put on thirteen dominant methods for estimating the Hurst exponent. For the purpose of decreasing the difficulty of implementing the estimation methods with computer programs, the mathematical principles are discussed briefly and the pseudo-codes of algorithms are presented with necessary details. It is expected that the survey could help the researchers to select, implement and apply the estimation algorithms of interest in practical situations in an easy way.
Hong-Yan Zhang, Zhi-Qiang Feng, Si-Yu Feng, Yu Zhou
2023-10-29T15:56:53Z
http://arxiv.org/abs/2310.19051v1
# A Survey of Methods for Estimating Hurst Exponent of Time Sequence ###### Abstract The Hurst exponent is a significant indicator for characterizing the self-similarity and long-term memory properties of time sequences. It has wide applications in physics, technologies, engineering, mathematics, statistics, economics, psychology and so on. Currently, available methods for estimating the Hurst exponent of time sequences can be divided into different categories: time-domain methods and spectrum-domain methods based on the representation of time sequence, linear regression methods and Bayesian methods based on parameter estimation methods. Although various methods are discussed in literature, there are still some deficiencies: the descriptions of the estimation algorithms are just mathematics-oriented and the pseudo-codes are missing; the effectiveness and accuracy of the estimation algorithms are not clear; the classification of estimation methods is not considered and there is a lack of guidance for selecting the estimation methods. In this work, the emphasis is put on thirteen dominant methods for estimating the Hurst exponent. For the purpose of decreasing the difficulty of implementing the estimation methods with computer programs, the mathematical principles are discussed briefly and the pseudo-codes of algorithms are presented with necessary details. Furthermore, the performances of the algorithms discussed are verified and validated by simulation via ideal time sequences with known Hurst exponent generated by fractional Gauss noises as well as reaction time data on human behavior published online. Simulation results show that the accuracy of spectrum-domain methods is superior to that of time-domain methods although the discrete Fourier transform and discrete wavelet transform are necessary for understanding them. It is expected that the survey could help the researchers to select, implement and apply the estimation algorithms of interest in practical situations in an easy way. **Keywords**: Time sequence; Hurst exponent; Fractal Gaussian Noise (FGN); Parameter estimation; Algorithm design ## 1 Introduction The _long-term memory of time sequence_ (LTMTS), also named with the _long range dependence_ (LRD) sequence, was discovered in 1951 by Harold E. Hurst [1, 2]. The LTMTS exists in a wide range of natural phenomena, such as rainfall, tree rings, solar flares and so on [3]. In order to qualitatively explore the the changes of river water level, Hurst proposed an exponent, denoted by \(H\) which is named _Hurst exponent_ or _Hurst index_ for his contribution, to characterize the LTMTS. Mathematically, a time sequence is just a stochastic process with a discrete parameter "time", which can be denoted by \[X:\Omega\times\mathbb{Z} \rightarrow\mathbb{R} \tag{1}\] \[(\omega,t) \mapsto x\] where \(\Omega\) is the sample space of a probability space \((\Omega,\mathscr{F},\Pr)\). The mapping \(X(\omega,t)\) could be denoted by \(X_{t}\) by omitting the variable \(\omega\in\Omega\) according to the custom of data analysis and signals processing. The Hurst exponent is closely related to the stationary time sequence [4]: **Definition 1**.: _Let \(X_{t}\) be a stationary stochastic process and \(k\in\mathbb{N}\) be the time lagging parameter, \(\rho(k)=\mathrm{E}\left\{X_{t}X_{t+k}\right\}\) is the autocorrelation function of \(X_{t}\). 
If there exists a real number \(\alpha\in(0,1)\) and a constant \(c_{\rho}>0\) such that asymptotically \(\rho(k)\sim c_{\rho}k^{-\alpha}\) for \(k\rightarrow\infty\), viz._ \[\lim_{k\rightarrow\infty}\frac{\rho(k)}{c_{\rho}k^{-\alpha}}=1, \tag{2}\] _then \(X_{t}\) is called a stationary process with long memory._ There are various alternative terms for the long-term memory of a stationary process, such as _long-range dependence_, _strong dependence_, _slowly decaying correlations_ and _long-range correlations_. Long-term memory is the counterpart of short-range dependency. The autocorrelation function of a time sequence with short-range dependency decays exponentially to zero as the time lag increases. In contrast, the autocorrelation function of a time sequence with long-term memory decays slowly, following a power-law pattern. The Hurst exponent \(H\) is a measure of the long-term memory capability of a time sequence. Usually, the range of \(H\) is the open interval \((0,1)\) [4]. A value of \(H\in(0.5,1)\) indicates that the sequence has long-term memory, \(H\in(0,0.5)\) indicates short-range dependency, and \(H=0.5\) represents a random sequence in which observations are completely uncorrelated. The essence of the Hurst exponent is to reflect how the range of fluctuation in a time sequence changes with the time span. Let \(\mathrm{E}\left\{\cdot\right\}\) be the expectation operator and \(\mathcal{A}_{t}^{k:m}\left\{\cdot\right\}\) be the arithmetic average operator defined by \[\mathcal{A}_{t}^{k:m}\left\{X_{\alpha t+\beta s}\right\}=\frac{1}{m-k+1}\sum_{t=k}^{m}X_{\alpha t+\beta s} \tag{3}\] where \(t\) denotes the variable for summation, and \(k\) and \(m\) denote the range of \(t\). Particularly, for \((\alpha,\beta,k,m)=(1,0,1,n)\), the notation for the arithmetic average can be simplified by \[\overline{X}=\mathcal{A}(X_{t})=\mathcal{A}_{t}^{1:n}\left\{X_{t}\right\}=\frac{1}{n}\sum_{t=1}^{n}X_{t}. \tag{4}\] Let \(\mathcal{S}_{t}^{1:n}\left\{\cdot\right\}\) be the standard deviation operator, then \[\mathcal{S}_{t}^{1:n}\left\{X_{t}\right\}=\sqrt{\frac{1}{n-1}\sum_{t=1}^{n}(X_{t}-\mathcal{A}(X_{t}))^{2}}. \tag{5}\] For the \(n\) consecutive samples \(\left\{X_{t}\right\}_{t=1}^{n}\), we define the range and standard deviation by \[R(n)=\max_{1\leq t\leq n}X_{t}-\min_{1\leq t\leq n}X_{t} \tag{6}\] and \[S(n)=\mathcal{S}_{t}^{1:n}\left\{X_{t}\right\} \tag{7}\] respectively. The R/S statistic is defined by the ratio of \(R(n)\) to \(S(n)\), viz. \[\mathscr{R}_{X}(n)=\frac{R(n)}{S(n)}. \tag{8}\] It was Hurst who discovered that there exists a constant \(H\) such that \(\mathscr{R}_{X}(n)\) is dominated by the power \(n^{H}\) asymptotically. How to estimate the Hurst exponent of time sequences robustly, precisely and efficiently is a fundamental problem. At present, there are various methods for estimating the Hurst exponent. According to the representation domain used for the time sequence, there are two fundamental categories of estimation methods: * time-domain method, in which the time sequence \(\left\{X_{t}:1\leq t\leq N\right\}\) is used directly and the Hurst exponent is estimated by revealing how some statistical properties change with the length of the observation period. * spectrum-domain method, in which the _discrete Fourier transform_ (DFT) or _discrete wavelet transform_ (DWT) is applied to the time sequence \(\{X_{t}:1\leq t\leq N\}\). 
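As a concrete illustration, Eqs. (5)-(8) translate directly into code. The NumPy sketch below is ours (the function name is illustrative, not from this paper); it evaluates the R/S statistic for a single window of samples, whereas the estimators surveyed later apply such statistics over many window sizes and fit the resulting power law.

```python
import numpy as np

def rs_statistic(x):
    """R/S statistic of one sample window, following Eqs. (6)-(8)."""
    x = np.asarray(x, dtype=float)
    r = x.max() - x.min()     # R(n): range of the samples, Eq. (6)
    s = x.std(ddof=1)         # S(n): unbiased standard deviation, Eqs. (5) and (7)
    return r / s              # Eq. (8)

rng = np.random.default_rng(0)
print(rs_statistic(rng.standard_normal(1000)))
```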
Mathematically, Hurst's observation can be expressed by \[\mathrm{E}\left\{\mathscr{R}_{X}(n)\right\}\propto n^{H},\quad n\rightarrow\infty. \tag{9}\] The method of estimating the Hurst exponent \(H\) according to (9) is called the _R/S analysis_ or _rescaled range analysis_ since \(R(n)/S(n)\) means rescaling the range \(R(n)\) with the standard deviation \(S(n)\). The R/S analysis was popularized by Mandelbrot with his great work on the theory of fractals [5, 6, 7], and it was the first time-domain method. An important result obtained by Mandelbrot is the relation between the fractal dimension \(D\) and the Hurst exponent \(H\), which can be expressed by \[D+H=2. \tag{10}\] In the following years, many more time-domain methods were proposed. For instance, Beran constructed the _aggregate variance_ (AV) method for estimating the Hurst exponent using the sample variance [4]. After that, Taqqu proposed the _absolute moments_ (AM) method in 1995 [8]. The AV method and AM method are unified as the _central estimation_. In 1991, Barabási proposed the _generalized Hurst exponent_ (GHE) method via the \(q\)-th order moments for estimating the Hurst exponent [9], which is similar to the central estimation. In 1988, the _Higuchi method_ was constructed to calculate the fractal dimension of a time sequence [10]; owing to the relation (10) between the fractal dimension and the Hurst exponent, this method can also be applied to estimate the Hurst exponent of a time sequence. In 1994, Peng et al. proposed the _detrended fluctuation analysis_ (DFA) method [11], which is also called the _residuals of regression_ method [8]. The DFA method was initially utilized to analyze whether non-coding regions in DNA sequences have long-range correlations, and the resulting value is the Hurst exponent. In 2020, Lotfalinezhad et al., inspired by the DFA method, proposed the _triangles total areas_ (TTA) method for estimating the Hurst exponent [12]. Subsequently, Gomez et al. provided an effective theoretical framework for the TTA method in 2021, and proposed a slightly different method called the _triangle area_ (TA) method [13]. In addition to these conventional time-domain methods, Tyralis and Koutsoyiannis proposed two Bayesian estimation methods in 2003 -- _least squares via standard deviation_ (LSSD) and _least squares via variance_ (LSV) -- based on the serial standard deviation and variance [14, 15]. In 2023, Likens compared the performances of these two Bayesian methods and the DFA method, which showed that the Bayesian methods are better than the DFA method, especially when the time sequences are short [16]. The spectrum-domain estimators have been studied since the early 1980s. For example, in 1981 Hosking et al. discovered that certain time sequences have similar spectral density features in the low-frequency range [17]. Subsequently, this property was used by Geweke and Porter-Hudak to propose the _periodogram method_ (PM) for estimating the Hurst exponent [18]. Moreover, this property was also used by Kunsch [19] and Robinson [20] to develop the _local Whittle_ (LW) method, which is also named _Gaussian semi-parametric estimation_. Phillip systematically analyzed and compared these two spectrum-domain methods in 2004 [21]. In 1989, Flandrin discovered the relation of the wavelet transform to fractional Brownian motion [22]. 
Three years later, he gave the relationship between the variance of wavelet coefficients and the Hurst exponent based on the discrete wavelet transform [23]. In 1998, Ingve Simonsen et al. proposed the _average wavelet coefficient method_ (AWC) for estimating the Hurst exponent using the DWT [24], and then applied this method to measure the anti-correlation in the Nordic electricity spot market in 2003 [25]. In recent years, researchers have focused on the applications of the Hurst exponent and the performance evaluation of the estimation methods. For example, Lahmiri used GHE to estimate the Hurst exponent of electrocardiogram data, providing auxiliary features for the classification of heart disease data [26]. Moumdjian et al. applied the DFA method to calculate the fractal statistical properties of the gait time-series to quantify gait dynamics by the outcome measure alpha [27]. Zhang et al. analyzed the reaction time data in 2017 [28] and studied the features clustering problem in 2018 [29] with the R/S analysis. In 2020, Han et al. compared the limited sample variance, variance and mean square deviation of some Hurst exponent estimators in the time and spectrum domains with the ARFIMA model [30]. In 2021, Hamza et al. gave the _mean absolute scaled error_ (MASE) index of different Hurst exponent estimators based on the _fractional Brownian motion_ (FBM) [31]. Although various methods are discussed in the literature, there are still some deficiencies: the descriptions of the estimation algorithms are just mathematics-oriented and the pseudo-codes are missing; the effectiveness and accuracy of the estimation algorithms are not clear; the classification of estimation methods is not considered; there is a lack of guidance for selecting the estimation methods. In this paper, our emphasis is put on three perspectives: 1. the general description of the important algorithms for estimating the Hurst exponent with pseudo-codes, which can be implemented with high-level computer programming languages such as C/C++, Octave/MATLAB, SciLab, Python, Julia, R, Java, Rust and so on; 2. verification and validation of the algorithms by ideal time sequences with given Hurst exponent and practical time sequences with the long-range property; 3. guidance for selecting the algorithms for estimating the Hurst exponent based on the performance evaluation. For the convenience of reading, the abbreviations used in this paper have been summarized in **Table** 1. ## 2 Preliminaries ### Integers and Division For positive integers \(a,b\in\mathbb{N}\), there exist unique integers \(q\geq 0\) and \(r\) such that \[a=bq+r,\quad 0\leq r\leq b-1. \tag{11}\] If \(r=0\), then \(b\) divides \(a\) and \(b\) is a factor of \(a\), which can be denoted by \(b\mid a\). If \(r\neq 0\), then we denote it as \(b\nmid a\). For \(a\geq 2\), if \(b\mid a\) implies that \(b\) must be \(1\) or \(a\), then \(a\) is called a _prime number_, otherwise it is called a _composite number_. A factor \(b\) of the composite number \(a\) such that \(b\not\in\{1,a\}\) is called a _proper factor_. The integer \(q\) in (11) can be calculated by \[q=\left\lfloor\frac{a}{b}\right\rfloor \tag{12}\] where \(\left\lfloor x\right\rfloor\) denotes the floor (lower integer) of \(x\) such that \[\left\lfloor x\right\rfloor\leq x<\left\lfloor x\right\rfloor+1. \tag{13}\] Similarly, \(\left\lceil x\right\rceil\) denotes the ceiling (upper integer) of \(x\) such that \[x\leq\left\lceil x\right\rceil<x+1. 
\tag{14}\] For illustration, we have \(\left\lfloor\pi\right\rfloor=3\), \(\left\lfloor-\pi\right\rfloor=-4\), \(\left\lceil\pi\right\rceil=4\) and \(\left\lceil-\pi\right\rceil=-3\). The set of proper factors of the composite number \(a\in\mathbb{N}\) can be denoted by \[S_{\mathrm{pf}}(a)=\left\{d\in\mathbb{N}:d\mid a,d\geq 2,d\neq a\right\}. \tag{15}\] Let \(a\) be a composite number and \(w\in\{2,\cdots,\lfloor\sqrt{a}\rfloor\}\) be an integer serving as a lower bound; then \[S_{\mathrm{bpf}}(a,w)=\left\{d\in S_{\mathrm{pf}}(a):w\leq d\leq\left\lfloor \frac{a}{w}\right\rfloor\right\} \tag{16}\] is called the set of _bounded proper factors_ of \(a\). The cardinality of the set \(S_{\rm bpf}(a,w)\) is denoted by \(|S_{\rm bpf}(a,w)|\), which means the number of the elements in the set. For example, we have \(\left\lfloor\sqrt{48}\right\rfloor=6\) and \(S_{\rm pf}(48)=\{2,3,4,6,8,12,16,24\}\). Thus for \(w\in\{2,3,4,5,6\}\) we have \[\left\{\begin{array}{ll}S_{\rm bpf}(48,2)=\{2,3,4,6,8,12,16,24\}\,,&|S_{\rm bpf}(48,2)|=8;\\ S_{\rm bpf}(48,3)=\{3,4,6,8,12,16\}\,,&|S_{\rm bpf}(48,3)|=6;\\ S_{\rm bpf}(48,4)=\{4,6,8,12\}\,,&|S_{\rm bpf}(48,4)|=4;\\ S_{\rm bpf}(48,5)=\{6,8\}\,,&|S_{\rm bpf}(48,5)|=2;\\ S_{\rm bpf}(48,6)=\{6,8\}\,,&|S_{\rm bpf}(48,6)|=2.\end{array}\right.\] \begin{table} \begin{tabular}{c l l} \hline \hline **No.** & **Abbreviation** & **Interpretation** \\ \hline \hline 1 & LTMTS & Long-term memory of time sequence, also named the long range dependence (LRD) sequence \\ 2 & FGN & Fractal Gaussian Noise, which is used as the sample sequence for verification and validation \\ 3 & FFT & Fast Fourier Transform \\ 4 & DWT & Discrete Wavelet Transform \\ \hline 5 & AM & Absolute Moments, where the first order central moment is used in the Central estimation \\ 6 & AV & Aggregate Variances, where the second order central moment is used in the Central estimation \\ 7 & GHE & Generalized Hurst Exponent \(H(q)\), where \(q\) is the order \\ 8 & HM & Higuchi method, which was constructed by Higuchi in 1988 \\ 9 & DFA & Detrended Fluctuation Analysis, which uses the residuals of regression \\ 10 & R/S & Rescaled Range Analysis, the ratio of \(R\) to \(S\), viz. \(\mathscr{R}_{X}=R/S\), where \(R=\max\limits_{t}X_{t}-\min\limits_{t}X_{t}\) denotes the range and \(S=\text{Std}(X_{t})\) denotes the standard deviation of the time sequence \(\{X_{t}:t=0,1,2,\cdots\}\) \\ 11 & TTA & Triangles Total Areas, where each triangle is constructed from different initial times and interval times \\ 12 & PM & Periodogram method \\ 13 & AWC & Average Wavelet Coefficient \\ 14 & VVL & Variance Versus Level \\ 15 & LW & Local Whittle, also named Gaussian semi-parametric estimation \\ 16 & LSSD & Least Squares via Standard Deviation, where the Bayesian statistic is constructed using the standard deviation \\ 17 & LSV & Least Squares via Variance \\ \hline \hline \end{tabular} \end{table} Table 1: Nomenclatures ### Discrete Fourier Transform For a discrete time sequence \(\{X_{1},\cdots,X_{N}\}\) with length \(N\), its _discrete Fourier transform_ (DFT) is defined by [32] \[\hat{X}_{k}=\sum_{t=1}^{N}X_{t}{\rm e}^{-\frac{2\pi{\rm i}}{N}(k-1)(t-1)}, \quad 1\leq k\leq N \tag{17}\] where \({\rm i}=\sqrt{-1}\). The sequence \(\left\{\hat{X}_{k}:1\leq k\leq N\right\}\) is called the frequency spectrum of \(\{X_{t}:1\leq t\leq N\}\). 
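In practice, the spectrum in (17) is exactly what standard FFT routines compute; the only subtlety is that the paper's indices \((k-1)\) and \((t-1)\) correspond to the 0-based indexing of most libraries. The NumPy sketch below is ours and simply checks this correspondence against the definition.

```python
import numpy as np

def dft_spectrum(x):
    """Frequency spectrum of Eq. (17); NumPy's 0-based FFT matches the
    (k-1), (t-1) index convention of the paper."""
    return np.fft.fft(np.asarray(x, dtype=complex))

# direct evaluation of Eq. (17) for a short sequence
x = np.array([1.0, 2.0, 0.5, -1.0])
N = len(x)
k = np.arange(N).reshape(-1, 1)   # plays the role of (k - 1)
t = np.arange(N).reshape(1, -1)   # plays the role of (t - 1)
naive = (x * np.exp(-2j * np.pi * k * t / N)).sum(axis=1)
assert np.allclose(naive, dft_spectrum(x))
print(dft_spectrum(x))
```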
The inverse discrete Fourier transform (IDFT) is defined by \[X_{t}=\frac{1}{N}\sum_{k=1}^{N}\hat{X}_{k}{\rm e}^{+\frac{2\pi{\rm i}}{N}(k-1)(t-1)},\quad 1\leq t\leq N. \tag{18}\] Usually the DFT is implemented with the _fast Fourier transform_ (FFT) proposed by Cooley and Tukey in 1965 [33] in order to reduce the computational complexity from \(\mathcal{O}(N^{2})\) to \(\mathcal{O}(N\log_{2}N)\). ### Discrete Wavelet Transform A _discrete wavelet transform_ (DWT) is a wavelet transform that decomposes the host signal into a discrete set of wavelets. Its temporal resolution makes it more attractive than the DFT since it carries information in both time and frequency [34]. For the length \(N\) of the time sequence, we can set \[J=\left\lceil\log_{2}N\right\rceil \tag{19}\] as the number of transform levels. We now introduce the scaling parameter \[a\in\left\{1,2^{1},2^{2},\cdots,2^{J-1}\right\}, \tag{20}\] and position parameter \[b\in I_{a}=\left\{0,1,2,\cdots,a-1\right\}. \tag{21}\] Consider the scaling functions \(\varphi_{a,b}(t)\) and wavelet functions \(\psi_{a,b}(t)\) for \(t\in\{1,2,\cdots,N\}\): \[\begin{cases}\varphi_{a,b}(t)=\sqrt{a}\cdot\varphi(a(t-1)-b)\\ \psi_{a,b}(t)=\sqrt{a}\cdot\psi(a(t-1)-b)\end{cases} \tag{22}\] Then the DWT of the discrete time signal \(\left\{X_{t}:1\leq t\leq N\right\}\) is defined with the scale coefficients \(W_{\mathbf{X}}^{\varphi}(a,b)\) and detail coefficients \(W_{\mathbf{X}}^{\psi}(a,b)\), viz. [35]: \[\left\{\begin{array}{ll}W_{\mathbf{X}}^{\varphi}(a,b)=\frac{1}{\sqrt{N}}\sum_{t=1}^{N}\varphi_{a,b}(t)X_{t}\\ W_{\mathbf{X}}^{\psi}(a,b)=\frac{1}{\sqrt{N}}\sum_{t=1}^{N}\psi_{a,b}(t)X_{t}\end{array}\right. \tag{23}\] In this paper, only the detail coefficients of the DWT are concerned and we set \[\text{DWT}^{a}_{b}(\mathbf{X},\psi)=W^{\psi}_{\mathbf{X}}(a,b). \tag{24}\] Note that the choice of the wavelet function \(\psi(t)\) is not unique for the continuous and discrete wavelet transforms. Both the Haar wavelet [36] and the Daubechies wavelet [37] are satisfactory for estimating the Hurst exponent of time sequences. ### Fractal Gaussian Noise The _fractal Gaussian noise_ (FGN), which is closely associated with the FBM [38], is a special type of time sequence whose self-similarity is characterized by its autocorrelation. The autocorrelation function of the FGN sequence \(U=\{u_{n}:n=0,1,2,\cdots\}\) is [39] \[\begin{split}\phi_{U}(\tau)&=\text{E}\left\{u_{n}u_{n+\tau}\right\}\\ &=\frac{1}{2}(|\tau+1|^{2H}\!-\!2|\tau|^{2H}\!+\!|\tau-1|^{2H})\\ &=\phi_{U}(-\tau)\end{split} \tag{25}\] where \(\tau\in\mathbb{Z}^{+}\) is the time lag, \(H\) is the Hurst exponent and \(\text{E}\) is the expectation operator in the sense of probability and mathematical statistics. With the help of Newton's binomial theorem \[(1+x)^{\alpha}=\sum_{k=0}^{+\infty}\binom{\alpha}{k}x^{k},\quad|x|<1 \tag{26}\] where \[\binom{\alpha}{k}=\frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!},\quad\forall\alpha\in\mathbb{R} \tag{27}\] is the binomial coefficient, we can deduce that \[\begin{split}&(1\pm\tau^{-1})^{2H}\\ &=1\pm\binom{2H}{1}\cdot\tau^{-1}+\binom{2H}{2}(\pm\tau^{-1})^{2}+O(\tau^{-3})\\ &=1\pm\binom{2H}{1}\cdot\tau^{-1}+H(2H-1)(\pm\tau^{-1})^{2}+O(\tau^{-3})\end{split} \tag{28}\] for sufficiently large \(\tau\). 
Consequently, the autocorrelation function can be expressed by \[\begin{split}\phi_{U}(\tau)&=\frac{\tau^{2H}}{2} \left[(1+\tau^{-1})^{2H}-2+(1-\tau^{-1})^{2H}\right]\\ &=H(2H-1)\tau^{2H-2}+O(\tau^{-3}).\end{split} \tag{29}\] Thus we have the asymptotic property \[\phi_{U}(\tau)\propto H(2H-1)\tau^{2H-2} \tag{30}\] for the sufficiently large time lag \(\tau\). Theoretically, the FGN sequence is strictly self-similarity. This property can be utilized to generate FGN sequences with given Hurst exponent [40]. The procedure GenTimeSeqFGN described in **Algorithm**1 is of great significance for verifying and validating the methods and algorithms for estimating the Hurst exponent. **Input**: Sequence length \(\ell\), hurst exponent \(H\). **Output**: The FGN sequence \(U=\{u_{t}:0\leq t\leq\ell\}\). ``` 1:functionGenTimeSeqFGN(\(\ell,H\)) ``` **Algorithm 1** Generating the FGN sequence as the ideal time sequence with specified \(H\). \(\mathbf{\rho}\leftarrow\mathbf{0}\in\mathbb{R}^{1\times\ell}\); // For the autocorrelation function 3. **for**\(k\in\langle 0,1,\cdots,\ell-1\rangle\)**do** 4. \(\rho_{k+1}\gets 0.5\cdot(|k-1|^{2H}-2|k|^{2H}+|k+1|^{2H})\); 5. **endfor** 6. \(\mathbf{g}\leftarrow\text{FFT}\left([\mathbf{\rho}(1:\ell),0,\mathbf{\rho}(\ell:2)]\right)\); // FFT 7. \(\mathbf{V}=\sqrt{\mathbf{g}}\); // Eigenvalues of the correlation sequence 8. \(\mathbf{m}\leftarrow\mathbf{0}\in\mathbb{R}^{1\times\ell},\mathbf{n}\leftarrow\mathbf{0}\in \mathbb{R}^{1\times\ell}\); 9. \(\mu\gets 0,\sigma\gets 1\); // for \(\mathcal{N}(0,1)\) 10. \(\mathbf{m}\leftarrow\text{GenTimeSeqGauss}(\mu,\sigma,\ell)\); 11. \(\mathbf{n}\leftarrow\text{GenTimeSeqGauss}(\mu,\sigma,\ell)\); 12. \(\mathbf{w}\leftarrow\mathbf{0}\in\mathbb{R}^{1\times 2\ell}\); 13. \(w_{1}\leftarrow\frac{V_{1}}{\sqrt{2\ell}}\cdot m_{1}\); 14. **for**\(j\in(2,3,\cdots,\ell)\)**do** 15. \(w_{j}\leftarrow\frac{V_{j}}{\sqrt{4\ell}}\cdot(m_{j}+\mathrm{i}\cdot n_{j})\); // i = \(\sqrt{-1}\) 16. \(w_{\ell+j}\leftarrow\frac{V_{\ell+j}}{\sqrt{4\ell}}\cdot(m_{\ell-j+2}- \mathrm{i}\cdot n_{\ell-j+2})\); 17. **endfor** 18. \(w_{\ell+1}\leftarrow\frac{V_{\ell+1}}{\sqrt{2\ell}}\cdot n_{1}\); 19. \(\mathbf{f}\leftarrow\Re(\text{FFT}(\mathbf{w}))\); // taking the real part 20. \(U\leftarrow\ell^{-H}\mathbf{f}(1:j)\); 21. **return**\(U\); 22. **endfunction** Note that for the \(d\)-dimensional vector \(\mathbf{v}\in\mathbb{R}^{d\times 1}\) or \(\mathbf{v}\in\mathbb{R}^{1\times d}\), the notation \(\mathbf{v}(i:r)\) means taking the sub-vector of \(\mathbf{v}\): \[\mathbf{v}(i:r)=\begin{cases}(v_{i},v_{i+1},\cdots,v_{r-1},v_{r}),\quad i<r;\\ (v_{i},v_{i-1},\cdots,v_{r+1},v_{r}),\quad i>r.\end{cases} \tag{31}\] Furthermore, the procedure \(\text{GenTimeSeqGauss}(\mu,\sigma,\ell)\) is used to generate the time sequence with normal distribution in which \(\mu\) is the expectation, \(\sigma\) is the standard deviation and \(\ell\) is the length of the sequence. The sequences \(\mathbf{m}\) and \(\mathbf{n}\) should be generated independently. ### Linear Regression and Parameters Estimation The estimation of Hurst exponent is built on the method of linear regression for parameter estimation. Suppose the asymptotic behavior of data set \(\left\{(x_{i},y_{i}):1\leq i\leq n\right\}\) can be expressed by the power law \[y_{i}\propto x_{i}^{\beta}, \tag{32}\] then we have \[\ln y_{i}\sim\alpha+\beta\cdot\ln x_{i} \tag{33}\] by taking the logarithms of the two sides of (32). 
Let \[\mathbf{A}=\begin{bmatrix}1&\ln(x_{1})\\ 1&\ln(x_{2})\\ \vdots&\vdots\\ 1&\ln(x_{n})\end{bmatrix},\quad\mathbf{p}=\begin{bmatrix}\alpha\\ \beta\end{bmatrix},\quad\mathbf{b}=\begin{bmatrix}\ln(y_{1})\\ \ln(y_{2})\\ \vdots\\ \ln(y_{n})\end{bmatrix}, \tag{34}\] then we can obtain the over-determined linear system \[\mathbf{A}\mathbf{p}=\mathbf{b}. \tag{35}\] Thus, the parameter vector \(\mathbf{p}=[\alpha,\beta]^{\top}\) can be estimated by solving the following convex optimization problem \[\mathbf{p}_{\mathrm{opt}}=\arg\min_{\mathbf{p}\in\mathbb{R}^{2\times 1}}\left\|\mathbf{A}\mathbf{p }-\mathbf{b}\right\|_{r},\quad r\in\mathbb{N} \tag{36}\] where \[\left\|\mathbf{x}\right\|_{r}=\sqrt[r]{\left|x_{1}\right|^{r}+\left|x_{2}\right|^{ r}+\cdots+\left|x_{m}\right|^{r}} \tag{37}\] denotes the Euclidean norm of the \(m\)-dim vector \(\mathbf{x}\in\mathbb{R}^{m\times 1}\). For \(r=2\), we can take the _least squares_ (LS) approach to solve the \(\mathbf{p}_{\mathrm{opt}}\) by \[\mathbf{p}_{\mathrm{LS}}=\mathbf{A}^{\dagger}\mathbf{b} \tag{38}\] where \((\cdot)^{\dagger}\) denotes the Moore-Penrose inverse of a matrix [41]. Various least squares methods can be attempted to obtain the minimum \(\ell_{2}\)-norm solution, such as _data least squares_ (DLS), _total least squares_ (TLS), and _scaled total least squares_ (STLS) [42, 43, 44]. For better estimation property, the \(\ell_{1}\)-norm for the cost could be used in optimization with the help of residual vector [45]. **Input**: Vectors \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n\times 1}\) such that \(y_{i}\propto x_{i}^{\beta}\), sample capacity \(n\in\mathbb{N}\) for the data set. **Output**: Pair \(\langle\mathbf{A},\mathbf{b}\rangle\) such that \(\mathbf{A}=(a_{ij})_{2\times n}\in\mathbb{R}^{2\times n},\mathbf{b}=(b_{i})_{n\times 1 }\in\mathbb{R}^{n\times 1}\); ``` 1:functionFormatPowLawData(\(\mathbf{x},\mathbf{y},n\)) 2:for\(i\in\langle 1,2,\cdots,n\rangle\)do 3:\(a_{i1}\gets 1\); 4:\(a_{i2}\leftarrow\ln(x_{i})\); 5:\(b_{i}\leftarrow\ln(y_{i})\); 6:endfor 7:return\(\langle\mathbf{A},\mathbf{b}\rangle\); 8:endfunction ``` **Algorithm 2** Converting primitive data set with power law to matrix-vector pair In the sense of computer programming, the linear regression method of solving the parameter vector \(\mathbf{p}\) in (35) can be encapsulated into a procedure named by LinearRegSolver in order to be reused for different applications. The interface can be expressed by \[\mathbf{p}\leftarrow\textsc{LinearRegSolver}(\mathbf{A},\mathbf{b},n,\texttt{flag}) \tag{39}\] where \(n\) is number of data and flag is used for selecting the method for solving linear regression problem. For example, \(\texttt{flag}=2\) implies the \(\ell_{2}\)-norm optimization and \(\texttt{flag}=1\) implies the \(\ell_{1}\)-norm optimization. ### Algorithm for Solving Fixed-Point in Euclidean Space For the fixed-point equation \[\mathbf{x}=T(f,\mathbf{x},\lambda_{1},\cdots,\lambda_{r}),\quad\mathbf{x}\in\mathbb{R}^{m \times 1} \tag{40}\] where \(T\) is a contractive mapping used as the _updating function_ in programming language, \(\lambda_{1},\cdots,\lambda_{r}\) are possible extra parameters. The fixed-point can be solved with an iterative scheme \[\mathbf{x}_{i+1}=T(f,\mathbf{x}_{i},\lambda_{1},\cdots,\lambda_{r}),\quad i=0,1,2,\cdots \tag{41}\] when the initial value \(\mathbf{x}_{0}\) and the distance \(d(\mathbf{x}_{i+1},\mathbf{x}_{i})\), such as the Euclidean norm, are provided properly according to the Cauchy's criteria for convergence. 
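For reference, the regression step in (34)-(38), together with the data formatting of Algorithm 2, amounts to an ordinary least-squares fit in log-log coordinates. The NumPy sketch below is ours; it plays the role of the LinearRegSolver interface in (39) with \(\texttt{flag}=2\) (the \(\ell_{2}\)-norm case), while the \(\ell_{1}\) case would swap in a robust regression routine instead.

```python
import numpy as np

def fit_power_law(x, y):
    """Estimate (alpha, beta) in  ln y ~ alpha + beta * ln x,
    i.e. Eqs. (33)-(38) solved in the least-squares sense (flag = 2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones_like(x), np.log(x)])   # Eq. (34) / Algorithm 2
    b = np.log(y)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)            # Eq. (38)
    return p[0], p[1]

# toy check: data with y proportional to x**0.7 should give beta close to 0.7
rng = np.random.default_rng(1)
x = np.arange(10.0, 200.0, 10.0)
y = 3.0 * x**0.7 * np.exp(0.01 * rng.standard_normal(x.size))
alpha, beta = fit_power_law(x, y)
print(round(beta, 3))
```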
The pseudo-code for the fixed-point algorithm is listed in **Algorithm**3, in which the concepts of high order function and function object are utilized for the abstraction and flexibility. Note that the order of arguments can be configured by programmers. **Input**: Contractive mapping \(T\) as the updator which is a high order function, function object \(f\), function object \(d\) for the distance \(d(\mathbf{x}_{i},\mathbf{x}_{i+1})\), precision \(\epsilon\), initial value \(\mathbf{x}_{\text{guess}}\) and possible extra parameters \(\lambda_{1},\cdots,\lambda_{r}\) with the same or different data types. **Output**: Fixed-point \(\mathbf{x}\) such that \(\mathbf{x}=T(f,\mathbf{x},\lambda_{1},\cdots,\lambda_{r})\) ``` 1:functionFixedPointSolver(\(T,f,d,\epsilon,\mathbf{x}_{\text{guess}},\lambda_{1},\cdots,\lambda_{r}\)) 2:\(\mathbf{x}_{\text{improve}}\gets T(f,\mathbf{x}_{\text{guess}},\lambda_{1},\cdots, \lambda_{r})\); 3:while\(d(\mathbf{x}_{\text{improve}},\mathbf{x}_{\text{guess}})\geq\epsilon\)do 4:\(\mathbf{x}_{\text{guess}}\leftarrow\mathbf{x}_{\text{improve}}\); 5:\(\mathbf{x}_{\text{improve}}\gets T(f,\mathbf{x}_{\text{guess}},\lambda_{1},\cdots, \lambda_{r})\); 6:endwhile 7:return\(\mathbf{x}_{\text{improve}}\); 8:endfunction ``` **Algorithm 4** Calculate the Euclidean distance of \(\mathbf{x}\) and \(\mathbf{y}\) In the sense of programming language and discrete mathematics, \(f\) is an ordinary (first order) function, \(T\) is a second order function and FixedPointSolve is a third order function. The procedure EuclidDist described by **Algorithm**4 is designed for calculating the Euclidean distance. For our problem, we have \(d(x_{i},x_{i+1})=|x_{i}-x_{i+1}|\) since it is a 1-dim distance. **Input**: \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{m\times 1}\) **Output**: The Euclidean distance of \(\mathbf{x}\) and \(\mathbf{y}\), i.e., \(d(\mathbf{x},\mathbf{y})=\left\|\mathbf{x}-\mathbf{y}\right\|_{2}=\sqrt{\sum_{i=1}^{m}|x_{i}-y _{i}|^{2}}\). ``` 1:functionEuclidDist(\(\mathbf{x},\mathbf{y}\)) 2:\(\texttt{sum}\gets 0\); 3:for\(\gets i\in\langle 1,\cdots,m\rangle\)do 4:\(\texttt{sum}\leftarrow\texttt{sum}+|x_{i}-y_{i}|^{2}\); 5:endfor 6:\(\texttt{dist}\leftarrow\sqrt{\texttt{sum}}\); // it will be \(d(x,y)=|x-y|\) if \(m=1\); 7:returndist; 8:endfunction ``` **Algorithm 5** Calculate the Euclidean distance of \(\mathbf{x}\) and \(\mathbf{y}\) ### Algorithm for Searching a Local Minimum of Single-variable Function Brent provides a line-search method which is a combination of golden section search and successive parabolic interpolation [46]. It can be used for finding a local minimum of a single-variable function, see the **Algorithm**24 in Appendix A for details. ## 3 Optimal Sequence Partition ### Fundamental Operations on Time Sequences #### 3.1.1 Cumulative Sum of Sequence For a sequence \(\{x_{i}:1\leq i\leq n\}\), its cumulative sequence \(\{c_{i}:1\leq i\leq n\}\) is defined by the action of cumulative sum operator on the original sequence. Formally, we have \[c_{i}=\mathcal{C}_{j}^{1:i}\left\{x_{j}\right\}=\sum_{j=1}^{i}x_{i},\quad 1 \leq i\leq n. \tag{42}\] In consequence, the relation between the cumulative sum operator and arithmetic average operator is \[\mathcal{A}_{j}^{1:i}\left\{x_{j}\right\}=\frac{1}{i}\cdot\mathcal{C}_{j}^{1:i} \left\{x_{j}\right\},\quad 1\leq i\leq n. \tag{43}\] Logically, the cumulative sum is more fundamental than the arithmetic average. 
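In NumPy both of these operators are one-liners; the short sketch below (ours) evaluates the cumulative sum of Eq. (42), checks the relation (43) with the arithmetic average, and forms the cumulative sum of the mean-removed samples that several estimators below are built on.

```python
import numpy as np

x = np.array([2.0, -1.0, 3.0, 0.5, 1.5])
c = np.cumsum(x)                          # c_i, Eq. (42)
a = c / np.arange(1, x.size + 1)          # A_j^{1:i}{x_j} = c_i / i, Eq. (43)
assert np.isclose(a[-1], x.mean())
y = np.cumsum(x - x.mean())               # cumulative bias sequence
print(c, a, y)
```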
For the time sequence \(\left\{X_{t}:1\leq t\leq N\right\}\), its cumulative sequence is \[\mathcal{C}_{j}^{1:i}\left\{X_{j}\right\}=\sum_{j=1}^{i}X_{j},\quad 1\leq i\leq n \tag{44}\] and its cumulative bias sequence is \[\mathcal{C}_{j}^{1:i}\left\{X_{j}-\overline{X}\right\}=\sum_{j=1}^{i}(X_{j}- \overline{X}),\quad 1\leq i\leq n \tag{45}\] where \(\overline{X}=\mathcal{A}_{t}^{1:n}\left\{X_{t}\right\}\) is the global arithmetic average. #### 3.1.2 Sequence Partition For computing the Hurst exponent, it is valuable to split the original sequence into several subsequences and construct the statistics of interest. A fundamental task is to find a suitable length for the subsequence, which helps in efficiently splitting the sequence as well as preserving the intrinsic characteristics. A time sequence \(\boldsymbol{X}=\left\{X_{t}\right\}_{t=1}^{N}\) with length \(N\) can be partitioned into \(k\) subsequences or segments of equal size \(m\) such that \[N=km+r,\quad k=\left\lfloor N/m\right\rfloor. \tag{46}\] If \(m\not\!\left\lvert N\right\rvert\) or equivalently \(r\neq 0\), we just ignore the following subsequence with length \(r\in\left\{1,2,\cdots,m-1\right\}\), viz. \[\left\{X_{km+1},X_{km+2},\cdots,X_{km+r}\right\},\quad 1\leq r\leq m-1.\] The \(k\) subsequences of size \(m\) obtained can be expressed by \[\bigcup_{\tau=1}^{k}X_{(\tau)}=\left\{X_{(\tau)}:1\leq\tau\leq k\right\} \tag{47}\] where \[X_{(\tau)} =\left\{X_{(\tau-1)m+j}:1\leq j\leq m\right\},\quad 1\leq\tau\leq k \tag{48}\] \[=\left\{X_{(\tau-1)m+1},\cdots,X_{(\tau-1)m+j},\cdots,X_{(\tau-1)m +m}\right\}\] is the \(\tau\)-th subsequence. For simplicity, we can denote the partition operation of the sequence \(\boldsymbol{X}\) as \[\left\{X_{(\tau)}:1\leq\tau\leq k\right\}=\textsc{SeqPartition}(\boldsymbol{X},m,k). \tag{49}\] ### Optimal Sequence Partition According to (47), the partition of \(\left\{X_{t}:1\leq N\right\}\) depends on the size \(m\) for the subsequences. It is a key issue that how to specify the size \(m\). There are three steps for determining the positive integer \(m\): * generating the set of bounded proper factors for the candidate length and calculating the cardinality of the set; * searching an optimal length for replacing the original sequence length; * set the \(m\) in the bounded proper factors of the optimal length. #### 3.2.1 Brute-force Searching Method for Specifying Bounded Proper Factors of Composite Integer The procedure GenSbpf listed in **Algorithm 5** is designed for finding the set of bounded proper factors of a composite \(a\) specified by the integer \(w\in\{2,\cdots,\lfloor\sqrt{a}\rfloor\}\). ``` 0: Composite number \(a\in\mathbb{N}\), lower bound \(w\in\{2,3,\cdots,\lfloor\sqrt{a}\rfloor\}\). 0: The set \(S_{\mathrm{bpf}}(a,w)\). 1:functionGenSbpf\((a,w)\) 2:\(S_{\mathrm{bpf}}\leftarrow\emptyset\); 3:for\(i\in\langle w,w+1,\cdots,\lfloor a/w\rfloor\rangle\)do 4:if\(i\mid a\)then 5:\(S_{\mathrm{bpf}}\gets S_{\mathrm{bpf}}\cup\{i\}\); 6:endif 7:endfor 8:return\(S_{\mathrm{bpf}}\); 9:endfunction ``` **Algorithm 5** Generate the set of bounded proper factors of the composite \(a\in\mathbb{N}\) with the lower bound \(w\) and the upper bound \(a/w\) such that \(w\in\{2,\cdots,\lfloor\sqrt{a}\rfloor\}\) with the brute-force searching method. #### 3.2.2 Searching Optimal Approximate Length of Sequence The procedure SearchOptSeqLen for searching the optimal length for the subsequence is listed in **Algorithm 6**. 
``` 0: Sequence length \(N\), lower bound \(w\in\left\{2,\cdots,\left\lfloor\sqrt{N}\right\rfloor\right\}\), percentage \(\alpha\in[0.95,1]\) with default value \(\alpha=0.99\). 0: Optimal sequence length \(N_{\mathrm{opt}}\) such that \(N_{\mathrm{opt}}\leq N\) and \(S_{\mathrm{pf}}(N_{\mathrm{opt}})\) is not empty. 1:functionSearchOptSeqLen\((N,w,\alpha)\) 2:\(L_{\mathrm{factors}}\leftarrow\emptyset\); // initialize with empty set 3:\(n_{0}\leftarrow\lceil\alpha N\rceil\); 4:for\(i\in\langle n_{0},n_{0}+1,\cdots,N\rangle\)do 5:\(S_{\mathrm{bpf}}\leftarrow\)GenSbpf\((i,w)\); 6:\(L_{\mathrm{factors}}\gets L_{\mathrm{factors}}\cup|S_{\mathrm{bpf}}|\); 7:endfor 8:\(\langle i_{\max},v_{\max}\rangle\leftarrow\textsc{SearchMax}(L_{\mathrm{ factors}})\); 9:\(N_{\mathrm{opt}}\gets n_{0}+i_{\max}-1\); 10:return\(N_{\mathrm{opt}}\); 11:endfunction ``` **Algorithm 6** Searching the optimal length of sequence. #### 3.2.3 Specifying the Parameters of Subsequence The \(N_{\mathrm{opt}}\) obtained from the length \(N\) must be a composite number. The set of bounded proper factors of \(N_{\mathrm{opt}}\) with lower bound \(w\) gives the possible values for the size \(m\) required in sequence partition. In other words, we have \[m\in S_{\mathrm{bpf}}(N_{\mathrm{opt}},w). \tag{50}\] Once the size \(m\) is obtained, the \(k=N_{\mathrm{opt}}/m\) will be determined simultaneously. In consequence, the sequence partition is specified completely. As an illustration, we take \(N=997,w=20,\alpha=0.99\), which implies \(N_{\rm opt}=990\) by **Algorithm**\(6\). Consequently, \[S_{\rm bpf}(N_{\rm opt},w)=S_{\rm bpf}(990,22)=\{22,30,33,45\}\,.\] This implies that there are four candidate values for the pair \(\left\langle m,k\right\rangle\) of interest, i.e. \[\left\langle m,k\right\rangle\in\{\left\langle 22,45\right\rangle,\left\langle 30,33 \right\rangle,\left\langle 33,30\right\rangle,\left\langle 45,22\right\rangle\}\,.\] **Figure**\(1\) demonstrates the principle and implementation of optimal sequence partition intuitively. ## 4 Dominant Methods For Estimating Hurst Exponent There are various algorithms for estimating the Hurst exponent \(H\) of time sequences based on different principles. For the convenience of various applications, it is necessary to describe the algorithms with pseudo-codes and provide the code for the implementation with popular high level programming languages such as C/C++, Python and Octave/MATLAB. In this section, we will cope with the principles briefly and present pseudo-codes for the algorithms for the estimation methods. ### Global View of Methods for Estimating Hurst Exponents **Figure**\(2\) illustrates the dominant \(13\) methods discussed in this paper for estimating the Hurst exponent and their relations. These algorithms can be classified with different categories based on two criteria: * time-domain methods and spectrum-domain methods based on the representation of time sequences; * linear regression methods and Bayesian methods based on the parameter estimation method. In consequence, we have four types of estimation methods, see **Table**\(2\). Figure 1: Principle and implementation of optimal sequence partition ### Central Estimation #### 4.2.1 Principle of Central Estimator For the time sequence \(\boldsymbol{X}=\left\{X_{t}:1\leq t\leq N\right\}\), we can decompose it into \(k\) subsequences with segment size \(m\) by (47). Now we compute the moment for each subsequence. 
Let \[C_{\tau}=\mathcal{A}_{j}^{1:m}\left\{X_{(\tau-1)m+j}\right\},\quad 1\leq\tau\leq k, \tag{51}\] then \(C_{\tau}\) is the arithmetic average of the subsequence \(X_{(\tau)}\). Thus we get a new sequence \(\left\{C_{\tau}:1\leq\tau\leq k\right\}\). The \(r\)-th central moment of this sequence can be written by \[\nu(r,m)=\mathcal{A}_{\tau}^{1:k}\left\{\left|C_{\tau}-\overline{X}\right|^{r}\right\} \tag{52}\] where \(\overline{X}\) is specified by (4). If the sequence \(\left\{X_{t}\right\}_{t=1}^{N}\) is a Gaussian sequence or its variance is finite, then for large \(k\) and \(m\) the asymptotic property \[\nu(r,m)\propto m^{r(H-1)} \tag{53}\] holds [47]. Obviously, it is a typical example of the power law presented in (32). By taking the logarithms of both sides we immediately have \[\ln\nu(r,m)\sim\alpha_{\text{Central}}(r)+\beta_{\text{Central}}(r)\cdot\ln m \tag{54}\] where \[\beta_{\text{Central}}(r)=r(H-1). \tag{55}\] Thus the estimator for the Hurst exponent \[\hat{H}_{\text{Central}}(r)=1+\frac{1}{r}\hat{\beta}_{\text{Central}}(r) \tag{56}\] can be obtained with the linear regression. Particularly, for \(r=1\) and \(r=2\), we have two subtypes of estimation method: 1. **Absolute Moments** (AM) method [8], in which \(r=1\) and we have \[\hat{H}_{\text{AM}}=\hat{H}_{\text{Central}}(1)=1+\hat{\beta}_{\text{Central}}(1).\] (57) 2. **Aggregate Variance** (AV) method [8], in which \(r=2\) and we have \[\hat{H}_{\text{AV}}=\hat{H}_{\text{Central}}(2)=1+\frac{1}{2}\hat{\beta}_{\text{Central}}(2).\] (58) \begin{table} \begin{tabular}{|l|l l|} \hline **Method \(\backslash\) Para. Est.** & **Linear Regression Method** & **Bayesian Method** \\ \hline **Time-domain** & AM, AV, GHE, HM, R/S, DFA, TTA & LSSD, LSV \\ **Spectrum-domain** & PM, AWC, VVL & LW \\ \hline \end{tabular} \end{table} Table 2: Classification of 13 dominant estimation methods Figure 2: Typical Algorithms for Estimating Hurst Exponent #### 4.2.2 Algorithm for Central Estimator The key step of central estimation is constructing the corresponding central moments \(\nu(r,m)\) according to (52). The procedure EstHurstCentral listed in **Algorithm** 7 is used to estimate the Hurst exponent by (56), which relies on two procedures: * the procedure GenSbpf listed in **Algorithm** 5 for generating the factor set, and * the procedure SearchOptSeqLen listed in **Algorithm** 6 for searching the optimal sequence length. For time-domain methods, the parameter estimation for the linear model is fundamental for the estimation; please see the subsection 2.5 about linear regression. ``` 1:Time sequence \(\boldsymbol{X}\), window size \(w\), order \(r\in\{1,2\}\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\boldsymbol{X}\). 
3:functionEstHurstCentral(\(\boldsymbol{X},w,r,\texttt{flag}\)) 4:\(N\leftarrow\textsc{GetLength}(\boldsymbol{X})\); 5:\(N_{\text{opt}}\leftarrow\textsc{SearchOptSeqLen}(N,w)\); 6:\(\boldsymbol{T}\leftarrow\textsc{GenSbpf}(N_{\text{opt}},w)\); // \(S_{\text{bpf}}(N_{\text{opt}},w)\) 7:\(n\leftarrow\textsc{GetLength}(\boldsymbol{T})\); // \(|S_{\text{bpf}}(N_{\text{opt}},w)|\) 8:\(\boldsymbol{S}\leftarrow\boldsymbol{0}\in\mathbb{R}^{n\times 1}\); // For the statistics 9:\(\overline{X}\leftarrow\mathcal{A}_{t}^{1:N}\left\{X_{t}\right\}\); 10:for\(\texttt{idx}\in\left\langle 1,2,\cdots,n\right\rangle\)do 11:\(m\gets T_{\text{idx}}\); // \(m\) is the interval time 12:\(k\leftarrow\left\lfloor N_{\text{opt}}/m\right\rfloor\); // number of subsequences 13:\(\boldsymbol{Y}\leftarrow\boldsymbol{0}\in\mathbb{R}^{k\times 1}\); // \(\boldsymbol{Y}=[Y_{1},\cdots,Y_{\tau},\cdots,Y_{k}]^{\mathsf{T}}\); 14:\(\left\{X_{\tau}:1\leq\tau\leq k\right\}\leftarrow\textsc{SeqPartition}( \boldsymbol{X},N,m)\); 15:for\(\tau\in\left\{1,2,\cdots,k\right\}\)do 16:\(Y_{\tau}\leftarrow\mathcal{A}_{j}^{1:m}\left\{X_{(\tau-1)m+j}\right\}\); // arithmetic ave 17:endfor 18:if\(r=1\)then 19:\(\nu\leftarrow\left\|\boldsymbol{Y}-\overline{X}\right\|_{1}/k\); // \(\ell_{1}\)-norm here 20:else ``` **Algorithm 7** Central Estimator for Hurst exponent * \(\nu\leftarrow\operatorname{Var}(\boldsymbol{Y})\); // \(r=2\) * **endif** * \(S_{\text{idx}}\leftarrow\nu\); * **endfor** * \(\langle\boldsymbol{A},\boldsymbol{b}\rangle\leftarrow\textsc{FormatPowLawData}( \boldsymbol{T},\boldsymbol{S},n)\); * \(\boldsymbol{p}\leftarrow\textsc{LinearRegSolver}(\boldsymbol{A},\boldsymbol{b},n, \texttt{flag})\); * \(\beta_{\text{Central}}\gets p_{2}\) ; // \(\boldsymbol{p}=[\alpha,\beta]^{\mathsf{T}}\); * \(H\leftarrow\beta_{\text{Central}}/r+1\); * **return**\(H\); * **endfunction** ### Generalized Hurst Exponent Method (GHE) #### 4.3.1 Principle of GHE Estimator For time sequence \(\left\{X_{t}\right\}_{t=1}^{N}\), the \(q\)-th order moment of the distribution of the increments based on the time lag \(\tau\) can be written by [9] \[\mu_{q}(\tau)=\mathcal{A}_{i}^{1:N-\tau}\left\{|X_{i+\tau}-X_{i}|^{q}\right\} \tag{59}\] The _generalized Hurst exponent_ (GHE), denoted by \(H(q)\), can be deduced from the asymptotic scaling behavior of \(\mu_{\tau}(q)\)[48]: \[\mu_{q}(\tau)\propto\tau^{qH(q)}\Longleftrightarrow\ln\mu_{q}(\tau)\sim \alpha_{\text{GHE}}+\beta_{\text{GHE}}\cdot\ln\tau \tag{60}\] Thus the generalized Hurst exponent \(H(q)\) can be obtained by [48] \[\beta_{\text{GHE}}=qH(q), \tag{61}\] which implies that \[\hat{H}_{\text{GHE}}=\frac{\hat{\beta}_{\text{GHE}}}{q}. \tag{62}\] #### 4.3.2 Algorithm for GHE Estimator The procedure EstHurstGHE described in **Algorithm** 8 can be used to compute the Hurst exponent via the GHE method (60). ``` 1:Time series data \(\boldsymbol{X}\), order \(q\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\boldsymbol{X}\). 
3:functionEstHurstGHE(\(\boldsymbol{X},q,\texttt{flag}\)) 4:\(n\gets 10\); 5:\(N\leftarrow\textsc{GetLength}(\boldsymbol{X})\); 6:\(\boldsymbol{T}\leftarrow[1,2,\cdots,n]^{\mathsf{T}}\in\mathbb{R}^{n\times 1}\); 7:\(\boldsymbol{S}\leftarrow\boldsymbol{0}\in\mathbb{R}^{n\times 1}\); // For the statistics 8:\(\overline{X}\leftarrow\mathcal{A}_{i}^{1:N}\left\{X_{i}\right\}\); 9:\(\boldsymbol{Y}\leftarrow\boldsymbol{0}\in\mathbb{R}^{N\times 1}\); 10:for\(i\in(1,2,\cdots,N)\)do 11:\(Y_{i}\leftarrow\mathcal{C}_{j}^{1:i}\left\{X_{j}-\overline{X}\right\}\); 12:endfor 13:for\(\texttt{idx}\in\langle 1,2,\cdots,n\rangle\)do 14:\(\mu_{q}\leftarrow\mathcal{A}_{i}^{1:N-\texttt{idx}}\left\{|Y_{i+\texttt{idx} }-Y_{i}|^{q}\right\}\); 15:\(S_{\texttt{idx}}\leftarrow\mu_{q}\); ``` **Algorithm 8** Generalized Hurst Exponent Method * [14]**endfor** * [15] \(\langle\mathbf{A},\mathbf{b}\rangle\leftarrow\text{FormatPowLawData}(\mathbf{T},\mathbf{S},n)\); * [16]\(\mathbf{p}\leftarrow\text{LinearRegSolver}(\mathbf{A},\mathbf{b},n,\text{flag})\); * [17]\(\beta_{\text{GHE}}\gets p_{2}\ ;\ //\ \mathbf{p}=[\alpha,\beta]^{\mathsf{T}}\); * [18]\(H\leftarrow\beta_{\text{GHE}}/q\); * [19]\(\mathbf{return}\ H\); * [20]\(\mathbf{endfunction}\) ### Higuchi Method (HM) #### 4.4.1 Principle of Higuchi Estimator For the time sequence \(\{X_{t}\}_{t=1}^{N}\), its cumulative bias sequence is defined by [8] \[Y_{i}=\mathcal{C}_{j}^{1:i}\left\{X_{j}-\overline{X}\right\},\quad 1\leq i\leq N \tag{63}\] where \(\overline{X}=\mathcal{A}_{t}^{1:N}\left\{X_{t}\right\}\) and the normalized length of each sample can be calculated according to \[L_{k}(m)=\frac{\gamma}{m}\sum_{i=1}^{\lfloor(N-k)/m\rfloor}\left|Y_{k+im}-Y_{ k+(i-1)m}\right| \tag{64}\] where \(k=1,2,\ldots,m\). The integers \(k\) and \(m\) indicate the initial time and the interval time respectively, and \[\gamma=\frac{N-1}{\lfloor(N-k)/m\rfloor\cdot m} \tag{65}\] represents the normalization factor. Then we have [10] \[L(m)=\mathcal{A}_{k}^{1:m}\left\{L_{k}(m)\right\}\propto m^{-D} \tag{66}\] or equivalently \[\ln L(m)\sim\alpha_{\text{HM}}+\beta_{\text{HM}}\ln m \tag{67}\] where the parameter \(D\) is the fractal dimension of the time sequence. With the help of (10) and \(\beta_{\text{HM}}=-D\), we can obtain the Higuchi estimator for the Hurst exponent \[\hat{H}_{\text{HM}}=2+\hat{\beta}_{\text{HM}}. \tag{68}\] #### 4.4.2 Algorithm for Higuchi Estimator In **Algorithm** 9, the arithmetic average operator defined in (3) is taken for reducing the structure complexity of the algorithm. ``` 1:Time series data \(\mathbf{X}\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\mathbf{X}\). 
3:functionEstHurstHiguchi(\(\mathbf{X},\text{flag}\)) 4:\(n\gets 10\); 5:\(N\leftarrow\text{GetLength}(\mathbf{X})\); 6:\(\mathbf{T}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the interval time 7:\(\mathbf{S}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the statistics 8:\(\overline{X}\leftarrow\mathcal{A}(X_{i})\); 9:\(\mathbf{Y}\leftarrow\mathbf{0}\in\mathbb{R}^{N\times 1}\); 10:for\(i\in\langle 1,2,\cdots,N\rangle\)do 11:\(Y_{i}\leftarrow\mathcal{C}_{j}^{1:i}\left\{X_{j}-\overline{X}\right\}\); 12:endfor ``` **Algorithm 9** Higuchi method for estimating the Hurst exponent 11:for\(\mathtt{idx}\in\left\langle 1,2,\ldots,n\right\rangle\)do 12:if\(\mathtt{idx}>4\)then 13:\(m\leftarrow\left\lfloor 2^{(\mathtt{idx}+5)/4}\right\rfloor\); 14:else 15:\(m\leftarrow\mathtt{idx}\); 16:endif 17:\(T_{\mathtt{idx}}\gets m\); 18:\(k\leftarrow\left\lfloor N/m\right\rfloor\); 19:\(L_{k}\leftarrow\mathcal{A}_{i}^{i:k-1}\left\{\mathcal{A}_{j}^{(i-1)m:im} \left\{\left|Y_{j+m}-Y_{j}\right|\right\}\right\}\); 20:\(S_{\mathtt{idx}}\leftarrow(N-1)\cdot L_{k}/m^{2}\); 21:endfor 22:\(\left\langle\boldsymbol{A},\boldsymbol{b}\right\rangle\leftarrow\textsc{ FormatPowLawData}(\boldsymbol{T},\boldsymbol{S},n)\); 23:\(\boldsymbol{p}\leftarrow\textsc{LinearRegSolver}(\boldsymbol{A}, \boldsymbol{b},n,\mathtt{flag})\); 24:\(\beta_{\mathrm{HM}}\gets p_{2}\) ; // \(\boldsymbol{p}=[\alpha,\beta]^{\mathsf{T}}\); 25:\(H\leftarrow\beta_{\mathrm{HM}}+2\); 26:return\(H\); 27:endfunction ``` ### Detrended Fluctuation Analysis (DFA) #### 4.5.1 Principle of DFA Estimator The DFA method for computing the Hurst exponent is based on the sequence partition. For the time sequence \(\left\{X_{t}\right\}_{t=1}^{N}\), with the configuration of parameter \(\alpha\in[0.95,1]\) and minimal size \(w\) of the subsequence, then optimal sequence length \(N_{\mathrm{opt}}\) can be solved with **Algorithm**\(6\), the size \(m\) can be calculated by (50) and the number of subsequence will be \(k=N_{\mathrm{opt}}/m\). For the purpose of clarity and intuition, the principle DFA method is shown in **Figure**3. We now give some interpretations for the steps for the DFA estimator: 1. Pre-conditioning: Partitioning the sequence \(\left\{X_{t}:1\leq t\leq N\right\}\) into \(k\) subsequences with minimal size \(w\) such that \(N_{\mathrm{opt}}=mk\) and \[m\in S_{\mathrm{bpf}}(N_{\mathrm{opt}},w)=\left\{m_{1},m_{2},\cdots,m_{n}\right\}\] (69) where \[n=\left|S_{\mathrm{bpf}}(N_{\mathrm{opt}},w)\right|.\] (70) 2. Computing the global arithmetic average of the optimal sequence \(\left\{X_{t}:1\leq t\leq N_{\mathrm{opt}}=mk\right\}\) by \[\overline{X_{\mathrm{opt}}}=\mathcal{A}_{t}^{1:N_{\mathrm{opt}}}\left\{X_{t}\right\}\] (71) and construct the cumulative bias sequence \[Z_{i}=\mathcal{C}_{j}^{1:i}\left\{X_{j}-\overline{X_{\mathrm{opt}}}\right\}, \quad 1\leq i\leq N_{\mathrm{opt}}.\] (72) 3. Constructing the \(k\) subsequences \(\left\{Y_{\tau}^{i}:1\leq i\leq m\right\}\) for \(1\leq\tau\leq k\) by \[\boldsymbol{Y}_{\tau}=\left[Y_{\tau}^{1},Y_{\tau}^{2},\cdots,Y_{\tau}^{m} \right]^{\mathsf{T}},\quad 1\leq\tau\leq k\] (73) where \[Y_{\tau}^{i}=Z_{(\tau-1)m+i},\quad 1\leq i\leq m.\] (74) 4. 
Performing the linear regression for each subsequence \(\left\{Y_{\tau}^{i}:1\leq i\leq m\right\}\) for \(1\leq\tau\leq k\)[49, 50] \[Y_{\tau}^{i}\sim\alpha_{\tau}+\beta_{\tau}\cdot i,\quad 1\leq i\leq m\] (75) or equivalently \[\boldsymbol{q}_{\tau}\leftarrow\textsc{LinearRegSolver}(\boldsymbol{M}, \boldsymbol{Y}_{\tau},m,\texttt{flag})\] (76) for \(\boldsymbol{q}_{\tau}=\left[\alpha_{\tau},\beta_{\tau}\right]^{\mathsf{T}}\) such that \[\boldsymbol{M}=\begin{bmatrix}1&1&\cdots&1\\ 1&2&\cdots&m\end{bmatrix}^{\mathsf{T}}\] (77) and \[\begin{cases}\boldsymbol{\varepsilon}_{\tau}=\boldsymbol{Y}_{\tau}- \boldsymbol{M}\boldsymbol{q}_{\tau}=\left[\varepsilon_{\tau}^{1},\cdots, \varepsilon_{\tau}^{m}\right]^{\mathsf{T}}\\ \varepsilon_{\tau}^{i}=Y_{\tau}^{i}-\alpha-\beta\cdot i,\quad 1\leq i\leq m\end{cases}\] (78) 4. Calculating the unbiased standard deviation of the residual sequence \[s_{\tau}(m)=\mathcal{S}_{i}^{1:m}\left\{\varepsilon_{\tau}^{i}\right\}.\] (79) 5. Calculating the arithmetic average of each standard deviation \[S(m)=\mathcal{A}_{\tau}^{1:k}\left\{s_{\tau}(m)\right\}.\] (80) in order to get the asymptotic relation \[S(m)\propto m^{H}\Longleftrightarrow\ln S(m)\sim\gamma+H\cdot\ln m\] (81) Figure 3: Partition of time sequence and structure of cumulative sequence for DFA method * Repeating the steps i) \(\sim\) v) for \(n\) times for different choices of \(m\) and setting \[\begin{cases}\mathbf{T}=[m_{1},\cdots,m_{n}]^{\mathsf{T}}\\ \mathbf{S}=[S(m_{1}),\cdots,S(m_{n})]^{\mathsf{T}}\\ \langle\mathbf{A},\mathbf{b}\rangle\leftarrow\textsc{FormatPowLawData}(\mathbf{T},\mathbf{S},n) \end{cases}\] (82) * Estimating the Hurst exponent with linear regression \[\mathbf{p}_{\textsc{dfa}}\leftarrow\textsc{LinearRegrSolver}(\mathbf{A},\mathbf{b},k, \texttt{flag})\] (83) where \(\mathbf{p}_{\textsc{dfa}}=[\hat{\alpha}_{\textsc{dfa}},\hat{\beta}_{\textsc{dfa}} ]^{\mathsf{T}}\), which implies that \[\hat{H}_{\textsc{dfa}}=\hat{\beta}_{\textsc{dfa}}.\] (84) #### 4.5.2 Algorithm for DFA Estimator The curve fitting method is a crucial step in the DFA-method, we need to frequently solve for the slope and intercept to construct the corresponding residual vectors. ``` 1:Time sequence \(\mathbf{X}\), window size \(w\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\mathbf{X}\). 
3:functionEstHurstDFA(\(\mathbf{X},w,\texttt{flag}\)) 4:\(N\leftarrow\textsc{GetLength}(\mathbf{X})\); 5:\(N_{\mathrm{opt}}\leftarrow\textsc{SearchOptSeqLen}(N,w)\); 6:\(\mathbf{T}\leftarrow\textsc{GenSbpf}(N_{\mathrm{opt}},w)\); // \(S_{\mathrm{bpf}}(N_{\mathrm{opt}},w)\) 7:\(n\leftarrow\textsc{GetLength}(\mathbf{T})\); // \(|S_{\mathrm{bpf}}(N_{\mathrm{opt}},w)|\) 8:\(\mathbf{S}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the statistics 9:\(\mathbf{Z}\leftarrow\mathbf{0}\in\mathbb{R}^{N\times 1}\); // global cumulative sequence 10:\(\overline{X_{\mathrm{opt}}}\leftarrow\mathcal{A}_{i}^{1:N_{\mathrm{opt}}}\left\{X _{i}\right\}\); // global arithmetic average 11:for\(i\in\langle 1,2,\cdots,N\rangle\)do 12:\(Z_{i}\leftarrow\mathcal{C}_{j}^{1:i}\left\{X_{j}-\overline{X_{\mathrm{opt}}} \right\}\); 13:endfor 14:for\(\texttt{idx}\in\langle 1,2,\cdots,n\rangle\)do 15:\(m\gets T_{\texttt{idx}}\); 16:\(k\gets N_{\mathrm{opt}}/m\); 17:\(\mathbf{s}_{\tau}\leftarrow\mathbf{0}\in\mathbb{R}^{k\times 1}\); // vector of standard deviation 18:\(\mathbf{\varepsilon}_{\tau}\leftarrow\mathbf{0}\in\mathbb{R}^{m\times 1}\); // vector of regression residuals 19:\(\mathbf{M}\leftarrow\begin{bmatrix}1&1&\cdots&1\\ 1&2&\cdots&m\end{bmatrix}^{\mathsf{T}}\); // for linear regression 20:\(\mathbf{q}_{\tau}\leftarrow\mathbf{0}\in\mathbb{R}^{2\times 1}\); // \(\mathbf{q}=[\alpha,\beta]^{\mathsf{T}}\) 21:for\(\tau\in\langle 1,2,\ldots,k\rangle\)do 22:\(\mathbf{Y}_{\tau}\leftarrow[Z_{(\tau-1)m+1},Z_{(\tau-1)m+2},\cdots,Z_{\tau m}]^{ \mathsf{T}}\); 23:\(\mathbf{q}_{\tau}\leftarrow\textsc{LinearRegression}(\mathbf{M},\mathbf{Y}_{\tau},m, \texttt{flag})\); 24:\(\mathbf{\varepsilon}_{\tau}\leftarrow\mathbf{Y}_{\tau}-\mathbf{M}\mathbf{q}_{\tau}\); 25:\(s_{\tau}\leftarrow\mathbf{\mathcal{S}}_{i}^{l:m}\left\{\varepsilon_{\tau}^{i}\right\}\); // standard deviation 26:endfor 27:\(S_{\texttt{idx}}\leftarrow\mathcal{A}_{\tau}^{1:k}\left\{s_{\tau}\right\}\); 28:endfor ``` **Algorithm 10** Detrended Fluctuation Analysis Estimator for Hurst exponent \(\langle\mathbf{A},\mathbf{b}\rangle\leftarrow\text{FormatPowLawData}(\mathbf{T},\mathbf{S},n)\); * \(\mathbf{p}\leftarrow\text{LinearRegSolver}(\mathbf{A},\mathbf{b},n,\text{flag})\); * \(\beta_{\text{DFA}}\gets p_{2}\) ; // \(\mathbf{p}=[\alpha,\beta]^{\text{T}}\); * \(H\leftarrow\beta_{\text{DFA}}\); * **return**\(H\); * **endfunction** ### Rescaled Range Analysis (R/S Analysis) #### 4.6.1 Principle of R/S Estimator Similar to the DFA method, for the time sequence \(\{X_{t}\}_{t=1}^{N}\), with the configuration of parameter \(\alpha\in[0.95,1]\) and minimal size \(w\) of the subsequence, the optimal sequence length \(N_{\text{opt}}\) can be solved with **Algorithm** 6, the size \(m\) can be calculated by (50) and the number of subsequence will be \(k=N_{\text{opt}}/m\). The way for finding the Hurst exponent [50] by R/S method is illustrated in **Figure** 4. Here we give some interpretations for the steps of the R/S estimator: * Pre-conditioning: Partitioning the sequence \(\{X_{t}:1\leq t\leq N\}\) into \(k\) subsequences with minimal Figure 4: Partition of time sequence and structure of cumulative sequence for R/S analysis size \(w\) such that \(N_{\text{opt}}=mk\) and \[m\in S_{\text{bpf}}(N_{\text{opt}},w)=\left\{m_{1},m_{2},\cdots,m_{n}\right\}\] (85) where \[n=\left|S_{\text{bpf}}(N_{\text{opt}},w)\right|.\] (86) 1. For each subsequence, computing its local arithmetic average by \[E_{\tau}(m)=\mathcal{A}_{j}^{1:m}\left\{X_{(\tau-1)m+j}\right\}.\] (87) 2. 
Construct the \(\tau\)-th local bias/detrend sequence \[B_{\tau}=\left\{B_{\tau}^{j}:1\leq j\leq m\right\}\] (88) in which \[B_{\tau}^{j}=X_{m(\tau-1)+j}-E_{\tau}(m),\quad 1\leq j\leq m\] (89) and the cumulative bias sequence \[Y_{\tau}=\left\{Y_{\tau}^{i}:1\leq i\leq m\right\}\] (90) where \[Y_{\tau}^{i}=\mathcal{C}_{j}^{1:i}\left\{B_{\tau}^{j}\right\}=\sum_{j=1}^{i} B_{\tau}^{j},\quad 1\leq i\leq m.\] (91) 3. Calculating the unbiased standard deviation of the local bias sequence \[s_{\tau}(m)=\mathcal{S}_{j}^{1:m}\left\{B_{\tau}^{j}\right\},\quad 1\leq \tau\leq k\] (92) 4. Calculating the range for the \(\tau\)-th cumulative bias sequence \(Y_{\tau}\) \[r_{\tau}(m)=\max_{1\leq i\leq m}Y_{\tau}^{i}-\min_{1\leq i\leq m}Y_{\tau}^{i}, \quad 1\leq\tau\leq k\] (93) 5. Computing the R/S statistics of the sequence \(Y_{\tau}\) \[\mathscr{R}_{Y_{\tau}}(m)=\frac{r_{\tau}(m)}{s_{\tau}(m)}\] (94) 6. Calculating the arithmetic average of each R/S statistics \[\overline{\mathscr{R}_{Y}(m)}=\mathcal{A}_{\tau}^{1:k}\left\{\mathscr{R}_{Y _{\tau}}(m)\right\}=\frac{1}{k}\sum_{\tau=1}^{k}\frac{r_{\tau}(m)}{s_{\tau}(m )}.\] (95) in order to get the asymptotic relation \[\overline{\mathscr{R}_{Y}(m)}\propto m^{H}\Longleftrightarrow\ln\overline{ \mathscr{R}_{Y}(m)}\sim\gamma+H\cdot\ln m\] (96) 7. Repeating the steps i)\(\sim\) v) for \(n\) times for different choices of \(m\) and setting \[\begin{cases}\boldsymbol{T}=[m_{1},\cdots,m_{n}]^{\mathsf{T}}\\ \boldsymbol{S}=[S(m_{1}),\cdots,S(m_{n})]^{\mathsf{T}}\\ \langle\boldsymbol{A},\boldsymbol{b}\rangle\leftarrow\text{FormatPowLawData}( \boldsymbol{T},\boldsymbol{S},n)\end{cases}\] (97) * Estimating the Hurst exponent with linear regression \[\mathbf{p}_{\text{RS}}\leftarrow\text{LinearRegrSolver}(\mathbf{A},\mathbf{b},k,\texttt{flag})\] (98) where \(\mathbf{p}_{\text{RS}}=\left[\hat{\alpha}_{\text{RS}},\hat{\beta}_{\text{RS}} \right]^{\mathsf{T}}\), which implies that \[\hat{H}_{\text{RS}}=\hat{\beta}_{\text{RS}}.\] (99) Particularly, the theoretical values of the R/S statistics of white noise are usually approximated by [51]: \[\text{E}\left\{\mathscr{R}_{Y}(m)\right\} =\begin{cases}\frac{m-\frac{1}{2}}{m}\cdot\frac{\Gamma\left(\frac{ m-1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{m}{2}\right)}\cdot\sum\limits_{i=1}^{m-1} \sqrt{\frac{m-i}{i}},&n\leq 340\\ \frac{m-\frac{1}{2}}{m}\cdot\sqrt{\frac{2}{\pi m}}\cdot\sum\limits_{i=1}^{m-1} \sqrt{\frac{m-i}{i}},&n>340\end{cases} \tag{100}\] where \[\Gamma(x)=\int_{0}^{+\infty}t^{x-1}\text{e}^{-t}\,\text{d}\,t\] is the Gamma function and the factor \((m-1/2)/m\) was added by Peters [52] to improve the performance for small \(m\). For this purpose, we can construct the revised R/S statistics [50]: \[\mathscr{R}_{Y}^{\text{AL}}(m)=\mathscr{R}_{Y}(m)-\text{E}\left\{\mathscr{R} _{Y}(m)\right\}+\sqrt{\frac{\pi m}{2}} \tag{101}\] for \(Y_{\tau}\) which has the asymptotic behavior \[\mathscr{R}_{Y}^{\text{AL}}(m)\propto m^{H}\Longleftrightarrow\ln\mathscr{R} _{Y}^{\text{AL}}(m)\sim\gamma+H\cdot\ln m. \tag{102}\] It should be noted that the estimation for the Hurst exponent via revised statistic \(\mathscr{R}_{Y_{\tau}}^{\text{AL}}\) will be smaller than the true value when \(H>0.5\). #### 4.6.2 Algorithm for R/S Estimator **Algorithm** 11 provides a procedure for calculating the Hurst exponent of a time sequence where used the unrevised R/S statistic. The revised statistic constructed by equation (100) and (101) can be designed by readers. 
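As a complement to the pseudocode of **Algorithm** 11 below, the following minimal Python/NumPy sketch walks through steps i)-vii) of the R/S procedure. It is an illustrative reading of equations (85)-(99) rather than the released implementation: the block sizes are passed in as a plain list `block_sizes` (standing in for \(S_{\text{bpf}}(N_{\text{opt}},w)\)), the unrevised statistic of equation (95) is used, and the log-log slope is fitted by ordinary least squares with `np.polyfit`.

```python
import numpy as np

def rs_hurst(x, block_sizes):
    """Estimate H by R/S analysis, eqs. (85)-(99), unrevised statistic."""
    x = np.asarray(x, dtype=float)
    log_m, log_rs = [], []
    for m in block_sizes:
        k = len(x) // m                      # number of complete subsequences
        if k < 1:
            continue
        rs_values = []
        for tau in range(k):
            seg = x[tau * m:(tau + 1) * m]
            bias = seg - seg.mean()          # local bias sequence, eq. (89)
            y = np.cumsum(bias)              # cumulative bias sequence, eq. (91)
            s = seg.std(ddof=1)              # unbiased std of the bias sequence, eq. (92)
            r = y.max() - y.min()            # range of the cumulative sequence, eq. (93)
            if s > 0:
                rs_values.append(r / s)      # R/S statistic, eq. (94)
        if rs_values:
            log_m.append(np.log(m))
            log_rs.append(np.log(np.mean(rs_values)))   # average over subsequences, eq. (95)
    slope, _ = np.polyfit(log_m, log_rs, 1)  # ln R/S ~ gamma + H ln m, eq. (96)
    return slope                             # H_RS = fitted slope, eq. (99)

# For a white-noise sequence the estimate should be close to 0.5
rng = np.random.default_rng(0)
print(rs_hurst(rng.standard_normal(10_000), [50, 100, 200, 500, 1000]))
```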
``` 1:Time sequences data \(\mathbf{X}\), window size \(w\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\mathbf{X}\). 3:functionEstHurstRS(\(\mathbf{X},w,\texttt{flag}\)) 4:\(N\leftarrow\text{GetLength}(\mathbf{X})\); 5:\(N_{\text{opt}}\leftarrow\text{SearchOptSeqLen}(N,w)\); 6:\(\mathbf{T}\leftarrow\text{GenSbpF}(N_{\text{opt}},w)\); 7:\(n\leftarrow\text{GetLength}(\mathbf{T})\); 8:\(\mathbf{S}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the statistics 9:for\(\texttt{idx}\in\left\langle 1,2,\cdots,n\right\rangle\)do 10:\(m\gets T_{\texttt{idx}}\); 11:\(k\gets N_{\text{opt}}/m\); 12:\(\mathbf{L}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the rescaled range 13:for\(\tau\in\left\langle 1,2,\ldots,k\right\rangle\)do 14:\(E_{\tau}\leftarrow\mathcal{A}_{i}^{1:m}\left\{X_{(\tau-1)m+i}\right\}\); 15:\(\mathbf{B}_{\tau}\leftarrow\mathbf{0}\in\mathbb{R}^{m\times 1}\); ``` **Algorithm 11** Rescaled Range Analysis Estimator for Hurst exponent ### Triangles Total Areas (TTA) Method #### 4.7.1 Principle of TTA Estimator For time sequence \(\left\{X_{t}\right\}_{t=1}^{N}\), we can derive the _triangles total areas_ (TTA) method with the cumulative sequence \[Y_{i}=\mathcal{C}_{t}^{1:i}\left\{X_{t}-\overline{X}\right\},\quad 1\leq i\leq n. \tag{103}\] For the fixed time lag \(\tau\in\mathbb{N}\) and \(i\)-th group of vertices \(\left\{P_{i},Q_{i},R_{i}\right\}\subset\mathbb{R}^{2}\) such that \[\left\{\begin{array}{l}P_{i}=(i,Y_{i})\\ Q_{i}=(i+\tau,Y_{i+\tau})\quad\quad 1\leq i\leq\left\lfloor\frac{N-1}{2\tau}\right\rfloor \\ R_{i}=(i+2\tau,Y_{i+2\tau})\end{array}\right. \tag{104}\] for the triangle \(\Delta P_{i}Q_{i}R_{i}\), its area can be calculated with the 3-order determinant, viz. \[\begin{split} A_{i}&=\frac{1}{2}\left|\det\left(\begin{array} []{ccc}i&i+\tau&i+2\tau\\ Y_{i}&Y_{i+\tau}&Y_{i+2\tau}\\ 1&1&1\end{array}\right)\right|\\ &=\frac{\tau}{2}\left|Y_{i+2\tau}-2Y_{i+\tau}+Y_{i}\right|,\quad 1\leq i \leq\left\lfloor\frac{N-1}{2\tau}\right\rfloor\end{split} \tag{105}\] then the total area of the triangles is \[A_{\text{total}}(\tau)=\frac{\tau}{2}\sum_{j=1}^{\left\lfloor\frac{N-1}{2\tau} \right\rfloor}A_{i}. \tag{106}\] **Figure** 5 illustrates the relevant details of the construction for each triangle and the total area. Lotfalinezhad and Maleki showed that [12] \[A_{\text{total}}(\tau)\propto\tau^{H}\Longleftrightarrow\ln A_{\text{total}}( \tau)\sim\alpha_{\text{TTA}}+\beta_{\text{TTA}}\cdot\ln\tau \tag{107}\] Consequently, we have \[\hat{H}_{\text{TTA}}=\hat{\beta}_{\text{TTA}} \tag{108}\] by linear regression. The _triangles areas method_ (TA), a modification of TTA method, proposed in [13] has similar principle, and the modification is just to consider the distribution of the area of the triangles instead of the distribution of the sum of the areas of all the triangles. #### 4.7.2 Algorithm for TTA Estimator Similar to **Algorithm** 8, we set \(\tau_{\text{max}}=10\) in **Algorithm** 12, which indicates the maximum scale for each sample sequence. ``` 1:Time series data \(\mathbf{X}\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\mathbf{X}\). 
3:functionEstHurstTTA(\(\mathbf{X},\texttt{flag}\)) 4:\(n\gets 10\); // \(\tau_{\text{max}}=10\) 5:\(N\leftarrow\textsc{GetLength}(\mathbf{X})\); 6:\(\mathbf{T}\leftarrow\langle 1,2,\cdots,n\rangle\); 7:\(\mathbf{S}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the statistics 8:\(\overline{X}\leftarrow\mathcal{A}_{i}^{1:N}\left\{X_{i}\right\}\); 9:for\(i\in\langle 1,2,\cdots,N\rangle\)do 10:\(Y_{i}\leftarrow\mathcal{C}_{j}^{\text{\tiny{List}}}\left\{X_{j}-\overline{X}\right\}\); 11:endfor 12:for\(\texttt{idx}\in\langle 1,2,\cdots,n\rangle\)do 13:\(\texttt{sum}\gets 0\); 14:for\(i\in\langle 1,2,\cdots,\lfloor\frac{N-1}{2\tau}\rfloor\rangle\)do 15:\(j\gets 2(i-1)\tau+1\); 16:\(\texttt{sum}\leftarrow\texttt{sum}+|Y_{j+2\tau}-2Y_{j+\tau}+Y_{j}|\); 17:endfor 18:\(S_{\texttt{idx}}\leftarrow\texttt{idx}\cdot\texttt{sum}/2\); 19:endfor 20:\(\langle\mathbf{A},\mathbf{b}\rangle\leftarrow\textsc{FormatPowLawData}(\mathbf{T},\mathbf{S},n)\); ``` **Algorithm 12** Total Triangle Area Method for Estimating Hurst Exponent Figure 5: Construction of triangles with four different lags \(\tau=1,2,3,4\). 19:\(\boldsymbol{p}\leftarrow\text{LinearRegSolver}(\boldsymbol{A},\boldsymbol{b},n, \text{flag})\); 20:\(\beta_{\text{TTA}}\gets p_{2}\ ;\ //\ \boldsymbol{p}=[\alpha,\beta]^{\text{T}}\); 21:\(H\leftarrow\beta_{\text{TTA}}\); 22:\(\text{\bf return}\ H\); 23:endfunction ``` **Algorithm 13** Periodogram Estimator for Hurst exponent ### Periodogram Method (PM) #### 4.8.1 Principle of Periodogram Estimator Geweke and Porter-Hudak proposed the _periodogram method_ (PM) for estimating the Hurst exponent [18]. The periodogram for a time sequence \(\{X_{t}:1\leq t\leq N\}\) can be calculated by \[I(k)=\frac{1}{N}\left|\sum_{t=1}^{N}X_{t}e^{-\frac{2\pi\text{i}}{N}(k-1)(t-1)} \right|^{2},\quad 1\leq k\leq N \tag{109}\] where \(\text{i}=\sqrt{-1}\) and \(I(k)\) is the squared absolute value of the DFT of the sequence \(X_{t}\). Weron et al. showed that [50] \[I(k)\propto\left[4\text{sin}^{2}\left(\frac{k}{2N}\right)\right]^{\frac{1}{2}- H},\quad 1\leq k\leq\left\lfloor\frac{N}{2}\right\rfloor \tag{110}\] or equivalently \[\ln I(k)\sim\alpha_{\text{PM}}+\beta_{\text{PM}}\cdot\ln\left[4\text{sin}^{2} \left(\frac{k}{2N}\right)\right],\quad 1\leq k\leq\left\lfloor\frac{N}{2}\right\rfloor \tag{111}\] In consequence, we have \[\hat{H}_{\text{PM}}=\frac{1}{2}-\hat{\beta}_{\text{PM}} \tag{112}\] by linear regression. #### 4.8.2 Algorithm for Periodogram Estimator Spectrum-domain method relies more advanced mathematical concepts and tools. **Algorithm 13** we use the procedure FFT to transform a time sequence into spectrum-domain. Since there are lots of toolboxes for the FFT in C/C++/MATLAB/Python/R, the details for the principle and algorithm implementation are omitted here. ``` 1:Time sequence \(\boldsymbol{X}\), the cut-off frequency \(f_{\text{cutoff}}\), indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\boldsymbol{X}\). 3:functionEstHurstPeriodDiagram(\(\boldsymbol{X},f_{\text{cutoff}},\text{flag}\)) 4:\(N\leftarrow\text{GetLength}(\boldsymbol{X})\); 5:\(\boldsymbol{Y}\leftarrow\)FFT(\(\boldsymbol{X}\)); 6:\(\boldsymbol{T}\leftarrow\emptyset\); // for the \(4\sin^{2}(k/(2N))\) 7:\(\boldsymbol{S}\leftarrow\emptyset\); // for the periodogram \(I(k)\) 8:for\(k\in\left\langle 2,\cdots,\lfloor\frac{N}{2}\rfloor\right\rangle\)do // Attention, please! 
9:\(f\gets k/N\); // Calculate frequencies 10:if\(f\leq f_{\text{cutoff}}\)then 11:\(\boldsymbol{T}\leftarrow\boldsymbol{T}\cup\left\{4\sin^{2}(f/2)\right\}\); 12:\(\boldsymbol{S}\leftarrow\boldsymbol{S}\cup\left\lfloor Y_{k}\right\rfloor^{2} /N\right\}\); 13:endif 14:endfor 15:\(n\leftarrow\text{GetLength}(\boldsymbol{T})\); ``` **Algorithm 14** Periodogram Estimator for Hurst exponent * \(\langle\mathbf{A},\mathbf{b}\rangle\leftarrow\text{FormatPowLawData}(\mathbf{T},\mathbf{S},n)\); * \(\mathbf{p}\leftarrow\text{LinearRegSolver}(\mathbf{A},\mathbf{b},n,\text{flag})\); * \(\beta_{\text{PD}}\gets p_{2}\); // \(\mathbf{p}=[\alpha,\beta]^{\mathsf{T}}\); * \(H\gets 0.5-\beta_{\text{PD}}\); * **return**\(H\); * **endfunction** ### Discrete Wavelet Transform (DWT) Method It is also feasible to estimate the Hurst exponent with discrete wavelet transform. The time sequence \(\mathbf{X}=\{X_{t}\}_{t=1}^{N}\) can be transformed into the spectrum-domain by \[W_{\mathbf{X}}(a,b)=\text{DWT}_{b}^{a}(\mathbf{X},\psi) \tag{113}\] where \(a\) is the scale parameter, \(b\) is the location parameter, \(\psi\) is the wavelet function and \(\text{DWT}_{b}^{a}\) is the DWT for the details of the time sequence according to (24). For the given scale \(a\), we can find a representation of the wavelet "energy" or amplitude and study its scaling [53] for exploring the power law of interest when estimating the Hurst exponent. #### 4.9.1 Average Wavelet Coefficient Method (AWC) The _average wavelet coefficient_ (AWC) method is based on the self-affine correlations of the DWT of a time sequence, which can be used for estimating the Hurst exponent. Simonsen showed that [24] \[W_{\mathbf{X}}^{\text{ave}}(a)=\frac{1}{|I_{a}|}\sum_{b\in I_{a}}|W(a,b)|\sim a^{H -0.5} \tag{114}\] where \(I_{a}\) is defined by (21). Taking the logarithms of both sides, we immediately have \[\ln W_{\mathbf{X}}^{\text{ave}}(a)\sim\alpha_{\text{AWC}}+\beta_{\text{AWC}}\cdot \ln a. \tag{115}\] Consequently, we can obtain \[\hat{H}_{\text{AWC}}=\hat{\beta}_{\text{AWC}}+\frac{1}{2} \tag{116}\] by linear regression. #### 4.9.2 Variance Versus Level (VVL) Method Similar to the AWC method, we can construct the _variance versus level_ VVL spectrum over all of the location parameters \(b\in I_{a}\) for the given scale \(a\). Let \[W_{\mathbf{X}}^{\text{vl}}(a)=\frac{1}{|I_{a}|-1}\sum_{b\in I_{a}}\left[|W(a,b)|-W_ {\mathbf{X}}^{\text{ave}}(a)\right]^{2} \tag{117}\] be the variance of \(|W(a,b)|\) respect to the discrete location variable \(b\). Flandrin showed that [23] \[W_{\mathbf{X}}^{\text{vl}}(a)\sim a^{2H-1} \tag{118}\] Equivalently, we have \[\ln W_{\mathbf{X}}^{\text{vl}}(a)\sim\alpha_{\text{VVL}}+\beta_{\text{VVL}}\cdot \ln a, \tag{119}\] which implies that \[\hat{H}_{\text{VVL}}=(1+\hat{\beta}_{\text{VVL}})/2 \tag{120}\] with the help of linear regression. #### 4.9.3 Algorithm for DWT Estimator It is easy to find there is a unified formula for the AWC and VVL methods. Actually, we have \[\hat{H}_{\text{DWT}}=\frac{1}{2}+\frac{\hat{\beta}_{\text{DWT}}}{r}=\left\{ \begin{array}{ll}0.5+\hat{\beta}_{\text{AWC}}&r=1\\ 0.5+0.5\cdot\hat{\beta}_{\text{VVL}},&r=2\end{array}\right. \tag{121}\] Thus it is convenient for us to design a unified interface for estimating the Hurst exponent with AWC method and VVL method. 
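A compact illustration of this unified interface, written with NumPy and the PyWavelets package (an assumed dependency; the wavelet choices "db24" for AWC and "haar" for VVL mirror Algorithm 14 below). The sketch averages (r = 1) or takes the sample variance of (r = 2) the absolute detail coefficients at each dyadic scale and fits the log-log slope. It is illustrative only; in particular, the decomposition depth is capped at what the filter length allows rather than at \(\lfloor\log_{2}N\rfloor\).

```python
import numpy as np
import pywt

def dwt_hurst(x, r=1):
    """AWC (r=1) / VVL (r=2) estimate of H, following eqs. (114)-(121)."""
    x = np.asarray(x, dtype=float)
    wavelet = "db24" if r == 1 else "haar"
    max_level = pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    # coeffs = [cA_L, cD_L, cD_{L-1}, ..., cD_1]; detail level j corresponds to scale 2^j
    scales, stats = [], []
    for j, detail in enumerate(reversed(coeffs[1:]), start=1):
        w_abs = np.abs(detail)                      # |W(a, b)| over all locations b
        if w_abs.size < 2:
            continue
        scales.append(2.0 ** j)                     # scale a = 2^j
        stats.append(w_abs.mean() if r == 1 else w_abs.var(ddof=1))
    slope, _ = np.polyfit(np.log(scales), np.log(stats), 1)
    return slope + 0.5 if r == 1 else 0.5 * slope + 0.5   # eq. (121)
```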
**Algorithm 14** Discrete Wavelet Transform Estimator for Hurst exponent ``` 1:Time sequences data \(\boldsymbol{X}\), integer \(r\in\langle 1,2\rangle\) for the AWC/VVL method, indicator flag for the optimization method in linear regression. 2:Hurst exponent of the sequence \(\boldsymbol{X}\). 3:functionEstHurstDWT(\(\boldsymbol{X},r,\texttt{flag}\)) 4:\(N\leftarrow\texttt{GetLength}(\boldsymbol{X})\); 5:\(n\leftarrow\lfloor\log_{2}(N)\rfloor\); // Calculate appropriate decomposition level 6:if\(r=1\)then 7:\(\boldsymbol{W}\leftarrow\textsc{Wavedec}(\boldsymbol{X},\text{"db24"},n)\); // 24-th order Daubechies DWT 8:else 9:\(\boldsymbol{W}\leftarrow\textsc{Wavedec}(\boldsymbol{X},\text{"haar"},n)\); // Haar DWT 10:endif 11:\(\boldsymbol{T}\leftarrow\boldsymbol{0}\in\mathbb{R}^{n\times 1}\); // For the scale 12:\(\boldsymbol{S}\leftarrow\boldsymbol{0}\in\mathbb{R}^{n\times 1}\); // For the AWC-spectrum 13:for\(\texttt{idx}\in\langle 1,2,\ldots,n\rangle\)do 14:\(T_{\texttt{idx}}\gets 2^{\texttt{idx}}\); // Add scale coefficient 15:\(\boldsymbol{L}_{\text{pos}}\leftarrow\left|W_{\texttt{idx}}\right|\); // All location parameters corresponding to scale \(2^{\texttt{idx}}\) 16:if\(r=1\)then 17:\(S_{\texttt{idx}}\leftarrow\textsc{Mean}(\boldsymbol{L}_{\text{pos}})\); // AWC-spectrum 18:else 19:\(S_{\texttt{idx}}\leftarrow\operatorname{Var}(\boldsymbol{L}_{\text{pos}})\); // VVL-spectrum 20:endif 21:endfor 22:\(\langle\boldsymbol{A},\boldsymbol{b}\rangle\leftarrow\textsc{FormatPowLawData}( \boldsymbol{T},\boldsymbol{S},n)\); 23:\(\boldsymbol{p}\leftarrow\textsc{LinearRegSolver}(\boldsymbol{A},\boldsymbol{b},n, \texttt{flag})\); 24:\(\beta_{\text{DWT}}\gets p_{2}\) ; // \(\boldsymbol{p}=[\alpha,\beta]^{\mathsf{T}}\); 25:\(H\leftarrow\beta_{\text{DWT}}/r+0.5\); 26:return\(H\); 27:endfunction ``` **Algorithm 14** Discrete Wavelet Transform Estimator for Hurst exponent ### Local Whittle (LW) Method #### 4.10.1 Principle of LW Estimator As the similar process stated in the PM method, for the vector \[\boldsymbol{\lambda}=[\lambda_{1},\cdots,\lambda_{n}]^{\mathsf{T}} \tag{122}\] such that \[\lambda_{j}=\frac{2\pi j}{N},\quad j=1,2,\cdots,n=\left\lfloor\frac{N}{2}\right\rfloor \tag{123}\] we can define \[\omega(\lambda_{j})=\frac{1}{N}\sum_{t=1}^{N}X_{t}e^{\mathsf{i}(t-1)\lambda_{j}} \tag{124}\] and \[\mathbf{I}(\mathbf{\lambda})=\left[I_{1},\cdots,I_{n}\right]^{\mathsf{T}}=\left[\left| \omega(\lambda_{1})\right|^{2},\cdots,\left|\omega(\lambda_{n})\right|^{2} \right]^{\mathsf{T}} \tag{125}\] Kunsch et al [19] showed that the Hurst exponent can be estimated by solving the following optimization problem \[\hat{H}_{\text{LW}}=\arg\min_{H\in(0,1)}\psi(H) \tag{126}\] with the objective function \[\psi(H)=\ln\left[\frac{1}{n}\sum_{j=1}^{n}\lambda_{j}^{2H-1}I_{j}\right]-\frac {2H-1}{n}\sum_{j=1}^{n}\ln\lambda_{j}. \tag{127}\] For more details, please see Robinson's work [20]. #### 4.10.2 Algorithm for LW Estimator The procedure EstHurstLW listed in **Algorithm**15 is designed for estimating the Hurst exponent with the LW method. We remarked that there are two procedures that are involved in **Algorithm**15: * the procedure ObjFunLW is used to compute the value of \(\psi(H)\), and * the procedure LocMinSolver to find the minimum of \(\psi(H)\), please see the **Algorithm**24 in the Appendix A for more details. ``` 0: Time sequence \(\mathbf{X}\). 0: Hurst exponent of the sequence \(\mathbf{X}\). 
1:functionEstHurstLW(\(\mathbf{X}\)) 2:\(N\leftarrow\textsc{GetLength}(\mathbf{X})\); 3:\(\mathbf{Y}\leftarrow\textsc{FFT}(\mathbf{X})\); // Fast Fourier Transform 4:\(n\leftarrow\lfloor N/2\rfloor\); 5:\(\mathbf{T}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the frequencies 6:\(\mathbf{S}\leftarrow\mathbf{0}\in\mathbb{R}^{n\times 1}\); // For the periodogram 7:for\(\texttt{idx}\in\langle 1,\cdots,n\rangle\)do // Attention, please! 8:\(T_{\texttt{idx}}\leftarrow\texttt{idx}/N\); // Calculate frequencies 9:\(S_{\texttt{idx}}\leftarrow\left|Y_{\texttt{idx}+1}\right|^{2}\); // Periodogram 10:endfor 11:\(H\leftarrow\textsc{LocMinSolver}(\textsc{ObjFunLW},\ [0.001,\\ 0.999],10^{-8},\langle\mathbf{T},\mathbf{S}\rangle)\); 12:return\(H\); 13:endfunction ``` **Algorithm 16** Computing the objective function \(\psi(H)\) in Local Whittle method **Input**: Variable \(x\in(0,1)\), frequency vector \(\mathbf{T}\) and periodogram vector \(\mathbf{S}\). **Output**: \(\psi(h)\). ``` 1:functionObjFunLW(\(x,\langle\mathbf{T},\mathbf{S}\rangle\)) 2:\(n\leftarrow\textsc{GetLength}(\mathbf{T})\); // \(n=\lfloor N/2\rfloor\) \(y\leftarrow\ln\left(\frac{1}{n}\sum\limits_{i=1}^{n}T_{i}^{2x-1}S_{i}\right)- \frac{2x-1}{n}\sum\limits_{i=1}^{n}\ln T_{i}\); 4. **return**\(y\); 5. **end function** ### Least Squares via Standard Deviation (LSSD) #### 4.11.1 Principle of LSSD Estimator The Hurst exponent can also be estimated with the _least squares via standard deviation_ (LSSD). The steps are summarized as follows: 1. Pre-conditioning: Dividing the sequence \(\left\{X_{t}\right\}_{t=1}^{N}\) into \(k=\left\lfloor N/m\right\rfloor\) subsequences with the same size \(m\) according to (47) where \[m\in\left\{1,2,\cdots,m_{\max}\right\}\] (128) such that \[m_{\max}\geq\left\lfloor\frac{N}{10}\right\rfloor\] (129) and each \(m\) corresponds to a partition scheme. 2. Calculating the cumulative sum of the \(i\)-th subsequence \(X_{(i)}=\left\{X_{(i-1)m+j}:1\leq j\leq m\right\}\), viz. \[Z_{i}^{m}=\mathcal{C}_{j}^{1:m}\left\{X_{(i-1)m+j}\right\},\quad 1\leq i \leq\left\lfloor\frac{N}{m}\right\rfloor\] (130) 3. Constructing the standard deviation sequences \(\left\{s_{m}\right\}_{m=1}^{m_{\max}}\) as follows \[s_{m}=\mathcal{S}_{i}^{1:m}\left\{Z_{i}^{m}\right\},\quad 1\leq m\leq m_{ \max}\] (131) Supposing that \[\operatorname{E}\left\{\overline{s_{m}}\right\}=\operatorname{E}\left\{ \mathcal{A}_{m}^{1:m_{\max}}\left\{s_{m}\right\}\right\}=\sigma\] (132) then the self-similarity property of the sequence implies that [14] \[\operatorname{E}\left\{s_{m}\right\}\approx\sigma\cdot c_{\text{LSSD}}(m,H) \cdot m^{H}\] (133) where \(H\) is the Hurst exponent and \[c_{\text{LSSD}}(m,H)=\sqrt{\frac{N/m-(N/m)^{2H-1}}{N/m-1/2}}\] (134) 4. Constructing optimization problem. Koutsoyiannis et al. [54] introduced the following function for fitting error \[\mathcal{E}_{\text{LSSD}}^{2}(\sigma,H)\] (135) \[=\sum_{m=1}^{m_{\max}}\frac{\left[\ln\frac{\operatorname{E} \left\{s_{m}\right\}}{s_{m}}\right]^{2}}{m^{p}}+\frac{H^{q+1}}{q+1}\] \[=\sum_{m=1}^{m_{\max}}\frac{\left[\ln\sigma+H\cdot\ln m+\ln c_{ \text{LSSD}}(m,H)-\ln s_{m}\right]^{2}}{m^{p}}\] \[\qquad+\frac{H^{q+1}}{q+1}\] where \(p\in\left\{0,1,2,\cdots\right\}\) is a weight parameter and \(H^{q+1}/(q+1)\) is a penalty factor with default value \(q=50\). 
With the help of least squares, we can obtain a fixed-point equation for the Hurst exponent, which can be written by \[H=\Phi_{\text{LSSD}}(H) \tag{136}\] where \[\Phi_{\text{LSSD}}(H)=\frac{a_{11}[b_{2}(H)-H^{q}]-a_{21}(H)b_{1}(H)}{a_{11}a_{22 }(H)-a_{21}(H)a_{12}} \tag{137}\] in which \[\begin{cases}a_{11}=\sum\limits_{m=1}^{m_{\max}}\frac{1}{m^{p}}\\ a_{12}=\sum\limits_{m=1}^{m_{\max}}\frac{\ln m}{m^{p}}\\ a_{21}(H)=\sum\limits_{m=1}^{m_{\max}}\frac{d_{m}(H)}{m^{p}}\\ a_{22}(H)=\sum\limits_{m=1}^{m_{\max}}\frac{d_{m}(H)\ln m}{m^{p}}\\ b_{1}(H)=\sum\limits_{m=1}^{m_{\max}}\frac{[\ln s_{m}-\ln c_{\text{LSSD}}(m,H)]} {m^{p}}\\ b_{2}(H)=\sum\limits_{m=1}^{m_{\max}}\frac{d_{m}(H)\left[\ln s_{m}-\ln c_{ \text{LSSD}}(m,H)\right]}{m^{p}}\\ d_{m}(H)=\ln m+\frac{\ln(N/m)}{1-(N/m)^{2-2H}}\end{cases} \tag{138}\] Obviously, we can use the Newton's iterative method or direct iterative method to solve the fixed-point in order to obtain the estimation exponent. Koutsoyiannis et al. [54] pointed out that there is a unique fixed-point for the equation (136), which can be solved with **Algorithm** 3. For more details of fixed-point algorithm, please see Chen et al. [55] or the toolbox of MATLAB, Python, and so on. #### 4.11.2 Algorithm for LSSD Estimator The procedure EstHurstLSSD listed in **Algorithm** 17 is designed to estimate the Hurst exponent with the LSSD method. Note that the procedure CtmLSSD is used to compute the contractive mapping \(\Phi_{\text{LSSD}}(H)\) and the procedure FixedPointSolver listed in **Algorithm** 3 provides a general interface for solving the fixed-point of some nonlinear equation. ``` 0: Time sequence \(\boldsymbol{X}\), weight \(p\), penalty parameter \(q\), precision \(\epsilon\) with default value \(\epsilon=10^{-4}\). 0: Hurst exponent of the sequence \(\boldsymbol{X}\). 1:functionEstHurstLSSD(\(\boldsymbol{X},p,q,\epsilon\)) 2:\(N\leftarrow\textsc{GetLength}(\boldsymbol{X})\); 3:\(m_{\max}\leftarrow\lfloor N/10\rfloor\); 4:\(\boldsymbol{T}\leftarrow\langle 1,2,\cdots,m_{\max}\rangle\); 5:\(\boldsymbol{S}\leftarrow\boldsymbol{0}\in\mathbb{R}^{m_{\max}\times 1}\); // For the standard deviation 6:for\(\texttt{idx}\in\langle 1,2,\cdots,m_{\max}\rangle\)do 7:\(m\leftarrow\texttt{idx}\); 8:\(k\leftarrow\lfloor N/m\rfloor\); 9:\(\boldsymbol{Z}\leftarrow\boldsymbol{0}\in\mathbb{R}^{k\times 1}\); 10:for\(i\in\langle 1,2,\cdots,k\rangle\)do 11:\(Z_{i}\leftarrow\sum\limits_{j=1}^{m}X_{(i-1)m+j}\); ``` **Algorithm 17** LSSD Estimator ``` 12:endfor 13:\(S_{\texttt{idx}}\leftarrow\mathcal{S}_{i}^{1:k}\left\{Z_{i}\right\}\); 14:endfor 15:\(H\leftarrow\textsc{FixedPointSolver}(\textsc{CtmLSSD},0.5,\) 16:\(\epsilon,N,p,q,\mathbf{T},\mathbf{S})\); 17:return\(H\); 18:endfunction ``` **Algorithm 18** Computing the \(c_{\text{LSSD}}(m,H)\) The procedures FunCMLSSD listed in **Algorithm 18** and FunDMH listed in **Algorithm 19** are used to compute the \(c_{\text{LSSD}}(m,H)\) and \(d_{m}(H)\) respectively. 
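Before those helper listings, the sketch below shows how the whole LSSD pipeline of equations (128)-(138) can be wired together in NumPy: the standard-deviation sequence \(s_m\) is built as in Algorithm 17, and the map \(\Phi_{\text{LSSD}}\) of equation (137) is iterated directly, standing in for the generic FixedPointSolver. The plain iteration, the iteration cap, and the illustrative choice \(p=2\) are assumptions, not part of the reference code.

```python
import numpy as np

def c_lssd(m, N, H):
    u = N / m
    return np.sqrt((u - u ** (2 * H - 1)) / (u - 0.5))    # eq. (134)

def d_m(m, N, H):
    u = N / m
    return np.log(m) + np.log(u) / (1 - u ** (2 - 2 * H))  # d_m(H) in eq. (138)

def lssd_hurst(x, p=2, q=50, tol=1e-4, max_iter=200):
    """Fixed-point LSSD estimate of H; p=2 is an arbitrary illustrative choice."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    m_max = N // 10
    s = np.empty(m_max)
    for m in range(1, m_max + 1):                  # standard deviations, eq. (131)
        k = N // m
        z = x[: k * m].reshape(k, m).sum(axis=1)   # block sums Z_i^m, eq. (130)
        s[m - 1] = z.std(ddof=1)
    ms = np.arange(1, m_max + 1, dtype=float)
    H = 0.5                                        # starting point, as in Algorithm 17
    for _ in range(max_iter):
        cm, dm, w = c_lssd(ms, N, H), d_m(ms, N, H), ms ** p
        a11 = np.sum(1.0 / w)
        a12 = np.sum(np.log(ms) / w)
        a21 = np.sum(dm / w)
        a22 = np.sum(dm * np.log(ms) / w)
        b1 = np.sum((np.log(s) - np.log(cm)) / w)
        b2 = np.sum(dm * (np.log(s) - np.log(cm)) / w)
        H_new = (a11 * (b2 - H ** q) - a21 * b1) / (a11 * a22 - a21 * a12)   # eq. (137)
        if abs(H_new - H) < tol:
            return H_new
        H = H_new
    return H
```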
``` 1:Positive integer \(m\in\{1,2,\cdots,m_{\max}\}\), parameter \(H\in(0,1)\), Positive integer \(N\) 2:The value of \(c_{\text{LSSD}}(m,H)\) 3:functionFunCMLSSD\((m,N,H)\) 4:\(u\gets N/m\); 5:\(c\leftarrow\sqrt{(u-u^{2H-1})/(u-0.5)}\); 6:return\(c\); 7:endfunction ``` **Algorithm 19** Computing the \(d_{m}(H)\) ``` 1:Positive integer \(m\in\{1,2,\cdots,m_{\max}\}\), parameter \(H\in(0,1)\), Positive integer \(N\) 2:The value of \(d_{m}(H)\) 3:functionFunDMH\((m,N,H)\) 4:\(u\gets N/m\); 5:\(d\leftarrow\ln m+\ln u/(1-u^{2-2H})\); 6:return\(d\); 7:endfunction ``` **Algorithm 20** Contractive Mapping for the LSSD method ``` 1:Parameter \(H\in(0,1)\), length \(N\), weight \(p\), penalty parameter \(q\), scale vector \(\mathbf{T}=[1,2,\cdots,m_{\max}]^{\mathsf{T}}\), standard deviation vector \(\mathbf{S}=[s_{1},s_{2},\cdots,s_{m_{\max}}]^{\mathsf{T}}\). 2:The value of \(\Phi_{\text{LSSD}}(H)\). 3:functionCtmLSSD\((H,\langle N,p,q,\mathbf{T},\mathbf{S}\rangle)\) 4:\(m_{\max}\leftarrow\textsc{GetLength}(\mathbf{T})\); 5:\(a_{11}\gets 0,a_{12}\gets 0\); 6:\(a_{21}\gets 0,a_{22}\gets 0\); 7:\(b_{1}\gets 0,b_{2}\gets 0\); 8:foridx\(\in\langle 1,2,\cdots,m_{\max}\rangle\)do 9:\(m\gets T_{\texttt{idx}}\); 10:\(s_{m}\gets S_{\texttt{idx}}\); 11:\(c_{m}\leftarrow\textsc{FunCMLSSD}(m,N,H)\); 12:\(d_{m}\leftarrow\textsc{FunDMH}(m,N,H)\); 13:\(u\gets m^{p}\); 14:\(a_{11}\gets a_{11}+1.0/u\); 15:\(a_{12}\gets a_{12}+\ln m/u\); 16:\(a_{21}\gets a_{21}+d_{m}/u\); 17:\(a_{22}\gets a_{22}+d_{m}\cdot\ln m/u\); 18:\(b_{1}\gets b_{1}+(\ln s_{m}-\ln c_{m})/u\); ``` **Algorithm 21** Contractive Mapping for the LSSD method 17:\(b_{2}\gets b_{2}+d_{m}\cdot(\ln s_{m}-\ln c_{m})/u\); 18:endfor 19:\(g\leftarrow\frac{a_{11}\cdot(b_{2}-H^{q})-a_{21}\cdot b_{1}}{a_{11}\cdot a_{22}-a_{2 1}\cdot a_{12}}\); 20:return\(g\); 21:endfunction ``` **Algorithm 21** Estimating the Hurst exponent with the LSV method ### Least Squares via Variance (LSV) #### 4.12.1 Principle of LSV Estimator Similar to the LSSD-method, we can construct the variance sequences according to (130): \[s_{m}^{2}=\left(\mathcal{S}_{i}^{1:m}\left\{Z_{i}^{m}\right\}\right)^{2},\quad 1 \leq m\leq m_{\max} \tag{139}\] The self-similarity property implies that \[\mathrm{E}\left\{s_{m}^{2}\right\}=c_{\mathrm{LSV}}(m,H)\cdot m^{2H}\cdot \sigma^{2} \tag{140}\] where \[c_{\mathrm{LSV}}(m,H)=\frac{N/m-(N/m)^{2H-1}}{N/m-1} \tag{141}\] Tyralis et al. [15] introduced the fitting error function \[\mathcal{E}_{\mathrm{LSV}}^{2}(\sigma,H) \tag{142}\] \[=\sum_{k=1}^{k_{\max}}\frac{\left[c_{\mathrm{LSV}}(m,H)k^{2H} \sigma^{2}-s_{m}^{2}\right]^{2}}{m^{p}}+\frac{H^{q+1}}{q+1}\] where \(p\in\{0,1,2,\cdots\}\) is a weight parameter and \(H^{q+1}/(q+1)\) is a penalty factor with default value \(q=50\). With the help of least squares method, Tyralis et al. showed that the Hurst exponent can be estimated by solving the following optimization problem [15]: \[\hat{H}_{\mathrm{LSV}}=\arg\min_{H\in(0,1)}\Phi_{\mathrm{LSV}}(H) \tag{143}\] where \[\Phi_{\mathrm{LSV}}(H)=\sum_{m=1}^{m_{\max}}\frac{s_{m}^{4}}{m^{p}}-\frac{a_{1 2}^{2}(H)}{a_{11}(H)}+\frac{H^{q+1}}{q+1} \tag{144}\] in which \[\begin{cases}a_{11}(H)=\sum_{m=1}^{m_{\max}}\frac{[c_{\mathrm{LSV}}(m,H)]^{2} \cdot m^{4H}}{m^{p}}\\ a_{12}(H)=\sum_{m=1}^{m_{\max}}\frac{c_{\mathrm{LSV}}(m,H)\cdot m^{2H}\cdot s _{m}^{2}}{m^{p}}\end{cases} \tag{145}\] #### 4.12.2 Algorithm for LSV Estimator The procedure EstHurstLSV listed in **Algorithm 21** is designed for estimating the Hurst exponent with the LSV method. 
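As a complement to the pseudocode listings referenced here, a short NumPy/SciPy sketch of the same computation is given below. It builds the variance sequence of equation (139), evaluates \(\Phi_{\text{LSV}}(H)\) from equations (144)-(145), and minimises it over \((0,1)\) with SciPy's bounded scalar minimiser standing in for LocMinSolver; the use of SciPy and the illustrative choice \(p=2\) are assumptions, not part of the reference implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lsv_hurst(x, p=2, q=50):
    """Minimise Phi_LSV(H) of eqs. (144)-(145) over (0, 1)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    m_max = N // 10
    ms = np.arange(1, m_max + 1, dtype=float)
    s2 = np.empty(m_max)
    for m in range(1, m_max + 1):
        k = N // m
        z = x[: k * m].reshape(k, m).sum(axis=1)   # block sums Z_i^m, eq. (130)
        s2[m - 1] = z.std(ddof=1) ** 2             # variance sequence, eq. (139)
    w = ms ** p

    def phi(H):
        u = N / ms
        c = (u - u ** (2 * H - 1)) / (u - 1)       # c_LSV(m, H), eq. (141)
        a11 = np.sum(c ** 2 * ms ** (4 * H) / w)
        a12 = np.sum(c * ms ** (2 * H) * s2 / w)
        return np.sum(s2 ** 2 / w) - a12 ** 2 / a11 + H ** (q + 1) / (q + 1)  # eq. (144)

    res = minimize_scalar(phi, bounds=(0.001, 0.999), method="bounded")
    return res.x
```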
For the high order procedure LocMinSolver involved in EstHurstLSV, please see **Algorithm 24**. Note that the procedure ObjFunLSV, the first argument of LocMinSolver, is given in **Algorithm 23**.
```
0: Time sequence \(\mathbf{X}\), weight \(p\in\{0,1,2,\cdots\}\), penalty parameter \(q\in\mathbb{N}\) with default value \(q=50\), precision \(\epsilon\) with default value \(\epsilon=10^{-4}\).
0: Hurst exponent of the sequence \(\mathbf{X}\).
1:functionEstHurstLSV(\(\mathbf{X},p,q,\epsilon\))
2:\(N\leftarrow\textsc{GetLength}(\mathbf{X})\);
3:\(m_{\max}\leftarrow\lfloor N/10\rfloor\);
4:\(\mathbf{T}\leftarrow\langle 1,2,\cdots,m_{\max}\rangle\);
5:\(\mathbf{S}\leftarrow\mathbf{0}\in\mathbb{R}^{m_{\max}\times 1}\); // For the standard deviation
6:for idx\(\in\langle 1,2,\cdots,m_{\max}\rangle\)do
7:\(m\leftarrow\texttt{idx}\); // size of the subsequences
8:\(k\leftarrow\lfloor N/m\rfloor\); // number of subsequences
9:\(\mathbf{Z}\leftarrow\mathbf{0}\in\mathbb{R}^{k\times 1}\);
10:for\(i\in\langle 1,2,\cdots,k\rangle\)do
11:\(Z_{i}\leftarrow\sum\limits_{j=1}^{m}X_{(i-1)m+j}\);
12:endfor
13:\(S_{\texttt{idx}}\leftarrow\mathcal{S}_{i}^{1:k}\left\{Z_{i}\right\}\);
14:endfor
15:\(H\leftarrow\textsc{LocMinSolver}(\textsc{ObjFunLSV},\ [0.001, 0.999],\epsilon,N,p,q,\mathbf{T},\mathbf{S})\);
16:return\(H\);
17:endfunction
```
**Algorithm 22** Computing the \(c_{\mathrm{LSV}}(m,H)\)

The procedure FunCLSV listed in **Algorithm** 22 is used to compute the \(c_{\mathrm{LSV}}(m,H)\).
```
1:Positive integer \(m\in\{1,2,\cdots,m_{\max}\}\), parameter \(H\in(0,1)\), positive integer \(N\)
2:The value of \(c_{\mathrm{LSV}}(m,H)\)
3:functionFunCLSV(\(m,N,H\))
4:\(u\gets N/m\);
5:\(c\leftarrow(u-u^{2H-1})/(u-1)\);
6:return\(c\);
7:endfunction
```
**Algorithm 23** Objective function \(\Phi_{\mathrm{LSV}}(H)\) for the LSV method

The procedure ObjFunLSV is used to compute the objective function \(\Phi_{\mathrm{LSV}}(H)\) in the LSV method for estimating the Hurst exponent.
```
1:Hurst exponent \(H\), length \(N\), weight \(p\), penalty parameter \(q\), scale vector \(\mathbf{T}=\left[1,2,\cdots,m_{\max}\right]^{\mathsf{T}}\) and standard deviation vector \(\mathbf{S}=\left[s_{1},s_{2},\cdots,s_{m_{\max}}\right]^{\mathsf{T}}\).
2:The value of \(\Phi_{\mathrm{LSV}}(H)\) for the LSV method
3:functionObjFunLSV(\(H,N,p,q,\mathbf{T},\mathbf{S}\))
4:\(m_{\max}\leftarrow\textsc{GetLength}(\mathbf{T})\);
5:\(b_{2}\leftarrow\dfrac{H^{q+1}}{q+1}\);
6:\(a_{11}\gets 0,a_{12}\gets 0,b_{1}\gets 0\);
7:for idx\(\in\{1,2,\cdots,m_{\max}\}\)do
8:\(m\gets T_{\texttt{idx}}\);
9:\(s_{m}\gets S_{\texttt{idx}}\);
10:\(c_{m}\leftarrow\textsc{FunCLSV}(m,N,H)\);
11:\(u\gets m^{p}\);
12:\(b_{1}\gets b_{1}+s_{m}^{4}/u\);
13:\(a_{11}\gets a_{11}+c_{m}^{2}\cdot m^{4H}/u\);
14:\(a_{12}\gets a_{12}+c_{m}\cdot m^{2H}\cdot s_{m}^{2}/u\);
Formally, the data set for the verification and validation is \[\left\{X_{j}^{(\star,i)}:1\leq j\leq 10^{4}\right\},\quad 1\leq i\leq n \tag{146}\] where \[\star\in\left\{\mathcal{N}(0,1),\chi^{2}(1),\mathrm{GE}(0.25),\mathcal{P}(5),\mathrm{Exp}(1),\mathcal{U}(0,1)\right\}\] denotes the type of distribution. Let \(\hat{H}_{\diamond}^{(\star,i)}\) be the Hurst exponent estimated from the \(i\)-th sample sequence \(\left\{X_{j}^{(\star,i)}\right\}\) with the estimation method \(\diamondsuit\), then we have \[\overline{\hat{H}_{\diamond}^{\star}}=\frac{1}{n}\sum_{i=1}^{n}\hat{H}_{ \diamond}^{(\star,i)} \tag{147}\] where \[\diamondsuit\in\left\{\texttt{AM},\texttt{AV},\texttt{GHE},\texttt{HM}, \texttt{DFA},\texttt{RS},\texttt{TTA},\texttt{PM},\cdots,\texttt{LSSD}, \texttt{LSV}\right\}.\] denotes the estimation method. For \(n=30\) times repetitive experiments, the values of \(\overline{\hat{H}_{\diamond}^{\star}}\) are shown in **Table**3 for the short-correlated random sequences. Note that we set the window size \(w=50\) for calculating the optimal length \(N_{\mathrm{opt}}\) and \(\texttt{flag}=2\) for the minimum \(\ell_{2}\)-norm method in linear regression. Obviously, the Hurst exponents estimated with the thirteen algorithms mentioned above fluctuates in the range \(0.45\sim 0.55\), which coincides with the conclusion presented of by Chen et al [47]. \begin{table} \begin{tabular}{|c|c c c c c c c c c c c c c|} \hline \(\diamondsuit\) & AM & AV & GHE & HM & DFA & R/S & TTA & PM & AWC & VVL & LW & LSSD & LSV \\ \hline \(\mathcal{N}(0,1)\) & 0.4942 & 0.4980 & 0.5012 & 0.4994 & 0.4919 & 0.4992 & 0.4971 & 0.5067 & 0.4971 & 0.5258 & 0.5002 & 0.4977 & 0.4987 \\ \(\chi^{2}(1)\) & 0.4985 & 0.4975 & 0.4970 & 0.5234 & 0.4931 & 0.4741 & 0.5133 & 0.4992 & 0.5075 & 0.4642 & 0.4982 & 0.5001 & 0.4995 \\ \(\mathrm{GE}(0.25)\) & 0.5007 & 0.4966 & 0.5003 & 0.5155 & 0.4992 & 0.4923 & 0.5101 & 0.5105 & 0.5086 & 0.4888 & 0.5003 & 0.5009 & 0.5007 \\ \(\mathcal{P}(5)\) & 0.4971 & 0.4931 & 0.4999 & 0.5020 & 0.4974 & 0.4966 & 0.4940 & 0.5019 & 0.4978 & 0.5071 & 0.5001 & 0.4999 & 0.5001 \\ \(\mathrm{Exp}(1)\) & 0.4837 & 0.4749 & 0.5014 & 0.5128 & 0.5030 & 0.4839 & 0.5105 & 0.4784 & 0.4946 & 0.4947 & 0.5010 & 0.5024 & 0.5021 \\ \(\mathcal{U}(0,1)\) & 0.4853 & 0.4822 & 0.5002 & 0.4896 & 0.5049 & 0.4959 & 0.4987 & 0.4975 & 0.5013 & 0.5266 & 0.5001 & 0.5022 & 0.5017 \\ \hline \end{tabular} \end{table} Table 3: Estimation results of Hurst exponent value for short-correlated random sequences such that \(H\sim 0.5\) ### Estimation with Fractal Gaussian Noise Sequence To test the accuracy of the algorithm in estimating the Hurst exponent values for sequences, we constructed multiple sets of experiments using FGN sequences with Hurst exponent values between \(0.3\) and \(0.8\). In each set of experiments, we generated \(n=30\) sets of sample sequences with a length of \(N=3\times 10^{4}\) using the FGN sequence generator provided by **Algorithm** 1. Formally, we have \[\overline{\hat{H}_{\diamond}^{\text{fgn}}}=\frac{1}{n}\sum_{i=1}^{n}\hat{H}_{ \diamond}^{(\text{fgn},i)}. \tag{148}\] For \(n=30\) times repetitive experiments, the values of \(\overline{\hat{H}_{\diamond}^{\text{fgn}}}\) are shown in **Table** 4. The parameters \(w=50\) and \(\texttt{flag}=2\) are set the same as that for **Table** 3. Now please recall the **Figure** 2 for the classification of estimation methods. In **Table** 4, it can be observed that: * The TTA method exhibits excellent accuracy in the time-domain. 
* The spectrum-domain methods are superior to the time-domain methods in general. The PM, AWC and VVL methods give similar accuracies, however the LW method is a little inferior. * For Bayesian methods, the LSSD and LSV produce results of high accuracies. * When the sequence has long-term autocorrelation (\(H>0.5\)), time-domain algorithms produce underestimated values for the Hurst exponent, whereas the spectrum-domain algorithms work very well, which is consistent with the results discovered by Chen et al [47]. ### Relative Error of Estimating Hurst Exponent With the help of the controllable Hurst exponent \(H^{\text{fgn}}\) for the FGN sequences, we can compare the difference of various estimation methods mentioned above. For the sequence \(\left\{X_{j}:1\leq j\leq N\right\}\) generated from the FGN sequences, suppose the estimated Hurst exponent in the \(i\)-th experiment in \(n\) repeatable experiments with estimation method \(\diamondsuit\) is \(\hat{H}_{\diamond}^{i}\). \begin{table} \begin{tabular}{|c|c c c c c c c c|c c c c|} \hline \multirow{2}{*}{\(H^{\text{fgn}}\)} & \multirow{2}{*}{AM} & \multirow{2}{*}{AV} & \multirow{2}{*}{GHE} & \multirow{2}{*}{HM} & \multirow{2}{*}{DFA} & \multirow{2}{*}{R/S} & \multirow{2}{*}{TTA} & \multirow{2}{*}{PM} & \multirow{2}{*}{AWC} & \multirow{2}{*}{VVL} & \multirow{2}{*}{LW} & \multirow{2}{*}{LSSD} & \multirow{2}{*}{LSV} \\ \hline 0.30 & 0.3023 & 0.2984 & 0.3006 & 0.3006 & 0.3078 & 0.3692 & 0.3003 & 0.3099 & 0.2919 & 0.3126 & 0.2629 & 0.3002 & 0.3003 \\ 0.35 & 0.3521 & 0.3495 & 0.3501 & 0.3500 & 0.3502 & 0.4056 & 0.3500 & 0.3614 & 0.3426 & 0.3592 & 0.3239 & 0.3496 & 0.3497 \\ 0.40 & 0.4074 & 0.4044 & 0.3994 & 0.3999 & 0.3981 & 0.4468 & 0.3988 & 0.4047 & 0.3952 & 0.4085 & 0.3844 & 0.4001 & 0.3998 \\ 0.45 & 0.4365 & 0.4353 & 0.4507 & 0.4511 & 0.4544 & 0.4856 & 0.4472 & 0.4369 & 0.4372 & 0.4402 & 0.4429 & 0.4503 & 0.4504 \\ 0.50 & 0.5000 & 0.4956 & 0.4991 & 0.4991 & 0.4995 & 0.5293 & 0.4994 & 0.5037 & 0.4989 & 0.4993 & 0.4994 & 0.4983 & 0.4985 \\ 0.55 & 0.5425 & 0.5397 & 0.5511 & 0.5512 & 0.5516 & 0.5702 & 0.5510 & 0.5561 & 0.5431 & 0.5430 & 0.5577 & 0.5521 & 0.5519 \\ 0.60 & 0.5869 & 0.5849 & 0.5999 & 0.6001 & 0.5991 & 0.6102 & 0.6017 & 0.5851 & 0.5985 & 0.6005 & 0.6127 & 0.6000 & 0.6003 \\ 0.65 & 0.6411 & 0.6394 & 0.6478 & 0.6474 & 0.6485 & 0.6539 & 0.6482 & 0.6524 & 0.6491 & 0.6432 & 0.6652 & 0.6492 & 0.6488 \\ 0.70 & 0.6679 & 0.6648 & 0.6984 & 0.6988 & 0.7057 & 0.6853 & 0.6994 & 0.6824 & 0.6955 & 0.6964 & 0.7213 & 0.7002 & 0.7000 \\ 0.75 & 0.7192 & 0.7169 & 0.7470 & 0.7472 & 0.7516 & 0.7226 & 0.7484 & 0.7459 & 0.7519 & 0.7447 & 0.7747 & 0.7496 & 0.7496 \\ 0.80 & 0.7636 & 0.7635 & 0.7948 & 0.7948 & 0.7978 & 0.7551 & 0.7990 & 0.7955 & 0.8052 & 0.7998 & 0.8288 & 0.7997 & 0.8002 \\ \hline \end{tabular} \end{table} Table 4: Comparison of estimation accuracy of the 13 algorithms by using FGN sequences with given Hurst exponent The relative error of estimation can be defined by \[\eta_{\Diamond}^{i}=\frac{\left|\hat{H}_{\Diamond}^{i}-H^{\text{fgn}}\right|}{H^ {\text{fgn}}}\times 100\%,\quad 1\leq i\leq n \tag{149}\] then the average relative error must be \[\eta_{\Diamond}=\frac{1}{n}\sum_{i=1}^{n}\eta_{\Diamond}^{i}. \tag{150}\] In consequence, for different estimation \(\Diamond\), we can compare their relative error to evaluate their performances. **Figure** 6 illustrates the values of \(\eta_{\Diamond}\) in \(n=100\) repeatable experiments with various estimation methods for the Hurst exponent of the FGN sequences. 
The results of the time-domain methods and the spectrum-domain methods are displayed in **Figure** 6(a) and **Figure** 6(b) respectively. Please note that the relative errors for the two Bayesian statistical methods (LSSD and LSV) and the TTA-method are not shown in the figure due to their small values (about \(1.5\times 10^{-3}\)). In the **Figure** 6(a) for the time-domain methods, as the Hurst exponent value of the FGN sample sequence increases, the estimation error of most methods exhibits an upward trend. On the contrary, the relative error for the R/S method decreases with the nominal value of Hurst exponent. An interesting phenomena is that the relative error of the DFA method remains stable and its value is smaller than \(5\times 10^{-3}\). In the **Figure** 6(b) for the spectrum-domain methods, all methods achieve relatively accurate estimation results. However, the error curve of the LW method exhibits a symmetric distribution around \(H=0.5\). To address the issue of larger errors in the lower range of \(H^{\text{fgn}}\) for the R/S method, we can improve the estimation precision by constructing the revised statistical \(\mathscr{R}_{\text{T}}^{\text{JL}}\) as given in equation (101). ### Impacts of Norm and Optimization Method on Estimation Performance In the experiment mentioned in sub-section 5.2, we also explored the application of different linear regression methods in the estimation of the Hurst exponent. For example, in three sets of experiments with \(H^{\text{fgn}}\in\{0.3,0.5,0.8\}\), we selected the \(\ell_{1}\)-norm and \(\ell_{2}\)-norm as the optimization methods for linear regression. We fitted the parameters \(\mathbf{A},\mathbf{b}\) obtained by each algorithm and calculated the relative errors by equation (149), the results as shown in Figure 7. Figure 6: Comparison of Hurst Estimators with FGN sequence It can be observed that most algorithms for estimating the Hurst exponent have relative estimation errors controlled at below 6%. In general, the choice off \(\ell_{1}\)-norm and \(\ell_{2}\)-norm optimizaiton is not essential. When sample size is small, we recommend using the minimum \(\ell_{1}\)-norm fitting. ### Hurst Exponent of Reaction Time Sequence In the study conducted by Lauren et al. in 2021 [56], a series of speech testing experiments were designed, which included the _human-speaker_ (HS) test and the _text-to-speech_ (TTS) test. The _reaction time_ (RT) data from 20 participants in both test groups were collected and made publicly accessible for retrieval1. In 2023, Likens applied two Bayesian methods and the DFA method to estimate the Hurst exponent of the reaction time data [16]. The results showed that all these reaction time sequences exhibited long-range memory characteristics (\(H>0.5\)). In this study, we also employed these data to evaluate the accuracy of the 13 methods discussed above (with a window size parameter set to \(w=50\) and the \(\ell_{2}\)-norm optimization for the linear regression). The experimental results are illustrated in **Figure** 8. Footnote 1: [https://royalsocietypublishing.org/doi/suppl/10.1098/rsif.2021.0272](https://royalsocietypublishing.org/doi/suppl/10.1098/rsif.2021.0272) In **Figure** 8, the box-body illustrates the Hurst exponent values of the reaction time sequences exhibited by 20 experimental subjects in different tests under the estimation. It is obvious that both the HS-test data set and the TTS-test data set demonstrate long-term memory characteristics in the estimated results(\(H>0.5\)). 
This experimental result aligns with the conclusion obtained by Zhang et al. in their earlier work in 2017 [28]. ### Discussion The results from subsection 5.3 indicates that the estimation accuracy of spectrum-domain methods is significantly superior to that of time-domain methods, with the two Bayesian methods demonstrating the highest precision. In terms of method selection strategy, we have the following observations: * Time-domain methods exhibit good interpretation and no advanced programming skills are needed for implementing the estimation algorithms, whereas the implementation process of spectrum-domain methods relies on more advanced mathematical tools such as the FFT and wavelet analysis. * With the time-domain methods, we can effectively demonstrate the correspondence between partial statistical properties of sequences and sample scales (typically represented by a straight line). This is also why time-domain methods are highly popular and widely applied. * The Bayesian methods provides good accuracy, but the optimization algorithms or fixed-point algorithms are necessary. In addition to the 13 estimation methods mentioned above, there are other methods not mentioned in this paper. For example, the _maximum likelihood estimation_ (MLE) method for estimating the Hurst exponent is known for its implementation difficulty and high time complexity. For the interested readers, please refer to the references [57, 58, 59]. In 2022, Gomez et al. proposed the _Kolmogorov-Smirnov_ (KS) method based on the GHE method and TTA method [60]. The KS method estimates the Hurst exponent by calculating the Kolmogorov-Smirnov (KS) statistic distance between the empirical distributions of samples [61]. Gomez et al. [60] provided the Python code for the KS method and it is omitted here. In our experiment, it has been observed that for certain shorter time sequences [29], most estimation methods fail to produce accurate results. However, there exist a few methods that have good estimation performance, such as the GHE method, LW method, and LSVmethod. As for other time-domain methods, such as the R/S method, we can take the linear interpolation technique and use interpolation points to construct the sample sequences of lengths \(\{N/2,N/4,\cdots\}\). This approach enables the application of the R/S method to shorter sequence lengths. Figure 7: Relative Error for two linear regression method ### Code Availability The code for the implementations of the algorithms discussed in this paper can be downloaded from the following GitHub website [https://github.com/GrAbsRD/HurstExponent](https://github.com/GrAbsRD/HurstExponent) For the convenience of easy usage, both Python and Octave/MATLAB codes are provided. ## 6 Conclusion In this paper, we summarized 13 methods for estimating the Hurst exponent and categorized them into different classes based on different strategies: 1. time-domain methods and spectrum-domain methods based on the the representation of time sequence; 2. linear regression methods and Bayesian methods based on the parameter estimation method. Both the mathematical principle and algorithmic pseudo-codes are provided for these 13 methods, which helps the researchers and potential users to implement these methods with concrete programming language based on a unified framework. Our contributions are summarized as follows: * A general sequence partition framework was proposed for various time-domain methods based on the optimal approximate length and feasible sequence grouping approach. 
* The fixed-point algorithm, local minimum search algorithm, and linear regression method based on \(\ell_{1}\)-norm are applied to improve the accuracy of estimating the Hurst exponent with available estimation methods. * The estimation methods are classified with two perspectives, viz. the sequence representation and parameter estimation. Figure 8: Estimation of reaction time sequence in HS-test and TTS-test by each method * The sequences generated via FGN and pure random sequences are used to design a series of experiments to test the accuracy of the 13 estimators discussed above. * The flowcharts of R/S method and DFA method are provided for helping the readers to understand the essence and steps of the algorithms concerned. The numerical experiments and error analysis for the 13 estimation methods shows that: * The estimation accuracy of spectrum-domain methods is superior to time-domain methods, with a relative error of less than 6% in general. * When the value of Hurst exponent is small (say \(H<0.35\)), the relative errors of the estimation obtained by the R/S method and LW method are significantly larger than 5%; * The choice of \(\ell_{1}\)-norm and \(\ell_{2}\)-norm has little impact on the estimation accuracy. * The estimation with the practical data captured from the human behavioral experiment available online implies that each estimation method can effectively reveal the long-term memory features of the sequences, which confirming the suitability of the 13 methods. For the off-line applicaitons where the Hurst exponent in involved, we recommend the TTA method for the time-domain methods, the LSSD and LSV for the Bayesian methods, and the PM method for the spectrum-domain methods. For the real-time applications in which the Hurst exponent should be estimated dynamically, we recommend the R/S method to reduce the computational complexity since both the range and standard deviation can be estimated iteratively with the time clock. #### Acknowledgments This work was supported in part by the National Natural Science Foundation of China under grant number 62167003, and in part by the Hainan Provincial Natural Science Foundation of China under grant number 720RC616. ## Appendix A Algorithm for Local Minimization on Interval \([a,b]\) The algorithm for finding a local minimum of a function of a single variable is as follows: ``` 0: Function \(\phi\); Interval \([a,b]\), precision \(\epsilon\), extra parameters \(\langle\cdots\rangle\). 0: A local minimum for \(\phi\) in \([a,b]\). 1:functionLocMinSolver\((\phi,[a,b],\epsilon,\langle\cdots\rangle)\) 2:\(c\leftarrow(3-\sqrt{5})/2;d\gets 0;e\gets 0\); 3:\(v,w,x\gets a+c\cdot(b-a)\); 4:\(f_{v},f_{w},f_{x}\leftarrow\phi(x,\langle\cdots\rangle)\); 5:while True do 6:\(m\leftarrow(a+b)/2\); 7:\(t_{1}\leftarrow\epsilon^{2}\cdot|x|+\epsilon/3\); 8:if\(|x-m|\leq t_{1}^{2}-(b-a)/2\)then 9: break; 10:endif 11:if\(|e|>t_{1}\)then 12:\(r\leftarrow(x-w)\cdot(f_{x}-f_{v})\); 13:\(q\leftarrow(x-v)\cdot(f_{x}-f_{w})\); 14:\(p\leftarrow(x-v)\cdot q-(x-w)\cdot r\); 15:\(q\gets 2(q-r)\); 16:if\(q>0\)then ``` **Algorithm 24** A local minimum of real valued function, say \(\phi:[a,b]\times\cdots\rightarrow\mathbb{R},(x,\cdots)\mapsto y\). 
\(p\leftarrow-p\); * **else** * \(q\leftarrow-q\); * **endif** * \(r\gets e;\ e\gets d\); * **endif** * **if**\(|p|\geq\frac{|qr|}{2}\lor p\leq q(a-x)\lor p\geq q(b-x)\)**then** * **if**\(\chi_{\mbox{im}}\)**then** * \(e\gets b-x\); * **else** * \(e\gets a-x\); * **endif** * \(d\gets c\cdot e\); * **else** * \(d\gets p/q\); \(u\gets x+d\); * **if**\((u-a)<t_{1}^{2}\vee(b-u)<t_{1}^{2}\)**then** * **if**\(x<m\)**then** * \(d\gets t_{1}\); * **else** * \(d\leftarrow-t_{1}\); * **endif** * **endif** * **if**\(|d|\geq t_{1}\)**then** * \(u\gets x+d\); * **else** * \(u\gets x+t_{1}\); * **else** * \(u\gets x-t_{1}\); * **endif** * \(f_{u}\leftarrow\phi(u,\langle\cdots\rangle)\); * **if**\(f_{u}\leq f_{x}\)**then** * **if**\(u<x\)**then** * \(b\gets x\); * **else** * \(a\gets x\); * **endif** * \(v\gets w;f_{v}\gets f_{w};w\gets x;f_{w}\gets f_{x}\); * \(x\gets u;f_{x}\gets f_{u}\); * **else** * **if**\(u<x\)**then** * \(a\gets u\); * **else** * \(b\gets u\); * **endif** * **if**\(f_{u}\leq f_{w}\lor w=x\)**then** * \(v\gets w;f_{v}\gets f_{w};w\gets u;f_{w}\gets f_{u}\); * **else if**\(f_{u}\leq f_{v}\lor v=x\lor v=w\)**then** * \(v\gets u;f_{v}\gets f_{u}\); * **else** * [66]**endif** * [67]**endif** * [68]**end while** * [69]**return**\(x\); * [70]**end function**
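In practice, the role of LocMinSolver can also be played by an off-the-shelf bounded scalar minimiser. The sketch below applies SciPy's bounded routine to the Local Whittle objective \(\psi(H)\) of equation (127); the substitution of SciPy for Algorithm 24 and the periodogram normalisation (which does not change the location of the minimum) are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lw_hurst(x):
    """Local Whittle estimate: minimise psi(H) of eq. (127) over (0, 1)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = N // 2
    spectrum = np.abs(np.fft.fft(x))[1:n + 1] ** 2 / N    # periodogram ordinates I_j
    lam = 2.0 * np.pi * np.arange(1, n + 1) / N           # Fourier frequencies, eq. (123)

    def psi(H):
        return (np.log(np.mean(lam ** (2 * H - 1) * spectrum))
                - (2 * H - 1) * np.mean(np.log(lam)))      # eq. (127)

    res = minimize_scalar(psi, bounds=(0.001, 0.999), method="bounded")
    return res.x
```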
2303.06049
Affordable Artificial Intelligence -- Augmenting Farmer Knowledge with AI
Farms produce hundreds of thousands of data points on the ground daily. Farming technique which combines farming practices with the insights uncovered in these data points using AI technology is called precision farming. Precision farming technology augments and extends farmers' deep knowledge about their land, making production more sustainable and profitable. As part of the larger effort at Microsoft for empowering agricultural labor force to be more productive and sustainable, this paper presents the AI technology for predicting micro-climate conditions on the farm. This article is a chapter in publication by Food and Agriculture Organization of the United Nations and International Telecommunication Union Bangkok, 2021. This publication on artificial intelligence (AI) for agriculture is the fifth in the E-agriculture in Action series, launched in 2016 and jointly produced by FAO and ITU. It aims to raise awareness about existing AI applications in agriculture and to inspire stakeholders to develop and replicate the new ones. Improvement of capacity and tools for capturing and processing data and substantial advances in the field of machine learning open new horizons for data-driven solutions that can support decision-making, facilitate supervision and monitoring, improve the timeliness and effectiveness of safety measures (e.g. use of pesticides), and support automation of many resource-consuming tasks in agriculture. This publication presents the reader with a collection of informative applications highlighting various ways AI is used in agriculture and offering valuable insights on the implementation process, success factors, and lessons learnt.
Peeyush Kumar, Andrew Nelson, Zerina Kapetanovic, Ranveer Chandra
2023-03-04T02:29:52Z
http://arxiv.org/abs/2303.06049v1
# Affordable Artificial Intelligence - Augmenting Farmer Knowledge with AI

###### Abstract

We present DeepMC, a novel and efficient framework for predicting micro-climate data on a farm. It is built on a new deep learning approach which provides a comprehensive solution to the problem of predicting micro-climates on farms. DeepMC predicts various climatic parameters such as soil moisture, humidity, wind speed and temperature, based on the requirement, over a period of 12 hours - 120 hours with a varying resolution of 1 hour - 6 hours. This article presents multiple case studies and results from live deployments of DeepMC. On average, 90%+ accuracy is reported.

## 1 Introduction

It is the month of April and a farm in Eastern Washington, USA is producing wheat and lentil crops. The spring is just settling in while the low temperatures are slightly above freezing. The farmer is getting ready to spray his fields as the conditions become safe from winter runoff and frost[1]. The plants are significantly susceptible to certain herbicides at freezing temperatures; therefore, the farmer consults the local weather station for temperature forecasts, which is located in the closest metropolitan valley about 50 miles away from the farm. The 3-day predictions show consistent temperatures above freezing point. The farmer readies equipment, orders chemicals, and starts spraying the farm. On a couple of nights the temperature in certain parts of the field drops below freezing and kills around 30% of the crops. Despite the availability of weather forecasts from commercial weather stations, this is a common situation which can affect up to 50% of the crops[1, 2, 3]. This is because the climatic parameters around the plant not only differ from those at the nearest weather stations but also vary between regions of the farm.

Artificial intelligence (AI) technologies are key to tackling this kind of problem and many more as farmers face the challenge of feeding a growing population. The AI market is projected to be USD 1.0 billion by the end of 2020 and is estimated to reach USD 4.0 billion by 2026, at a CAGR of 25.5% between 2020 and 2026. AI technologies help yield healthier crops, control pests, monitor soil and growing conditions, organize data for farmers, help with workload, and improve a wide range of agriculture-related tasks in the entire food supply chain. Additionally, as the climate changes, the world becomes more connected and growing populations increase the burden on natural resources. AI technologies are helping in a major way to create more sustainable farming practices by decreasing water wastage and the overuse of chemicals on farms, and by conserving energy usage.

Farms produce hundreds of thousands of data points on the ground daily. The farming technique which combines farming practices with the insights uncovered in these data points using AI technology is called _precision farming_. Precision farming technology augments and extends farmers' deep knowledge about their land, making production more sustainable and profitable. As part of the larger effort at Microsoft for empowering the agricultural labor force to be more productive and sustainable, this paper presents the AI technology for predicting micro-climate conditions on the farm. Micro-climate is the accumulation of climatic parameters formed around an (approximately) homogeneous and relatively small region[4, 5].
Knowledge of micro-climate and micro-climate predictions are of importance in agriculture[6, 7], forestry[8], architecture[9], urban design[10], ecology conservation[11], maritime[12] and many other domains. DeepMC predicts various micro-climate parameters with 90%+ accuracy at IoT sensor locations deployed in various regions across the world. This article presents an outline and impact of a micro-climate prediction framework - DeepMC. This framework is based on a new deep learning approach which provides a comprehensive solution to the problem of predicting micro-climates on farms. DeepMC predicts various climatic parameters such as soil moisture, humidity, wind speed, temperature based on the requirement over a period of 12 hours - 120 hours with varying resolution of 1hour-6hours. This article presents multiple case studies and results from live deployments of DeepMC. On average 90%+ accuracy is reported. ## 2 Context Deploying AI solutions for predicting micro-climate on farms is a challenging problem. First, data needs to be collected from the farm before it is processed through an AI service. Second, this data needs to be transferred from the location it is collected to the cloud where it is processed through the AI service forecasting micro-climate. One of the biggest challenges with deploying IoT systems for data-driven agriculture is connectivity. Since most farms are located in rural areas, there is often little to no Internet connectivity and this is crucial when it comes to enabling seamless data collection. Consider the following scenario, where we want collect data on a farm that spans thousands of acres. This would require deploying sensors across the entire farm field, which all need connectivity to convey information and in turn allow us to enable applications such as micro-climate prediction or precision irrigation. Bringing this to fruition becomes even more challenging when considering the typical farming terrain. That is, signals must be able to travel through dense crop canopy at long-distance, often with no line-of-sight. Third, the methodology used to build AI models which forecast micro-climates needs to be accurate, reliable for daily use, replicable across farms and adaptable for various use cases. Climatic parameters are stochastic (random process) in nature and quite challenging to model for prediction tasks on farms. * **High prediction accuracy**: Generating high accuracy results is an obvious challenge for any real world deployment of a machine learning solution. In the context of micro-climate predictions, small quantity of labelled datasets, heterogeneity of features and non-stationary of input features makes the learning problem to generate highly accurate results quite challenging. * **Reliability for frequent use**: Non stationarity of the climatic time series data makes it difficult to reliably characterize the input-output relationships. Each input feature effects the output variable at a different temporal scale, for example the effect of precipitation on soil moisture is instantaneous while the effect of temperature on soil moisture is accumulated over time. * **Replicable for farms across the world**: Any system for micro-climate predictions is expected to perform across various terrains, geographic and climate conditions. In practice, good quality labelled data is generally not available and even if it is accessible it is not available for every terrain, geographic or climatic conditions. 
Therefore, smarter techniques are required to transfer a model learned in one domain to another domain with little paired labelled data. * **Adaptable for various use cases**: Different prediction tasks rely on different sets of predictors, such as ambient temperature, wind speed and precipitation. This creates a challenge for a machine learning system, which needs to accept vectors of varying dimensions as input in order to replicate predictions for different use cases. Footnote 1: [https://rdlecom.com/tv-white-space/](https://rdlecom.com/tv-white-space/) Footnote 2: [https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft/farmbeats.microsoft_farmbeats](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft/farmbeats.microsoft_farmbeats) Footnote 3: [https://darksky.net/dev](https://darksky.net/dev) Footnote 4: [https://www.necdc.noaa.gov/cdo-web/webservices/v2](https://www.necdc.noaa.gov/cdo-web/webservices/v2) Lastly, the output information needs to be presented in a way which can be consumed by the end user (the producer/farmer in most cases) to aid their decision making. ## 3 Methodology DeepMC addresses the problems outlined above. It uses the FarmBeats15 platform and TV White Spaces (TVWS) technology1 to address the problems of data collection, data transmission and data presentation. In addition, DeepMC also utilizes the nearest available weather station forecasts to learn the relationship between various climatic parameters. **FarmBeats**: DeepMC uses the FarmBeats15 platform to collect climatic and soil data from multiple sensors around the world. FarmBeats is an end-to-end IoT platform for data-driven agriculture, which provides consistent data collection from various sensor types with varying bandwidth constraints. We chose the FarmBeats system for this work because of its high system reliability and availability, especially during events such as power and Internet outages caused by bad weather - scenarios that are fairly common for a farm. The data collected by FarmBeats IoT sensors is persisted in the cloud and accessed there. We also use the FarmBeats platform dashboard to deliver micro-climate predictions to the end-users using their Azure marketplace offering2. **TVWS technology**: The challenge of transmitting data from farm locations to a compute unit is solved by utilizing a new technology called TV White Spaces (TVWS). TVWS are unused TV spectrum that can be leveraged to extend Internet connectivity to locations that can be 10s of miles away. This technology is particularly well suited to agricultural scenarios for two reasons. First, since farms are typically located in rural areas, there is a lot of unused TV spectrum available that provides large amounts of bandwidth for data transmissions (a single TV channel in the US has a 6MHz bandwidth). Second, TV spectrum spans the lower megahertz UHF and VHF bands, which is ideal for long-range communication even through dense canopy. **Weather Station Forecasts**: Weather station forecasts are collected for training and inference from commercial weather stations. Models in DeepMC are trained and tested with various weather data providers - DarkSky3, NOAA4, AgWeatherNet5, among others. ### DeepMC - A deep learning based framework for micro-climate predictions The prediction problem is solved using a deep learning approach that combines weather station forecasts and IoT sensor data in a special way. Each of the challenges identified in the third point of Section 2 is addressed. 
**1) Result accuracy:** Instead of predicting the climatic parameter directly, we predict the error between the nearest commercial weather station forecast and the local micro-climate forecast. This is based on the hypothesis that hyper-localization of weather station forecasts is easier to learn than learning the relationships of the predicted climatic parameter with the predictor climatic parameters from the ground up. DeepMC achieves acceptable accuracy using this design, with a reported 90%\(+\) accuracy (in terms of MAPE - mean absolute percentage error) across various regions in the world. **2) Reliability**: In order to reliably capture the varying effects of climatic data, a solution needs to capture multiple trends in the data in a stationary way. DeepMC utilizes a multi-scale decomposition approach to capture these effects. This approach decomposes the input signals into various scales capturing trends and details in the data, and allows them to be modelled in a repeatable and reliable way. **3) Replicability:** DeepMC utilizes a specialized deep learning model, called GAN16 (a generative adversarial network), to transfer learnings from source farms to target farms around the world. **4) Adaptability:** All of the techniques combined together in a specialized architecture enable adaptability across multiple use cases on the farm. Footnote 16: [https://www.weather.gov/documentation/services-web-api](https://www.weather.gov/documentation/services-web-api) ## 4 Impact DeepMC is used across many different regions of the world where FarmBeats15 technology is deployed. In this section, we present 3 agricultural applications which are a projection of common situations affected by weather conditions. We also show some results in comparison to common models used to solve prediction tasks. Footnote 15: [https://cs-docs.dnt.com/apis/weather-api/](https://cs-docs.dnt.com/apis/weather-api/) ### Scenario 1 - Spraying Herbicide: Micro-temperature predictions This scenario is the one presented in the Introduction. The farm, called Nelson Farm, is located in the eastern portion of Washington State in a region called the "Palouse". It is an area known for its rolling hills and crops such as wheat, lentils, peas, garbanzo beans, and canola. The area is at the foothills of mountain ranges, and the combination of the rolling hills and the surrounding terrain creates highly variable local growing conditions. Figure 1: DeepMC Micro-Climate temperature 6 day sequential predictions with a resolution of 6 hours. The landlord and the farmer of the land are both affected by the decisions the farmer makes. There are also many research test plots scattered throughout the area that benefit from the farmer's advice on when to do certain field operations. The farmer operates on approximately 9000 acres of land across a region which is quite hilly. There are many distinct micro-climate regions in this farm. Climatic parameters vary significantly among various regions of the farm and also between the nearest commercial weather forecast provider and the readings on the ground. The farmer uses DeepMC predictions for advisory on temperature forecasts at specific locations on his farm. We deployed TVWS and FarmBeats sensors at Nelson Farm. The farmer has Internet connectivity at his home, but it cannot cover the vast size of his farm that spans approximately 9000 acres across 45 miles. To remedy this, we deploy a TVWS base station that connects to the Internet at the farmer's home and extends the coverage to TVWS clients that are deployed out in the farm field (see Figure 2). 
The TVWS links between the base station and clients are up to 13 miles in this deployment. With the TVWS deployment we were able to deploy several FarmBeats sensor boxes on the farm to collect data (see Figure 3). These sensor boxes include many sensors such as wind speed and direction, ambient temperature, and soil moisture and temperature. Thus far we have had the FarmBeats system deployed on Nelson Farm for 24 months and it has provided numerous insights that helped improve overall productivity. For instance, in this scenario the farmer consults DeepMC for temperature predictions at specific locations to plan logistics and operations for spraying herbicide. These experiments were conducted in the Spring of 2019 and Spring of 2020; micro-climate predictions were used daily when spraying herbicides on the wheat, lentils, peas and garbanzo beans. They were used to plan the days on which the fields could be sprayed with certain herbicides, sometimes to try to avoid freezing weather and other times to avoid overly hot weather. Figure 1 shows a 6-day forecast with a temporal resolution of 6 hours. The figure shows a comparison of the results obtained by DeepMC with the Dark Sky weather forecast (from the nearest station) and the actual temperatures recorded using IoT sensors in retrospect. Based on DeepMC's predictions, the farmer postponed his spraying for the period between 07-April-2019 and 11-April-2019, as the temperatures predicted by DeepMC were below freezing. Had the farmer instead relied on weather station forecasts, which consistently showed temperatures above freezing (more than 5C), he would have been at risk of endangering the crop and losing up to 30% in yield. The farmer was also able to re-arrange his labor to other practices during the days when spraying herbicide wouldn't have been beneficial. In the fall season, it is used to give the farmer's employees more notice of when to arrive at the farm, since many operations cannot be done in freezing conditions. This advance notice gives employees more time to rest, where previously they had to wait for an early morning call in order to plan. In many places, especially smallholder farms, this percentage is significant enough to decide whether the farmers will be able to achieve basic sustenance of food and supplies in the coming year or not. For this particular farm and location, DeepMC predictions for ambient temperature have recorded an RMSE of 1.35 and a MAPE of 7.68% (implying an accuracy of 92.32%) for the data recorded in Figure 1. The predictors used for predicting micro-temperature are: a) from the IoT sensors - Ambient Temperature, Ambient Humidity, Precipitation, Wind Speed; b) from the weather station - historical ambient Temperature forecasts. This DeepMC design choice of correcting the nearest weather station forecast with local sensor data allows the same framework to be reused in the remaining scenarios. Figure 2: A TVWS deployment on Nelson farm. ### Scenario 2 - Phenotyping Research: Micro-soil-moisture predictions The producer is interested in experimenting with different growing techniques for vine tomatoes. The vine tomatoes are susceptible to rot if they are too close to soil with high moisture values. Generally, growers use trellises to lift up the vines and provide structural stability. The trellises add more challenges to managing the crops over the growing season. The producer here is interested in growing tomatoes without the trellises. This critically depends on being able to predict the local soil moisture values accurately. The producer uses DeepMC for advisory on micro-soil-moisture conditions. 
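For readers who want to reproduce the accuracy figures quoted in these case studies, the short sketch below shows how the RMSE, the MAPE and the derived "100% - MAPE" accuracy can be computed, together with the residual-correction idea from Section 3 (the model predicts the error of the nearest weather station forecast, and that correction is added back to obtain the local prediction). This is an illustrative calculation only: the array values are made up rather than taken from Nelson Farm, and the metric definitions are the standard ones, not DeepMC's exact implementation.

```python
# Illustrative computation of the metrics reported in the case studies.
# All numbers are invented for the example.
import numpy as np

station_forecast = np.array([22.0, 24.0, 25.5, 27.0, 26.0])   # nearest-station forecast (e.g. soil moisture, %)
predicted_error  = np.array([-2.5, -1.0,  0.5,  1.5,  2.0])   # correction predicted by the model
sensor_reading   = np.array([19.0, 23.5, 26.5, 28.0, 27.5])   # ground truth from the on-farm IoT sensor

# Hyper-localized micro-climate prediction = station forecast + predicted error.
micro_forecast = station_forecast + predicted_error

rmse = np.sqrt(np.mean((micro_forecast - sensor_reading) ** 2))
mape = np.mean(np.abs((micro_forecast - sensor_reading) / sensor_reading)) * 100.0
print(f"RMSE = {rmse:.2f}, MAPE = {mape:.2f}%, accuracy = {100.0 - mape:.2f}%")
```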
The results are shown in Figure 4, with a recorded RMSE value of 3.11 and a MAPE value of 14.03% (implying an 85.97% accuracy). The predictors used for predicting micro-soil-moisture are: a) from the IoT sensors - Ambient Temperature, Ambient Humidity, Precipitation, Wind Speed, Soil Moisture and Soil Temperature; b) from the weather station - historical Soil Moisture forecasts. ### Scenario 3 - Greenhouse control: Micro-humidity predictions In this scenario, the producer is storing garbanzo beans inside a grain tank. In order to control climate conditions inside the grain tank, the producer uses fans which pull the air from outside to regulate temperatures inside the greenhouse. The speed and duration of the fan control depend on the immediate humidity levels in the air outside. The producer consults DeepMC to advise in the decision making of grain tank fan control. The results are shown in Figure 5. The predictions are plotted for the 12th hour over a period of 1 week with a resolution of 1 hour. The RMSE recorded for these predictions is 5.54 and the MAPE is 5.09% (therefore, the MAPE accuracy recorded is \(100\%-5.09\%=94.91\%\)). The model was trained on a stock dataset from a different farm where sufficient paired data was available and transferred to this location. The predictors used for predicting micro-humidity are: a) from the IoT sensors - Ambient Temperature, Ambient Humidity, Precipitation, Wind Speed; b) from the weather station - historical ambient Humidity forecasts. ## 5 Innovation and success factors This work develops a comprehensive micro-climate prediction framework which can be used for multiple input-output paired climatic parameters. We highlight three real-world deployments which characterize a diverse set of conditions. We conduct a comprehensive validation of DeepMC across various regions around the world and various micro-climatic parameters. The predictions computed are being used through real-world deployments of the FarmBeats system in Ireland, Africa and various states in the United States. The results presented here are computed for predictions of local temperature, local wind speed, soil moisture, soil temperature and humidity. The framework is generalizable to other input-output combinations of the climatic parameters. Figure 3: A FarmBeats sensor deployed on Nelson farm. Figure 4: DeepMC Micro-climate soil moisture 4th day prediction with 6 hour resolution over a period of 10 months. Figure 5: DeepMC Micro-Climate Humidity prediction at the 12th hour with a resolution of 1 hour over a 1 week period. This framework is an example of how AI technologies can augment farmers' knowledge to help them make better decisions on the farm. We address some of the most difficult problems in precision agriculture, namely data collection on remote farms and the computation needed to surface insights from that data. The factors which contributed to the success of this work were the innovation that went into the research along with the collaboration with the farmers who were facing the problems we set out to solve. This close link between development and application site fueled the success of this project. ## 6 Sustainability **Environmental Sustainability:** AI-based solutions allow for better cost control by providing better predictions based on relatively affordable weather stations. Their data allows farmers to apply chemicals with better timing, allowing the chemicals to be as effective as possible. 
Many weeds that are controlled by chemicals are gaining resistance, so the more effective the chemical is at the time of application, the more likely it is that the weed will not develop resistance to it. This allows for less chemical application overall. The other way that it helps with environmental issues is that farmers are able to better monitor crops that are far apart geographically and not apply the same practice at each field, instead allowing each field to be managed according to its micro-climate for the current year. **Operational Sustainability:** We follow a partnership-driven model to promote uptake of the solution described in this paper. We work closely with some of the major organizations in agriculture8 (corporations, governments, cooperatives, consortiums, NGOs, etc.) who have a wider reach into the farmer ecosystem. The AI solutions are deployed in partnership with these organizations, where we share AI-generated insights on market advisories and on inputs/outputs to farm operations. This creates a synergistic environment and the right incentives for organizations to deploy these solutions on the farm, where the organizations benefit from the advisories (such as seed intake prediction, crop yield estimation, etc.) and farmers benefit from input insights (such as micro-climate predictions, irrigation advisory, etc.). This partnership-driven model has been found to scale adoption across a wide demography of farms around the globe. Footnote 8: For example: [https://www.businessinsider.com/microsoft-and-land-olakes-new-partnership-tackles-the-digital-divide-2020-7](https://www.businessinsider.com/microsoft-and-land-olakes-new-partnership-tackles-the-digital-divide-2020-7) Additionally, another key requirement to make AI solutions more practical and sustainable is the cost factor. The key innovations in the technology presented here decrease the cost of deploying sensors and digital operations on the farm. The FarmBeats platform makes it cheaper to deploy and integrate sensors directly into a central data hub through its key innovations in networking technology, storage and its compute-on-the-edge framework. This central data aggregation and deployment platform also enables running AI solutions for a fraction of the cost. In addition, the PaaS model allows for scale, which drives down costs per user. A key challenge for an AI/digital framework such as the one presented in this paper to be actionable on the farm is the gap between how producers think about and carry out their operations and the complexity of the technology, including digital literacy. As part of deploying the solution, we also developed tools to help educate the next generation of farmers on how technology can be utilized to make advancements in agriculture. We developed student kits, a plug-and-play platform designed to teach students about how data can be used to provide meaningful and actionable insights for farming applications. The student kit comes with several sensors (e.g. soil moisture, soil temperature, light intensity) and a Raspberry Pi running Windows IoT Core that is ready to interface with an IoT dashboard. The IoT dashboard displays all incoming sensor data that the students get to interact with, learn how to interpret, and use to make intelligent decisions for their farms. We partnered with the Future Farmers of America (FFA)9 to distribute FarmBeats student kits to FFA chapters across the United States. 
Moreover, we work with FFA to provide workshops and hackathons for teachers to learn how to educate students about ag-tech and develop lesson plans around the student kits, ultimately expanding each student's view of how farming practices can become more cost-effective, productive, and sustainable by leveraging AI and IoT technology. Footnote 9: [https://www.ffa.org/](https://www.ffa.org/) ## 7 Constraints There were a few challenges encountered during the development and deployment of the DeepMC framework. Section 2 describes the development challenges and how the solution presented here addresses them. Operationally, the challenges mentioned in Section 6 on Sustainability highlight the constraints on adopting this technology on farms and the steps taken to overcome them. ## 8 Replicability This framework is replicable and can be used in other contexts. The results presented in Section 4 show how DeepMC can be adapted for various farms across multiple conditions. As it stands, this framework is easily scalable in a cloud environment. DeepMC can also be used in other contexts where micro-climate predictions are needed, such as forestry, maritime environments, etc. In order to use this framework for non-farm conditions, the model will need to be retrained from scratch, but no change in the underlying architecture is required. ## 9 Testimony **Andrew Nelson, Nelson Farm**: _Utilizing AI allows farmers to have another tool at their disposal. The ability to quickly apply the results that AI models produce is a great advantage. The other large benefit is that AI can allow for other technologies to have more realized benefits. TVWS sensors that are placed throughout the farm can allow multiple predictive models for different terrain and micro-climates. AI has brought a new perspective on existing data; it can combine aerial imagery and ground soil moisture sensors to give insight on soil moisture that is not easy to see while walking through the farm, even if the farmer were to take a soil moisture reading at multiple locations. Farmers are then able to utilize more data to make decisions that would otherwise be difficult or too time consuming to analyze on their own. During busy seasons, farmers are already working during all available sunlight; any time savings allows the farmer more time to tend to their crops, which usually allows for higher yield potential. The future predictions that AI provides give farmers more insight on how to maximize their investment of time and money into the current crop. It has allowed for larger scale testing of different farming techniques that have improved farming practices in terms of profitability, sustainability, and sometimes both._
2301.10052
Event Detection in Football using Graph Convolutional Networks
The massive growth of data collection in sports has opened numerous avenues for professional teams and media houses to gain insights from this data. The data collected includes per frame player and ball trajectories, and event annotations such as passes, fouls, cards, goals, etc. Graph Convolutional Networks (GCNs) have recently been employed to process this highly unstructured tracking data which can be otherwise difficult to model because of lack of clarity on how to order players in a sequence and how to handle missing objects of interest. In this thesis, we focus on the goal of automatic event detection from football videos. We show how to model the players and the ball in each frame of the video sequence as a graph, and present the results for graph convolutional layers and pooling methods that can be used to model the temporal context present around each action.
Aditya Sangram Singh Rana
2023-01-24T14:52:54Z
http://arxiv.org/abs/2301.10052v1
# Event Detection in Football using Graph Convolutional Networks ###### Abstract The massive growth of data collection in sports has opened numerous avenues for professional teams and media houses to gain insights from this data. The data collected includes per frame player and ball trajectories, and event annotations such as passes, fouls, cards, goals, etc. Graph Convolutional Networks (GCNs) have recently been employed to process this highly unstructured tracking data which can be otherwise difficult to model because of lack of clarity on how to order players in a sequence and how to handle missing objects of interest. In this thesis, we focus on the goal of automatic event detection from football videos. We show how to model the players and the ball in each frame of the video sequence as a graph, and present the results for graph convolutional layers and pooling methods that can be used to model the temporal context present around each action. Action Spotting, Event Detection, Deep Learning, Ball-Player Detection, Graph Convolutional Networks, Sports Analytics ## I Introduction Sports have emerged as one of the highest revenue-generating applications of computer vision [1], with an annual market revenue of $91 billion [2], of which $28.7 billion [3] is generated solely by the European football industry, mostly from broadcasting and commercial activities. Computer vision has played a big role in enhancing the visual experience of live sports broadcasting through the integration of augmented and virtual reality. Machine learning has also revolutionized the sports industry in the way athletes train, how their game performances are analyzed and how coaches prepare their teams to tackle an opposition. For both sports analysts and broadcast producers, it is crucial to be able to identify and summarize all the events that occur within a game. Currently, this requires hours of manual annotation and a high-level understanding of the game being played. With around 10,000 football games scheduled for the five biggest leagues in Europe every year, creating an automatic event detection system would save hundreds of thousands of hours of manual annotation and speed up the process by a huge factor. It would also help in cutting down the high costs of production that can only be afforded by the top leagues, a cost that in turn leaves the majority of games from lower leagues and less popular sports uncovered. One of the key elements in automating any sport's statistical analysis is an accurate and efficient ball-player detection system. Detecting the ball from the long-shot scene of a football game is a challenging task. Komorowski et al., 2019 [4] discuss several factors that make localizing the ball on the football pitch difficult. The ball has a very small size compared to all the other objects present on the pitch and in the scene. Also, its size can vary a lot depending on its position on the pitch. The size of the ball can be as small as \(8\times 8\) pixels in a \(1920\times 1080\) or a \(3840\times 2160\) pixel image depending on the camera resolution. This can force the detector to output lots of false-positive values for the ball, as it may be difficult to differentiate the ball from the white socks and white shoes of the players, sometimes even being mistaken for a bald person's head [5]. Also, when in motion the ball may appear blurry and elliptical instead of its original circular shape, as can be seen in Fig. 2. 
Detecting the players on the pitch is easier compared to the ball since they are much larger in size. However, it may be difficult to detect players when they are occluded by another player. Positional data from football games is difficult to model because most machine learning algorithms require data and features to be arranged in a specific order. It is unclear how to construct a feature vector taking into account the individual features of the players and the ball, as there are no clearly defined rules on what order to use for selecting the players. Lucey et al. tried to overcome this ordering issue by assigning a specific 4-3-3 formation template to each team [6, 7] and then assigning each player a role in this arrangement. However, this model can easily violate its assumptions as teams use a variety of formations, so the roles of the defenders and attackers playing in a 3-5-2 formation would be very different from those in a 4-3-3 formation. Hence, we would not be modeling the roles of the players correctly, as can be seen in Fig. 3. Also, this formation template requires all players to be present on the pitch, and cannot be used when a player from a team has been sent off. Other papers [8, 9] have tried working with image representations of tracking data to overcome this ordering/alignment issue, and then processed them using Convolutional Neural Networks (CNNs). However, taking the tracking data, which is a compact, complete and low-dimensional representation of an object's movement, and converting it to an image, which is a sparse high-dimensional representation, is sub-optimal. Modeling our data with Graph Neural Networks (GNNs) can help us overcome these issues, as we can * remove the need to arrange the features in a certain order, * deal with a variable number of players on the pitch, and also handle missing tracks for the ball or players, and * learn local and high-level features directly from the tracking data. GNNs have recently gained a lot of popularity and have been employed in a diverse set of domains where data is more aptly represented as graphs or networks. For example, in chemistry molecules are modeled as graphs with atoms as nodes and the bonds representing the edges [10, 11]. In a social network, the interactions between the users can help us determine which accounts are fake or bots [12, 13]. Bronstein et al. [14] discuss in detail the history and development of the field of machine learning on graphs in their book on geometric deep learning. With the rapid development of ideas in this field, it is important to benchmark the performance of existing architectures against large datasets under consistent experimental settings, a task that has been recently achieved in the paper [15] by Dwivedi et al. Fig. 1: **Overview of the proposed method.** 2D positions of players and ball in each frame are represented as graphs, which are then fed into a Graph Neural Network architecture to extract features. These features are then pooled taking into account temporal windows, and the pooled features are fed into a multi-label classifier. Fig. 2: Example of patches illustrating the high variance in ball appearance and the difficulty of the ball detection task. Image taken from [4]. In the following sections, we will describe the generation process and the structure of our input data, from frames obtained from the camera to the real-world coordinates of all the entities (referring to football players, referees and the ball) present on the football pitch. 
However, the main focus of this thesis is the processing of this tracking data. The contributions of this thesis can be summarized as follows 1. Formulating the pipeline for generating a high-level view of the football field (referred to as a minimap) using ball-player bounding boxes and camera calibration 2. Describing how football tracking data can be modelled using graphs and then processed using Graph Convolutional Networks 3. Formulating event detection as an action spotting task, which involves localizing events to a certain timestamp in a video 4. Experimenting with different pooling methods for modelling the temporal context around each action ## II State of the art ### _Object Detection_ Komorowski et al. [16] compare the performance of state-of-the-art object detectors on publicly available football datasets ISSIA-CNR Soccer [17] and Soccer Player Detection [18]. They also propose their own architecture, inspired by the Faster-RCNN (Region-based Convolutional Neural Networks) [19] and the Feature Pyramid Network (FPN) [20] proposed by Girsick et al., that can run at a much faster framerate while being at par on performance with these networks. Their network, called FootandBall [16], has 200K parameters compared to the 41M parameters in FPN Faster-RCNN with a Resnet-50 backbone [21], and runs at rate of 37 fps (vs 8fps of the latter) when processing a high-definition (\(1920\times 1080\)) video. There is a need for an updated study that compares the performance of the new and improved single-stage architectures [22, 23] and transformer-based architectures [24, 25] on these datasets. The trade-off between single-stage and two-stage architectures is that of speed vs accuracy. Two-stage detectors, like Faster-RCNN split the detection task into region proposal and then regression-classification. One-stage detectors, on the other hand, perform bounding box regression and classification directly on the image. Two-stage architectures usually perform better on smaller objects as they get a chance to look at the image twice, and refine the proposals from the first-stage instead of having to regress the coordinates in a single pass. ### _Action Spotting_ Certain actions like shot on goal, pass, offside do not happen over a time window but can be anchored to a certain frame/time that defines the event. For example, in soccer, a goal happens at the exact moment the football crosses the goal line. The closer our predictions are to the target, the better our action-spotting performance is. The task of action spotting was introduced by Giancola et al. in their SoccerNet paper [26] along with the SoccerNet dataset in Computer Vision and Pattern Recognition (CVPR) conference 2018. The SoccerNet dataset consists of 500 football games (amounting to 764 hours of video footage) from the main European leagues collected across three seasons. They provided event annotations for the games by splitting them into three main categories: 'goals', 'cards' and'substitutions', for a total of 6,637 events. In the year 2021, they released the second version of this dataset dubbed SoccerNet v2.0 [27], where they increased the number of actions annotated to 17. They introduced new classes corner, throw-in, shots on target, offside, etc. and increased the number of available annotations to 110,458, with an average of 221 actions per game. 
Current state of the art methods for action spotting include Context Aware Loss Function (CALF) [28, 29] and NetVLAD++ [30], the latter of which we are going to be using for our experiments. Fig. 3: Different teams can use different formations in the same season, as can be seen above for a team like FC Barcelona that has recently been changing its style and formation of play quite regularly. Modelling the motion of players of a team becomes difficult when you use models that predefine a specific formation template [6, 7] when trying to analyze a team or generating features for another downstream task. ### _Graph Convolutional Networks_ GNNs are amongst the most general class of deep learning architectures as most other learning architectures can be formulated as a special case of GNNs with additional geometric structures. A class of GNNs called Graph Convolutional Networks (GCNs) generalize the idea of convolution from euclidean to the graph domain. The same GCN model has been developed parallelly between different streams of literature, the popular dichotomy being the one between spectral theory on graphs [12, 31, 32, 33, 34] and spatial-based graph convolutions. Regardless of the motivation, the defining feature of a GCN is that it uses a form of neural message passing in which vector messages are exchanged between nodes and updates using neural networks [11]. Different flavours of this model can be seen in Fig. 4. Recent works [15] on benchmarking the performance of GNNs have shown that anisotropic mechanisms [35, 36, 37, 38, 39] improve the performance of GCNs. The best results amongst this class of models were achieved by GAT [38] which leverages attention [40] and GatedGCN [39] that uses gating introduced in Chung et al [41]. One of the key reasons behind the success of Convolutional Neural Networks (CNNs) [42, 43] is the design and training of very deep models by stacking many layers together with residual connections [21] between them. Stacking more than four layers in a vanilla GCN leads to an over-smoothing problem in which deeper node features converge to the same value and local neighborhood information is lost. Li et al. [44] mitigate this problem by adapting the ideas of ResNets [21] and DenseNets [45] to construct deep GCNs up to fifty-six layers achieving state of the art on semantic segmentation of point clouds. ## III Method In this section, we describe how we encode per-frame player and ball tracking data in a graph, and investigate several temporal pooling methods that learn the past and future context independently to perform the task of action spotting. ### _Graphs_ A graph is a ubiquitous data structure and can be described as a collection of objects that may or may not interact with each other. Formally, a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is defined by a set of nodes \(\mathcal{V}\) connected by a set of edges \(\mathcal{E}\). We denote an edge going from node \(u\in\mathcal{V}\) to node \(v\in\mathcal{V}\) as \((u,v)\in\mathcal{E}\). A convenient way to represent graphs and its node features is through an adjacency matrix \(A\in R^{|\mathcal{V}|\times|\mathcal{V}|}\) and a feature matrix \(F\in R^{|\mathcal{V}|\times m}\) respectively, where \(|\mathcal{V}|\) is the number of nodes/vertices in the graph and \(m\) is the dimensionality of the node feature vector. When constructing the feature matrix \(F\), we assume that the ordering of the nodes is consistent with the ordering in the adjacency matrix \(A\). 
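To make the adjacency and feature matrices above concrete, the sketch below builds \(A\) and \(F\) for a single frame following the conventions described in this section and in the Fig. 5 caption: an edge connects two entities whose real-world distance is below 25 m, and each node feature concatenates the \((x,y)\) position normalized to \([-0.5,0.5]\) with a one-hot team/ball indicator. The pitch size of 105 m by 68 m, the label names and the example positions are illustrative assumptions, not values taken from the dataset.

```python
# Minimal sketch of the per-frame ball-player graph (Section III-A / Fig. 5).
import numpy as np

def build_frame_graph(positions, labels, pitch_size=(105.0, 68.0), max_dist=25.0):
    """positions: (N, 2) on-pitch coordinates in meters.
    labels: length-N list with entries in {"team_a", "team_b", "ball"}.
    Returns adjacency matrix A (N, N) and feature matrix F (N, 5)."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)

    # Pairwise distances -> binary adjacency (no self-loops here).
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = ((dist < max_dist) & ~np.eye(n, dtype=bool)).astype(float)

    # Node features: xy normalized to [-0.5, 0.5] plus one-hot entity type.
    xy = positions / np.array(pitch_size) - 0.5
    one_hot = np.zeros((n, 3))
    for i, lab in enumerate(labels):
        one_hot[i, {"team_a": 0, "team_b": 1, "ball": 2}[lab]] = 1.0
    F = np.concatenate([xy, one_hot], axis=1)
    return A, F

A, F = build_frame_graph(
    positions=[[10.0, 34.0], [30.0, 40.0], [52.5, 34.0]],
    labels=["team_a", "team_b", "ball"],
)
print(A.shape, F.shape)  # (3, 3) (3, 5)
```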
Even though in most graphs \(A\) is usually a binary matrix of \(\{0,1\}\) signifying whether an edge exists between two nodes or not, it can also be a real matrix or a matrix of vectors \(A\in R^{|\mathcal{V}|\times E\times|\mathcal{V}|}\), where each edge can be weighted or have its own vector representation of length \(E\). The way we construct the graphs for our models is depicted in Fig. 5. Fig. 4: A visualisation of the dataflow for the three flavours of GNN layers, g. We use the neighbourhood of node b from Figure 10 to illustrate this. Left-to-right: convolutional, where sender node features are multiplied with a constant, \(c_{uv}\); attentional, where this multiplier is implicitly computed via an attention mechanism of the receiver over the sender: \(\alpha_{uv}=a(x_{u},x_{v})\); and message-passing, where vector-based messages are computed based on both the sender and receiver: \(m_{uv}=\psi(x_{u},x_{v})\). Retrieved from [14]. ### _NetVLAD_ **VLAD** (Vector of Locally Aggregated Descriptors), 2010 [46] was presented as a solution to large-scale image retrieval, which was restricted by memory requirements at the time, as an alternative to the Bag of Words (BoW) [47] and Fisher Vectors [48] approaches. VLAD, like visual word encoding, starts by vector quantizing feature descriptors. It differs from BoW in that instead of keeping the count of visual words, VLAD stores the sum of residuals (the difference vector between the descriptor and its corresponding cluster center). Mathematically, given a set of \(N\) \(D\)-dimensional features \(\{\mathbf{x}_{i}\}_{i=1..N}\) as input, a set of 
Furthermore, by expanding the squares and noticing that \(e^{-\alpha\|\mathbf{x}_{i}\|^{2}}\) will cancel between the numerator and the denominator, we can interpret Equation 2 as the softmax of a convolutional layer for the input features parameterized by \(\mathbf{w}_{k}=2\alpha\mathbf{c}_{k}\) and \(b_{k}=-\alpha\|\mathbf{c}_{k}\|^{2}\). Formally: \[\tilde{a}_{k}(\mathbf{x}_{i})=\frac{e^{\mathbf{w}_{k}^{T}\mathbf{x}_{i}+b_{k} }}{\sum_{k^{\prime}}e^{\mathbf{w}_{k^{\prime}}^{T}\mathbf{x}_{i}+b_{k^{\prime }}}} \tag{3}\] Finally, by plugging the soft-assignment from 3 into the VLAD formulation in 1, the NetVLAD features are defined as in Equation 4, later L2-normalized per cluster, flattened and further L2-normalized in its entirety. \[V(j,k)=\sum_{i=1}^{N}\frac{e^{\mathbf{w}_{k}^{T}\mathbf{x}_{i}+b_{k}}}{\sum_ {k^{\prime}}e^{\mathbf{w}_{k^{\prime}}^{T}\mathbf{x}_{i}+b_{k^{\prime}}}}( \mathbf{x}_{i}(j)-\mathbf{c}_{k}(j)) \tag{4}\] While the original VLAD optimizes solely the cluster centers \(\mathbf{c}_{k}\), NetVLAD optimizes for three sets of independent parameters \(\{\mathbf{w}_{k}\}\), \(\{b_{k}\}\) and \(\{\mathbf{c}_{k}\}\), dropping the constraint of \(\mathbf{w}_{k}=2\alpha\mathbf{c}_{l}\) and \(b_{k}=-\alpha\|\mathbf{c}_{k}\|^{2}\). All parameters of NetVLAD are learnt for the specific task in an end-to-end manner. As illustrated in Fig. 6, the NetVLAD layer can be visualized as a meta-layer that is further decomposed into basic CNN layers connected up in a directed acyclic graph, and can be easily plugged into any architecture for training. We also test our approach using NetRVLAD [26, 30] which is a slightly tweaked version developed on top of NetVLAD, which drops the cluster parameters \(\mathbf{c}_{k}(j)\) in (4), leading to slightly less parameters to learn. Fig. 5: **Ball-Player Graph** In our graph, the ball and the players are represented by nodes, connected through an edge if the real-world distance between them is less than 25 meters, which we consider sufficient for message passing. The feature for each node is constructed by concatenating its \(x\) and \(y\) position (normalized to \([-0.5,0.5]\)) based on the pitch size of the field, and a one-hot vector representing if the node is a player from the first team, second team or the ball. The choice of the first team and second team is irrelevant as the important idea is that players from the same team have the same one-hot vector label. At this point, I would like to reiterate that only about 12.5 percent of each game in our data has ball annotations and for the rest of the cases, the model has to learn and make predictions solely based on the movements of the players. ### _Pooling for Detection_ In order to recognize or detect activities within a video, a common practice consists of **aggregating** local features and **pooling** them. While naive approaches use mean or maximum pooling, more elaborate techniques such as Bag-of-Words (BOW) [47], Fisher Vector (FV) [48], and VLAD [46] look for a structure in a set of features by clustering and learning to pool them in a manner that improves discrimination. Recent works extend those pooling techniques by incorporating them into Deep Neural Network (DNN) architectures, namely SoftDBOW [49], NetVLAD [50, 51], NetRVLAD [30] and ActionVLAD [51]. We follow an action spotting pipeline similar to the one proposed in SoccerNet [26], where they try to classify if an action lies within a certain time window in a multi-label setting. 
During training, we split our videos into chunks of different lengths annotated with all events occurring within that time window. We aggregate the features extracted from all frames present within that window and use a sigmoid activation at the last layer, as is usual in the task of multi-label classification. For testing, we sample the frames of the video with the same window size and a stride of one and pass this as input to the model to obtain the raw event predictions for each frame. We then apply a confidence threshold and non-maximum suppression on the output predictions for each class to get the final action spotting results. ### _Temporal Pooling - Layer ++_ All the above-mentioned pooling methods are permutation invariant and do not take into account the order of the frames, hence losing the temporal information. Recent works [28, 29, 30] have shown that temporal context before and after an event is very different and should be handled differently. They describe how different actions might share the same similar sets of features before or after the event but usually not both. For example, the semantic information before a 'goal' event and a'shot on goal' are very similar, representing the concept of a player trying to score a goal and a goalkeeper trying to catch the ball. Yet, the semantic information derived from the movement of the players after the two events is very different, as the goal is usually followed by all the players gathering together. For our experiments, we use the idea proposed in [30], where we add two separate pooling modules for aggregating features from before and after the action separately, as can be seen in Fig. 7. The comparison of performance between the pooling layers and the temporally aware pooling layers is provided in the table I where ++ represents that the layer is used in temporal pooling fashion. Fig. 6: **Graph architecture** with the NetVLAD layer. The layer can be implemented using standard CNN layers (convolutions, softmax, L2-normalization) and one easy-to-implement aggregation layer to perform aggregation in equation 1, joined up in a directed acyclic graph. Parameters are shown in brackets. Fig. 7: NetVLAD (top) vs. NetVLAD++ (bottom) pooling modules for action spotting. The temporally-aware NetVLAD++ pooling module learns specific vocabularies for the past and future semantics around the action to spot. Figure taken from [30]. We define the _past_ context as the frame feature with a temporal offset in \([-T_{b},0]\) and the _future_ context as the frame feature with a temporal offset in \([0,T_{a}]\). Each pooling module aggregates different clusters of information from the \(2\) subsets of features, using \(K_{a}\) and \(K_{b}\) clusters, respectively for the _after_ and _before_ subsets. Formally: \[V=AGGREGATE(V_{b},V_{a}) \tag{5}\] where \(AGGREGATE\) is an aggregation function \(V_{b}\) and \(V_{a}\) that represents the NetVLAD pooled features for the sample _before_ and _after_ the action occurs, parameterized with \(K_{b}\) clusters for the _past_ context and \(K_{a}\) clusters for the _future_ context. ## IV Data Generation One of the main motivations for this thesis is being able to leverage information from a large set of football matches without the burden of processing each frame of the videos. The results shown in this paper are based on simple 2D tracking data for the players and the ball. But, of course, some preprocessing work is required to obtain this low-dimensional data. 
This section describes the data used in Section III, as well as the pipeline used to get it from the initial football videos. ### _Preprocessing_ Steps to convert video data (3 images with 4K resolution for each frame of the match at a frame rate of 15 fps) into low-dimensional 2D tracking data (23 \(xy\) positions for players and the ball per frame, with team information for each player) are detailed below (see Fig. 9 for a visual summary): 1. For any specific game, the starting point is three videos covering the left, central and right parts of the pitch with some overlap between them. 2. With these videos, a bigger one where the whole pitch is visible (referred to as the _panorama video_ from now on) is constructed after a calibration process. 3. A Faster-RCNN [19] architecture with a ResNet [21] backbone trained with football images is used to detect the players and the ball in the video. 4. Features for re-identification are extracted based on [52]. 5. The Hungarian algorithm [53] is used to associate detections with trajectories based on their features, their position in the previous frame and an estimation of their current position using a Kalman filter [54]. 6. Players, referees and ball positions are projected onto the football pitch minimap. ### _Information in the dataset_ Our data includes information from 9 matches recorded at 15 fps with the following information: * Names and IDs of the teams and players of the match. * Name of the stadium and size of the pitch. * 2D position of each player and referee on the pitch in meters for each frame of the match. * 2D ball position on the pitch in meters for 10-12% of the frames of the match (without 3D information, so there's a reprojection error when the ball is not touching the ground). Fig. 8: **Dataset distribution. The bar plot shows the number of annotations available for each event in the dataset. Though we do not have any annotations available for a red_card in our dataset yet, we have included it in our model because it will be annotated eventually as more games are recorded and become available for training.** * Frame number where certain events happen, associated with the corresponding player ID when needed. Concretely, these are the annotated events; their distribution can be found in Fig. 8. 1. game flow annotations: These are used to note when the game stops. They tell the time at which play stopped, but their interpretation depends on the annotation that follows (except for goal, whose interpretation is clear). * 01. out * 02. stop * 03. goal 2. post-out annotations: These always come between "out" and "pass". The exact time of these doesn't matter; they are just used to tell us how to interpret the next "pass". * 04. goal_kick * 05. corner_kick * 06. throw_in 3. post-stop annotations: These work similarly to the out events, but follow "stop" rather than "out". These can be followed by "pass" or "shot". * 07. offside * 08. foul * 09. yellow_card * 10. red_card 4. other: * 11. goal_chance - when the previous annotation (touch event) was a chance at goal * 12. shot ## V Experiments ### _Evaluation Metrics_ The evaluation follows the mAP metric used in recent papers [26, 27, 28]. For each class, we mark a prediction as a True Positive (TP) if it lands within a tolerance of \(\delta\) around the ground-truth event, as can be seen in Fig. 11. For each tolerance and class, we then threshold the predictions based on their confidence score and plot a precision-recall (P-R) curve by varying the confidence threshold from \([0,1]\) at 200 points. 
After obtaining the P-R curve, we replace each precision value by the maximum precision to its right (the standard interpolation step) and then compute the area under this curve by an 11-point approximation \([0,0.1,0.2,...,1]\). This area is called the Average Precision of a class at that \(\delta\) and can be observed in Fig. 12. We then repeat this procedure for a class by varying \(\delta\) from 5 seconds to 60 seconds and plot an AP vs \(\delta\) curve. Once all APs are computed, we approximate the area under this curve with the trapezoidal formula, and this gives us the average-AP, which can also be called the AP for that class. We repeat this procedure for each class and average all the APs to get the mean Average Precision (mAP) of our model. ### _Network Architecture_ The main architecture can be seen in Fig. 1 and consists of a graph neural network module that is used to extract an embedding from each graph (frame) before it is passed to the event detection model. The graph neural network consists of two Graph Convolutional Layers (GCN) from [12] with hidden dimensions of 64 and 64, followed by a linear layer of size 32. The sizes were chosen keeping in mind the dimensionality of the input node features, which is 5 (2 for coordinates and 3 for the 1-hot vector). Both GCN layers are also followed by a batch norm layer and a ReLU non-linearity. The graph embedding is generated by aggregating the feature vectors from all the nodes and taking an average. This is referred to as a readout operation in graph convolutional networks. We follow an action spotting pipeline similar to the one proposed in SoccerNet [26], where they try to classify if an action lies within a certain time window in a multi-label setting. During training, we split our videos into chunks of different lengths annotated with all events occurring within that time window. For each frame in a chunk, an embedding is created using the graph embedding model. We aggregate the features extracted from all the frames present within that window and use a linear layer with sigmoid activation as the last layer, as is usual in the task of multi-label classification. For testing, we sample the frames of the video with the same window size and a stride of one and pass this as input to the model to obtain the raw event predictions for each frame. We then apply a confidence threshold and non-maximum suppression on the output predictions for each class to get the final action spotting results. The results comparing the raw predictions of the model and those obtained after applying different settings of non-maximum suppression and confidence thresholds can be found in the appendix in Figs. 13, 14 and 15. The results for different pooling layers and window sizes can be found in Table I. ### _Training Setup_ The training, test and validation sets consist of five, two and two matches respectively. Fig. 9: **Summary of the data preprocessing pipeline.** Videos coming from three cameras are joined to form a _panorama_, which is calibrated with respect to a map of the pitch, where we project the detected players and ball position to get a compact representation of the information in each frame. Fig. 11: A prediction is marked as a True Positive (TP) if it lands within a tolerance of \(\delta\) around the ground-truth event. For each class, the AP is calculated by averaging the performance of the model over \(\delta\)s varying from 5s to 10s as explained in Section V-A. Fig. 10: The above frames give an example of the way frames are annotated right before a goal. 
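As a rough sketch of the embedding module just described - two graph-convolution layers of hidden size 64 with batch normalization and ReLU, a linear layer of size 32, and a mean readout over nodes - the code below writes out the Kipf & Welling [12] propagation rule explicitly so that it has no dependency on a particular graph library. The layer sizes and the 12 output classes follow the text and the event list in Section IV; everything else (class names, the toy adjacency matrix, and applying the classifier directly to a single frame embedding instead of to pooled chunk features) is purely illustrative.

```python
# Illustrative per-frame embedding module and multi-label head (PyTorch only).
import torch
import torch.nn as nn

def normalize_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

class FrameGraphEmbedder(nn.Module):
    def __init__(self, in_dim=5, hidden=64, out_dim=32):
        super().__init__()
        self.w1, self.bn1 = nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden)
        self.w2, self.bn2 = nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, A, X):
        A_norm = normalize_adjacency(A)
        h = torch.relu(self.bn1(A_norm @ self.w1(X)))   # GCN layer 1 (64 units)
        h = torch.relu(self.bn2(A_norm @ self.w2(h)))   # GCN layer 2 (64 units)
        h = self.out(h)                                  # per-node 32-d features
        return h.mean(dim=0)                             # mean readout -> frame embedding

# In the full pipeline the embeddings of all frames in a chunk are pooled
# (e.g. with NetVLAD++) before classification; here a single frame embedding
# is classified directly just to show the sigmoid multi-label head.
classifier = nn.Sequential(nn.Linear(32, 12), nn.Sigmoid())  # 12 annotated event classes
A = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # toy 3-node frame graph
X = torch.randn(3, 5)                                         # toy node features
frame_embedding = FrameGraphEmbedder()(A, X)
print(frame_embedding.shape, classifier(frame_embedding).shape)
```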
For our temporally aware NetVLAD pooling layers as described in Section III-D, we set \(K_{a}=K_{b}=K/2\) and set \(T_{a}=T_{b}=T/2\) to consider the same amount of temporal context from before and after the actions. We train our models with the Adam [55] optimizer with default parameters from PyTorch [56], and a starting learning rate of 1e-3. We use ReduceLROnPlateau scheduler from PyTorch which reduces the learning rate by a factor of 10 if the validation loss does not improve for 10 consecutive epochs. This prevents our model from overfitting. We stop the training once the learning rate falls below 1e-8. A single training converges in about \(\sim\) 100 epochs and takes approximately 1 second/epoch on a machine with a single NVIDIA Tesla V100 and a 32GB 12-core CPU. Each experiment takes about 10-15 minutes as 90 percent of the time is spent on data creation and loading. During inference, a single game can be processed to obtain the final event predictions in as little as 2 minutes. Note that this time only accounts for the time it requires for processing the tracking data, and not for getting the object detections and generating tracks from raw images, which are part of the pre-processing pipeline. ## VI Results and Discussion Results obtained for the dataset described in IV and V-C using the mAP metric explained in V-A are shown in Table I. The upper half of the table clearly shows an improvement when using more sophisticated pooling methods like NetVLAD or NetRVLAD with respect to simpler ones like max pooling or average pooling. All the best mAP results (both per class and total) are obtained with the best pooling methods. It also shows how, in general, the temporally-aware pooling techniques produce better results than the regular ones. NetVLAD++ shows the best results overall. The bottom half of the table shows an analysis of the effect of using different window sizes for the NetVLAD++ pooling method. Different window sizes produce better results for different classes, but taking into account the duration of the detected events and the contextual information needed for detecting them, 10 seconds looks like a reasonable value and it produces the best results overall. The model's performance was good on the classes that had a good number of annotations available. However, the events that the model was not able to detect correctly were mostly the ones with very few annotations (less than 50 for some), which is expected since deep learning models require more data for learning good representations. ## VII Conclusions We presented a methodology for detecting events based on processing tracking data that is generated from the players and discussed how graph neural networks can overcome the problems faced by other machine learning models when processing this type of data because it is unclear on how to order players in a sequence and how to handle missing objects of interest when constructing a feature vector summarizing a frame. We show how to model the players and the ball in each frame of the video sequence as a graph, and discuss how the performance of pooling layers in event detection models can be improved by considering the context before and after the action separately. We were able to get good results for few classes despite having just a few annotations for each class compared to other recent works which are trained on datasets with thousands of annotations per event. 
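The inference-time post-processing mentioned above (a confidence threshold followed by class-wise non-maximum suppression over time) can be sketched in a few lines. The function below is illustrative: the thesis does not spell out its exact NMS routine, and the default window and threshold simply mirror the settings discussed for the appendix figures (a 30-second window and a 0.2 confidence threshold).

```python
# Illustrative temporal NMS for action spotting on one event class.
import numpy as np

def spot_events(scores, fps=2.0, conf_thresh=0.2, nms_window=30.0):
    """scores: (num_frames,) per-frame confidence for one event class.
    Returns a sorted list of (time_in_seconds, confidence) spotted events."""
    candidates = [(i, s) for i, s in enumerate(scores) if s >= conf_thresh]
    candidates.sort(key=lambda c: c[1], reverse=True)        # strongest predictions first

    kept, suppressed = [], np.zeros(len(scores), dtype=bool)
    half = int(nms_window * fps / 2)                          # suppression half-window in frames
    for frame, conf in candidates:
        if suppressed[frame]:
            continue
        kept.append((frame / fps, conf))
        lo, hi = max(0, frame - half), min(len(scores), frame + half + 1)
        suppressed[lo:hi] = True                              # suppress weaker nearby predictions
    return sorted(kept)

scores = np.array([0.1, 0.3, 0.9, 0.4, 0.1, 0.05, 0.7, 0.2, 0.1, 0.85])
print(spot_events(scores, fps=2.0, nms_window=2.0))
```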
For future work, we would like to explore self-supervised techniques for pre-training our graphs before training them on event detection tasks. One of the tasks we have in mind is predicting the future motion of teams given the previous positions of its players over a window. ## Acknowledgements There are many people I must thank for the development of this thesis and the wonderful year that I have spent studying and working in Barcelona. Firstly, I would like to thank my supervisor, Dr. Francesc Moreno, without whose trust and support this project would not have been possible. I am grateful to him for allowing me to intern at Institut de Robotica i Informatica Industrial as well as giving me the chance to continue working under his supervision at Kognia Sports Intelligence, where this thesis has been developed. I feel extremely lucky to have Dr. Antonio Rubio Romano as my co-supervisor, under whose mentorship I have been able to learn and grow immensely in the past year. Antonio is one of the most insightful people I know as he forces you to increase your understanding of anything by questioning the very fundamentals of your ideas. I want to thank him for all the hours he has helped me in writing, debugging my code, and helping me become a much better programmer. I would also like to thank all the people at Kognia for making my previous year full of learning and hard work, especially Dr. Luis Ferraz who has been an amazing team leader. I would also like to thank all the professors who taught in this Master's program, as they helped me build my ideas in machine learning and computer vision from the very fundamentals. I am extremely grateful to them for all the effort they put in teaching us as well as solving our doubts. A big thank you to my family for believing in me much more than I do and for making so many sacrifices to ensure I receive the best of opportunities in life. It is, in large part, my determination to vindicate their leap of faith and make them proud that drives my ambitions. Fig. 12: Precision-Recall curve computed for each class at a \(\delta=30seconds\). Th results are for the test dataset for the best model configuration of ’NetVLAD++’, Window size \(=10\) seconds, and the number of frames per second \(=\) two. ## Appendix A Confidence scores Fig. 14: All the plots on the first column show the raw predictions outputted per frame using our best model configuration. The plots in the second column show a non-maximum suppression of a 30 seconds window and zero confidence threshold applied to these raw predictions. The third column shows a non-maximum suppression of a 30 seconds window and 0.2 confidence threshold applied to the raw predictions. Fig. 15: All the plots on the first column show the raw predictions outputted per frame using our best model configuration. The plots in the second column show a non-maximum suppression of a 30 seconds window and zero confidence threshold applied to these raw predictions. The third column shows a non-maximum suppression of a 30 seconds window and 0.2 confidence threshold applied to the raw predictions.
2302.06360
Review on Efficient Strategies for Coordinated Motion and Tracking in Swarm Robotics
Swarm robotics is an approach to organizing multi-robot systems composed of many simple robots, inspired by social insects. The most astonishing attribute of robot swarms is their capacity to work together to accomplish a collective objective. This paper reviews current surveys, problems and algorithms that have driven research on coordinated movement in swarm robotics. Motion algorithms for robot swarms are contrasted, considering how swarms of micro-robots accomplish aggregation, formation, and flocking, by comparing the computational simulations used to evaluate them.
B. Udugama
2023-02-13T13:46:21Z
http://arxiv.org/abs/2302.06360v1
# Review on Efficient Strategies for Coordinated Motion and Tracking in Swarm Robotics ###### Abstract Swarm robotics is an approach to organizing multi-robot systems composed of many simple robots, inspired by social insects. The most astonishing attribute of robot swarms is their capacity to work together to accomplish a collective objective. This paper reviews current surveys, problems and algorithms that have driven research on coordinated movement in swarm robotics. Motion algorithms for robot swarms are contrasted, considering how swarms of micro-robots accomplish aggregation, formation, and flocking, by comparing the computational simulations used to evaluate them. Swarm Robotics, Swarm Intelligence, Multi-Robot Systems, Particle swarms, SLAM, CML, Motion Coordination ## I Introduction The phrase "Swarm Robotics" refers to sophisticated group behaviours that can arise from the combination of many simple entities, each acting independently [1]. Swarm intelligence, according to Cao et al. [2], is a property of systems of non-intelligent robots that exhibit collectively intelligent behavior. It is generally agreed that the basic attributes of swarm intelligence include a biologically inspired emphasis on decentralized local control and local communication, and the emergence of global behavior as a result of self-organization [3]. The application of swarm intelligence concepts to collaborative robotics may be called "Swarm Robotics." Swarm robotics is thus a methodology for coordinating large numbers of unsophisticated robots [4], which are mobile, not centrally controlled, able to interact locally, and operate on the basis of some form of biological inspiration. Swarm robotic techniques have been a significant research area since the 1980s [4]: as new approaches are developed and tested, the benefits of swarm robotic systems [2] also become realizable. In 1993, Dudek et al. [8] made an early effort at classifying the research areas of swarm robotics. The paper divided the field into the categories of mapping, biological inspiration, communication topology, motion coordination, swarm reconfigurability and processing power of swarm units. Cao et al. [2] conducted a hierarchical study of cooperative robotics, splitting the publications into group architecture, resource conflicts, origins of cooperation, learning and geometric problems. By looking at their common features, Iocchi et al. [9] provided an overview of multi-robot systems; they also suggested a taxonomy of multi-robot systems and a classification of the reactive and socially purposeful behaviors of such systems. Rather than summarizing the swarm robotics research field as a classification of cooperative structures [2, 9], Lynne [10] grouped the field into the key topics that have produced substantial research activity. Open research problems within each topic area were also described and specifically addressed in that article. ## II Biological Inspiration Swarm engineering and the associated concept of swarm intelligence were influenced by an appreciation of the autonomous processes underlying the structure of biological animal behaviors and of their efficiency. Social insects such as ants provide one of the best documented examples of self-organized natural behavior.
They can perform amazing behavioral feats by local and restricted communication: preserving the colony's wellbeing, compassionate for their children, reacting to assault. Thomas et al. [11] studied the actions of a community of robots engaged in an item recovery process in which the control mechanism of the robots is influenced by a foraging behavior pattern of the ants. The tracks allocated to the automatons are derived from basic ant swarms' activities such as scanning, extracting, depositing, returning and rest. Ideas influenced by these group actions also contributed to the usage of pheromone [10], a biochemical material released by ants and related communal insects to label the area with details for later helping other bees. Likewise, David et al. [8] and Cazangi et al. [7] have used pheromomes in their work to accomplish the process of inter-robot contact. More work in this field has culminated in primates being able to communicate and connect with each other. ## III Mapping Mapping is a visualization of the actual surroundings through virtual models by sensory data from the mobile robot. Localization is described as evaluating the robot's position within the created spatial structure. The function of the Simultaneous localization and mapping (SLAM) or concurrent mapping and localization (CML) question is to acquire and build a map of an unfamiliar area with the aid of a moving robot when navigating the robot. In the SLAM aspect, due to the situation where the robot entails a global positioning sensor, the robot mostly relies on gradual motion for robot location prediction (e.g., odometry). There are several methods that have been applied to solve the odometry issue in Geometry such as macroscopic mapping and geometric mapping by utilizing different forms of filters. A macroscopic map is an abstract representation of a given ecosystem's structural attributes. For most instances macroscopic maps use points to reflect the world as a series of distinctive positions (e.g., rooms), linked by robot action sequences through lines (e.g. wall-following). Nonetheless, a graphical diagram reflects the environment's exact spatial features, which appears like a floor plan. ## IV Motion Coordination Swarm robotics trajectory-planning is a field that has gained a lot of interest over the last two decades. In addition, design a route between two different places for a given robot and an area outline, which must be void of hazards and follow all the optimization criteria; is perceived to be the contemporary problem in the trajectory optimization of the mobile robot. Research of route-planning may be allocated into local and global route-planning to tackle this issue. Local path determination is constructed using the sensor data provided by transducers mounted on the robot, which provides specifics of the unexplored area. In the other side, the regional preparation determines specifically the layout of the climates, and navigation is carried out with the details established in priori. ## V Major factors in Search and Tracking Experts in the history of reasearch discussed various issues when dealing with the topic of target explore and monitoring. These differ in different criteria and expectations, which may limit the study's emphasis to certain sub-complications in effect. ### _Number of targets_ Based on the number of destinations to be sought or monitored, the issue of targeting and monitoring can be split into two major scenarios: one target, and several targets. 
In order to increase the precision of the objective states calculated, the key priority for tracking a lone objective with any Multi Robot Systems (MRS) is the fusion of sensor data from multiple tracker systems [25]. The situation for multiple goals can be considered by expanding the single goal event, which includes many other complexities. The number of goals, for instance, may be unclear or may also vary over time. However, even though there is information and consistent numbers of targets, sensor observations are still unpredictable because they can come from all the objectives. This is the problem of the correlation of results. And, compared to the single target situation, robots need to disperse to the different objectives needing a job assignment method. Another crucial element which influences the solution approach is the ratio among the amount of targets and followers [25]. For instance, where the goals are considerably greater than the number of followers it will not always be achievable to track the entire goals and optimize the total number of goals encountered during the project by at least one robot [4]. Instead of monitoring these clusters separately, an alternative method would involve dividing goals into groups [26]. When observing many moving objects, such as a crowd or a bunch of animals, it is not practical or appropriate to observe each person separately. Individuals in a crowd or animals in a flock have the inclination because they can fly more specifically than alone to popular destinations [27, 28]. ### _Mobility of targets_ The dilemma is either to look for stationary goals or to track moving goals based on the versatility of the goals. While in the Swarm Robotics group the stationary aim case was thoroughly investigated [32], there has been less research contemplating mobile goals [35]. In case of static goals, noisy results, i.e. false alarms, or lack of measurements, are the only confusion. But there is more ambiguity in the goal transition for shifting objectives. For instance, the goal can travel on the ground, swim underneath water, or rise into the air. The movement modes of the objectives do need to be addressed. Since Swarm Robotics has been a young field of study, most analysis has so far been performed under laboratory conditions in the search and tracking of objects traveling on 2D terrain. ### _Mobility of followers_ Even though followers can mostly be static within wireless sensor frameworks, they are still mobile in the context of a robotic swarm. The tracker function, however, has an significant effect on the solution of the question. This controls the mindset of the followers along with pace and responsiveness. The followers should be the identical as the targets, for instance ground followers which track land moving targets, hovering followers which track airborne targets, etc. The movement approach of the objectives and followers can also vary. Flying followers, for example, may be used to track the flying targets on the ground. ### _Complexity of environment_ Environmental uncertainty is an significant consideration in the creation of the MRS, as robots play a key role in communicating with others and the world. The only relations with the followers and targets to take into consideration in the case of an open room. The atmosphere framework may be used for goal identification or robotics motion preparation in organized settings, such as indoor office-type settings. 
However, occlusion induced by the ambient configuration should be regarded as inconsistencies in device observations in unorganized configurations, i.e. cluttered ecosystems. ### _Knowledge of target motion_ If mobile targets are tracked via mobile bots, prior experience of goal activity will help you anticipate the next goal position, so that the maneuvers of robots can be regulated respectively. If the objective motion is well understood, it is claimed that the motion pattern is 'deterministic[25]. The typical case is the usage of rockets to control the projectile as it has a defined course under the rule of physics [37]. In [38], visualization analyses were performed to see how the application of objective complexities in sensor design enhances monitoring efficiency. The discoveries included a community of ground robots which tracked an aerial target with a deterministic motion. Since previous experience of the goal action may be influenced by spontaneous variables [25] the goal motion is considered 'probabilistic.' There seems to be no background knowledge on goal behavior for real-world applications. The alternatives in such situations will either follow a basic motion framework [37, 39] or a randomized motion framwork like the Quantum fluctuations [40]. ### _Type of cooperation_ Collaboration between the Swarm Representatives is necessary to obtain the optimal result of the SRS squad, which is supreme to the sheer amount of individual brilliance. The robots can boost their overall efficiency by collaboration in two separate ways: (1) the confusion (2) allocating the goal. In the single aim case, the first sort of teamwork is used to integrate observations from several sensors to determine the goal location more precisely than is feasible for a single sensor independently. Throughout the event of objective mission and monitoring through SRS, sensor measurements from various robots may be integrated to approximate the actual target position and pace. They tackled the problem of mutual Multi Robot (MR) monitoring of several mobile objects, based on the integration of sensors [41]. Goal distribution is used in a multi-task system to boost monitoring efficiency by assigning objectives to followers in the right place to track things. It is a Multi-Robot Task Allocation (MRTA) field, wherever the purpose is to delegate tasks to bots in a manner that achieves the broad target more effectively by collaboration [34]. In the issue that the 'tasks' will be independent goals or groups of them, and the objective will be to manage them accurately and effectively. ### _Coordination among multiple followers_ A strong teamwork approach is mandatory to leverage the full benefits of collaboration between robots. In fact, robot cooperation techniques can be classified into two major groups, specifically explicit cooperation, and implied organization [22]. In specific synchronization, the actions of one bot may be guided by another robot by clear communication [21]. In tacit cooperation, the individual bots create autonomous judgments about how to act, based on the knowledge they obtain from their own experiences and contact with others [24]. By utilizing clear contact, the precision of the knowledge transfer among robots is assured. The contact burden of the device would, however, increase with the total amount of robots, likely degrading device performance [42]. 
If tacit contact is used, while the knowledge received by the robot is not fully accurate, the overall system's efficiency, responsiveness and fault resilience is improved [42]. ## VI Motion algorithms for swarms As stated in last Segment, the characteristics of the Swarm Robotic Systems (SRS) render themselves quite appropriately equipped for target exploration and monitoring. Within this chapter, we address discover and monitoring algorithms which have been or may possibly be used in SRSs. We group such implementations into two major types: one based on Swarm Intelligent (SI) procedures and the additional based on other methods. Such two types of procedures were discussed independently in the following paragraphs. ### _Algorithms established on basics of the swarm intelligence_ That's clear to realize the connection among swarm optimization algorithms and Swarm Robotics (SR) exploration algorithms. We also look for 'right places' inside a domain area use swarms. In addition, as Parker[48] states, all main MR engagement abilities, like goal monitoring, require a ruling mechanism that can be articulated as an optimization problem as described out in the below Table 1. It is therefore obvious that Swarm Intelligence procedures can be used to find ideal approaches to tracking algorithms. Parker[48] also states that such optimization challenges are not commonly regarded as comprehensive optimization problems. The requirement for robots to react in real-time leaves little room to determine globally optimum solutions, except the issue is really small. As a consequence, centralized approaches that integrate only regional cost measures are usually utilized even though they can only estimate the overall solution. Such suboptimal approaches are also appropriate for functional implementations. \begin{table} \begin{tabular}{l l l} \hline & Objective & Metric to optimise \\ \hline Task allocation [43, 33, 34] & Map a set of robots to a set of tasks & Optimise overall system utility. Here, “utility” refers to a combination of the quality at which a robot can execute a given task, and the cost it incurs in executing that task (e.g., power consumption) [43]. \\ Path planning [44, 45, 46, 47] & Generate paths for multiple robots & Minimise a performance metric e.g., combined robot path lengths [45], combined travel times for robots to reach their respective goals [47], combined energy use [46]. \\ Formations [35] & Enable robots to move into a desired formation, or to maintain a specific formation, while moving through the environment & Minimise the error between each current robot position and that robot’s assigned position in the formation. \\ Target tracking or observation [4, 35] & Control cooperative robot motions to ensure that a group of targets remains under observation by the robots & Optimise a combination of the time in which targets are under observation and a robot cost function [4]. \\ \hline \end{tabular} \end{table} Table 1: Optimization of performance challenges As SI procedures concentrate primarily on autonomous local regulation, local connectivity, and the development of global action as a consequence of self-organization[7], they obviously match and allow utilization of the main elements of SRs. It can be considered as the primary explanation that a large number of the current SR seek and monitoring algorithms is focused on popular SI procedures. 
The scanning and monitoring algorithms outlined in the continuation of this chapter are classified as the initial SI procedures are influenced by. The automated procedures mentioned here have primarily used SI concepts for modeling the actions of particular robots, wherever every single member is viewed as an particle in the accompanying SI procedure. * Particle swarm optimization * Bees algorithm * Artificial Bee Colony Optimisation * Ant Colony Optimisation * Bacterial Foraging Optimisation * Glowworm Swarm Optimisation * Biased Random Walk * Firefly Algorithm ### _Other methods to resolve SI motion optimisation_ This article refers several non Swarm Robotic based methods for aim exploration and monitoring purposes considering the local communication and other parameters. * Distributed Kalman Filter (DKF) * Potential fields * Formation-based target following ## VII Comparative analysis Goal detection and monitoring issues with and Swarms can be split into two significant smaller challenges. The initial is a goal state prediction for a lone robot, which includes the determination of the goal locations and speeds in the robot's peripheral vision from its instrument observations. The following is the synchronization of movements among robots to monitor further goals across time[25]. Many work in the field of SR focuses on the latter part of the issue. Table 2 summarizes the different conditions that characterize the problem conditions used in several MR exploration and tracking strategies mentioned in Previous sections. It is apparent as of the desk that only A-CMOMMT [4] tackled the question of monitoring several mobile goals that are greater in quantity than the robot squad. In order to appreciate the great advantages of the SRS, those assets must also be preserved, as defined in Section 1. The specifications of the different procedures presented in Previous section, with an focus on the characteristics required to match the SRSs are tabulated in Table 3. With almost no frontrunner, the rigidity of the algorithm enhances as there is no single main fault factor. This also improves the scalability of the algorithm due to decreased coordination between robots. Scalable Performance is the most valuable stuff for every dispersed MRS. It is often valuable to obtain scalability by utilizing only local contact. Demanding global connectivity might not only hinder the distribution of robots due to their restricted scope of contact, but would also trigger overloading of connectivity as the scale of the swarm grows. The absence of robot identifiers also leads to the maintenance of machine scalability. As it is often possible to restrict the amount of single color or graphic pattern indicators to be produced. In addition, identity assignment is a type of centralisation. There is also a limiting presumption of having a common communication method when it comes to SRS. It is difficult for a key inertial navigation approach to track them all because there are very large numbers of robots and the swarm will work at places where GPS and similar schemes are inaccessible [32]. SRS stresses the flexibility and sophistication of different robots when operating in broad quantities. 
In order to obtain real-time efficiency, this has to be accomplished with lightweight computing requirements. \begin{table} \begin{tabular}{l l l l l l l l} & \multicolumn{7}{l}{Problem Characteristics} \\ & Number of & Targets/trackers & Mobility of & Environment & Prior knowledge & Cooperation & Coordination \\ & targets & ratio & targets & complexity & of target motion & & \\ \hline Pugh \& Martinoli [32] & 1 & \(\ll\) 1 & Stationary & Empty space & N/A & Target estimation & Implicit \\ Parker [4] & Multiple & \(>\)1 & Mobile & Empty space & None & Target allocation & Implicit \\ Derr \& Manic [31] & Known & \(\leq\) 1 & Stationary & Cluttered & N/A & Target estimation & Implicit \\ Wang \& Gu [38] & 1 & \(<\)1 & Mobile & Empty space & None & Target estimation & Implicit \\ Jévtic et al. [34] & Multiple & \(<\)1 & Stationary & Cluttered & N/A & Target allocation & Implicit \\ Lee et al. [35] & \(>\)1 & \(\ll\) 1 & Mobile & Empty space & None & Target estimation & Implicit \\ \hline \end{tabular} \end{table} Table 2: Comparison of search and tracking approaches by problem characteristics Figure 1: Described limits for the isolated swarm robot particle. SI algorithms tend to be computationally simple, and their per-step cost is not explicitly tied to the problem size [31]. The calculations per step in the PSO-based algorithms [32, 31] are particularly simple, involving just one vector comparison and one addition. All the SI algorithms reviewed here were evaluated in simulation in the respective papers, and some were also evaluated in real-robot experiments [33, 37]. Since very large numbers of robots are rarely available for real-world tests, there is a temptation to rely on simulation when evaluating SRSs: most practical robotics projects used only a modest number of physical robots and used simulation to check their systems at larger swarm sizes. The non-SI algorithms were likewise tested both in simulation and with real robots [28, 4, 37, 35]. Many target identification and monitoring systems involve sampling the gradient of some physical, chemical, biological or electromagnetic quantity in order to locate possible sources or objects [12]. A successful search or tracking algorithm operating on such a gradient field should not be prone to plateaux (dead space); including randomness in the algorithm prevents this issue. The BRW approach mentioned above is a good example: if the robots sense a positive gradient, they take larger steps in that direction; if the gradient is absent, the robots move randomly within a set range. Apart from the more common Brownian random walk, a wide variety of species, including open-ocean predatory fish [35] and birds, have provided significant evidence for Levy-flight search patterns [36]. Levy movements are a different form of random path in which the step lengths are drawn from a power-law (heavy-tailed) Levy distribution, rather than the Gaussian steps used in Brownian motion. This results in many short steps interspersed with occasional much longer relocations [36]. Recent SI algorithms such as Glowworm Swarm Optimisation (GSO) [37] and multi-robot search [30] have been enhanced with Levy flights. The problem of several moving targets requires an algorithm that can simultaneously locate many optima, track them, and manage the use of already-found optima while looking for new ones [21].
The procedure must be able to preserve diversity in the population in order to reach several optima at the same time, so that the whole population does not converge to a single optimum before all optima have been discovered [24]. The randomness of Levy flights can provide population diversity against premature convergence, effectively allowing the algorithm to escape local minima [39]; an illustrative sketch of such a Levy-flight step rule is given at the end of this article. In fact, historical information has also been found valuable for dynamic optimization problems [25]. Because the present state of the environment is sometimes similar to previous ones, it can be easier to identify viable solutions in the current situation with the use of historical knowledge [12]. In addition, as previous solutions may provide additional reference points after a change, they may add flexibility to the exploration phase after the search converges [22]. ## VIII Conclusions Owing to their robustness, versatility, scalability and economic performance, swarms have the potential to be used for real-world tasks. Target identification and tracking is one such task, with a number of applications. In this article, exploration and tracking algorithms for robotic swarms have been identified and compared. Among the different problem configurations found in these works, the most difficult but exciting implementation scenario is to use robot swarms to monitor several moving targets. Following many moving targets using a robot swarm is a sophisticated multimodal and dynamic distributed optimization problem. Algorithms for this problem should be able to locate several targets (optima) concurrently, track them, coordinate the handling of the established targets, and discover new targets. The paper compared several algorithms, including GSO and FA, which give global, yet preliminary, solutions to the optimization problem. The dynamics of SI processes are not explicitly tied to the problem scale and are thus well suited for SRS implementations in the real world.
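As referenced above, the following is a minimal sketch (ours, not taken from any of the surveyed papers) of a biased-random-walk step rule with Levy-distributed step lengths for gradient-based source seeking; the gradient field, step sizes and Levy exponent are arbitrary illustrative choices.

```python
import numpy as np

def brw_levy_step(pos, grad, rng, step_up=1.0, step_rand=0.3, alpha=1.5):
    """One step of a biased random walk: move up the sensed gradient when it is
    informative, otherwise take a random step with a Levy (power-law) length."""
    if np.linalg.norm(grad) > 1e-6:                       # gradient present: biased move
        return pos + step_up * grad / np.linalg.norm(grad)
    theta = rng.uniform(0, 2 * np.pi)                     # gradient absent: random heading
    length = step_rand * rng.pareto(alpha)                # heavy-tailed (Levy-like) step length
    return pos + length * np.array([np.cos(theta), np.sin(theta)])

# Toy example: robots climbing a single 2-D Gaussian "signal" centred at the origin.
rng = np.random.default_rng(0)
robots = rng.uniform(-10, 10, size=(5, 2))
for _ in range(200):
    grads = np.array([-2 * r * np.exp(-np.dot(r, r)) for r in robots])  # analytic gradient
    robots = np.array([brw_levy_step(r, g, rng) for r, g in zip(robots, grads)])
print(np.round(robots, 2))
```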
2304.02288
$T$-equivariant motives of flag varieties
We use the construction of the stable homotopy category by Khan-Ravi to calculate the integral $T$-equivariant $K$-theory spectrum of a flag variety over an affine scheme, where $T$ is a split torus associated to the flag variety. More precisely, we show that the $T$-equivariant $K$-theory ring spectrum of a flag variety is decomposed into a direct sum of $K$-theory spectra of the classifying stack $\text{B}T$ indexed by the associated Weyl group. We also explain how to relate these results to the motivic world and deduce classical results for $T$-equivariant intersection theory and $K$-theory of flag varieties.\par For this purpose, we analyze the motive of schemes stratified by affine spaces with group action, that preserves these stratifications. We work with cohomology theories, that satisfy certain vanishing conditions, which are satisfied for example by motivic cohomology and $K$-Theory.
Can Yaylali
2023-04-05T08:19:26Z
http://arxiv.org/abs/2304.02288v3
# T-equivariant motives of flag varieties ###### Abstract We use the construction of the stable homotopy category by Khan-Ravi to calculate the \(K\)-theory of \(T\)-equivariant flag varieties over an affine scheme. We also explain how to relate these results to the motivic world and deduce the classical results for \(T\)-equivariant intersection theory and \(K\)-theory of flag varieties. ###### Contents * 1 Introduction * 1.1 Notation * 2 Motivic setup * 2.1 Scalloped stacks * 2.2 The stable homotopy category for scalloped stacks * 3 \(T\)-equivariant motivic homotopy theory of flag varieties * 3.1 Affine cell decomposition of \(G/B\) * 3.2 Equivariant motives of linear Artin-stacks * 3.3 Application to \(T\)-equivariant cohomology theories of flag varieties Introduction ### Motivation Let \(G\) be a split reductive group over a field \(k\) with split maximal torus \(T\) contained in a Borel \(B\). The geometry of the flag variety \(G/B\) plays an important role in representation theory and the Langlands program. One of these aspects is to analyze the \(T\)-equivariant cycles of Schubert cells. There are various results on the \(T\)-equivariant intersection theory of a flag variety (cf. [1]) or even on the \(T\)-equivariant \(K_{0}\) of it (cf. [13]). For example, \(A_{T}^{\bullet}(G/B)\) has an \(A_{T}^{\bullet}(k)\)-basis given by precisely the classes of the Schubert cells. Analogously, the same is true for \(K_{T}(G/B)\), i.e. the classes of the Schubert cells yield an \(R(T)\)-basis. There is no canonical way to deduce the former from the latter, as the equivariant Riemann-Roch theorem fails without completion (cf. [14]). Also, one could expect that the higher \(T\)-equivariant \(K\)-groups behave similarly to the \(0\)-th part, but as far as the author knows, this has never been shown. One idea to resolve this is a passage to the motivic theory, which implies both results simultaneously. Motives were famously envisioned by Grothendieck. For any variety \(X\) one should be able to encode the analogous behaviors of cohomology theories in an abelian category, the category of motives of \(X\). In our context, the similar behavior of \(T\)-equivariant \(K\)-theory and intersection theory of a flag variety should be seen motivically. This is the starting point of this article. ### Some motivic background Defining a suitable abelian category of motives is a difficult task. One approach that has been studied over the years is to define the derived category of motives directly and attach a \(t\)-structure that recovers the (abelian) category of motives as the heart of this \(t\)-structure. There are many constructions of the derived category of motives by Voevodsky, Morel, Ayoub, Cisinski, Deglise, Spitzweck and more. Under certain assumptions, it is shown that the various definitions of the derived category of motives agree. Also, in the recent constructions, the category of motives comes equipped with a full \(6\)-functor formalism, which has become a powerful tool in analyzing functorial properties of cohomology theories. We will in particular follow the construction of Voevodsky-Morel (cf. [15]). Roughly, for a scheme \(S\) they define the stable homotopy category \(\operatorname{SH}(S)\) as the category of simplicial Nisnevich sheaves over \(S\) with coefficients in \(\mathbb{Z}\), where for any smooth scheme \(X\) over \(S\) one inverts the structure map \(\mathbb{A}_{X}^{1}\to X\) (resp.
the map induced on the associated representable sheaves) and inverts 'tensoring' with \(\mathbb{P}_{S}^{1}\) (on simplicial Nisnevich sheaves there is a closed monoidal structure given by the smash product, cf. _op.cit._). Ayoub has shown in his thesis that the association \(S\mapsto\operatorname{SH}(S)\) defines a functor that supports a full \(6\)-functor formalism, i.e. \(\operatorname{SH}(S)\) is closed monoidal and for finite type morphisms \(f\colon S^{\prime}\to S\) there exists adjunctions of \(f_{!}\dashv f^{!}\), \(f^{*}\dashv f_{*}\) between \(\operatorname{SH}(S^{\prime})\) and \(\operatorname{SH}(S)\) with various compatibilities (cf. [1]). If \(S\) is regular over a field and we work with etale sheaves with \(\mathbb{Q}\)-coefficients (this us usually denoted by \(\operatorname{SH}_{\mathbb{Q},\operatorname{\acute{e}t}}(S)\)), this is equivalent to the construction of Cisinksi-Deglise (cf. [10]). Voevodsky and Morel show that there exists an object \(\operatorname{KGL}\in\operatorname{SH}(S)\), which we call the _motivic \(K\)-theory spectrum_, such that \[\operatorname{Hom}_{\operatorname{SH}(S)}(1_{S}[n],\operatorname{KGL})=K_{n}(S)\] for any \(n\in\mathbb{Z}\), where \(1_{S}\) denotes the \(\otimes\)-unit in \(\operatorname{SH}(S)\). Further, by the work of Spitzweck, we know that there exists an object \(M\mathbb{Z}(n)\in\operatorname{SH}(S)\) such that \[\operatorname{Hom}_{\operatorname{SH}(S)}(1_{S},M\mathbb{Z}(n)[2n])=A^{n}(S)\] (cf. [10]). So, working with objects in the stable homotopy category enables us the deduce results in \(K\)-theory resp. intersection theory. In this way, one can also define a derived category of motives as modules over a chosen ring object in \(\operatorname{SH}(S)\). For example Spitzweck defines a derived category of motives \(\operatorname{DM}(S)\) as \(\operatorname{SH}(S)\)-modules over \(M\mathbb{Z}\) and Cisinksi-Deglise define \(\operatorname{DM}(S,\mathbb{Q})\) as \(\operatorname{SH}(S,\mathbb{Q})\)-modules over \(M\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Q}\) (here \(\operatorname{SH}(S,\mathbb{Q})\) is the stable homotopy category associated to sheaves with rational coefficients). Thus, by understanding \(\operatorname{SH}(S)\) resp. the corresponding sheaves represented by smooth \(S\)-schemes, we can understand their cohomological behavior and their behavior in motivic categories. To further generalize these constructions to the equivariant setting one needs to be careful. Usually this is done by working with quotient stacks and imposing etale descent on \(\operatorname{SH}\). The idea then is to work with quotient stacks that can be smoothly covered by schemes and glue the corresponding motivic categories along the atlas. A drawback of the gluing process is that we loose information on the \(K\)-theoretic side. By the works of Khan-Ravi and Krishna, one is able to see that glued motivic \(K\)-theory spectrum does not represent genuine equivariant \(K\)-theory but rather its completed version (cf. [11] and [12]). This is due to the fact, that on algebraic stacks (in the sense of the [10]) \(K\)-theory does not satisfy etale descent. 
For example, for a field \(k\) we have \(K_{\mathbb{G}_{\mathrm{m}}}(\operatorname{Spec}(k))=\mathbb{Z}[t,t^{-1}]\) (the \(\mathbb{G}_{\mathrm{m}}\)-equivariant \(K\)-theory of \(\operatorname{Spec}(k)\)) but the simplicial limit along the smooth cover \(\operatorname{Spec}(k)\to\operatorname{B}\mathbb{G}_{\mathrm{m}}\) yields \[\lim_{[n]\in\Delta}K(\mathbb{G}_{\mathrm{m}}^{n})=\mathbb{Z}[\![T]\!].\] One can compute this limit as the completion of \(\mathbb{Z}[t,t^{-1}]\) along \((1-t)\), which is the ideal generated by the virtual rank \(0\) bundles in \(K_{\mathbb{G}_{\mathrm{m}}}(\operatorname{Spec}(k))\). ### Back to our setting As we are interested in equivariant \(K\)-theory as an \(R(T)\)-module, where \(R(T)=K_{T}(\operatorname{Spec}(k))\) is the representation ring of \(T\), it suffices to look at our problem over the classifying stack of \(T\), i.e. work with \(p\colon[T\backslash G/B]\to\operatorname{B}T\). The benefit of this viewpoint ist that we only have to deal with representable maps. Further, as \(T\) is a split maximal torus, we know that the the derived category of \(\operatorname{B}T\) with quasi-coherent cohomology is compactly generated. This fact allows for a genuine construction of the stable homotopy category \(\operatorname{SH}\) with a six functor formalism on \(\operatorname{B}T\) (cf. [14]). This can be extended to relatively representable algebraic stacks over \(\operatorname{B}T\) (cf. [11]). In op.cit._ Khan-Ravi show, using the works of Hoyois, that there is a \(K\)-theory spectrum in \(\mathrm{SH}(\mathrm{B}T)\) that represents genuine equivariant \(K\)-theory. Using this version of the stable motivic homotopy category, we can analyze the'motive' of \([T\backslash G/B]\) relative to \(\mathrm{B}T\) with coefficients in an \(E_{\infty}\)-ring spectrum \(M_{\mathrm{B}T}\in\mathrm{SH}(\mathrm{B}T)\), i.e. \(p_{!}p^{!}M_{\mathrm{B}T}\). Because of technical reasons, we have to assume that \(M_{\mathrm{B}T}\) satisfies some vanishing condition, namely for all \(n\geq 0\) we have \[\mathrm{Hom}_{\mathrm{SH}(\mathrm{B}T)}(1_{\mathrm{B}T}\langle n\rangle[-1],M_ {\mathrm{B}T})=0.\] This is not a drawback, as we will see that \((*)\) is satisfied for motivic cohomology and \(K\)-theory. As in the intersection theory case, we can use the Bruhat decomposition of \(G/B\) to stratify the flag variety via \(T\)-invariant affine cells and then compute \(p_{!}p^{!}M_{\mathrm{B}T}\) using this stratification. The theory of Khan-Ravi works even in the case where our base is not a field but an affine scheme. So, we will also prove our results in the most general case we are able to. Thus, from now on let \(S\) be an affine scheme and \((G,B,T)\) be defined over \(S\). First, we have to give a Bruhat decomposition in the this setting (even though this is probably known to many people, we didn't find a reference and proved it by ourselves). **Proposition 1** (Prop. 3.4).: _Let \(S\) be a non-empty scheme (not necessarily affine). Let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\) and a Borel \(B\) containing \(T\). Then the \(S\)-scheme \(G/B\) admits a cellular stratification indexed by the Weyl group of \(T\) in \(G\)._ Afterwards, we analyze the motive of a proper scheme \(X\) endowed with a group action of an \(S\)-group scheme \(H\) and an \(H\)-invariant cellular decomposition. 
In the special case of \(G/B\) with \(T\)-action this yields the structure of \(p_{!}p^{!}M_{\mathrm{B}T}\) as a \(M_{\mathrm{B}T}\)-module with basis given by the classes of the Schubert cells. **Theorem 2** (Cor. 3.11).: _Let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Then_ \[p_{!}p^{!}M_{\mathrm{B}T}\simeq\bigoplus_{w\in W}M_{\mathrm{B}T}\langle l(w)\rangle,\] _where \(W\) is the Weyl group of \(T\) in \(G\)._ Applying this result with the representation of homotopy invariant \(K\)-theory in \(\mathrm{SH}\), we get the decomposition of homotopy invariant \(K\)-theory. **Theorem 3** (Cor. 3.12).: _Let \(S\) be an affine scheme. Further, let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Then we have_ \[\mathrm{KH}([T\backslash G/B])\simeq\bigoplus_{w\in W}\mathrm{KH}(\mathrm{B}T)\] On the \(0\)-th homotopy group we recover an integral version of the classical result, that \(K_{T}(G/B)\) as an \(R(T)\)-module is generated by the \(T\)-equivariant classes of the Schubert cells. For the higher equivariant \(K\)-groups, we get a similar statement. **Corollary 4** (3.12.1).: _Let \(S\) be a noetherian regular affine scheme. Further, let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Let \(R(T)\) denote the integral representation ring of \(T\). Then we have an isomorphism of \(R(T)\)-modules_ \[K_{i}^{T}(G/B)\coloneqq K_{i}([T\backslash G/B])\cong\bigoplus_{w\in W}K_{i}^ {T}(S).\] Now let us assume that \(S=\operatorname{Spec}(k)\) is the spectrum of a field. On the higher homotopy groups, we similarly get an isomorphism of \(R(T)_{\mathbb{Q}}\)-modules via tensoring with the higher \(K\)-groups of the ground field. **Corollary 5** (3.12.2).: _Let \(k\) be a field. Further, let \(G\) be a split reductive \(k\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Then we have an isomorphism of \(R(T)_{\mathbb{Q}}\)-modules_ \[K_{i}^{T}(G/B)_{\mathbb{Q}}\cong K_{i}(k)_{\mathbb{Q}}\otimes_{\mathbb{Q}}K_{ T}(G/B)_{\mathbb{Q}}.\] Using the universal property of SH, we can also the lisse-extension of SH and the etale localized rational stable homotopy category \(\operatorname{SH}_{\mathbb{Q},\text{\'{e}t}}\) (we have to assert some conditions on the base as seen in Section 3.3.2). In this way, we can extend Theorem 2 to the case of Beilinson motives and recover the analogous result on \(A_{T}^{*}(G/B)\) and completed \(K_{0}\). **Proposition 6** (3.24.2 and 3.24.1).: _Let \(S=\operatorname{Spec}(k)\) be the spectrum of a field. Further, let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Then on completed equivariant \(K\)-theory, we have_ \[K_{0}^{T}(G/B)_{\mathbb{Q}}^{\wedge I_{T}}\cong K_{0}(G/B)_{\mathbb{Q}}\otimes _{\mathbb{Q}}K_{T}(S)_{\mathbb{Q}}^{\wedge I_{T}},\] _where \(I_{T}\) is the ideal generated by virtual rank \(0\)-bundles in \(R(T)_{\mathbb{Q}}\)._ _On Chow rings, we recover_ \[A_{T}^{*}(G/B)_{\mathbb{Q}}\cong A^{*}(G/B)_{\mathbb{Q}}\otimes_{\mathbb{Q}}A _{T}^{*}(S)_{\mathbb{Q}}\] ### Notation #### Categorical Notation In this paper, we will without further mention use the language of \(\infty\)-categories (cf. [10]). We will identify \(1\)-categories with their Nerve and regard them as \(\infty\)-categories. 
In particular, we will identify the category of sets with the full sub \(\infty\)-category of \(0\)-truncated \(\infty\)-groupoids and the category of groupoids with the full sub \(\infty\)-category of \(1\)-truncated \(\infty\)-groupoids. Likewise, when we say _full subcategory_ of an \(\infty\)-category, we will always mean a full sub \(\infty\)-category. For the rest of this article, we fix an uncountable inaccessible regular cardinal \(\kappa\) and _small_ will mean \(\kappa\)-small. Without further mention, if needed, we will assume smallness of the categories involved in this article. Indexing sets will always be small. We denote by \(\mathbf{Cat}_{\infty}\) the \(\infty\)-category of small \(\infty\)-categories and by \(\infty\)**-Grpd** the \(\infty\)-category of small \(\infty\)-groupoids. A _presheaf_\(\mathcal{F}\) on an \(\infty\)-category \(\mathcal{C}\) is a functor \(\mathcal{F}\colon\mathcal{C}^{\text{op}}\to\infty\)**-Grpd**. If \(\mathcal{C}\) admits a Grothendieck topology \(\tau\), we will say that \(\mathcal{F}\) is a _\(\tau\)-sheaf_ if it is a sheaf with respect to the topology \(\tau\). #### Algebraic geometric notation Let \(S\) be a scheme. By an algebraic stack \(X\) over \(S\), we mean an etale-sheaf of groupoids on \(S\)-schemes, such that the diagonal of \(X\) is representable by an algebraic space and there exists a scheme \(U\) and a smooth effective epimorphism \(U\to X\). A morphism of algebraic stacks over \(S\) will always be an \(S\)-morphism. #### Structure of this article In the first section we recall some facts about the stable homotopy category in our setting after Khan-Ravi. Afterwards, we prove some basic facts, we need later on. Our next step is to show the existence of the Bruhat decomposition of \(G/B\) over arbitrary schemes. We then continue to compute the motive of strict linear schemes with group action and apply this to the \([T\backslash G/B]\). We conclude this article, by applying our result to integral and rational homotopy invariant \(K\)-theory and their homotopy groups. Finally, we discuss how one extends these results to other motivic categories and get the classical results on Chow rings. #### Acknowledgement I would like to thank Torsten Wedhorn for remarks on the earlier versions of this article and sketching me his idea on the Bruhat decomposition. Also, I would like to thank Simon Pepin Lehalleur, for pointing out an error in the first version of this manuscript and the discussion with him afterwards, that lead to the corrected version. Further, I would also like to thank Timo Richarz and Thibaud van den Hove for various discussions concerning this paper. This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 _Geometry and Arithmetic of Uniformized Structures_, project number 444845124 and by the LOEWE grant 'Uniformized Structures in Algebra and Geometry'. ## 2 Motivic setup In this section, we fix an affine scheme \(S\), a split reductive \(S\)-group scheme \(G\) together with a Borel pair \((B,T)\) consisting of a split maximal torus \(T\) inside a Borel \(B\) in \(G\). Further, any algebraic stack will be considered as an algebraic stack over \(S\) and any morphism will be relative over \(S\). In this article we want to compute the motive of \(T\)-equivariant flag varieties. In particular, we are interested in motives of Artin stacks. There are several approaches on how to extend the theory of motives to Artin stacks. 
Recall from the introduction that one can right Kan extend \(\mathrm{DM}_{\mathbb{Q}}\simeq\mathrm{SH}_{\mathbb{Q},\mathrm{\acute{e}t}}\) from schemes to Artin stacks and get a full 6-functor formalism. This works fine until one wants to compute the motivic cohomology in terms of K-theory. One can show that \(K\)-theory does not satisfy etale descent for Artin stacks1. For example, let \([X/G]\) be a smooth Artin stack over a field \(k\) with \(G\) split reductive. Then we have a map \(K([X/G])\to K^{\text{\'{e}t}}([X/G])\), where \(K^{\text{\'{e}t}}\) is the right Kan extended \(K\)-theory from algebraic spaces to etale sheaves. This map is not an equivalence but realizes \(K^{\text{\'{e}t}}_{0}([X/G])\) as the completion of \(K^{G}_{0}(X)\) along the augmentation ideal \(I_{G}\subseteq R(G)=K(\text{Rep}(G))\) (note that \(K^{G}_{0}(X)\) is in general not \(I_{G}\)-complete, as seen for the \(\mathbb{G}_{m}\)-equivariant \(K\)-theory of a point, cf. Example 3.22). This is an instance of the comparison between the Borel construction for \(K\)-theory and equivariant \(K\)-theory (cf. [11]). Also for non-rational coefficients one has to be careful, as etale descent is not even satisfied for schemes. Hence, some care is needed to construct a full \(6\)-functor formalism for SH with non-rational coefficients. This was done by Chowdhury in his thesis by gluing along smooth morphisms with Nisnevich local sections (cf. [10]). This is equivalent to gluing along smooth covers, the so-called _lisse-extension_ (cf. [11, Cor. 12.28]). But again, computing the motivic cohomology along the lisse-extended \(K\)-theory spectrum yields the completion of \(K\)-theory along the augmentation ideal (cf. [11, Ex. 12.22]). In the case of \(\mathcal{X}:=[T\backslash G/B]\) there is a construction by Khan-Ravi of a stable homotopy category \(\operatorname{SH}(\mathcal{X})\) that admits a full \(6\)-functor formalism and a motivic spectrum \(\operatorname{KGL}_{\mathcal{X}}\in\operatorname{SH}(\mathcal{X})\) such that \[\operatorname{Hom}_{\operatorname{SH}(\mathcal{X})}(1_{\mathcal{X}},\operatorname{KGL}_{\mathcal{X}})=\operatorname{KH}(\mathcal{X}),\] where \(\operatorname{KH}(\mathcal{X})\) denotes the homotopy invariant \(K\)-theory of \(\mathcal{X}\) (cf. [11]). The quotient stack \(\mathcal{X}\) belongs to a certain class of algebraic stacks, called _scalloped_ (see below), for which the stable homotopy category is also defined. At the end of this article, we will look at the implications for motivic cohomology in various frameworks (cf. Section 3.3.2). ### Scalloped stacks We recall the necessary definitions from [11, §2]. We will use the terminology of _loc.cit._ **Definition 2.1**.: Let \(H\) be a group scheme over \(S\). We say that \(H\) is _nice_ if it is an extension of an etale group scheme of order prime to the residue characteristics of \(S\) by a group scheme of multiplicative type. **Example 2.2** (cf. [1]).: An important example of a nice group scheme is a torus. One can show that any nice group scheme is linearly reductive. If \(S\) is the spectrum of a field of characteristic \(p>0\), then linearly reductive group schemes are also nice. Let \(H\) be a nice \(S\)-group scheme and \(X\) a quasi-affine scheme with an action by \(H\). Hoyois constructs an equivariant version of the stable homotopy category \(\operatorname{SH}^{H}(X)\) with a full \(6\)-functor formalism in this context (cf. [14]).
Khan and Ravi extend this construction via gluing along Nisnevich squares to a class of algebraic stacks, which they call _scalloped_. We don't want to give an explicit scalloped stack, as it is a bit technical, but give an important example and some properties of scalloped stacks, for the definition and details we refer to [11]. **Proposition 2.3** ([14, Cor 2.13, Thm. 2.14]).: 1. _Let_ \(f\colon X^{\prime}\to X\) _be a morphism of qcqs algebraic stacks. If_ \(X\) _is scalloped and_ \(f\) _is representable, then_ \(X^{\prime}\) _is scalloped._ 2. _Let_ \(X\) _be a qcqs algebraic space over_ \(S\) _with_ \(H\)_-action, where_ \(H\) _is a nice_ \(S\)_-group scheme. Then_ \([X/H]\) _is scalloped._ Throughout this article, we are interested in quotients of qcqs schemes by the torus \(T\), such as \([T\backslash G/B]\). The above proposition tells us that these stacks are scalloped. In particular, we can work with the formalism of [14]. **Notation 2.4**.: We set \(\mathbf{Rep}_{S}\) to be the \(\infty\)-category of morphisms \(X^{\prime}\to X\) of algebraic stacks over \(S\) that are representable. We denote by \(\mathbf{Rep}_{S}^{\mathrm{ft}}\) the full subcategory of \(\mathbf{Rep}_{S}\) consisting of morphisms of finite type over \(S\). Further, we denote by \(\mathbf{Sc}_{S}\) resp. \(\mathbf{Sc}_{S}^{\mathrm{ft}}\) the full subcategories of \(\mathbf{Rep}_{S}\) resp. \(\mathbf{Rep}_{S}^{\mathrm{ft}}\) consisting of scalloped stacks. ### The stable homotopy category for scalloped stacks Let us quickly recall the construction of SH for scalloped stacks from [14]. For a scalloped stack \(X\) let us set \(\mathbf{Sm}_{X}\) as the full subcategory of \((\mathbf{Rep}_{S})_{/X}\) consisting of morphisms \(X^{\prime}\to X\) that are smooth and representable. We define the homotopy category \(\mathrm{H}(X)\) of \(X\), as the \(\infty\)-category of Nisnevich sheaves \(F\) from \(\mathbf{Sm}_{X}\) to \(\infty\)-\(\mathbf{Grpd}\) that are homotopy invariant, i.e. for any \(X^{\prime}\in\mathbf{Sm}_{X}\) and any vector bundle \(p\colon V\to X^{\prime}\), we have that the induced map \(F(X^{\prime})\to F(V)\) is an equivalence. In the classical motivic theory, for example of Cisinski-Deglise, one obtains the stable homotopy category by adjoining \(\otimes\)-inverses of Thom-motives of finite locally free sheaves. In our case, we can associate to any finite locally free module \(\mathcal{E}\) over \(X\) an object \(\langle\mathcal{E}\rangle\in\mathrm{H}(X)\), called the _Thom-anima_ (cf. [14, SS4]). Now we obtain the stable homotopy category, by formally \(\otimes\)-inverting these Thom-anima. One should note that formal \(\otimes\)-inversion of objects in \(\infty\)-categories is more delicate and we refer to _op.cit._ for references and details. **Definition 2.5** ([14]).: Let \(X\) be a scalloped algebraic stack. The _stable homotopy category of \(X\)_ is defined as the \(\infty\)-category \[\mathrm{SH}(X)\coloneqq\mathrm{H}[\langle\mathcal{E}\rangle^{\otimes-1}],\] where \([\langle\mathcal{E}\rangle^{\otimes-1}]\) denotes the formal inversion of all Thom-anima associated to any finite locally free module \(\mathcal{E}\) over \(X\). The most important feature of the stable homotopy category for us is that the assignment \(X\mapsto\mathrm{SH}(X)\) can be upgraded to a functor with a full \(6\)-functor formalism homotopy invariant \(K\)-theory resp. motivic cohomology of \(X\) can be represented by objects in \(\mathrm{SH}(X)\). 
Let us quickly recall the \(6\)-functor formalism for scalloped stacks on the stable homotopy category \(\mathrm{SH}\). We also recall the comparison with \(K\)-theory. **Theorem 2.6** ([16]).: _For any scalloped stack \(X\) there is an \(\infty\)-category \(\operatorname{SH}(X)\) with the following properties_ 1. \(\operatorname{SH}(X)\) _is a stable, presentable, closed symmetric monoidal_ \(\infty\)_-category. The tensor product is colimit preserving and the inner_ \(\operatorname{Hom}\) _will be denoted by_ \(\underline{\operatorname{Hom}}\)_. The_ \(\otimes\)_-unit will be denoted by_ \(1_{X}\)_._ 2. _The assignment_ \(X\mapsto\operatorname{SH}(X)\) _can be upgraded to a presheaf of symmetric monoidal presentable_ \(\infty\)_-categories with colimit preserving functors on the site of scalloped stacks_ \[\operatorname{SH}^{*}\colon(\mathbf{Sc}_{S})^{\operatorname{op}}\to\mathbf{Cat }_{\infty}^{\otimes},\ X\mapsto\operatorname{SH}(X),\ f\mapsto f^{*}.\] _For any morphism_ \(f\colon X\to Y\in\mathbf{Sc}_{S}\)_, there is an adjunction_ \[f^{*}\colon\operatorname{SH}(Y)\xleftrightarrow{}\operatorname{SH}(X)\colon f _{*}.\] 3. _(Homotopy invariance) For every vector bundle_ \(p\colon V\to X\) _of scalloped stacks, the unit of the_ \(*\)_-adjunction_ \[1\to p_{*}p^{*}\] _is an equivalence._ 4. _If_ \(f\in\mathbf{Sc}_{S}\) _is smooth morphism, then_ \(f^{*}\) _has a left adjoint, denoted_ \(f_{\sharp}\) _that is a moprhism of_ \(\operatorname{SH}(Y)\)_-modules._ 5. _The assignment_ \(X\mapsto\operatorname{SH}(X)\) _can be upgraded to a presheaf of presentable_ \(\infty\)_-categories_ \[\operatorname{SH}:(\mathbf{Sc}_{S}^{\operatorname{ft}})^{\operatorname{op}}\to \mathbf{Cat}_{\infty},\ X\mapsto\operatorname{SH}(X),\ f\mapsto f^{\dagger}\] _from the_ \(\infty\)_-category of scalloped stacks with finite type representable morphisms. For each_ \(f\colon X\to Y\) _in_ \(\mathbf{Sc}_{S}^{\operatorname{ft}}\)_, there is an adjunction_ \[f_{\dagger}:\operatorname{SH}(X)\rightleftarrows\operatorname{SH}(Y):f^{ \dagger}.\] _For any factorization_ \(f=p\circ j\) _with_ \(j\) _an open immersion and_ \(p\) _a proper representable map, there is a natural equivalence_ \(f_{\dagger}\cong p_{*}j_{\sharp}\)_._ 6. _(Localization) If_ \(i\colon Z\to X\) _is a closed immersion of scalloped stacks with open complement_ \(j\colon U\to X\)_, then we have the following cofiber sequences_ \[j_{\sharp}j^{*}\to\operatorname{id}\to i_{*}i^{*}\] \[i_{\dagger}i^{\dagger}\to\operatorname{id}\to j_{*}j^{*}.\] 7. _There is a map_ \(K(X)\to\operatorname{Aut}(\operatorname{SH}(X))\)_, assigning for any_ \(\alpha\in K(X)\) _its twist_ \(\langle\alpha\rangle\)_. If_ \(\alpha\) _is given by a finite locally free sheaf_ \(\mathcal{E}\)_, then_ \(\langle\mathcal{E}\rangle=p_{\sharp}s_{*}1_{X}\) _(this agrees with the previously considered Thom-anima), where_ \(p\colon V(\mathcal{E})\to X\) _is the projection of the associated vector bundle and_ \(s\) _its zero section. Further, any of the_ \(6\)_-operations commute with_ \(\langle\alpha\rangle\) _in a suitable sense (cf._ _[_16_, Rem. 7.2]__). We set_ \(\langle n\rangle\coloneqq\langle\mathcal{O}_{X}^{n}\rangle\) _._ 3. _The canonical projection_ \(p\colon\mathbb{G}_{\mathbb{m}}\times X\to X\) _yields a morphism_ \(p_{\sharp}p^{*}1_{X}[-1]\to 1_{X}[-1]\) _whose fiber we denote_ \(1_{X}(1)\)_. For an_ \(M\in\operatorname{SH}(X)\)_, we denote its_ \(n\)_-Tate twist by_ \(M(n)\coloneqq M\otimes 1_{X}(1)^{\otimes n}\)_. We have_ \(\langle n\rangle\simeq(n)[2n]\)_._ 4. 
_(Purity) Let_ \(f\) _be a smooth representable morphism of scalloped algebraic stacks with cotangent complex_ \(L_{f}\)_, then_ \[f^{!}\simeq f^{*}\langle L_{f}\rangle.\] 5. _For a cartesian diagram in_ \(\mathbf{Sc}_{S}^{ft}\)__ \[\begin{CD}W@>{g^{\prime}}>{}>X\\ @V{f^{\prime}}V{}V@V{f}V{f}V\\ Y@>{g}>{}>Z\end{CD}\] _with_ \(g\in\mathbf{Sc}_{S}^{\operatorname{ft}}\)_, we have_ \[g^{!}f_{*}\xrightarrow{\simeq}f_{*}^{\prime}g^{\prime!},\] \[f^{*}g_{!}\xrightarrow{\simeq}g_{!}^{\prime}f^{\prime*}.\] 6. _For_ \(f\colon Z\to Y\) _in_ \(\mathbf{Sc}_{S}^{\operatorname{ft}}\)_, the functor_ \(f_{!}\) _satisfies the projection formulas (cf._ _[_11_, Thm. 7.1]__)._ 7. _There exists an_ \(E_{\infty}\)_-ring spectrum_ \(\operatorname{KGL}_{X}\in\operatorname{SH}(X)\) _such that_ \[\operatorname{KH}(X)=\underline{\operatorname{Hom}}_{\operatorname{SH}(X)}(1_ {X},\operatorname{KGL}_{X}),\] _that is functorial in smooth representable morphisms and satisfies Bott periodicity for twist by finite locally free sheaves (cf._ _[_11_, Thm. 10.7]__)._ In the rest of this article, we want to focus on modules over an \(E_{\infty}\)-ring spectrum \(M\in\operatorname{SH}(X)\), where \(X\) is scalloped. The reason is that we will need a vanishing assumption (see below) that is satisfied for example for the homotopy invariant \(K\)-theory spectrum. As we are interested in \(T\)-equivariant \(K\)-theory of the flag variety, this is not a strong restriction. For oriented cohomology theories, we can relax our situation from flag varieties to linear schemes. To be more precise, the flag variety is stratified by affine spaces. If we are interested in oriented cohomology theories, it is enough to consider objects that are stratified by vector bundles. The next example, will show how this idea works. But before we come to the example let us fix some notation. **Notation 2.7**.: Let \(X\) be a scalloped algebraic stack and \(M_{X}\in\operatorname{SH}(X)\) an \(E_{\infty}\)-ring spectrum. Then we denote the \(\infty\)-category of \(M_{X}\)-modules in \(\operatorname{SH}(X)\) with \(\operatorname{SH}(X)_{M}\). Further, for any representable morphism \(f\colon Y\to X\) of scalloped algebraic stacks, we denote \(M_{Y}\coloneqq f^{*}M_{X}\). **Remark 2.8**.: Let \(f\colon X\to Y\) be a representable morphism of scalloped algebraic stacks. Further, let \(M_{Y}\in\operatorname{SH}(Y)\) be an \(E_{\infty}\)-ring spectrum. Tensoring with \(M_{Y}\) (resp. \(M_{X}\)) induces a pullback functor \(f_{M}^{*}\colon\operatorname{SH}(Y)_{M}\to\operatorname{SH}(X)_{M}\). As the \(*\)-pullback for \(\operatorname{SH}\) is monoidal, we see that its right adjoint is lax-monoidal. In particular, we get an adjunction As remarked in Theorem 2.6 (iv), if \(f\) is smooth, the left adjoint \(f_{\sharp}\) of \(f^{*}\) is a morphism of \(\operatorname{SH}(Y)\)-modules and in particular, induces a left adjoint \(f_{\sharp M}\) of \(f_{M}^{*}\) (cf. [11, SS7.2]). By conservativity of the forgetful-functor \(\operatorname{SH}_{M}\to\operatorname{SH}\) (here we see \(\operatorname{SH}_{M}\) as a functor from scalloped algebraic stacks with representable morphisms to symmetric monoidal presentable \(\infty\)-categories), we can use [16, Thm 7.1] to obtain a \(6\)-functor formalism on \(\operatorname{SH}_{M}\) that satisfies properties (i)-(xi) of Theorem 2.6. **Notation 2.9**.: In the rest of this article, we will work with module spectra over some fixed \(E_{\infty}\)-ring spectrum. 
Thus, we will drop the subscript in the \(6\)-functor formalism indicating the fixed \(E_{\infty}\)-ring spectrum as seen in Remark 2.8. **Definition 2.10**.: Let \(f\colon X\to Y\) be a representable morphism of finite type of scalloped algebraic stacks. Further, let \(M_{Y}\in\operatorname{SH}(Y)\) be an \(E_{\infty}\)-ring spectrum in \(\operatorname{SH}(Y)\). Then we define the _motive of \(X\) with values in \(M_{Y}\)_ (resp. the _compactly supported motive of \(X\) with values in \(M_{Y}\)_) as \(M_{Y}(X)\coloneqq f_{!}f^{!}M_{Y}\) (resp. \(M_{Y}^{c}(X)\coloneqq f_{*}f^{!}M_{Y}\)) in \(\operatorname{SH}(Y)_{M}\). **Example 2.11**.: Let \(\mathcal{E}\) be a finite locally free sheaf on a scalloped stack \(X\) and let \(p\colon V(\mathcal{E})\to X\) be the associated vector bundle. Further, let \(M_{X}\in\operatorname{SH}(X)\) be an \(E_{\infty}\)-ring spectrum that admits an orientation, i.e. there is a functorial equivalence \(M_{X}\langle\mathcal{F}\rangle\simeq M_{X}\langle n\rangle\) for any finite locally free \(\mathcal{O}_{X}\)-module \(\mathcal{F}\) of rank \(n\). Note that by homotopy invariance, we have that \(p_{*}p^{*}\simeq\operatorname{id}\simeq p_{\sharp}p^{*}\in\operatorname{SH}(X)_{M}\). In particular, by purity we have that \(M_{X}(V(\mathcal{E}))\) and \(M_{X}^{c}(V(\mathcal{E}))\) are equivalent to \(M_{X}\langle\mathcal{E}\rangle\). Further, as we can orientate the unit and by the Mayer-Vietoris sequence2 Footnote 2: Note that \(f_{*}f^{*}\) and \(f_{!}f^{!}\) satisfy Nisnevich descent and thus yield a Mayer-Vietoris sequence for \(M_{X}\) and \(M_{X}^{c}\), i.e. for open substacks \(U,U^{\prime}\subseteq V(\mathcal{E})\), we have a fiber sequence of the form \[M_{X}(U\cap U^{\prime})\to M_{X}(U)\oplus M_{X}(U^{\prime})\to M_{X}(V)\] and similarly, for \(M_{X}^{c}\) **Lemma 2.12** (Localization sequence).: _Let \(f\colon X\to Y\) be a representable morphism of scalloped algebraic stacks over \(S\) of finite type. Further, let \(M_{Y}\in\operatorname{SH}(Y)\) be an \(E_{\infty}\)-ring spectrum. Let \(i\colon Z\hookrightarrow X\) be a closed immersion over \(Y\) with open complement \(j\colon U\to X\). Further, let us denote \(f_{0}\coloneqq f\circ j\) and \(\bar{f}\coloneqq f\circ i\). Then for any \(M_{X}\)-module \(N\) in \(\operatorname{SH}(X)\) there exists the following fiber sequence in \(\operatorname{SH}(Y)_{M}\)_ \[\bar{f}_{*}\bar{f}^{!}N\to f_{*}f^{!}N\to f_{0*}f^{!}_{0}N.\] Proof.: Applying the localization sequence \[i_{*}i^{!}=i_{!}i^{!}\to\operatorname{id}\to j_{*}j^{*}=j_{*}j^{!}\] to \(f^{!}N\) yields \[i_{*}\bar{f}^{!}N\to f^{!}N\to j_{*}f^{!}_{0}N.\] Now applying \(f_{*}\) to this sequence yields the result. We will need the following vanishing assumption later on in this article. In motivic cohomology, this is the analogue of the vanishing of negative higher Chow groups. For the \(K\)-theory spectrum, this will follow from the vanishing of negative \(K\)-groups. **Assumption 2.13**.: Let \(X=\operatorname{B}H\) be the classifying stack of a nice group scheme \(H\) and \(M_{X}\in\operatorname{SH}(X)\) be an \(E_{\infty}\)-ring spectrum. Further, let \(n>0\); then we have \[\operatorname{Hom}_{\operatorname{SH}(X)}(1_{X}\langle n\rangle[-1],M_{X})=0.\] Assumption 2.13 is satisfied at least for the two cohomology theories that are considered in this article, namely homotopy invariant \(K\)-theory and motivic cohomology. **Example 2.14**.: Assume \(S\) is noetherian.
Let \(m<0\), \(n\in\mathbb{Z}\) and \(X=\operatorname{B}H\) be the classifying stack for a nice group scheme \(H\). Further, let us consider the \(K\)-theory spectrum \(\operatorname{KGL}_{X}\in\operatorname{SH}(X)\). Then we have \[\operatorname{Hom}_{\operatorname{SH}(X)_{\operatorname{KGL}}}(\operatorname{KGL}_{X}\langle n\rangle[m],\operatorname{KGL}_{X})\simeq\operatorname{Hom}_{\operatorname{SH}(X)_{\operatorname{KGL}}}(\operatorname{KGL}_{X}[m],\operatorname{KGL}_{X})\simeq\pi_{m}\operatorname{KH}(X),\] where the first equivalence follows from Bott-periodicity (cf. Theorem 2.6 (xii)). As \(H\) is nice, the spectrum \(\operatorname{KH}(X)\) is connective3. In particular, since \(m<0\), we see that \(\operatorname{KGL}_{X}\) satisfies Assumption 2.13. Footnote 3: Here we use that the derived category of complexes with quasi-coherent cohomology \(D_{\operatorname{qc}}(X)\) is compactly generated, as \(H\) is nice (combine [1, Prop. 6.14] and [1, Rem. 12.2]). Then connectivity follows from [19, Thm. 5.7] (here we need that \(S\) is noetherian). Now let us assume that \(S=\operatorname{Spec}(k)\) is the spectrum of a field and let us consider the motivic cohomology spectrum \(M\mathbb{Z}\in\operatorname{SH}(X)\) (cf. [1, Con. 10.16]). Then \[\operatorname{Hom}_{\operatorname{SH}(X)_{M\mathbb{Z}}}(M\mathbb{Z}\langle n\rangle[m],M\mathbb{Z})=\operatorname{Hom}_{\operatorname{SH}(X)}(1_{X},M\mathbb{Z}\langle-n\rangle[-m])=A^{-n}(X,m),\] which vanishes as \(m\) is negative. The above example shows that the motivic \(K\)-theory spectrum and the motivic cohomology spectrum satisfy an even stronger condition than Assumption 2.13. Let us give this stronger assumption a number. **Assumption 2.15**.: Let \(X=\mathrm{B}H\) be the classifying stack of a nice group scheme \(H\) and \(M_{X}\in\mathrm{SH}(X)\) be an \(E_{\infty}\)-ring spectrum. Further, let \(n\in\mathbb{Z}\); then we have \[\mathrm{Hom}_{\mathrm{SH}(X)}(1_{X}\langle n\rangle[-1],M_{X})=0.\] ## 3 \(T\)-equivariant motivic homotopy theory of flag varieties Let \(S\) be a non-empty scheme, \(G\) be a split reductive \(S\)-group scheme and \(T\) a maximal split torus contained in a Borel \(B\subseteq G\). We want to understand the motive of \([T\backslash G/B]\). In this case the computations are rather straightforward, as \(G/B\) is cellular, i.e. it has a stratification by vector bundles. So, one can filter \(G/B\) by \(T\)-invariant closed subschemes such that the successive quotients are given by vector bundles (usually this property is called _linear_ in the literature). The existence of such a stratification is well known and referred to as the Bruhat decomposition of \(G/B\). As we have only found references for split reductive groups over a field, we first recall the existence of an affine cell decomposition of \(G/B\) over \(S\). Afterwards, we can analyze the motive of the scalloped stack \([T\backslash G/B]\) over \(\mathrm{B}T\). Note, however, that the \(6\)-functor formalism of SH only works for representable morphisms of scalloped stacks, so we cannot compute the motive of \([T\backslash G/B]\) over \(S\). We will however explain in Section 3.3.2 how this extends to the lisse-extension of SH and Beilinson motives \(\mathrm{DM}_{\mathbb{Q}}\), which both have a \(6\)-functor formalism for non-representable morphisms. ### Affine cell decomposition of \(G/B\) In this section, we will show that \(G/B\) has an affine cell decomposition.
We will use the Bruhat decomposition of \(G\) and pull back the induced stratification on \([B\backslash G/B]\) to \(G/B\). This construction is compatible with base change and thus, we will reduce to the case \(S=\mathrm{Spec}(\mathbb{Z})\). Then this is a classical statement on Schubert cells. This was communicated to us by Torsten Wedhorn. **Definition 3.1**.: A _stratification_ of a scheme \(X\) is a map \(\iota\colon\coprod_{i\in I}X_{i}\to X\), where \(I\) is a set, each \(X_{i}\) is a scheme, \(\iota\) is a bijection on the underlying topological spaces, \(\iota_{|X_{i}}\) is an immersion and the topological closure of \(\iota(X_{i})\) in \(X\) is the union of subsets of the form \(\iota(X_{j})\). The subschemes \(\iota(X_{i})\) of \(X\) are called _strata_. **Definition 3.2**.: An _\(S\)-cell_ is an \(S\)-scheme isomorphic to a vector bundle. A _cellular \(S\)-scheme_\(X\) is a separated \(S\)-scheme of finite type which is smooth and admits a stratification whose strata are cells. First, let us recall that \(G\) admits a Bruhat-decomposition indexed by the Weyl group \(W\) of a split maximal torus \(T\) inside \(G\). **Lemma 3.3** (Bruhat-decomposition).: _Let \(G\) be a split reductive \(S\)-group scheme and \(B\) a Borel containing a split maximal torus. Then \(G\) admits a stratification \(\coprod_{w\in W}BwB\to G\), where \(W\) denotes the Weyl group of \(T\) in \(G\)._ Proof.: See [1, Exp. XXII, Thm. 5.7.4]. The Bruhat-decomposition yields a stratification \(\coprod_{w\in W}\left[B\backslash BwB/B\right]\to\left[B\backslash G/B\right]\). The stack \(\left[B\backslash G/B\right]\) can be identified with the quotient \(G/B\times^{G}G/B\), where \(G\) acts diagonally via conjugation. For any \(S\)-scheme \(T\) the set \(G/B(T)\) is in bijection with Borel-subschemes of \(G_{T}\) (cf. [1, Exp. XXII, Cor. 5.8.3]). Thus, we have a map \[G/B\to\left[B\backslash G/B\right]\] given on \(T\)-valued points by \(B^{\prime}\mapsto(B^{\prime},B)\). The pullback of the above stratification on \(\left[B\backslash G/B\right]\) via this map yields a stratification \(\coprod_{w\in W}C_{w}\to G/B\). We call the \(C_{w}\) the _Schubert cells of \(G/B\)_. On \(W\) we have a length function, which we denote by \(l\). Then we claim that \(C_{w}\cong\mathbb{A}_{S}^{l(w)}\). In particular, \(G/B\) has a cellular stratification. **Proposition 3.4**.: _Let \(S\) be a non-empty scheme. Let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\) and a Borel \(B\) containing \(T\). Then the Schubert cell \(C_{w}\) is isomorphic to \(\mathbb{A}_{S}^{l(w)}\)._ _In particular, the stratification of \(G/B\) by Schubert cells \(C_{w}\) is cellular._ Proof.: As the construction of the Schubert cells is compatible with base change, we may assume without loss of generality that \(S=\operatorname{Spec}(\mathbb{Z})\). Then the proposition follows from [1, II. 13]. ### Equivariant motives of linear Artin-stacks In the following we assume that \(S\) is an affine scheme and \(H\) an \(S\)-group scheme. Further, every scheme will be qcqs of finite type over \(S\). We fix an \(E_{\infty}\)-ring spectrum \(M_{\mathrm{B}H}\in\operatorname{SH}(\mathrm{B}H)\) that satisfies Assumption 2.13.
**Definition 3.5**.: A _linear \(S\)-scheme_\((X,(X_{n})_{n\geq 0})\) consists of an \(S\)-scheme \(X\) and a filtration of closed subschemes \[\emptyset=X_{-1}\hookrightarrow X_{0}\hookrightarrow X_{1}\hookrightarrow\cdots\hookrightarrow X_{n}\hookrightarrow\cdots\hookrightarrow X\] such that each \(X_{n-1}\to X_{n}\) is a closed immersion, each \(X_{n}\setminus X_{n-1}\) is isomorphic to a coproduct of vector bundles over \(S\) and the natural closed immersion \(\operatorname{colim}_{n}X_{n}\hookrightarrow X\) is an isomorphism on the reduced loci. If \(X_{n}\backslash X_{n-1}\) is isomorphic to a coproduct of affine spaces over \(S\), we call \((X,(X_{n})_{n\geq 0})\)_affinely linear_. **Definition 3.6**.: Let \((X,(X_{n})_{n\geq 0})\) be a linear \(S\)-scheme such that \(X\) admits an \(H\)-action. We say that \((X,(X_{n})_{n\geq 0})\) is _\(H\)-equivariant_ if each of the \(X_{n}\) is stabilized by \(H\). **Remark 3.7**.: Let \((X,(X_{n})_{n\geq 0})\) be an \(H\)-equivariant linear \(S\)-scheme and assume that \(X\) is quasi-compact. Further, let us set \(U_{n}\coloneqq X_{n}\setminus X_{n-1}=\coprod_{j\in J_{n}}V(\mathcal{E}_{n,j})\). By \(H\)-invariance, we get an action of \(H\) on \(U_{n}\). In particular, we can take the associated quotient stack \(U_{n}/H\). As explained in Example 2.11 this yields for each \(j\in J_{n}\) a finite locally free sheaf \(\mathcal{E}_{n,j,H}\), such that \(U_{n}/H\cong\coprod_{j\in J_{n}}V(\mathcal{E}_{n,j,H})\), together with a finite set \(I_{n,j}\subseteq\mathbb{N}_{0}\) and a decomposition \(S=\coprod_{j\in J_{n}}\coprod_{i\in I_{n,j}}S_{i,j}\) such that \(\mathcal{E}_{n,j,H|S_{i,j}}\) is finite locally free of rank \(i\). **Definition 3.8**.: A linear \(S\)-scheme \((X,(X_{n})_{n\geq 0})\) is called _strict_ if for all \(n\geq 0\) we have that \[\max\{i\in\bigcup_{j\in J_{n-1}}I_{n-1,j}\}<\min\{i\in\bigcup_{j\in J_{n}}I_{n,j}\},\] where the notation is as in Remark 3.7. **Example 3.9**.: Let us give the example that motivates the definitions above. Let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) contained in a Borel \(B\). By Proposition 3.4 the Schubert cells \(C_{w}\) of \(G/B\) are isomorphic to \(\mathbb{A}_{S}^{l(w)}\). Let us set the closed subscheme \(X_{n}\) as the schematic image of \(\coprod_{l(w)\leq n}C_{w}\) inside \(G/B\). This yields a linear structure on \(G/B\) by \[X_{0}\subseteq X_{1}\subseteq\cdots\subseteq G/B,\] where \(X_{n}\setminus X_{n-1}\cong\coprod_{l(w)=n}C_{w}\). By construction, each of the \(X_{i}\) is \(T\)-invariant and further, the linear structure \((G/B,(X_{n})_{n\in\mathbb{N}_{0}})\) is strict. Thus, this construction yields a strict affinely linear \(T\)-equivariant structure on \(G/B\). From now on, we assume that \(H\) is a nice \(S\)-group scheme. **Theorem 3.10**.: _Let \((X,(X_{n})_{n\geq 0})\) be an \(H\)-equivariant linear \(S\)-scheme such that \(X\) is proper over \(S\). Further, let us set \(U_{n}\coloneqq X_{n}\setminus X_{n-1}=\coprod_{j\in J_{n}}V(\mathcal{E}_{n,j})\). Then the following equivalences_ \[M_{\mathrm{B}H}([X/H])\simeq\bigoplus_{n\geq 0}M_{\mathrm{B}H}^{c}([U_{n}/H])=\bigoplus_{n\geq 0}\bigoplus_{j\in J_{n}}\bigoplus_{i\in I_{n,j}}M_{\mathrm{B}H}\langle i\rangle,\] _hold, where the notation is as in Remark 3.7, if_ 1. \(M_{\mathrm{B}H}\) _admits an orientation and the linear structure above is strict, or_ 2.
_the linear structure above is strict affinely linear._ _Further, if \(M_{\mathrm{B}H}\) satisfies Assumption 2.15, then we can omit the strictness in (i) and (ii)._ Proof.: We will prove the theorem under the assumption (i). The proof under the other assumptions will follow easily by the same arguments. By definition \(X\) admits a filtration \[X_{0}\hookrightarrow X_{1}\hookrightarrow\cdots\hookrightarrow X,\] such that each \(X_{i-1}\to X_{i}\) is a closed immersion with complement given by \(U_{i}\). For simplicity, we will assume that every quotient in the following is taken with respect to the étale topology. By \(H\)-equivariance we may assume that each \(X_{i}\) is stabilized by \(H\). We see that \(X/H\) admits a filtration \[X_{0}/H\hookrightarrow X_{1}/H\hookrightarrow\cdots\hookrightarrow X/H,\] where each of the \(X_{n-1}/H\hookrightarrow X_{n}/H\) is a closed immersion with complement given by \(U_{n}/H=\coprod_{j\in J_{n}}V(\mathcal{E}_{n,j,H})\). As the \(X_{n}\) are proper, the \(X_{n}/H\) are also proper over \(\mathrm{B}H\) and therefore \(M_{c,\mathrm{B}H}(X_{n}/H)\simeq M_{\mathrm{B}H}(X_{n}/H)\) by definition. In particular the localization sequence for motives with compact support (cf. Lemma 2.12) yields the fiber sequence \[M_{\mathrm{B}H}(X_{n-1}/H)\to M_{\mathrm{B}H}(X_{n}/H)\to M_{\mathrm{B}H}^{c}(U_{n}/H).\] We claim that this sequence splits. Indeed, as explained in Remark 3.7, we have \[M_{c,\mathrm{B}H}([U_{n}/H])\simeq\bigoplus_{j\in J_{n}}\bigoplus_{i_{n}\in I_{n,j}}M_{\mathrm{B}H}\langle i_{n}\rangle.\] By induction we may assume that \[M_{\mathrm{B}H}(X_{n-1}/H)\simeq\bigoplus_{k=0}^{n-1}\bigoplus_{j_{k}\in J_{k}}\bigoplus_{i_{k}\in I_{k,j_{k}}}M_{\mathrm{B}H}\langle i_{k}\rangle.\] In particular, any morphism \(\delta\colon M_{c,\mathrm{B}H}(U_{n}/H)\to M_{\mathrm{B}H}(X_{n-1}/H)[1]\) corresponds to an element in \[\prod_{k=0}^{n-1}\prod_{j_{k}\in J_{k}}\prod_{i_{k}\in I_{k,j_{k}}}\prod_{j\in J_{n}}\prod_{i_{n}\in I_{n,j}}\mathrm{Hom}_{\mathrm{SH}(\mathrm{B}H)}(M_{\mathrm{B}H},M_{\mathrm{B}H}\langle i_{k}-i_{n}\rangle[1]).\] As \(i_{k}-i_{n}<0\) for any \(i_{k}\in I_{k,j_{k}}\) with \(0\leq k\leq n-1\) and \(i_{n}\in I_{n,j}\) by strictness, we see using Assumption 2.13 that \(\delta=0\). Now induction over \(n\) concludes the proof of the first assertion using that the motives only depend on the underlying reduced structure. **Corollary 3.11**.: _Let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Then_ \[M_{\mathrm{B}T}([T\backslash G/B])\simeq\bigoplus_{w\in W}M_{\mathrm{B}T}\langle l(w)\rangle.\] Proof.: This follows immediately from Theorem 3.10 and Example 3.9. ### Application to \(T\)-equivariant cohomology theories of flag varieties In this subsection, we will prove Theorem 3, Corollaries 4 and 5 and Proposition 6. So, let \(S\) be a noetherian regular affine scheme. In the following \(G\) is a split reductive \(S\)-group scheme with maximal split torus \(T\) contained in a Borel \(B\) of \(G\). #### 3.3.1 Integral equivariant \(K\)-theory Let \(\operatorname{KGL}_{\mathrm{BT}}\) be the \(K\)-theory spectrum computing homotopy invariant \(K\)-theory for smooth representable stacks over \(\mathrm{B}T\) (cf. Theorem 2.6 (xii)). Note that \(\operatorname{KGL}_{\mathrm{BT}}\) satisfies Assumption 2.13 by Example 2.14. Then Corollary 3.11 yields the following computation.
**Corollary 3.12**.: _Let \(G\) be a split reductive \(S\)-group scheme with maximal split torus \(T\) that is contained in a Borel \(B\). Then_ \[\operatorname{KH}([T\backslash G/B])\simeq\bigoplus_{w\in W}\operatorname{KH}(\mathrm{B}T).\] Proof.: This follows from Bott periodicity4 of \(\operatorname{KGL}\), Corollary 3.11 and Theorem 2.6. Footnote 4: The Bott periodicity yields \(\operatorname{KGL}_{X}\langle\mathcal{E}\rangle\simeq\operatorname{KGL}_{X}\) for any finite locally free sheaf \(\mathcal{E}\) over a scalloped stack \(X\). By construction \([T\backslash G/B]\) and \(\mathrm{B}T\) are quotients of smooth noetherian \(S\)-schemes by a nice \(S\)-group scheme. In particular, we see that \(\operatorname{KH}([T\backslash G/B])\) and \(\operatorname{KH}(\mathrm{B}T)\) are connective and their homotopy groups are computed by genuine equivariant \(K\)-theory (cf. [11, Thm. 5.7]). Hence, for any \(i\in\mathbb{Z}\) we have \[K_{i}^{T}(G/B)=\bigoplus_{w\in W}K_{i}^{T}(S). \tag{3.12.1}\] For rational \(K_{0}\) this is nothing new over a field, as the Schubert classes yield an \(R(T)_{\mathbb{Q}}\)-basis of \(K_{T}(G/B)_{\mathbb{Q}}\) (cf. [10]). Let \(S=\operatorname{Spec}(k)\). With rational coefficients, we can use [10, Prop. A.5] to get isomorphisms of \(R(T)\)-modules \[K_{i}^{T}(G/B)_{\mathbb{Q}}\cong\bigoplus_{w\in W}K_{i}(k)_{\mathbb{Q}}\otimes_{\mathbb{Q}}R(T)_{\mathbb{Q}}\cong K_{i}(k)_{\mathbb{Q}}\otimes_{\mathbb{Q}}K_{T}(G/B)_{\mathbb{Q}}. \tag{3.12.2}\] #### 3.3.2 Completed equivariant \(K\)-theory In the following we want to extend our results to other formalisms of motives. To look at these different definitions all at once, we use the formalism of a motivic \(\infty\)-category \(\mathbf{D}\) with full six functor formalism on scalloped stacks (cf. [10, Prop. 5.13]). We do not want to give an explicit definition of a motivic \(\infty\)-category, as it boils down to rewriting the axioms of the \(6\)-functor formalism. But we want to give two examples that are of interest for us. **Example 3.13**.: The following two examples are motivic \(\infty\)-categories with a full \(6\)-functor formalism on scalloped stacks. 1. Let \(X\) be an Artin-stack over \(S\). The lisse extended stable homotopy category \(\mathbf{D}=\operatorname{SH}_{\triangleleft}\) is defined via \[\operatorname{SH}_{\triangleleft}(X):=\operatorname{colim}_{(T,t)}\operatorname{SH}(T),\] where the limit is taken over the \(\infty\)-category of pairs \((T,t)\) with \(t\colon T\to X\) a smooth morphism and \(T\) an algebraic space (cf. [11, §12]). In _op.cit._ they give a comparison of \(\operatorname{SH}_{\lhd}\) with the \(\infty\)-category \(\operatorname{SH}^{\otimes}\) of Chowdhury [10] that proves the extension of a full six functor formalism to Artin-stacks for this \(\infty\)-category. Note that for the existence of \(\sharp\)-pushforward and \(\operatorname{!}\)-formalism there is no need for representability in this context. Further, they show that, when \(S\) is the spectrum of a field, the cohomology spectrum \(\operatorname{Hom}_{\operatorname{SH}_{\lhd}(X)}(1_{X},M)\) for \(M\in\operatorname{SH}_{\lhd}(X)\) can be computed by a Borel construction (this is precisely the construction Edidin-Graham make to define equivariant Chow groups, as seen in Example 3.20; cf. [11, Thm. 12.16]). 2. Let us consider the étale localized rational stable homotopy category \(\operatorname{SH}_{\mathbb{Q},\operatorname{\acute{e}t}}\). This \(\infty\)-category can be right Kan extended to Artin-stacks over \(S\).
As by definition \(\operatorname{SH}_{\mathbb{Q},\operatorname{\acute{e}t}}\) satisfies étale descent, we can extend the \(6\)-functor formalism to \(\operatorname{SH}_{\mathbb{Q},\operatorname{\acute{e}t}}\) on Artin stacks (cf. [16, App. A]). Again, there is no need for representability for the existence of a \(6\)-functor formalism. The universal property of the stable homotopy category yields a unique system of comparison maps \(R_{X}\colon\operatorname{SH}(X)\to\mathbf{D}(X)\) (cf. [11, Prop. 5.13]) for any motivic \(\infty\)-category \(\mathbf{D}\). The family of functors \(R\) is compatible with \(\sharp\)-pushforward, \(*\)-inverse image, Thom twists and tensor products. As the family of functors \(R_{X}\) is monoidal, we can extend this to modules over any \(E_{\infty}\)-ring spectrum in \(\operatorname{SH}(X)\), i.e. if \(M_{X}\in\operatorname{SH}(X)\) is an \(E_{\infty}\)-ring spectrum, then there is a functor \[R_{M}(X)\colon\operatorname{SH}(X)_{M}\to R_{X}(M)\text{-Mod}(\mathbf{D}(X))\] compatible with pullbacks of \(M\) and thus also \(\sharp\)-pushforward, \(*\)-inverse image, Thom twists and tensor products in module spectra. Let us relate this construction to our examples above by applying it to motivic cohomology. **Example 3.14**.: Let \(X\) be an Artin stack over \(S\) and let \(M\mathbb{Z}\in\operatorname{SH}(X)\) be the motivic cohomology ring spectrum. 1. Let us now further assume that \(X\) is quasi-separated with representable diagonal and has a smooth cover that admits sections Nisnevich-locally (such stacks are called _Chowdhury stacks_[11, §12.7]). Let \(M\mathbb{Z}^{\lhd}\) be the image of \(M\mathbb{Z}\) under \(R_{X}\colon\operatorname{SH}(X)\to\operatorname{SH}_{\lhd}(X)\). The \(\infty\)-category \(\operatorname{SH}_{\lhd}(X)\) is equivalent to the right Kan extension of \(\operatorname{SH}\) to Chowdhury stacks, evaluated at \(X\) (cf. [11, Cor. 12.28]). Using this construction, the \(\infty\)-category of \(M_{X}^{\lhd}\)-modules in \(\operatorname{SH}_{\lhd}(X)\) can be described as follows. One right Kan extends Spitzweck motives \(\operatorname{DM}\) to prestacks along Nisnevich covers (cf. [10]). In this way one can construct the exceptional pullback and pushforward for finite type morphisms. Further, for prestacks given by a quotient \(X/G\) of a scheme by a group scheme, the \(\infty\)-category \(\operatorname{DM}(X/G)\) is equivalent to \(\lim_{\Delta}\operatorname{DM}(\operatorname{Bar}^{\bullet}(X,G))\), where \(\operatorname{Bar}^{\bullet}(X,G)\) denotes the \(\operatorname{Bar}\) resolution of \(X\) with respect to \(G\). If \(G\) is special (e.g. \(G=T\)), then any étale \(G\)-torsor is Zariski locally trivial and in particular the étale sheafification of \(X/G\), which we usually denote by \([X/G]\), agrees with the Nisnevich sheafification. As seen in _op.cit._ this allows one to compute \(\operatorname{DM}([X/G])=\operatorname{DM}(X/G)=\lim_{\Delta}\operatorname{DM}(\operatorname{Bar}^{\bullet}(X,G))\). 2. Let us assume that \(S\) is of finite type over an excellent noetherian scheme of dimension \(\leq 1\). Further, let us denote the image of \(M\mathbb{Z}\) in \(\operatorname{SH}(X)_{\mathbb{Q},\operatorname{\acute{e}t}}\) with \(M\mathbb{Q}\). Then \(M\mathbb{Q}\) is glued by the Beilinson motivic cohomology spectrum and we can describe the \(M\mathbb{Q}\)-modules in \(\operatorname{SH}(X)_{\mathbb{Q},\operatorname{\acute{e}t}}\) as follows.
The right Kan extension of Beilinson motives \(\operatorname{DM}_{\mathbb{Q}}\) to Artin-stacks (cf. [10]) admits an extension of the full six functor formalism. This is achieved by gluing along smooth covers. There are different ways to see this, but we do not want to go into details and refer to the paragraph before [14, §A.2]. **Remark 3.15**.: Important examples of Chowdhury stacks in our context are quotient stacks of the form \([X/H]\), where \(X\) is a quasi-separated scheme and \(H\) is a nice group scheme (cf. [12, Rem. 12.24]). From now on let us assume that the six functor formalism in \(\mathbf{D}\) exists for not necessarily representable morphisms. **Notation 3.16**.: In the following we fix an \(E_{\infty}\)-ring spectrum \(M_{S}\in\operatorname{SH}(S)\) and denote its image in \(\mathbf{D}(S)\) with \(M_{S}^{\mathbf{D}}\). Again, we will denote the pullback of \(M_{S}^{\mathbf{D}}\) under a map \(X\to S\), where \(X\) is a scalloped stack, with \(M_{X}^{\mathbf{D}}\). Further, we will denote the \(\infty\)-category of \(M_{X}^{\mathbf{D}}\)-modules in \(\mathbf{D}(X)\) with \(\mathbf{D}(X)_{M}\). For any morphism of scalloped stacks \(f\colon X\to Y\) of finite type, we will denote the motives in \(\mathbf{D}\) with \(M_{Y}^{\mathbf{D}}(X)\coloneqq f_{!}f^{!}M_{Y}^{\mathbf{D}}\) and \(M_{Y}^{c,\mathbf{D}}(X)\coloneqq f_{*}f^{!}M_{Y}^{\mathbf{D}}\). We also define _motivic cohomology of a scalloped stack \(X\) with coefficients in \(M^{\mathbf{D}}\)_ as \[H_{\mathbf{D}}^{n,m}(X,M)\coloneqq\operatorname{Hom}_{\mathbf{D}(X)}(1_{X},M_{X}^{\mathbf{D}}(n)[m]).\] **Remark 3.17**.: Assume \(f\colon X\to S\) is a smooth scalloped stack over \(S\). Then \(f_{!}f^{!}M_{S}\simeq f_{!}M_{X}\) and we see that \[H_{\mathbf{D}}^{n,m}(X,M)\simeq\operatorname{Hom}_{\mathbf{D}(S)_{M}}(M_{S}^{\mathbf{D}}(X),M_{S}^{\mathbf{D}}(n)[m]).\] If \(X\) is smooth over \(\operatorname{B}\!H\) for some nice group scheme \(H\), we can therefore transport all of the results of Section 3.2 to \(M_{S}^{\mathbf{D}}(X)\) via \(\sharp\)-pushforward along the structure map \(\operatorname{B}\!H\to S\). Working over the base, we can analyze the motive of strict linear schemes. This is classical and the proof is achieved _mutatis mutandis_ as in the proof of Theorem 3.10. **Proposition 3.18**.: _Let us assume that \(H_{\mathbf{D}}^{i,1+2i}(S,M)=0\) for all \(i>0\). Let \((X,(X_{n})_{n\geq 0})\) be a linear \(S\)-scheme such that \(X\) is proper over \(S\). Further, let us set \(U_{n}\coloneqq X_{n}\setminus X_{n-1}=\coprod_{j\in J_{n}}V(\mathcal{E}_{n,j})\). Then the equivalences_ \[M_{S}^{\mathbf{D}}(X)\simeq\bigoplus_{n\geq 0}M_{S}^{\mathbf{D},c}(U_{n})=\bigoplus_{n\geq 0}\bigoplus_{j\in J_{n}}\bigoplus_{i\in I_{n,j}}M_{S}^{\mathbf{D}}\langle i\rangle,\] _hold, where the notation is as in Remark 3.7, if_ 1. \(M_{S}^{\mathbf{D}}\) _admits an orientation and the linear structure above is strict, or_ 2. _the linear structure above is strict affinely linear._ _Further, if \(H_{\mathbf{D}}^{i,1+2i}(S,M)=0\) for all \(i\in\mathbb{Z}\), then we can omit the strictness in (i) and (ii)._ Proof.: This is completely analogous to the proof of Theorem 3.10. The structure of the motive of linear stacks allows us to rewrite Theorem 3.10. **Corollary 3.19**.: _Let \(H\) be a nice \(S\)-group scheme. Let \((X,(X_{n})_{n\geq 0})\) be an \(H\)-equivariant strict linear \(S\)-scheme such that \(X\) is smooth and proper over \(S\).
Then_ \[M_{S}^{\mathbf{D}}([X/H])\simeq M_{S}^{\mathbf{D}}(X)\otimes M_{S}^{\mathbf{D}}(\mathrm{B}H).\] Proof.: This follows immediately from Proposition 3.18 and Theorem 3.10 via \(\sharp\)-pushforward along the structure map \(\mathrm{B}H\to S\) (cf. Remark 3.17). **Example 3.20**.: Let us validate Corollary 3.19 using a more direct computation over \(S=\mathrm{Spec}(k)\), where \(k\) is a field, restricting ourselves to the case \(\mathbf{D}=\mathrm{SH}_{\mathbb{Q},\acute{e}\mathrm{t}}\) and \(M_{S}^{\mathbf{D}}=M\mathbb{Q}_{S}\in\mathbf{D}(S)\) the rational motivic cohomology ring spectrum. Let \(G=\mathrm{SL}_{2,S}\) and \(T=\mathbb{G}_{\mathrm{m},S}\) the standard diagonal torus. Let \(B\) be the Borel of upper triangular matrices in \(G\). Then \(G/B\cong\mathbb{P}_{S}^{1}\) and the action of \(T\) on \(\mathbb{P}_{S}^{1}\) is induced by conjugation on \(G/B\). In particular, the action of \(T\) on \(\mathbb{P}_{S}^{1}\) is given by multiplication. The motive of \(\big{[}T\backslash\mathbb{P}_{k}^{1}\big{]}\) can be computed in the following way, analogous to the computation of its intersection ring (cf. [10]). Let us fix an \(i\in\mathbb{Z}\). Further, let us choose a \(1\)-dimensional representation \(V\) of \(T\) over \(k\). We denote with \(\mathbb{V}\coloneqq\mathrm{Spec}(\mathrm{Sym}(V))\) the associated vector bundle over \(S\). Then we define the scheme \(U_{i}\) for each \(h\colon Q\to S\) via \[U_{i}(Q)\coloneqq\{u\in\mathrm{Hom}(h^{*}\mathbb{V},h^{*}\mathbb{V}^{i})\mid\mathrm{Coker}(u)\text{ is finite free of rank }i\}\] Then \(T\) acts freely on \(U_{i}\) and one can show that the codimension of \(U_{i}\) in the \(k\)-vector bundle \(\mathbb{V}_{i}\coloneqq\mathrm{Spec}(\mathrm{Sym}(V\otimes V^{\vee})^{i})\) is greater than \(-i\). Further, we can see that \[\mathbb{P}_{k}^{1}\times_{k}^{T}U_{i-1}\cong\mathbb{P}(\mathcal{O}_{\mathbb{P}^{i-1}}(1)\oplus\mathcal{O}_{\mathbb{P}^{i-1}}(1))\to\mathbb{P}_{k}^{i-1}\cong U_{i-1}/T\] is a \(\mathbb{P}_{k}^{1}\)-bundle. Thus, the projective bundle formula yields \[M_{S}^{\mathbf{D}}(\mathbb{P}_{k}^{1}\times_{k}^{T}U_{i-1})\cong M_{S}^{\mathbf{D}}(\mathbb{P}_{k}^{i-1})\oplus M_{S}^{\mathbf{D}}(\mathbb{P}_{k}^{i-1})\langle 1\rangle.\] Now the motive \(M_{S}^{\mathbf{D}}([T\backslash G/B])\) is isomorphic to the colimit over \(i\) of \(M_{S}^{\mathbf{D}}(\mathbb{P}_{k}^{1}\times_{k}^{T}U_{i-1})\) (this can be deduced from [11] resp. [11]). Therefore, we finally have \[M_{S}^{\mathbf{D}}([T\backslash G/B])\simeq M_{S}^{\mathbf{D}}(B\,\mathbb{G}_{\mathrm{m},S})\oplus M_{S}^{\mathbf{D}}(B\,\mathbb{G}_{\mathrm{m},S})\langle 1\rangle\simeq M_{S}^{\mathbf{D}}(\mathbb{P}_{S}^{1})\otimes M_{S}^{\mathbf{D}}(B\,\mathbb{G}_{\mathrm{m},S}),\] where the last equivalence follows again from the projective bundle formula. **Remark 3.21**.: The above example and computations also hold for \(\operatorname{SH}_{\triangleleft}\) and the integral motivic cohomology ring spectrum if either \(k\) has characteristic \(0\) or after inverting the characteristic of \(k\) (cf. [12, Thm. 12.16]). Let us specialize to the case of \(T\) acting on the flag variety \(G/B\) and the case where \(S=\operatorname{Spec}(k)\) is the spectrum of a field. Assume for this paragraph that \(k\) has characteristic \(0\). As mentioned in Example 3.13, the motivic cohomology spectrum can be computed using the Borel construction in the cases that are of interest for us. For Chow groups of stacks this gives the right computation.
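For instance, for a split torus \(T\cong\mathbb{G}_{\mathrm{m}}^{n}\) over \(k\), the Borel construction recovers the familiar description of the \(T\)-equivariant Chow ring of a point,
\[A_{T}^{*}(\operatorname{Spec}(k))\cong\operatorname{Sym}_{\mathbb{Z}}(\widehat{T})\cong\mathbb{Z}[t_{1},\dots,t_{n}],\]
where \(\widehat{T}\) denotes the character lattice and the \(t_{i}\) are the first Chern classes of a basis of characters.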
For \(K\)-theory this is no longer true. In fact, one can show that the Borel construction yields the completion of equivariant \(K\)-theory along the augmentation ideal (cf. [12]), i.e. \[H^{0,-i}_{\operatorname{SH}_{\triangleleft}}([T\backslash G/B]\,,\operatorname{KGL}^{\triangleleft})\simeq K_{i}^{T}(G/B)^{\wedge_{I_{T}}},\] where \(I_{T}\subseteq R(T)\) is the ideal generated by virtual rank \(0\) representations and \(K_{i}^{T}(G/B)^{\wedge_{I_{T}}}\) is the completion along \(I_{T}K_{i}^{T}(G/B)\). Again the above stays true in characteristic \(p>0\), after inverting \(p\). **Example 3.22**.: Let \(G=\mathbb{G}_{\operatorname{m}}\). Then \(R(G)=\mathbb{Z}[T,T^{-1}]\) and the augmentation ideal \(I_{G}\) is generated by \(1-T\). Thus, \(R(G)^{\wedge_{I_{G}}}=\mathbb{Z}[\![T]\!]\) (indeed, \(\mathbb{Z}[T,T^{-1}]/(1-T)^{n}\cong\mathbb{Z}[t]/(t^{n})\) for \(t=1-T\), as \(T=1-t\) is already invertible modulo \(t^{n}\)) and we see that indeed \(R(G)\) is not \(I_{G}\)-complete, for instance because \(\mathbb{Z}[T,T^{-1}]\) is countable while \(\mathbb{Z}[\![T]\!]\) is not. Therefore, the lisse extended \(K\)-theory spectrum does not recover \(K\)-theory. A similar result appears when one wants to prove an equivariant form of the Riemann-Roch Theorem (cf. [1]). The same holds true if we consider cohomology theories in \(\operatorname{SH}_{\mathbb{Q},\operatorname{\acute{e}t}}\) as they satisfy étale descent. This descent property is the source of the discrepancy here. One can show that even rational \(K\)-theory of stacks does not satisfy étale descent (for \(G\)-theory one can give precise conditions on quotient stacks, cf. [13, §3]). Nevertheless, we want to show the implications of our calculations for Chow groups and completed \(K\)-theory. The upshot of Corollary 3.19 is that it gives us a tensor description \[M_{S}^{\mathbf{D}}([T\backslash G/B])=M_{S}^{\mathbf{D}}(G/B)\otimes M_{S}^{\mathbf{D}}(\mathrm{B}T) \tag{3.22.1}\] and we want to use this to get a tensor description of completed \(T\)-equivariant \(K\)-theory of \(G/B\). In this case we would need a Künneth formula for \(K\)-theory. For equivariant \(K_{0}\), in our special case, this is known and follows from the spectral sequence induced for example on \(G\)-theory (cf. [13]). For higher equivariant \(K\)-groups there is no Künneth formula, as this fails even for non-equivariant \(K\)-theory. **Example 3.23**.: Let us consider \(\mathbb{A}^{1}_{k}\to\operatorname{Spec}(k)\) the projection of the affine line. Then \(K_{1}(\mathbb{A}^{2}_{k})=K_{1}(\mathbb{A}^{1}_{k}\times_{k}\mathbb{A}^{1}_{k})=k^{\times}\) by homotopy invariance, whereas \(K_{1}(\mathbb{A}^{1}_{k})\otimes_{\mathbb{Z}}K_{0}(\mathbb{A}^{1}_{k})\oplus K_{0}(\mathbb{A}^{1}_{k})\otimes_{\mathbb{Z}}K_{1}(\mathbb{A}^{1}_{k})=k^{\times}\oplus k^{\times}\). One has to take into account the higher \(K\)-groups of the base field. This does not happen for example for Chow groups, as the relevant higher Chow groups of a field vanish. Instead of a Künneth formula, one gets a spectral sequence for \(K\)-theory and higher Chow theory, at least when one of the factors comes from a linear scheme (cf. [13]). In fact, Totaro shows that Chow groups commute with tensor products if and only if one of the associated motives of the factors is Tate (relative over a field, cf. [106]). Thus, if \(X\) is a (smooth) strict linear \(S\)-scheme and \(Y\) an arbitrary (smooth) scheme, we have \(A^{*}(X\times_{S}Y)=A^{*}(X)\otimes_{\mathbb{Z}}A^{*}(Y)\). In [107], this also follows from the associated spectral sequence for motivic cohomology of linear varieties. Dugger and Isaksen generalize this idea to arbitrary cellular motivic cohomology theories like motivic cohomology, algebraic \(K\)-theory and algebraic cobordism (cf. [108]).
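As a simple instance of the Künneth statement for linear schemes above, take \(X=\mathbb{P}^{1}_{k}\) and let \(Y\) be any smooth \(k\)-scheme; the projective bundle formula then gives
\[A^{*}(\mathbb{P}^{1}_{k}\times_{k}Y)\cong A^{*}(Y)\oplus A^{*-1}(Y)\cdot h\cong A^{*}(\mathbb{P}^{1}_{k})\otimes_{\mathbb{Z}}A^{*}(Y),\]
where \(h\) denotes the pullback of the hyperplane class, in accordance with the fact that the motive of \(\mathbb{P}^{1}_{k}\) is Tate.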
**Proposition 3.24** (Tor spectral sequence).: _Let \(\mathbf{D}=\operatorname{SH}_{\mathbb{Q},\mathrm{\acute{e}t}}\). Let \(M_{S}\) be either \(\operatorname{KGL}_{S}\) or \(M\mathbb{Q}_{S}\) inside \(\mathbf{D}(S)\). Let \((X,(X_{n})_{n\geq 0})\) be a linear \(S\)-scheme such that \(X\) is proper over \(S\) and let \(Y\) be a smooth algebraic stack. Then for each \(n\in\mathbb{Z}\) there is a spectral sequence_ \[\operatorname{Tor}_{p}^{H_{\mathbf{D}}^{*,n}(S,M)}(H_{\mathbf{D}}^{*,n}(X,M),H_{\mathbf{D}}^{*,n}(Y,M))_{q}\Rightarrow H_{\mathbf{D}}^{p+q,n}(X\times_{S}Y,M).\] Proof.: By Proposition 3.18 the motive \(M_{S}^{\mathbf{D}}(X)\) is a direct sum of Tate-motives. Therefore, the result follows from [108, Thm. 6.2, Thm. 6.4, Prop. 7.7]. We can use Proposition 3.24 to see that \[K_{0}^{T}(G/B)_{\mathbb{Q}}^{\wedge_{I_{T}}}\cong K_{0}(G/B)_{\mathbb{Q}}\otimes_{\mathbb{Q}}K_{T}(S)_{\mathbb{Q}}^{\wedge_{I_{T}}} \tag{3.24.1}\] noting that the \(K\)-theory of \(G/B\) and \(\mathrm{B}T\) is connective (cf. Section 3.3.1). The comparison of Beilinson motivic cohomology with higher Chow groups yields \[H_{\mathrm{DM}}^{n,2n}(X,\mathbb{Q})=A^{n}(X)_{\mathbb{Q}},\] as for \(k<n\), we have \(H_{\mathrm{DM}}^{k,n}(X,\mathbb{Q})=A^{n}(X,2k-2n)_{\mathbb{Q}}=0\). If \(S=\operatorname{Spec}(k)\) is the spectrum of a field, using that equivariant Chow groups are given via the Borel construction, we see as before that \[A_{T}^{*}(G/B)_{\mathbb{Q}}\cong A^{*}(G/B)_{\mathbb{Q}}\otimes_{\mathbb{Q}}A_{T}^{*}(S)_{\mathbb{Q}}. \tag{3.24.2}\] **Remark 3.25**.: It should not be difficult to generalize Proposition 3.24 to the case of Spitzweck motives and get an integral version of the above results for completed \(K\)-theory and Chow theory, at least after inverting the characteristic of the ground field (in the positive characteristic setting). But as this probably boils down to just rewriting the results of Dugger and Isaksen, we did not pursue this further and leave it to the reader.
2308.13050
Multi-BERT for Embeddings for Recommendation System
In this paper, we propose a novel approach for generating document embeddings using a combination of Sentence-BERT (SBERT) and RoBERTa, two state-of-the-art natural language processing models. Our approach treats sentences as tokens and generates embeddings for them, allowing the model to capture both intra-sentence and inter-sentence relations within a document. We evaluate our model on a book recommendation task and demonstrate its effectiveness in generating more semantically rich and accurate document embeddings. To assess the performance of our approach, we conducted experiments on a book recommendation task using the Goodreads dataset. We compared the document embeddings generated using our MULTI-BERT model to those generated using SBERT alone. We used precision as our evaluation metric to compare the quality of the generated embeddings. Our results showed that our model consistently outperformed SBERT in terms of the quality of the generated embeddings. Furthermore, we found that our model was able to capture more nuanced semantic relations within documents, leading to more accurate recommendations. Overall, our results demonstrate the effectiveness of our approach and suggest that it is a promising direction for improving the performance of recommendation systems
Shashidhar Reddy Javaji, Krutika Sarode
2023-08-24T19:36:05Z
http://arxiv.org/abs/2308.13050v1
# Multi-BERT for Embeddings for Recommendation System ###### Abstract In this paper, we propose a novel approach for generating document embeddings using a combination of Sentence-BERT (SBERT) and RoBERTa, two state-of-the-art natural language processing models. Our approach treats sentences as tokens and generates embeddings for them, allowing the model to capture both intra-sentence and inter-sentence relations within a document. We evaluate our model on a book recommendation task and demonstrate its effectiveness in generating more semantically rich and accurate document embeddings. To assess the performance of our approach, we conducted experiments on a book recommendation task using the Goodreads dataset. We compared the document embeddings generated using our MULTI-BERT model to those generated using SBERT alone. We used precision as our evaluation metric to compare the quality of the generated embeddings. Our results showed that our model consistently outperformed SBERT in terms of the quality of the generated embeddings. Furthermore, we found that our model was able to capture more nuanced semantic relations within documents, leading to more accurate recommendations. Overall, our results demonstrate the effectiveness of our approach and suggest that it is a promising direction for improving the performance of recommendation systems. ## 1 Introduction A subtype of information filtering system called a recommendation system makes suggestions for items that are most relevant to a certain user or query. Usually, the recommendations concern various decision-making procedures, like choosing a product to buy, a movie to watch, or an online book to read. Recommender systems are especially helpful when a person must select an item from a service's possibly overwhelming selection of items. There are two types of recommendation systems[1]: collaborative filtering, which is mostly user-dependent, relying on the history and past actions of users to make recommendations; and content-based filtering, which depends on item content such as title, description, genre, and author, together with the user's preferences. Content-based filtering can be used when information about the items is available but not about the users; this paper works on the content-based approach. Content-based recommender systems have a number of benefits. First, as they are based on item representations, content-based recommendation is user-independent. Therefore, this type of system is not affected by the data sparsity issue. Secondly, content-based recommender systems can address the new-item cold-start issue by recommending new products to consumers. Last but not least, content-based recommender systems are able to explain the recommendation outcome in detail. In comparison to other methods, this kind of system's transparency has many advantages in real-world applications. Many methods are used to perform content-based filtering, including TF-IDF[2], vector space models, and classification. The basic idea of these models is to project words into a high-dimensional space and then measure the distance between the resulting vectors, using techniques such as cosine similarity, which measures how similar two vectors are via the angle between them; the Pearson correlation coefficient, which is closely related; and Euclidean distance, the most basic measure. Apart from these, clustering methods can also be applied to find the closest vectors to a given vector. Cosine similarity and clustering methods are used in this paper.
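For two embedding vectors \(u\) and \(v\), cosine similarity is commonly defined as \[\cos(u,v)=\frac{u\cdot v}{\lVert u\rVert\,\lVert v\rVert}\in[-1,1],\] so that vectors pointing in similar directions receive a score close to \(1\).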
One big question is how words are projected into such a high-dimensional space. There are models like Bag-of-Words (BoW), which assigns a unique token, usually a number, to each word that appears in the text; Word2Vec, which in many ways builds on BoW but, instead of assigning discrete tokens to words, learns a continuous multi-dimensional vector representation for each word in the training corpus; GloVe [3]; and others. These models take words as input and convert them into vectors[4,5]; these vectors can be projected into higher dimensions and used for similarity calculations. The best among these is Word2Vec, but its primary flaw is that it offers only one representation per word, which stays the same no matter what context the word is used in. The solution to this problem was contextual word embeddings. The fundamental idea underlying contextual word embeddings is to provide more than one representation for each word depending on the context in which it appears. The emergence of contextual word embeddings was driven by the success of deep learning models in natural language processing tasks[6]. These models, such as recurrent neural networks (RNNs) [7,8] and convolutional neural networks (CNNs), require input text to be represented as a sequence of vectors, rather than a string of words. Traditional word embeddings, such as word2vec and GloVe, provided a way to convert words into vectors, but they did not take into account the context in which the words appeared. To address this limitation, researchers began developing contextual word embedding models, such as BERT[9,10] and ELMo[11], which are able to capture the context in which a word appears and produce a vector representation that reflects this context. BERT, in particular, uses a transformer architecture, which allows it to capture complex relationships between words in the input text, and these models are trained on large corpora of text, which allows them to learn to produce accurate and context-aware word embeddings. Recently, there have been further improvements, with SBERT[12,13,14] being the most popular model for sentence-level tasks. SBERT was introduced as a way to improve the performance of BERT on sentence-level tasks. Unlike BERT (Bidirectional Encoder Representations from Transformers), which processes entire sequences of text, SBERT processes individual sentences and learns to represent each sentence as a fixed-length vector. This allows SBERT to better capture the meaning of sentences and improve the performance of NLP systems that operate at the sentence level. However, there are limitations: when applied to documents, SBERT processes the entire document as a single sentence. This is because SBERT is a sentence-level model, designed to understand the meaning of individual sentences rather than individual words. By processing the entire document as a single sentence, SBERT captures only the overall, more generalized meaning of the document. To improve on this, this paper introduces MULTI-BERT, which gives more power to SBERT by combining it with sentence-level modeling based on RoBERTa[15], allowing more information to be captured. ## Proposed Methodology The model is simple, with a few additions to the existing SBERT pipeline: instead of considering the whole document at once for embeddings, the individual sentences are taken and passed through a RoBERTa model. The MULTI-BERT technique involves splitting a document into its individual sentences, extracting the sentence embeddings using SBERT, and treating each sentence as a token.
These sequences of tokens are then fed into a pretrained RoBERTa model[16], and the final-layer output of RoBERTa is extracted and used as the document vector. This document vector captures the inter-sentence relations within the document, and it is concatenated with the sentence embeddings to create the final document embedding. We first divide the input document into individual sentences and generate sentence embeddings for each sentence using the SBERT model. This allows us to capture the contextual meaning of each sentence without exceeding the maximum length of the SBERT model. We then apply k-means clustering to these sentence embeddings, assigning each sentence to a cluster. We use 200 clusters in our experiments. This creates a "cluster codebook" that maps each sentence embedding to a cluster. We use this cluster codebook to create a document vector, where each sentence in the document is represented by its corresponding cluster id. We fine-tune the RoBERTa model by providing it with this document vector as input and the same vector as the output label. This allows the model to learn to generate document vectors that capture the semantic relations between sentences within a document. We use a batch size of 16, 12 attention heads, 6 hidden layers, and a position vocabulary size of 514. We train the model using the Adam optimizer with a learning rate of 1e-4.

Figure 1: **Architecture of the Proposed Model**

After training, we save the fine-tuned model for use in generating document embeddings. Our approach allows us to create more semantically rich and accurate document embeddings by combining the strengths of SBERT and RoBERTa[17]. By using sentence embeddings and clustering, we can capture the contextual meaning of each sentence and the relationships between sentences within a document. This allows us to better identify similar books and make more accurate recommendations. Once we have trained and saved our fine-tuned RoBERTa model, we can use it to generate document embeddings for any input document. To do this, we follow the same process as before: we divide the input document into individual sentences, generate sentence embeddings using SBERT, assign each sentence to a cluster using the cluster codebook, and create a document vector based on the cluster id of each sentence. We then provide this document vector to the fine-tuned RoBERTa model, which generates a corresponding document embedding. This document embedding captures the contextual meaning of each sentence within the document, as well as the relationships between sentences. We can use these document embeddings in a variety of ways, such as to make recommendations or to perform similarity calculations between documents. For example, to make a recommendation, we could generate document embeddings for a set of books and use a similarity measure such as cosine similarity to identify the most similar books to a given input book. This would allow us to recommend books that are similar in terms of their content and context, rather than just their metadata. While cosine similarity is a commonly used measure of similarity between vectors, it has some limitations when applied to document embeddings. Cosine similarity measures the angle between two vectors, which is useful for identifying similar vectors but may not always be the best approach for document embeddings.
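The steps above can be summarized in the following sketch (SBERT sentence embeddings, a 200-cluster codebook, and a RoBERTa encoder configured with 12 attention heads, 6 hidden layers, and a position vocabulary of 514). The SBERT checkpoint name, the naive sentence splitter, the special-token budget, the mean pooling, and the omitted fine-tuning loop are illustrative choices only, not details fixed by the description above.

```python
# Sketch of the MULTI-BERT pipeline: SBERT sentence embeddings -> k-means
# cluster ids ("sentences as tokens") -> a small RoBERTa over those ids.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from transformers import RobertaConfig, RobertaModel
import torch

N_CLUSTERS = 200                                     # codebook size used above
sbert = SentenceTransformer("all-MiniLM-L6-v2")      # assumed SBERT checkpoint

def split_sentences(document: str):
    # Naive splitter for illustration; any sentence tokenizer could be used.
    return [s.strip() for s in document.split(".") if s.strip()]

def build_codebook(corpus):
    # Cluster all sentence embeddings of the corpus into the codebook.
    embeddings = sbert.encode([s for doc in corpus for s in split_sentences(doc)])
    return KMeans(n_clusters=N_CLUSTERS, n_init=10).fit(embeddings)

def document_token_ids(document, codebook):
    # Represent the document as the sequence of cluster ids of its sentences.
    return codebook.predict(sbert.encode(split_sentences(document))).tolist()

config = RobertaConfig(
    vocab_size=N_CLUSTERS + 2,        # cluster ids plus special tokens (assumption)
    num_attention_heads=12,
    num_hidden_layers=6,
    max_position_embeddings=514,
)
roberta = RobertaModel(config)        # fine-tuning loop omitted in this sketch

def document_embedding(document, codebook):
    ids = torch.tensor([document_token_ids(document, codebook)])
    hidden = roberta(input_ids=ids).last_hidden_state   # (1, n_sentences, 768)
    return hidden.mean(dim=1).squeeze(0)                 # pooled document vector
```

Given a trained codebook and fine-tuned encoder, `document_embedding` can be applied to every book description, and nearest neighbours of a query embedding then serve as the recommendations.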
In contrast to cosine similarity, k-means clustering[18] is a technique that groups similar vectors together and can be more effective for document embeddings because it takes into account the relationships between vectors within a cluster. Additionally, k-means clustering can handle larger and more complex datasets than cosine similarity, making it a better choice for many applications involving document embeddings. Furthermore, k-means clustering allows for the identification of clusters of similar documents, which can be useful for making more accurate recommendations. The k-means objective minimizes the within-cluster sum of squares: \[\operatorname*{arg\,min}_{\mathbf{S}}\sum_{i=1}^{k}\sum_{\mathbf{x}\in S_{i}}\left\|\mathbf{x}-\boldsymbol{\mu}_{i}\right\|^{2}=\operatorname*{arg\,min}_{\mathbf{S}}\sum_{i=1}^{k}\left|S_{i}\right|\operatorname*{Var}S_{i}\] For example, if a user is interested in a particular book, k-means clustering (see the objective above) can identify other books that are similar to it and recommend those to the user. This is not possible with cosine similarity, which only measures the similarity between two individual vectors. In short, k-means clustering is a more powerful and flexible tool for working with document embeddings compared to cosine similarity and is better suited for many applications in the field of recommendation systems. ## Data The Goodreads book reviews dataset provided by the University of California, San Diego includes detailed information on millions of books, ratings, and reviews from the Goodreads website. The dataset is organized into several tables, each containing specific information about the books and reviews. In this study, we focus on the children's genre, which includes 124,082 books and 734,640 detailed reviews. The books table includes information such as the title, author, and publication date, while the ratings table includes the rating given to the book by each user along with the review text and other metadata. The users table includes information about the users who submitted ratings and reviews, such as their name, location, and reading history. The data is structured and allows for in-depth analysis. ## Baselines The baseline models considered for this experiment are SBERT and TF-IDF. SBERT is a transformer-based language model that has been trained to encode contextual information from sentences. It is a variant of BERT, which is a widely used model for natural language processing tasks. SBERT uses a two-sentence input representation, where each sentence is encoded using the BERT model. The encoded representations of the two sentences are then concatenated and passed through a classification layer to predict the relationship between the two sentences. This allows SBERT to capture contextual information from the sentences and use it for downstream tasks, such as semantic similarity, natural language inference, and sentiment analysis. One of the main advantages of SBERT over other language models is its ability to capture sentence-level contextual information, which is important for many natural language processing tasks. Additionally, SBERT has been shown to perform well on a wide range of tasks and achieve state-of-the-art results on several benchmarks. TF-IDF is calculated by multiplying two statistics: the term frequency and the inverse document frequency. The term frequency is a measure of how often a term appears in a document, while the inverse document frequency is a measure of how rare the term is across the entire corpus of documents.
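In one common formulation, for a term \(t\) in a document \(d\) drawn from a corpus of \(N\) documents, \[\mathrm{tfidf}(t,d)=\mathrm{tf}(t,d)\cdot\log\frac{N}{\mathrm{df}(t)},\] where \(\mathrm{tf}(t,d)\) counts occurrences of \(t\) in \(d\) and \(\mathrm{df}(t)\) is the number of documents containing \(t\); smoothed variants of the idf factor are also widely used.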
In a book recommendation system, TF-IDF can be used to determine the relative importance of words in a given book and to recommend books that have similar patterns of word usage. For example, if a user has read a book that contains the words "cat" and "dog" frequently, the system might recommend other books that also use these words frequently. TF-IDF can also be used to compare the content of two or more books and determine how similar or dissimilar they are in terms of their vocabulary and language usage. This can be useful for recommending books that are similar in content to a user's previously read books. ## Experiment and Results We experimented on the Goodreads datasets, from which we picked the children's books section and took around 25,000 records each from the children's books data and the children's books review data. The children's books collection contains information for each book, such as the title, authors, description, the popular shelves users have placed the book in, the average rating, and the number of pages. From the children's books dataset we extracted the columns language_code, popular_shelves, is_ebook, average_rating, description, authors, book id, rating count, and title. From the children's book review dataset we took columns such as review text, ratings, and number of votes. First, the raw training data is preprocessed by selecting only the relevant columns and removing any noisy data. This ensures that the data is clean and can be easily used by the model. The two datasets are then combined using a common column, resulting in a merged dataset that contains a subset of the information from each of the original datasets. Any columns with a high number of NULL values are filled with a default value to ensure that the data is complete and can be used by the model. Overall, this preprocessing step ensures that the data is clean and ready for use by the model. We then pass this data through each model and collect the results. For evaluation purposes, we label the dataset using genres and compare the genres of a given book with the genres of the recommended books. Since a book can belong to multiple genres, we create a one-hot encoding of these genres and calculate the relevance score of each recommended book by comparing the one-hot encodings. If the recommended book and the input book share at least a predetermined number of genres, then the recommended book is considered relevant. We then compute the precision at 5, 10, and 25 for the relevant recommended books in the order they are retrieved. We perform a baseline comparison by running the same evaluation metrics on a simple sentence-embedding model, the MULTI-BERT model, and a TF-IDF vectorizer. We evaluate the models using precision; to use precision, we need to know which retrieved documents are relevant to the query and which are not. For this purpose we consider genre to be the label, but in our dataset each item has multiple genres. To determine the relevance of a recommended book, we compare the genres of the input book with those of the recommended book, and if the genre overlap is above a certain threshold we consider that book relevant. For this experiment, a recommended book is considered relevant if the genre overlap is greater than 40%. Table 1 shows the precision values of the different models at 5, 10, and 25 retrieved documents. The proposed model, MULTI-BERT, outperforms SBERT and TF-IDF at 5 retrieved documents, but falls behind TF-IDF at 10 and 25 retrieved documents.
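A minimal sketch of this genre-overlap relevance check and the precision@k computation is given below; the function names are illustrative, and measuring overlap as a fraction of the input book's genres is an assumption, since the exact overlap measure is not spelled out above.

```python
# Illustrative sketch of the genre-overlap relevance check and precision@k.
from typing import List, Set

def is_relevant(input_genres: Set[str], rec_genres: Set[str], threshold: float = 0.4) -> bool:
    """A recommendation counts as relevant if its genre overlap with the input
    book exceeds the threshold (here: fraction of the input book's genres)."""
    if not input_genres:
        return False
    overlap = len(input_genres & rec_genres) / len(input_genres)
    return overlap > threshold

def precision_at_k(input_genres: Set[str], ranked_recs: List[Set[str]], k: int) -> float:
    """Precision@k over the first k recommendations, in the order retrieved."""
    top_k = ranked_recs[:k]
    relevant = sum(is_relevant(input_genres, genres) for genres in top_k)
    return relevant / k

# Example: genres of one query book and of its first five recommendations.
query = {"children", "fantasy", "adventure"}
recs = [{"children", "fantasy"}, {"history"}, {"children", "adventure"},
        {"children", "fantasy", "magic"}, {"romance"}]
print(precision_at_k(query, recs, k=5))  # -> 0.6
```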
The performance of these models is dependent on the specific data and task at hand. In this case, the small amount of data used may have impacted the results. Additionally, TF-IDF's ability to identify important keywords within a document may have given it an advantage in the book recommendation task. Further experimentation with MULTI-BERT may reveal its potential to outperform existing models in other tasks. ## Conclusion and Future Work MULTI-BERT is able to outperform SBERT because it captures both intra-sentence and inter-sentence relations within a document. SBERT, on the other hand, only captures the content of individual sentences and does not take into account the relationships between sentences within a document. By capturing both intra-sentence and inter-sentence relations, MULTI-BERT is able to create more powerful and accurate document embeddings than SBERT, which is why MULTI-BERT is able to outperform it. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Models** & **P@5** & **P@10** & **P@25** \\ \hline **MULTI-BERT** & 0.9413 & 0.7889 & 0.7621 \\ \hline **S-BERT** & 0.7563 & 0.7764 & 0.7294 \\ \hline **TF-IDF** & 0.8164 & 0.8128 & 0.7877 \\ \hline \end{tabular} \end{table} Table 1: **Precision of different models** When comparing the results of using TF-IDF and MULTI-BERT for retrieval, it is observed that MULTI-BERT performs well for small retrieval sets, but not as well for larger ones. This may be due to limitations of MULTI-BERT, which can be studied in future research to improve its performance on larger retrieval tasks. Future work includes applying this model to larger datasets and to tasks other than recommendation. We would also like to try more hybrid models in which multiple BERT models are combined to perform a single task, and to see how the performance of the model changes with the addition of other models.
2310.09242
A Multifaceted Look at Starlink Performance
In recent years, Low-Earth Orbit (LEO) mega-constellations have emerged as a promising network technology and have ushered in a new era for democratizing Internet access. The Starlink network from SpaceX stands out as the only consumer-facing LEO network with over 2M+ customers and more than 4000 operational satellites. In this paper, we conduct the first-of-its-kind extensive multi-faceted analysis of Starlink network performance leveraging several measurement sources. First, based on 19.2M crowdsourced M-Lab speed test measurements from 34 countries since 2021, we analyze Starlink global performance relative to terrestrial cellular networks. Second, we examine Starlink's ability to support real-time web-based latency and bandwidth-critical applications by analyzing the performance of (i) Zoom video conferencing, and (ii) Luna cloud gaming, comparing it to 5G and terrestrial fiber. Third, we orchestrate targeted measurements from Starlink-enabled RIPE Atlas probes to shed light on the last-mile Starlink access and other factors affecting its performance globally. Finally, we conduct controlled experiments from Starlink dishes in two countries and analyze the impact of globally synchronized "15-second reconfiguration intervals" of the links that cause substantial latency and throughput variations. Our unique analysis provides revealing insights on global Starlink functionality and paints the most comprehensive picture of the LEO network's operation to date.
Nitinder Mohan, Andrew Ferguson, Hendrik Cech, Prakita Rayyan Renatin, Rohan Bose, Mahesh Marina, Jörg Ott
2023-10-13T16:47:26Z
http://arxiv.org/abs/2310.09242v2
# A Multifaceted Look at Starlink Performance ###### Abstract. In recent years, Low-Earth Orbit (LEO) mega-constellations have emerged as a promising network technology and have ushered in a new era for democratizing Internet access. The Starlink network from SpaceX stands out as the only consumer-facing LEO network with over 2M+ customers and more than 4000 operational satellites. In this paper, we conduct the first-of-its-kind extensive multi-faceted analysis of Starlink network performance leveraging several measurement sources. First, based on 19.2M crowdsourced M-Lab speed test measurements from 34 countries since 2021, we analyze Starlink global performance relative to terrestrial cellular networks. Second, we examine Starlink's ability to support real-time web-based latency and bandwidth-critical applications by analyzing the performance of (i) Zoom video conferencing, and (ii) Luna cloud gaming, comparing it to 5G and terrestrial fiber. Third, we orchestrate targeted measurements from Starlink-enabled RIPE Atlas probes to shed light on the last-mile Starlink access and other factors affecting its performance globally. Finally, we conduct controlled experiments from Starlink dishes in two countries and analyze the impact of globally synchronized "15-second reconfiguration intervals" of the links that cause substantial latency and throughput variations. Our unique analysis provides revealing insights on global Starlink functionality and paints the most comprehensive picture of the LEO network's operation to date. **(2)** We compare the performance of real-time applications, (i) Zoom video conferencing and (ii) Luna cloud gaming, to terrestrial networks (§5). We find that, under optimal conditions, Starlink is capable of supporting such applications, matching the performance over cellular; however, we do observe some artifacts due to the network's periodic reconfigurations. **(3)** We perform targeted measurements from Starlink RIPE Atlas (Serban et al., 2016) probes and leverage their diverse locations to characterize the satellite last-mile "bent-pipe" performance (§6.1). We find that the "bent-pipe" latency within the dense 53° shell remains consistent worldwide (\(\approx\) 40 ms), and is significantly higher for the yet-incomplete 70° and 97.6° orbits. We also find evidence of Starlink inter-satellite links (ISLs) connecting remote regions, showcasing superior performance to terrestrial paths in our case study. **(4)** Our high-frequency measurements from terminals in two European countries confirm that Starlink performs network reconfigurations every 15s, leading to noticeable latency and throughput degradations at sub-second granularity. By correlating data from our terminals, one covered by 53° and the other restricted to 70° and 97.6° connectivity, we find that the reconfigurations are globally synchronized events and likely independent of satellite handovers. Leveraging multi-dimensional, global, and controlled high-resolution measurements, our findings distinctively advance the state-of-the-art by illuminating Starlink's global performance and the influence of internal network operations on real-time web applications. ## 2. Background Starlink is a LEO satellite network operated by SpaceX that aims to provide global Internet coverage through a fleet of satellites flying at \(\approx\) 500 km above the Earth's surface. The majority of Starlink's operational 4000 satellites lie within the 53° shell, which only covers parts of the globe (see Figure 1). The 70° and 97.6° orbits allow serving regions near the poles.
These other shells, however, have fewer satellites (see Appendix A, Table 2 for constellation details). Figure 2 shows the cross-section of Starlink end-to-end connectivity. To access the Internet over the Starlink network, end-users require a dish, a.k.a. "Dishy"1, that communicates with satellites visible above 25° of elevation through phased-array antennas using Ku-band (shown as User Link (UL)). Starlink satellites, equipped with multiple antennas subdivided into beams, can connect to multiple terminals simultaneously (Serban et al., 2016) and relay all connections to a ground station (GS) on a Ka-band link (shown in green). The connection forms a direct "bent-pipe" when the terminal and GS lie within a single satellite's coverage cone; otherwise, the satellites can relay traffic in space to reach far-off GSs via laser inter-satellite links (ISLs), forming an "extended bent-pipe". Note that not all Starlink satellites are ISL-capable, and it is difficult to effectively estimate ISL usage since Starlink satellites have no user visibility at the IP layer and, therefore, do not show up in traceroutes. Footnote 1: We use "Dishy" and "user terminal" interchangeably in the paper. Finally, the GSs relay traffic from satellites to a Starlink point-of-presence (PoP) through a wired connection, which routes it to the destination server via the terrestrial Internet (Borda et al., 2016). The public availability of GS deployment information differs across countries. No official source exists, so we rely on crowdsourced data for the geolocations of GSs and PoPs (Serban et al., 2016), which is also shown in Figure 1. ## 3. Measurement Methodology ### Global Measurements _Measurement Lab (M-Lab)._ M-Lab (Mendle et al., 2016) is an open-source project that allows users to perform end-to-end throughput and latency speed tests from their devices to 500+ servers in 60+ metropolitan areas (Serban et al., 2016). Google offers M-Lab measurements when a user searches for "speed test" (Serban et al., 2016), serving as the primary source of measurement initiations (Mendle et al., 2016; Serban et al., 2016; Serban et al., 2016). At its core, M-Lab uses the Network Diagnostic Tool (NDT) (Serban et al., 2016), which measures uplink and downlink performance using a single 10 s WebSocket TCP connection. The platform also records fine-grained transport-level metrics (tcp_info), including goodput, round-trip time (RTT), and losses, along with the IP, Autonomous System Number (ASN), and geolocation of both the end-user device and the selected M-Lab server. We identify measurements from Starlink clients via their ASN (AS14593). The M-Lab dataset includes samples from 59 out of 63 countries where Starlink is operational. We restrict our analysis to ndt7 measurements, which use TCP BBR, and to countries with _at least 1000 measurements_, resulting in 19.2 M M-Lab measurement samples from 34 countries. Our analysis chronicles the global Starlink operation from its inception, as the first measurement samples in our dataset are dated to June 2021, which is closely aligned with the launch of Starlink v1.0 and v1.5 satellites (Serban et al., 2016). We find that Figure 1. Orbits of three Starlink inclinations and crowdsourced Ground Station (GS) and Point-of-Presence (PoP) locations (Serban et al., 2016). Shaded regions depict Starlink’s service area. Figure 2.
Starlink follows “bent-pipe” connectivity as traffic traverses the client-side terminal, one or more satellites via inter-sat links (ISLs), nearest ground station (GS), ingressing with the terrestrial Internet via a point-of-presence (PoP). the M-Lab server selection algorithm assigns the geographically closest server to the estimated client location (Srivastava et al., 2017), which might not always be optimal for Starlink, given its PoP-centered architecture. While we examine such artifacts by contrasting the M-Lab and RIPE Atlas results (SS6.1), we approached our analysis with caution, particularly when examining fine-grained region-specific insights. _RIPE Atlas._ RIPE Atlas is a measurement platform that the networking research community commonly employs for conducting measurements (Krishnan et al., 2017). The platform comprises thousands of hardware and software probes scattered globally, enabling users to carry out active network measurements such as ping, traceroute, and DNS resolution to their chosen endpoints. In our study, we utilized 98 Starlink RIPE Atlas probes across 21 countries (see Figure 3). Our measurement targets were 145 data centers from _seven_ major cloud providers - Amazon EC2, Google, Microsoft, Digital Ocean, Alibaba, Amazon Lightsail, and Oracle (see Appendix B). The chosen operators represent the global cloud market (Srivastava et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) and ensure that our endpoints are close to Starlink PoPs, which are usually co-located with Internet eXchange Point (IXP) or data center facilities (Srivastava et al., 2017; Krishnan et al., 2017). We perform ICMP traceroute from Atlas probes to endpoints situated on the same or neighboring continent. We extract and track per-hop latencies between Starlink probe terminal-to-GS (identified by static 100.64.0.1 address), GS-to-PoP (172.16/12 address) and PoP-to-endpoint at 2 s intervals (Krishnan et al., 2017). Additionally, to improve PoP geolocations, we extract semantic location embeddings in reverse DNS PTR entry, e.g. tata-level3-seattle2.level3.net (Krishnan et al., 2017). Our measurements over _ten_ months (Dec 2022 to Sept 2023) resulted in \(\approx\) 1.8 M samples. ### Real-time Web Application Measurements _Zoom Video Conferencing_ We experimented with Zoom videoconferencing (Krishnan et al., 2017) due to its popularity in the Internet ecosystem (Krishnan et al., 2017) as well as latency and bandwidth-critical operational requirements. We set up a call between two parties, one using a server with access to an unobstructed Starlink dish and high-speed terrestrial fiber over 1 Gbps Ethernet. The other end was on an AWS machine located close to the assigned Starlink PoP. We set up virtual cameras and microphones on both machines, which were fed by a pre-recorded video of a person talking, resulting in bidirectional transmission. Both machines were time-synchronized to local stratum-1 NTP servers and we recorded (and analyzed) Zoom QoS leveraging the open-source toolchain from (Wang et al., 2018) that yields sub-second metrics. _Cloud Gaming._ We also experiment with cloud gaming due to its demanding high throughput and low delay requirements (Krishnan et al., 2017). We leverage the automated system by Iqbal et al. (Iqbal et al., 2018) to evaluate the performance of playing the racing game "The Crew" on the Amazon Luna (Luna, 2018) platform. 
The measurements are based on a customized streaming client that records end-to-end information about media streams, such as frame and bitrate. The system also utilizes a bot that executes in-game actions at pre-defined intervals that trigger a predictable and immediate visual response. In post-processing, their analysis system detects the visual response and computes the _game delay_ as the time passed since the input action was triggered. Amazon Luna serves games at a resolution of up to 1920\(\times\)1080 at 60 FPS and adaptively reduces the resolution to, e.g., 1280\(\times\)720. We ran the game streaming client on the same machine as the Zoom measurements, additionally setting up a 5G modem to compare Starlink against cellular network. Similar to Zoom, the Luna game server was on AWS server close to our Starlink PoP (\(\approx\) 1 ms RTT). ### Targeted Measurements A significant limitation of our global measurements is their lack of sub-second visibility, which is essential for understanding the intricacies of Starlink network behavior. To allow us to obtain microscopic understanding, we orchestrated a set of precise, tailored, and controlled experiments, utilizing two Starlink terminals as vantage points (VPs) situated in two European countries. One connects to the 53' shell while the other, deployed in a high latitude location, can be shielded to confine its communication to the 70' and 97.6' orbits (see Figure 4). We placed a metal sheeting2 barrier at the South-facing angle of the terminal, which obstructed its view from the 53' inclinations. We verify with external satellite trackers (Srivastava et al., 2017; Krishnan et al., 2017) that the terminal only received connectivity from satellites in 97' or 70' inclinations, which resulted in brief _connectivity windows_ followed by periods of no service. We performed experiments using the Isochronous Round-Trip Tester (irtt) (Krishnan et al., 2017) and iperf(Krishnan et al., 2017) tools. The irtt setup records RTTs at high resolutions (3 ms interval) by transmitting small UDP packets. The irtt servers were deployed on cloud VMs in close proximity to the assigned Starlink PoP of both VPs (within 1 ms) - minimizing the influence of terrestrial path on our measurements. We used iperf to measure both uplink and downlink throughput and record performance at 100 ms granularity. Figure 4. Field-of-view experiment setup. Dishy, deployed at a high latitude location, is obstructed by a metal shielding, which restricts its connectivity to the 70° and 97.6’ orbits. Figure 3. Overview of global Starlink measurements in this study. Heatmap denotes M-Lab speedtest measurement densities. Starlink RIPE Atlas probes are shown as red circles. Simultaneously, we polled the gRPC service on each terminal (Zhou et al., 2017) every second to obtain the connection status information. ## 4. Global Starlink Performance We use the minimum RTT (minRTT) reported during ndt7 tests to the closest M-Lab server globally to quantify the baseline network performance. This metric is not affected by queuing delays prevalent during throughput measurements which results in elevated latencies. To put the Starlink latency into context, we select speedtests originating from terrestrial serving-ISPs to capture mobile network traffic. We filter measurements from devices connected to the top-3 mobile network operators (MNOs) in each country (see Appendix C for details). 
Note that our criterion results in a mix of wired and wireless access networks since M-Lab does not provide a way to distinguish between the two. Our endpoint selection remains the same for both Starlink and terrestrial networks (see SS3.1). _Global View._ Figure 5 shows that, for a majority of countries, clients using terrestrial ISPs experience better latencies over Starlink. While the median latency of Starlink hovers around 40-50 ms in most countries, this distribution varies significantly across geographical regions. For instance, in Colombia, Starlink clients report better latencies than those utilizing established terrestrial networks. Conversely, in Manila (The Philippines), Starlink's performance is notably inferior (Figure 6). The uneven distribution of GSs and PoPs (Figure 1) may explain the latency differences; the USA, which experiences significantly lower latencies, also boasts a robust ground infrastructure. Similar trends are seen in Kenya and Mozambique, where the closest PoP is located in Nigeria. _Well-Provisioned Regions._ Even though a significant portion of global Starlink measurement samples originate from Seattle (\(\approx 10\%\)), the region shows consistently low latencies, with the \(75^{\text{th}}\) percentile well below 50 ms (Figure 6). Contributing factors can be dense GS availability or internal service prioritization for Starlink's headquarters. However, we observe that Starlink performance is fairly consistent across the USA, confirming that Seattle is not an anomaly but the norm (see Figure 21a in Appendix D). This result highlights the LEO network's potential to bridge Internet access disparities, which significantly affects the quality of terrestrial Internet in the USA (Kennedy et al., 2017; D'Amico et al., 2018; D'Amico et al., 2018). Europe is also relatively well covered with GSs but hosts only three PoPs that are in the UK, Germany, and Spain. Proximity to the nearest PoP correlates strongly with minRTT performance in Figure 7 - Dublin, London, and Berlin exhibit latencies comparable to the US, while for Rome and Paris, the \(75^{\text{th}}\) percentile is \(\approx 20\) ms longer. Unlike US, Starlink in EU has significantly longer tail latencies, often surpassing 100 ms. _Under-Provisioned Regions._ Starlink's superior performance in Colombia hints at its potential for connecting under-provisioned regions. However, Figure 6 shows that Starlink in South America (SA) trails significantly behind the US and Europe, with the \(75^{\text{th}}\) percentile exceeding 100 ms and tail at 200 ms. We observe similar performances in Oceania (see Figure 21b in Appendix D). By extracting the share of satellite vs. terrestrial path (from PoP to M-Lab servers, see Figure 18 in Appendix D)3, we find that the majority of SA Starlink latency comes from the bent-pipe. In contrast, latencies from Mexico and Africa (except Nigeria) show significant terrestrial influence, which we allude to non-optimal PoP assignments by Starlink routing policies. Footnote 3: We subtract the latency to the Starlink PoP reported by M-Lab’s reverse traceroutes from the end-to-end TCP minRTT. We observed an interesting impact of ground infrastructure deployment in the Philippines, where a local PoP was deployed in May 2023. 
Prior to this, Starlink traffic from the Philippines was directed to the nearest Japanese PoP, traversing long submarine links to reach the geographically closest M-Lab server in-country - evident from Figure 19 in Appendix D which shows additional 50-70 ms RTT incurred by Philippine users to reach in-country vs. Japanese M-Lab servers. However, post-May 2023, the latencies to in-country servers were reduced by 90% as the traffic was routed via the local PoP. Despite such artifacts, Starlink shows an evident trend towards more consistent sub-50 ms latencies globally over the past 17 months, specifically evident in Sydney (Figure 23a in Appendix D). We conclude that while Starlink lags behind terrestrial networks today, the gap will continue to shrink as the ground (and satellite) infrastructure expands. _Latency Under Load._ Recent findings suggest that Starlink may be susceptible to _bufferloat_(Kennedy et al., 2017; D'Amico et al., 2018), wherein latencies during traffic load can increase significantly due to excessive queue buildups (Sarlink et al., 2018). To explore this globally, we evaluate the RTT inflation, i.e., the difference between the maximum and minimum RTT observed during a speed test. Figure 8 reveals significantly increased RTTs under load within Starlink globally. During active downloads (Figure 8a), the Starlink-enabled clients can experience \(\approx 2\)-\(4\times\) increased RTTs, reaching almost 400-500 ms. While such inflations are consistent across _all_ Starlink service areas, they are more prominent in regions with subpar baseline performance, e.g., Mexico. Note that the Starlink latency under load is not symmetric. The \(60^{\text{th}}\) percentile of RTT during uploads increases to \(\leq 100\) ms globally (see Figure 8(b)) compared to \(\approx 200\) ms during downloads. We observe similar behavior while conducting iperf over our controlled terminals. Possible explanations can be queue size differences at the client-side Starlink router (affecting uploads), the ground station (affecting downloads), or satellites (impacting both). It is also plausible that Starlink employs active queue management (AQM) techniques (Bouquet et al., 2018) to moderate uplink latencies under congestion. This approach, however, may adversely affect the performance of applications that demand both high bandwidth and low latency - which we explore in SS5. _Goodput._ Figure 9 shows Starlink download and upload goodputs from speedtest globally. Unlike latencies (Figure 6), the goodput distributions appear relatively homogeneous. Most Starlink clients achieve \(\approx 50\)-\(100\) Mbps download and \(\approx 4\)-\(12\) Mbps upload rates at the \(75^{\text{th}}\) percentile. We do also not find any correlation between baseline latencies (see Figure 6) and upload/download goodput, evident from the contrasting cases of Dublin and Manila. However, we observe an inverse correlation between loss rates and goodputs; increasing from 4-8% at the \(75^{\text{th}}\)-percentile (see Figure 20 in Appendix D). Seattle, notable for its latency performance, records average goodputs. Given its high measurement density at this location, this trend might be attributable to Starlink's internal throttling or load-balancing policies aimed at preventing congestion on the shared network infrastructure (Zhou et al., 2017). 
We also find that over the past 17 months, Starlink goodputs have stabilized rather than increased, with almost all geographical regions demonstrating similar performance (shown in Figure 23 in Appendix D). _Takeaway #1_ -- Starlink exhibits competitive performance to terrestrial ISPs on a global scale, especially in regions with dense GS and PoP deployment. However, noticeable degradation is observable in regions with limited ground infrastructure. Our results further confirm that Starlink is affected by bufferbloat. Over the past 17 months, Starlink appears to be optimizing for consistent global performance, albeit with a slight reduction in goodput, likely due to the increasing subscriber base. ## 5. Real-Time Application Performance While the global Starlink performance in SS4 is promising for supporting web-based applications, it does not accurately capture the potential impact of minute network changes caused by routing, satellite switches, bufferbloating, etc., on application performance. Real-time web applications are known to be sensitive to such fluctuations (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). In this section, we examine the performance of Zoom and Amazon Luna cloud gaming over Starlink (see SS3.2 for details). This allows us to assess the suitability of the LEO network to meet the requirements of the majority of real-time Internet-based applications, as both applications impose a strict latency control loop. Cloud gaming necessitates high downlink bandwidth, while Zoom utilizes uplink and downlink capacity simultaneously. _Zoom Video Conferencing._ Figure 10 shows samples from Zoom calls conducted over a high-speed terrestrial network and over Starlink. The total uplink throughput over Starlink is slightly higher, which we trace to FEC (Forward Error Correction) packets that are frequently sent in addition to raw video data (on average 146\(\pm\)99 Kbps vs. 2\(\pm\)2 Kbps over terrestrial). The frame rate, inferred from the packets received by the Zoom peer, does not meaningfully differ between the two networks (\(\approx\) 27 FPS). Note that, since Zoom does not saturate the available uplink and downlink capacity, it should not be impacted by bufferbloating. Yet, we observe a slightly higher loss rate over LEO, which the application combats by proactively utilizing FEC. The uplink one-way delay (OWD) over Starlink is higher and more variable compared to the terrestrial connection (on average 52\(\pm\)14 ms vs. 27\(\pi\)\(\pi\) ms). All observations also apply to the downlink except that Starlink's downlink latency (35\(\pm\)11 ms) is similar to the terrestrial connection (32\(\pm\)2\(\pi\) ms). Our analysis broadly agrees with (Wang et al., 2018) but our packet-level insight reveals bitrate fluctuations partly caused by FEC. Further, our Starlink connection was more reliable and we did not experience second-long outages. Interestingly, we observe that the Starlink OWD often noticeably shifts at interval points that occur at 15 s increments. Further investigation reveals the cause to be the Starlink _reconfiguration interval_, which, as reported in FCC filings (Wang et al., 2018), is the time-step at which the satellite paths are reallocated to the users. Other recent work also reports periodic link degradations at 15 s boundaries in their experiments, with RTT spikes and packet losses of several orders (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). 
We explore the impact of reconfiguration intervals and other Starlink-internal actions on network performance in SS6. _Amazon Luna Cloud Gaming._ Table 1 shows 150 minutes of cloud gaming performance over terrestrial, 5G cellular, and Starlink networks. Overall, all networks realized close to 60 FPS playback rate at consistently high bitrate (\(\approx\) 20 Mbps). Starlink lies in between the better-performing terrestrial and cellular in terms of bitrate fluctuations, frame drops and freezes4. Starlink exhibits the highest Figure 5. Median of minimum RTT (in ms) of devices connected via Starlink (left) and top-3 serving ISPs (right) in the same country to the nearest M-Lab server. Figure 8. RTT inflation (maxRTT-minRTT) during M-Lab speedtests over Starlink: (a) download, (b) upload traffic. Figure 7. Distributions of M-Lab minRTTs from select cities in Europe and South America, respectively. Figure 9. Distribution of median (a) download and (b) upload goodput over Starlink from selected cities globally. game delay, i.e., the delay experienced by the player between issuing a command and witnessing its effect. Specifically, the wired network delivers the visual response about 2 frames (\(\approx 33\) ms) earlier than both 5G and Starlink. While examining the gaming performance over time, we observe occasional drops to \(<20\) FPS over Starlink (see Figure 11), that coincide with Starlink's reconfiguration interval. These fluctuations are only visible at sub-second granularity and, hence, are not reflected in global performance analysis (SS4). Despite these variations, Starlink's performance remains competitive with 5G, highlighting its potential to deliver real-time application support, especially in regions with less mature cellular infrastructure. Note, however, that our Starlink terminal was set up without obstructions and the weather conditions during measurements were favorable to its operation (Sarlink et al., 2017). Different conditions, especially mobility, may change the relative performance of Starlink and cellular, which we plan to explore further in the near future. _Takeaway #2_ -- Starlink is competitive with the current 5G deployment for supporting demanding real-time applications. We also observe that Starlink experiences regular performance changes every 15s linked to its reconfiguration interval period. While these internal black-box parameters do influence performance to a certain extent, application-specific corrective measures, like FEC, are effective in mitigating these artifacts. ## 6. Dissecting the bent-pipe We now attempt to uncover Starlink's behind-the-scenes operations and their impact on network performance. We follow a two-pronged approach to undertake this challenge. Our longitudinal traceroute measurements over RIPE Atlas accurately isolate the bent-pipe (terminal-to-PoP) global performance, allowing us to correlate it with parameters like ground station deployment, satellite availability, etc. (SS6.1). We then perform high-frequency, high-resolution experiments over Starlink terminals deployed in two EU countries to zoom in on bent-pipe operation and highlight traffic engineering signatures that may impact application performance (SS6.2). ### Global Bent-Pipe Performance _Starlink vs. Cellular Last-mile_ We contrast our end-to-end M-Lab and real-time application analysis by comparing the Starlink bent-pipe latencies from RIPE Atlas traceroutes to cellular wireless last-mile (device-to-ISP network) access. 
Given the under-representation of cellular probes in RIPE Atlas, we augment our dataset with recent comprehensive measurements from Dang et al. (Dang et al., 2018), which leveraged 115,000 cellular devices over the Speedchecker platform to analyze the performance of cellular networks worldwide. Figure 12 presents a comparative analysis of both networks across countries common in both datasets. Consistent with our previous findings, we find that the Starlink bent-pipe latencies fall within 36-48 ms, with the median hovering around 40 ms for almost all countries. Similarly, we find consistent cellular last-mile latencies across all countries, but almost 1.5\(\times\) less than Starlink. Recent investigations (Sarlink et al., 2018) report similar access latencies over WiFi and cellular networks. The bent-pipe latencies also corroborate our estimations in SS4 that the terminal-PoP path is the dominant \begin{table} \begin{tabular}{l r|r r} \hline \hline & Terrestrial & Cellular & Starlink \\ \hline Idle RTT (ms) & 9 & 46 & 40 \\ Throughput (Mbps) & 1000 & 150 & 220 \\ \hline Frames-per-second & 59\(\pm\)1.51 & 59\(\pm\)1.68 & 59\(\pm\)1.63 \\ Bitrate (Mbps) & 23.08\(\pm\)0.38 & 22.82\(\pm\)4.24 & 22.81\(\pm\)2.16 \\ Time at 1080p (\%) & 100 & 94.11 & 99.45 \\ Freezes (ms/min) & 0\(\pm\)0 & 0\(\pm\)220.34 & 0\(\pm\)119.74 \\ Inter-frame (ms) & 17\(\pm\)3.65 & 18\(\pm\)11.1 & 16\(\pm\)6.76 \\ \hline Game delay (ms) & 133.53\(\pm\)19.79 & 165.82\(\pm\)23.55 & 167.13\(\pm\)23.12 \\ RTT (ms) & 11\(\pm\)13.41 & 39\(\pm\)17.06 & 50\(\pm\)16.28 \\ Jitter buffer (ms) & 15\(\pm\)3.27 & 12\(\pm\)1.33 & 15\(\pm\)3.35 \\ \hline \hline \end{tabular} \end{table} Table 1. The game metrics are aggregated over 150 minutes of playtime per connection. Values denote median:SD and the worst performer is highlighted. Figure 11. Cloud gaming over 5G (left) and Starlink (right). Vertical dashed lines show Starlink reconfiguration intervals. Figure 12. Last-mile latencies for different countries. “Starlink” denotes satellite bent-pipe over RIPE Atlas while “Cellular” wireless access from Speedchecker (Dang et al., 2018). Figure 10. Uplink Zoom video traffic over a terrestrial network (left) and Starlink (right). Vertical dashed lines show Starlink reconfiguration intervals. contributor to the end-to-end latency. Out of the 21 countries with Starlink-enabled RIPE Atlas probes, the only exceptions where the bent-pipe latency is significantly higher (\(\approx\) 100 ms) are the Virgin Islands (US), Reunion Islands (FR), and Falkland Islands (UK). Correlating with Figure 3, we find that Starlink neither has a GS nor a PoP in these regions, which may result in traffic routing over ISLs to far-off GS leading to longer bent-pipe latencies. Impact of Ground Infrastructure.We extend our analysis by exploring the correlation between the distance from Starlink users to the GS and bent-pipe latencies. Recall that we rely on crowdsourced data (Sandhi et al., 2018) for geolocating Starlink ground infrastructure since these are not officially publicly disclosed. We deduce through our traceroutes that Starlink directs its subscribers to the nearest GS relative to the PoP, as the GS-PoP latencies are \(\approx\) 5 ms almost globally (see Figure 22 in Appendix D - sole exceptions being US and Canada with 7-8 ms, likely due to abundant availability of GSs and PoPs resulting in more complex routing). Figure 13 shows the correlation of reported bent-pipe latency with the terminal-GS distance. 
Each point in the plot denotes at least 1000 measurements. We observe a directly proportional relationship as bent-pipe latencies tend to increase with increasing distance to the GS. Furthermore, we find that the predominant distance between GS and the user terminal is \(\leq\) 1200 km, which is also the approximate coverage area width of a single satellite from 500 km altitude (Brandt et al., 2018) - suggesting that these connections are likely using direct bent-pipe, either without or with short ISL paths. Few terminals, specifically in Reunion, Falkland and the Virgin Islands, connect to GSs significantly farther away, possible only via long ISL chains, the impact of which we analyze further as a case study below. Case Study: Reunion Island.The majority of Starlink satellites (starting from v1.5 deployed in 2021) are equipped with ISLs (Sandhi et al., 2018), and reports from SpaceX suggest active utilization of these links (Sandhi et al., 2018). Recent studies also agree with the use of ISLs (Sandhi et al., 2018), but point out inefficiencies in space routing (Sandhi et al., 2018). Nonetheless, the invisibility of satellite hops in traceroutes poses a challenge in accurately assessing the latency impact of ISLs. As such, we focus on a probe in Reunion Island (RU), which connects to the Internet via Frankfurt PoP (\(\approx\) 9000 km). Figure 14 segments the bent-pipe RTT between the user terminal (Dishy) to GS (non-terrestrial), and from GS to the PoP (terrestrial). For comparison, we also plot the RTTs from a probe within Germany (DE) connecting to the same PoP (\(\approx\) 500 km, in red). The vertical lines represent the median RTT over terrestrial infrastructure from both probe locations to the PoP. Firstly, we observe minimal GS-PoP latency for both locations, verifying that the RU satellite link is using ISLs. Secondly, in RU, Starlink shows significant latency improvement over fiber (\(\approx\) 60 ms). This is because the island has limited connectivity with two submarine cables routing traffic 10,000 km away, either in Asia or South America (Sandhi et al., 2018). Starlink provides a better option by avoiding the terrestrial route altogether, directly connecting RU users to the dense backbone infrastructure in EU (Sandhi et al., 2018). However, since the bent-pipe incurs at least 30-40 ms latency in the best-case, Starlink is less attractive in regions with robust terrestrial network infrastructure (also evident from the DE probe where fiber achieves better latencies). Impact of Serving Orbit.Recall that the majority of Starlink satellites are deployed in the 53' inclination (see Table 2 in Appendix A). Consequently, network performance for clients located outside this orbit's range may vary widely as they are serviced by fewer satellites in 70' and 97.6' orbits. Figure 15 contrasts the bent-pipe latencies of probe in Alaska (61.5685N, 149.0125W) ["A"] to probes within 53' orbit. Despite dense GS availability, the bent-pipe latencies for Alaska are significantly higher (\(\approx\) 2\(\times\)). The Swedish probe ["B"] at 59.6395N is at the boundary of 53' orbit but still exhibits comparable latency to Canada, UK, and Germany. Furthermore, the Alaskan probe experiences intermittent connectivity, attributed to the infrequent passing of satellite clusters within the 70" and 97.6' orbits. These findings indicate substantial discrepancies in Starlink's performance across geographical regions, which may evolve for the better as more satellites are launched in these orbits. 
Nevertheless, we leverage the sparse availability of satellites at the higher latitude to further dissect the bent-pipe operations in §6.2. Figure 14. Bent-pipe RTT segments from Reunion Island (yellow) vs. Germany (red) connecting to the Germany PoP. Vertical lines show latency over Atlas probes connected via fiber from both locations to the Frankfurt server (PoP location). Figure 13. Correlation between Starlink bent-pipe latency and Dishy-GS distance. Red line denotes linear regression fit. Figure 15. Bent-pipe latencies for “A” (in Alaska) covered by the 70° and 97.6° orbits, while the rest (Sweden “B”, Canada “C”, UK “D”, and Germany “E”) are also covered by 53°. ### Controlled Experiments We now investigate the cause of periodic disruptions to real-time applications (§5). Specifically, we perform high-resolution measurements to gain insights into Starlink network operation. _Global Scheduling._ We performed simultaneous RTT measurements from two countries that are sufficiently geographically removed that both cannot be connected to the same serving satellite. We also verify that both terminals are assigned different PoPs located within their country. The resulting RTTs, shown in Figure 16(a), vary in a consistent pattern, being comparatively stable within each Starlink reconfiguration interval but potentially changing significantly between intervals. Moreover, the time-wise alignment of reconfiguration intervals for both vantage points indicates that Starlink operates on a globally coordinated schedule, rather than on a per-Dishy or per-satellite basis. These results are in line with other recent studies (Yin et al., 2020), which also hint that Starlink utilizes a global network controller. Previous studies (Rendrik et al., 2020) have noticed drops in downlink throughput every 15s but have not correlated these with the reconfiguration intervals. We also observe throughput drops on both downlink and uplink, shown in Figure 16(b), that occur at the reconfiguration interval boundaries. Similar to the RTT, the throughput typically remains relatively consistent within an interval, but can experience sudden changes at interval transitions. These also corroborate the periodic performance degradation in our real-time application experiments. _Disproving Satellite Handoff Hypothesis._ Previous works have suggested satellite or beam changes at reconfiguration interval boundaries to be the root cause of network degradation (Rendrik et al., 2020; Wang et al., 2020; Yin et al., 2020). To investigate this hypothesis, we deliberately obstructed the field-of-view of our high-latitude Dishy to prevent it from connecting to the dense 53° orbital shell (see §3.3 for details). The restriction curtailed the number of candidate (potentially connectable) satellites to 13%. This limitation led to intermittent connectivity, characterized by brief connectivity windows with long service downtimes. By synchronizing the timings of each connectivity window with the overhead positions of candidate satellites (from CelesTrak (CelesTrak, 2019) and other sources (Yin et al., 2020)), we identify several windows where the terminal can be served only by a single satellite. Figure 16(c) (upper) shows RTTs from one such window. The fact that there is significant RTT variance between intervals invalidates the hypothesis that the changes in RTT are caused by satellite handovers (considering a single candidate satellite during the observed period, leaving no room for hand-off occurrences).
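As a rough illustration of how such reconfiguration-interval effects can be detected in raw latency traces, the sketch below buckets timestamped RTT samples into 15 s windows and flags boundaries where the per-interval median shifts. This is not the authors' analysis code; the reference epoch for the interval boundaries, the shift threshold, and all names are assumptions made for demonstration.

```python
# Illustrative sketch (not the authors' analysis code): detect RTT level shifts
# at 15-second reconfiguration-interval boundaries.
import numpy as np

INTERVAL = 15.0  # seconds, per Starlink's reconfiguration interval

def interval_id(timestamps, epoch=0.0):
    """Map each Unix timestamp to the index of its 15 s interval.
    The alignment epoch is an assumption for illustration."""
    return np.floor((np.asarray(timestamps) - epoch) / INTERVAL).astype(int)

def boundary_shifts(timestamps, rtts_ms, epoch=0.0, min_shift_ms=5.0):
    """Return (boundary_time, delta_ms) pairs where the per-interval median
    RTT changes by more than `min_shift_ms` across consecutive intervals."""
    ids = interval_id(timestamps, epoch)
    samples = {}
    for i, rtt in zip(ids, rtts_ms):
        samples.setdefault(i, []).append(rtt)
    medians = {i: float(np.median(v)) for i, v in samples.items()}
    shifts = []
    for i in sorted(medians):
        if i - 1 in medians:
            delta = medians[i] - medians[i - 1]
            if abs(delta) > min_shift_ms:
                shifts.append((epoch + i * INTERVAL, delta))
    return shifts
```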
Separately, we perform the same experiment but focus on (both uplink and downlink) throughput. Similar to RTT, we also witness throughput drops at interval boundaries even when only one candidate satellite is visible. _Scheduling Updates._ Figure 16(c) (lower) shows the distribution of start and end times of the connectivity windows during our restricted field-of-view experiments. We observed a strong correlation between connectivity end times and the reconfiguration interval (RI) boundary, which is not seen with start times5. The result hints at internal network scheduling changes at reconfiguration interval boundaries, i.e., Starlink assigns its terminals new satellites (or frequencies) every 15s. We hypothesize that with an obstructed view, the scheduler cannot find better alternatives in the 70° and 97.6° orbits, resulting in connectivity loss at the end of the window. Footnote 5: The fact that many windows appear to end just after the boundary is an artifact of the limited (per-second) granularity of the gRPC data and of the fact that the gRPC timestamps originate from the client making the gRPC requests rather than the user terminal. _Analysis Summary._ Putting together our various observations, we theorize that Starlink relies on a global scheduler that re-allocates the user-satellite(s)-GS path every 15s. An FCC filing from Starlink implies this behavior (Sarlink, 2019) and recent studies also suggest that the LEO operator performs periodic load balancing at reconfiguration boundaries, reconnecting all active clients to satellites (Sarlink, 2019; Yin et al., 2020). The theory also explains our observed RTT and throughput changes when only a single candidate satellite is in view. It is plausible that Figure 16. (left, a) RTT latencies with Dishys in two countries connected to different ground infrastructure; (middle, b) Maximum uplink and downlink throughput over a 195-second (13-interval) period; (right, c) (upper) RTTs for a connectivity window where the Dishy was connected to only a single satellite; (lower) Probability distribution of the time between the connectivity window start / end and the previous reconfiguration interval (RI). Vertical dashed lines show Starlink reconfiguration intervals. Starlink may have rescheduled the terminal to the same satellite but with reallocated frequency and routing resources. Regardless, these reconfigurations result in brief sub-second connection disruptions, which may become more noticeable at the application layer as the number of subscribers on the network increases over time. For instance, autonomous drones are an application that seems, at first glance, to be well-suited to Starlink, particularly if used in remote areas (Ballucci et al., 2017; Ballucci et al., 2018). However, the combination of strict low-latency requirements and high throughput demands (especially for multi-camera drones) has the potential to saturate the connection to the point where the reconfiguration interval may become problematic. _Takeaway #4_ -- Starlink uses 15s-long reconfiguration intervals to globally schedule and manage the network. Such intervals cause latency/throughput variations at the interval boundaries. Handoffs between satellites are not the sole cause of these effects. Indeed, our findings hint at a scheduling system reallocating resources for connections once every reconfiguration interval. ## 7.
Related Work LEO satellites have become a subject of extensive research in recent years, with a particular focus on advancing the performance of various systems and technologies. Starlink, the posterchild of LEO networks, continues to grow in its maturity and reach with \(>2\)M subscribers as of September 2023 (Selak et al., 2023). Despite its growing popularity, there has been limited exploration into measuring Starlink's performance so far. Existing studies either have a narrow scope, employing only a few vantage points (Selak et al., 2023; Ballucci et al., 2023; Ballucci et al., 2023) or focus on broad application-level operation (Selak et al., 2023; Selak et al., 2023) without investigating root-causes. Ma et al. (Ma et al., 2023) embarked on a journey across Canada with four dishes to scrutinize various factors, such as temperature and weather, that might influence Starlink's performance. A few endeavors have attempted to unveil the operations of Starlink's black-box network. Pan et al. (Pan et al., 2021) revealed the operator's internal network topology from traceroutes, whereas Tanveer et al. (Tanveer et al., 2021) spotlighted a potential global network controller. The absence of global measurement sites poses a predominant challenge hampering a comprehensive understanding of Starlink's performance. As we show in this work, Starlink's performance varies geographically due to differing internal configurations and ground infrastructure availability. Some researchers have devised innovative methods to combat this. For example, Izhikevich et al. (Izhikevich et al., 2023) conducted measurements towards exposed services behind the Starlink user terminal, while Taneja et al. (Taneja et al., 2021) mined social media platforms like Reddit to gauge the LEO network's performance. Our study not only corroborates and extends existing findings but also stands as the most extensive examination to date. Our approach - anchored in detailed insights from 34 countries, leveraging 19.2 million crowdsourced M-Lab measurements, 2.9 million active RIPE Atlas measurements, and two controlled terminals connecting to different Starlink orbits - provides a deeper understanding of the Starlink "bent-pipe" and overall performance. ## 8. Conclusions Despite its potential as a "global ISP" capable of challenging the state of global Internet connectivity, there have been limited performance evaluations of Starlink to date. We conducted a multifaceted investigation of Starlink, providing insights from a global perspective down to internal network operations. Globally, our analysis showed that Starlink is comparable to cellular for supporting real-time applications (in our case Zoom and Luna cloud gaming), though this varies based on proximity to ground infrastructure. Our case study shows Starlink inter-satellite connections helping remote users achieve better Internet service than terrestrial networks. However, at sub-second granularity, Starlink exhibits performance variations, likely due to periodic internal network reconfigurations at 15s intervals. We find that the reconfigurations are synchronized globally and are not caused only by satellite handovers. As such, this first-of-its-kind study is a step towards a clearer understanding of Starlink's operations and performance as it continues to evolve.
2306.03089
Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models
A long standing goal in neuroscience has been to elucidate the functional organization of the brain. Within higher visual cortex, functional accounts have remained relatively coarse, focusing on regions of interest (ROIs) and taking the form of selectivity for broad categories such as faces, places, bodies, food, or words. Because the identification of such ROIs has typically relied on manually assembled stimulus sets consisting of isolated objects in non-ecological contexts, exploring functional organization without robust a priori hypotheses has been challenging. To overcome these limitations, we introduce a data-driven approach in which we synthesize images predicted to activate a given brain region using paired natural images and fMRI recordings, bypassing the need for category-specific stimuli. Our approach -- Brain Diffusion for Visual Exploration ("BrainDiVE") -- builds on recent generative methods by combining large-scale diffusion models with brain-guided image synthesis. Validating our method, we demonstrate the ability to synthesize preferred images with appropriate semantic specificity for well-characterized category-selective ROIs. We then show that BrainDiVE can characterize differences between ROIs selective for the same high-level category. Finally we identify novel functional subdivisions within these ROIs, validated with behavioral data. These results advance our understanding of the fine-grained functional organization of human visual cortex, and provide well-specified constraints for further examination of cortical organization using hypothesis-driven methods.
Andrew F. Luo, Margaret M. Henderson, Leila Wehbe, Michael J. Tarr
2023-06-05T17:59:05Z
http://arxiv.org/abs/2306.03089v2
# Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models ###### Abstract A long standing goal in neuroscience has been to elucidate the functional organization of the brain. Within higher visual cortex, functional accounts have remained relatively coarse, focusing on regions of interest (ROIs) and taking the form of selectivity for broad categories such as faces, places, bodies, food, or words. Because the identification of such ROIs has typically relied on manually assembled stimulus sets consisting of isolated objects in non-ecological contexts, exploring functional organization without robust _a priori_ hypotheses has been challenging. To overcome these limitations, we introduce a data-driven approach in which we synthesize images predicted to activate a given brain region using paired natural images and fMRI recordings, bypassing the need for category-specific stimuli. Our approach - Brain Diffusion for Visual Exploration ("BrainDiVE") - builds on recent generative methods by combining large-scale diffusion models with brain-guided image synthesis. Validating our method, we demonstrate the ability to synthesize preferred images with appropriate semantic specificity for well-characterized category-selective ROIs. We then show that BrainDiVE can characterize differences between ROIs selective for the same high-level category. Finally we identify novel functional subdivisions within these ROIs, validated with behavioral data. These results advance our understanding of the fine-grained functional organization of human visual cortex, and provide well-specified constraints for further examination of cortical organization using hypothesis-driven methods. ## 1 Introduction The human visual cortex plays a fundamental role in our ability to process, interpret, and act on visual information. While previous studies have provided important evidence that regions in the higher visual cortex preferentially process complex semantic categories such as faces, places, bodies, words, and food [1; 2; 3; 4; 5; 6; 7], these important discoveries have been primarily achieved through the use of researcher-crafted stimuli. However, hand-selected, synthetic stimuli may bias the results or may not accurately capture the complexity and variability of natural scenes, sometimes leading to debates about the interpretation and validity of identified functional regions [8]. Furthermore, mapping selectivity based on responses to a fixed set of stimuli is necessarily limited, in that it can only identify selectivity for the stimulus properties that are sampled. For these reasons, data-driven methods for interpreting high-dimensional neural tuning are complementary to traditional approaches. We introduce Brain Diffusion for Visual Exploration ("BrainDiVE"), a _generative_ approach for synthesizing images that are predicted to activate a given region in the human visual cortex. Several recent studies have yielded intriguing results by combining deep generative models with brain guidance [9; 10; 11]. BrainDiVE, enabled by the recent availability of large-scale fMRI datasets based on natural scene images [12; 13], allows us to further leverage state-of-the-art diffusion models in identifying fine-grained functional specialization in an objective and data-driven manner. BrainDiVE is based on image diffusion models which are typically driven by text prompts in order to generate synthetic stimuli [14]. We replace these prompts with maximization of voxels in given brain areas. 
The result being that the resultant synthesized images are tailored to targeted regions in higher-order visual areas. Analysis of these images enables data-driven exploration of the underlying feature preferences for different visual cortical sub-regions. Importantly, because the synthesized images are optimized to maximize the response of a given sub-region, these images emphasize and isolate critical feature preferences beyond what was present in the original stimulus images used in collecting the brain data. To validate our findings, we further performed several human behavioral studies that confirmed the semantic identities of our synthesized images. More broadly, we establish that BrainDiVE can synthesize novel images (Figure 1) for category-selective brain regions with high semantic specificity. Importantly, we further show that BrainDiVE can identify ROI-wise differences in selectivity that map to ecologically relevant properties. Building on this result, we are able to identify novel functional distinctions within sub-regions of existing ROIs. Such results demonstrate that BrainDiVE can be used in a data-driven manner to enable new insights into the fine-grained functional organization of the human visual cortex. ## 2 Related work Mapping High-Level Selectivity in the Visual Cortex.Certain regions within the higher visual cortex are believed to specialize in distinct aspects of visual processing, such as the perception of faces, places, bodies, food, and words [15; 3; 4; 16; 17; 18; 19; 5; 20]. Many of these discoveries rely on carefully handcrafted stimuli specifically designed to activate targeted regions. However, activity under natural viewing conditions is known to be different [21]. Recent efforts using artificial neural networks as image-computable encoders/predictors of the visual pathway [22; 23; 24; 25; 26; 27; 28; 29; 30] have facilitated the use of more naturalistic stimulus sets. Our proposed method incorporates an image-computable encoding model in line with this past work. Deep Generative Models.The recent rise of learned generative models has enabled sampling from complex high dimensional distributions. Notable approaches include variational autoencoders [31; 32], generative adversarial networks [33], flows [34; 35], and score/energy/diffusion models [36; 37; 38; 39]. By augmenting these models, it is possible to condition the model on category [40; 41], text [42; 43], or images [44]. Recent diffusion models have been conditioned with brain activations to reconstruct observed images [45; 46; 47]. Unlike BrainDiVE, these approaches tackle reconstruction but not synthesis of novel images that are predicted to activate regions of the brain. Brain-Conditioned Image Generation.The differentiable nature of deep encoding models inspired work to create images from brain gradients in mice, macaques, and humans [48; 49; 50]. Without constraints, the images recovered are not naturalistic. Other approaches have combined deep generative models with optimization to recover natural images in macaque and humans [10; 11]. Closest to our work is the NeuroGen method [9], which similarly uses the NSD dataset [13] but uses BigGAN trained on ImageNet [51]. Our work presents major improvements by enabling the use of diffusion models [44] trained on internet-scale datasets [52] over three magnitudes larger than ImageNet. By avoiding the search-based optimization procedures used in [9], our work is not restricted to images within a fixed class in ImageNet. 
Further, to the authors' knowledge we are Figure 1: **Images generated using BrainDiVE. Images are generated using a diffusion model with maximization of voxels identified from functional localizer experiments as conditioning. We find that brain signals recorded via fMRI can guide the synthesis of images with high semantic specificity, strengthening the evidence for previously identified category selective regions.** the first work to use image synthesis methods in the identification of functional specialization in sub-parts of ROIs. ## 3 Methods We aim to generate stimuli that maximally activate a given region in visual cortex using paired natural image stimuli and fMRI recordings. We first review relevant background information on diffusion models. We then describe how we can parameterize encoding models that map from images to brain data. Finally, we describe how our framework (Figure 2) can leverage brain signals as guidance to diffusion models to synthesize images that activate a target brain region. ### Background on Diffusion Models Diffusion models enable sampling from a data distribution \(p(x)\) by iterative denoising. The sampling process starts with \(x_{T}\sim\mathcal{N}(0,\mathbb{I})\), and produces progressively denoised samples \(x_{T-1},x_{T-2},x_{T-3}\ldots\) until a sample \(x_{0}\) from the target distribution is reached. The noise level varies by timestep \(t\), where the sample at each timestep is a weighted combination of \(x_{0}\) and \(\epsilon\sim\mathcal{N}(0,\mathbb{I})\), with \(x_{t}=\sqrt{\alpha_{t}}x_{0}+\epsilon\sqrt{1-\alpha_{t}}\). The value of \(\alpha\) interpolates between \(\mathcal{N}(0,\mathbb{I})\) and \(p(x)\). In the noise prediction setting, an autoencoder network \(\epsilon_{\theta}(x_{t},t)\) is trained using a mean-squared error \(\mathbb{E}_{(x,\epsilon,t)}\left[\|\epsilon_{\theta}(x_{t},t)-\epsilon\|_{2} ^{2}\right]\). In practice, we utilize a pretrained latent diffusion model (LDM) [44], with learned image encoder \(E_{\Phi}\) and decoder \(D_{\Omega}\), which together act as an autoencoder \(\mathcal{I}\approx D_{\Omega}(E_{\Phi}(\mathcal{I}))\). The diffusion model is trained to sample \(x_{0}\) from the latent space of \(E_{\Phi}\). ### Brain-Encoding Model Construction A learned voxel-wise brain encoding model is a function \(M_{\theta}\) that maps an image \(\mathcal{I}\in\mathbb{R}^{3\times H\times W}\) to the corresponding brain activation fMRI beta values represented as an \(N\) element vector \(B\in\mathbb{R}^{N}\): \(M_{\theta}(\mathcal{I})\Rightarrow B\). Past work has identified later layers in neural networks as the best predictors of higher visual cortex [30; 54], with CLIP trained networks among the highest performing brain encoders [28; 55]. As our target is the higher visual cortex, we utilize a two component design for our encoder. The first component consists of a CLIP trained image encoder which outputs a \(K\) dimensional vector as the latent embedding. The second component is a linear adaptation layer \(W\in\mathcal{R}^{N\times K},b\in\mathcal{R}^{N}\), which maps euclidean normalized image embeddings to brain activation. \[B\approx M_{\theta}(\mathcal{I})=W\times\frac{\texttt{CLIP}_{\texttt{img}}( \mathcal{I})}{\|\texttt{CLIP}_{\texttt{img}}(\mathcal{I})\|_{2}}+b\] Optimal \(W^{*},b^{*}\) are found by optimizing the mean squared error loss over images. We observe that use of a normalized CLIP embedding improves stability of gradient magnitudes w.r.t. the image. 
Figure 2: **Architecture of brain guided diffusion (BrainDiVE). Top:** Our framework consists of two core components: **(1)** A diffusion model trained to synthesize natural images by iterative denoising; we utilize pretrained LDMs. **(2)** An encoder trained to map from images to cortical activity. Our framework can synthesize images that are predicted to activate any subset of voxels. Shown here are scene-selective regions (RSC/PPA/OPA) on the right hemisphere. **Bottom:** We visualize every \(4\) steps the magnitude of the gradient of the brain w.r.t. the latent and the corresponding ”predicted \(x_{0}\)” [53] when targeting scene selective voxels in both hemispheres. We find clear structure emerges. ### Brain-Guided Diffusion Model BrainDiVE seeks to generate images conditioned on maximizing brain activation in a given region. In conventional text-conditioned diffusion models, the conditioning is done in one of two ways. The first approach modifies the function \(\epsilon_{\theta}\) to further accept a conditioning vector \(c\), resulting in \(\epsilon_{\theta}(x_{t},t,c)\). The second approach uses a contrastive trained image-to-concept encoder, and seeks to maximize a similarity measure with a text-to-concept encoder. Conditioning on activation of a brain region using the first approach presents difficulties. We do not know _a priori_ the distribution of other non-targeted regions in the brain when a target region is maximized. Overcoming this problem requires us to either have a prior \(p(B)\) that captures the joint distribution for all voxels in the brain, to ignore the joint distribution that can result in catastrophic effects, or to use a handcrafted prior that may be incorrect [47]. Instead, we propose to condition the diffusion model via our image-to-brain encoder. During inference we perturb the denoising process using the gradient of the brain encoder _maximization_ objective, where \(\gamma\) is a scale, and \(S\subseteq N\) are the set of voxels used for guidance. We seek to maximize the average activation of \(S\) predicted by \(M_{\theta}\): \[\epsilon^{\prime}_{theta}=\epsilon_{theta}-\sqrt{1-\alpha_{t}}\nabla_{x_{t}} (\frac{\gamma}{|S|}\sum_{i\in S}M_{\theta}(D_{\Omega}(x^{\prime}_{t}))_{i})\] Like [56; 14; 57], we observe that convergence using the current denoised \(x_{t}\) is poor without changes to the guidance. This is because the current image (latent) is high noise and may lie outside of the natural image distribution. We instead use a weighted reformulation with an euler approximation [53; 57] of the final image: \[\hat{x}_{0} =\frac{1}{\sqrt{\alpha}}(x_{t}-\sqrt{1-\alpha}\epsilon_{t})\] \[x^{\prime}_{t} =(\sqrt{1-\alpha})\hat{x}_{0}+(1-\sqrt{1-\alpha})x_{t}\] By combining an image diffusion model with a differentiable encoding model of the brain, we are able to generate images that seek to maximize activation for any given brain region. ## 4 Results In this section, we use BrainDiVE to highlight the semantic selectivity of pre-identified category-selective voxels. We then show that our model can capture subtle differences in response properties between ROIs belonging to the same broad category-selective network. Finally, we utilize BrainDiVE to target finer-grained sub-regions within existing ROIs, and show consistent divisions based on semantic and visual properties. We quantify these differences in selectivity across regions using human perceptual studies, which confirm that BrainDiVE images can highlight differences in tuning properties. 
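Before turning to these results in detail, the guidance step of Section 3.3 can be summarized in a short sketch. The decoder, brain encoder, and tensor shapes below are toy stand-ins rather than the actual LDM and CLIP-based components, and \(\gamma\) denotes the guidance scale; the sketch only illustrates how the gradient of the mean predicted activation perturbs the noise prediction.

```python
import torch

def guided_eps(eps, x_t, alpha_t, decode, brain_encoder, target_voxels, gamma=130.0):
    """Perturb the noise prediction so that denoising maximizes the target voxels."""
    x_t = x_t.detach().requires_grad_(True)
    # Euler approximation of the clean latent, then the weighted reformulation x'_t.
    x0_hat = (x_t - torch.sqrt(1 - alpha_t) * eps) / torch.sqrt(alpha_t)
    x_prime = torch.sqrt(1 - alpha_t) * x0_hat + (1 - torch.sqrt(1 - alpha_t)) * x_t
    # Average predicted activation over the target voxel set S.
    act = brain_encoder(decode(x_prime))[:, target_voxels].mean()
    grad = torch.autograd.grad(act, x_t)[0]
    # eps' = eps - sqrt(1 - alpha_t) * grad of (gamma / |S|) * sum of predicted activations.
    return eps - torch.sqrt(1 - alpha_t) * gamma * grad

# Toy usage: a 4-channel 8x8 "latent", a linear "decoder", and a linear "brain encoder".
decode = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(4 * 8 * 8, 64))
brain_encoder = torch.nn.Linear(64, 10)            # 10 fake voxels
x_t, eps = torch.randn(1, 4, 8, 8), torch.randn(1, 4, 8, 8)
eps_new = guided_eps(eps, x_t, torch.tensor(0.5), decode, brain_encoder, [0, 1, 2])
print(eps_new.shape)
```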
These results demonstrate how BrainDiVE can elucidate the functional properties of human cortical populations, making it a promising tool for exploratory neuroscience. ### Setup We utilize the Natural Scenes Dataset (NSD; [13]), which consists of whole-brain 7T fMRI data from 8 human subjects, 4 of whom viewed \(10,000\) natural scene images repeated \(3\times\). These subjects, S1, S2, S5, and S7, are used for analyses in the main paper (see Supplemental for results for additional subjects). All images are from the MS COCO dataset. We use beta-weights (activations) computed using GLMSingle [58] and further normalize each voxel to \(\mu=0,\sigma=1\) on a per-session basis. We average the fMRI activation across repeats of the same image within a subject. The \(\sim\)\(9,000\) unique images for each subject ([13]) are used to train the brain encoder for each subject, with the remaining \(\sim\)\(1,000\) shared images used to evaluate \(R^{2}\). Image generation is on a per-subject basis and done on an Nvidia V100 using \(1,500\) compute hours. As the original category ROIs in NSD are very generous, we utilize a stricter \(t>2\) threshold to reduce overlap unless otherwise noted. We utilize stable-diffusion-2-1-base, which produces images of \(512\times 512\) resolution using \(\epsilon\)-prediction. Following best practices, we use multi-step 2nd order DPM-Solver++ [59] with 50 steps and apply \(0.75\) SAG [60]. We set hyperparameter \(\eta=130.0\). Images are resized to \(224\times 224\) for the brain encoder. "" (null prompt) is used as the input prompt, so the diffusion model performs unconditional generation in the absence of brain guidance. For the brain encoder we use ViT-B/16, and for CLIP probes we use CoCa ViT-L/14. These are the highest performing LAION-2B models of a given size provided by OpenCLIP [61; 62; 63; 64]. We train our brain encoders on each human subject separately to predict the activation of all higher visual cortex voxels. See Supplemental for visualization of test time brain encoder \(R^{2}\). To compare images from different ROIs and sub-regions (OFA/FFA in 4.3, two clusters in 4.4), we asked human evaluators to select which of two image groups scored higher on various attributes. We used \(100\) images from each group randomly split into \(10\) non-overlapping subgroups. Each human evaluator performed \(80\) comparisons, across \(10\) splits, \(4\) NSD subjects, and for both fMRI and generated images. See Supplemental for standard error of responses. Human evaluators provided written informed consent and were compensated at \(\$12.00\)/hour. The study protocol was approved by the institutional review board at the authors' institution. ### Broad Category-Selective Networks In this experiment, we target large groups of category-selective voxels which can encompass more than one ROI (Figure 3). These regions have been previously identified as selective for broad semantic categories, and this experiment validates our method using these identified regions. The face-, place-, body-, and word- selective ROIs are identified with standard localizer stimuli [65]. The food-selective voxels were obtained from [5]. The same voxels were used to select the top activating NSD images (referred to as "NSD") and to guide the generation of BrainDiVE images. 
In Figures 4 we visualize, for place-, face-, word-, and body- selective voxels, the top-\(5\) out of \(10,000\) images from the fMRI stimulus set (NSD), and the top-\(5\) images out of \(1,000\) total images as evaluated by the encoding component of BrainDiVE. For food selective voxels, the top-\(10\) are visualized. A visual inspection indicates that our method is able to generate diverse images that semantically represent the target category. We further use CLIP to perform semantic probing of the images, and force the images to be classified into one of five categories. We measure the percentage of images that match the preferred category for a given set of voxels (Table 1). We find that our top-\(10\%\) and \(20\%\) of images exceed the top-\(1\%\) and \(2\%\) of natural images in accuracy, indicating our method has high semantic specificity. Figure 4: **Results for category selective voxels (S1). We identify the top-\(5\) images from the stimulus set or generated by our method with highest average activation in each set of category selective voxels for the face/place/word/body categories, and the top-\(10\) images for the food selective voxels.** Figure 3: **Visualizing category-selective voxels in S1. See text for details on how category selectivity was defined.** ### Individual ROIs In this section, we apply our method to individual ROIs that are selective for the same broad semantic category. We focus on the occipital face area (OFA) and fusiform face area (FFA), as initial tests suggested little differentiation between ROIs within the place-, word-, and body- selective networks. In this experiment, we also compare our results against the top images for FFA and OFA from NeuroGen [9], using the top 100 out of 500 images provided by the authors. Following NeuroGen, we also generate \(500\) total images, targeting FFA and OFA separately (Figure 5). We observe that both diffusion-generated and NSD images have very high face content in FFA, whereas NeuroGen has higher animal face content. In OFA, we observe both NSD and BrainDiVE images have a higher animal face content than the top 1% of the original image. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & Faces & Places & \multicolumn{2}{c}{Bodies} & \multicolumn{2}{c}{Words} & \multicolumn{2}{c}{Food} & \multicolumn{2}{c}{Mean} \\ \cline{2-13} & S1\(\uparrow\) & S2\(\uparrow\) & S1\(\uparrow\) & S2\(\uparrow\) & S1\(\uparrow\) & S2\(\uparrow\) & S1\(\uparrow\) & S2\(\uparrow\) & S1\(\uparrow\) & S2\(\uparrow\) & S1\(\uparrow\) & S2\(\uparrow\) \\ \hline NSD all stim & 17.4 & 17.2 & 29.9 & 29.5 & 31.6 & 31.8 & 10.3 & 10.6 & 10.8 & 10.9 & 20.0 & 20.0 \\ NSD top-200 & 42.5 & 41.5 & 66.5 & 80.0 & 56.0 & 65.0 & 31.5 & 34.5 & 68.0 & 85.5 & 52.9 & 61.3 \\ NSD top-100 & 40.0 & 45.0 & 68.0 & 79.0 & 49.0 & 60.0 & 30.0 & 49.0 & 78.0 & 85.0 & 53.0 & 63.6 \\ \hline BrainDiVE-200 & **69.5** & **70.0** & **97.5** & **100** & **75.5** & 68.5 & **60.0** & 57.5 & 89.0 & 94.0 & **78.3** & 75.8 \\ BrainDiVE-100 & 61.0 & 68.0 & **97.0** & **100** & 75.0 & **69.0** & **60.0** & **62.0** & **92.0** & **95.0** & 77.0 & **78.8** \\ \hline \hline \end{tabular} \end{table} Table 1: **Evaluating semantic specificity with zero-shot CLIP classification.** We use CLIP to classify images from each ROI into five semantic categories: face/place/body/word/food. Shown is the percentage where the classified category of the image matches the preferred category of the brain region. 
We show this for each subject’s entire NSD stimulus set (\(10,000\) images for S1&S2); the top-200 and top-100 images (top-2% and top-1%) evaluated by mean true fMRI beta, and the top-200 and top-100 (\(20\%\) and \(10\%\)) of BrainDiVE images as self-evaluated by the encoding component of BrainDiVE. BrainDiVE generates images with higher semantic specificity than the top 1% of natural images for each brain region. Figure 5: **Results for face-selective ROIs.** For each ROI (OFA, FFA) we visualize the top-5 images from NSD and NeuroGen, and the top-10 from BrainDiVE. NSD images are selected using the fMRI betas averaged within each ROI. NeuroGen images are ranked according to their official predicted ROI activity means. BrainDiVE images are ranked using our predicted ROI activities from 500 images. Red outlines in the NSD images indicate examples of responsiveness to non-face content. strong face component, although we also observe text selectivity in S2 and animal face selectivity in S5. Again NeuroGen predicts a higher animal component than face for S5. By avoiding the use of fixed categories, BrainDiVE images are more diverse than those of NeuroGen. This trend of face and animals appears at \(t>2\) and the much stricter \(t>5\) threshold for identifying face-selective voxels (\(t>5\) used for visualization/evaluation). The differences in images synthesized by BrainDiVE for FFA and OFA are consistent with past work suggesting that FFA represents faces at a higher level of abstraction than OFA, while OFA shows greater selectivity to low-level face features and sub-components, which could explain its activation by off-target categories [66; 67; 68]. To quantify these results, we perform a human study where subjects are asked to compare the top-100 images between FFA & OFA, for both NSD and generated images. Results are shown in Table 2. We find that OFA consistently has higher animal and abstract content than FFA. Most notably, this difference is on average more pronounced in the images from BrainDiVE, indicating that our approach is able to highlight subtle differences in semantic selectivity across regions. ### Semantic Divisions within ROIs In this experiment, we investigate if our model can identify novel sub-divisions within existing ROIs. We first perform clustering on normalized per-voxel encoder weights using vmf-clustering [69]. We find consistent cosine difference between the cluster centers in the food-selective ROI as well as in the occipital place area (OPA), clusters shown in Figure 6. In all four subjects, we observe a relatively consistent anterior-posterior split of OPA. While the clusters within the food ROI vary more anatomically, each subject appears to have a more medial and a more lateral cluster. We visualize the images for the two food clusters in Figure 7, and for the two OPA clusters in Figure 8. We observe that for both the food ROI and OPA, the BrainDiVE-generated images from each cluster have noticeable differences in their visual and semantic properties. In particular, the BrainDiVE images from food cluster-2 have much higher color saturation than those from cluster-1, and also have more objects that resemble fruits and vegetables. In contrast, food cluster-1 generally lacks vegetables and mostly consist of bread-like foods. In OPA, cluster-1 is dominated by indoor scenes (rooms, hallways), while 2 is overwhelmingly outdoor scenes, with a mixture of natural and man-made structures viewed from a far perspective. 
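As a concrete illustration of the clustering step described above, the following is a minimal two-cluster spherical k-means sketch on the euclidean-normalized per-voxel encoder weights. It is a simplified stand-in for the vMF clustering of [69], and the weight matrix here is random placeholder data rather than a fitted encoder.

```python
import numpy as np

def spherical_kmeans(W, k=2, iters=50, seed=0):
    """Cluster unit-normalized rows of W (voxels x features) by cosine similarity."""
    X = W / np.linalg.norm(W, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)       # assign each voxel to the nearest center
        for j in range(k):
            members = X[labels == j]
            if len(members):                            # skip (rare) empty clusters
                m = members.mean(axis=0)
                centers[j] = m / np.linalg.norm(m)      # re-normalize the cluster mean
    return labels, centers

W = np.random.default_rng(1).standard_normal((500, 512))  # stand-in per-voxel encoder weights
labels, centers = spherical_kmeans(W)
print(np.bincount(labels), "cosine distance between centers:", 1 - centers[0] @ centers[1])
```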
Some of these differences are also present in the NSD images, but the differences appear to be highlighted in the generated images. To confirm these effects, we perform a human study (Table 3, Table 4) comparing the images from different clusters in each ROI, for both NSD and generated images. As expected from visual inspection of the images, we find that food cluster-2 is evaluated to have higher vegetable/fruit content, judged to be healthier, more colorful, and slightly more distant than food cluster-1. We find that OPA cluster-1 is evaluated to be more angular/geometric, include more indoor scenes, to be less natural and consisting of less distant scenes. Again, while these trends are present in the NSD images, they are more pronounced with the BrainDiVE images. This not only suggests that our method has \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Which ROI has more... & \multicolumn{4}{c}{photorealistic faces} & \multicolumn{4}{c}{animals} & \multicolumn{4}{c}{abstract shapes/lines} \\ \cline{2-11} & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 \\ \hline FFA-NSD & **45** & **43** & **34** & **41** & 34 & 34 & 17 & 15 & 21 & 6 & 14 & 22 \\ OFA-NSD & 25 & 22 & 21 & 18 & **47** & **36** & **65** & **65** & **24** & **44** & **28** & **25** \\ \hline FFA-BrainDiVE & **79** & **89** & **60** & **52** & 17 & 13 & 21 & 19 & 6 & 11 & 18 & 20 \\ OFA-BrainDiVE & 11 & 4 & 15 & 22 & **71** & **61** & **52** & **50** & **80** & **79** & **40** & **39** \\ \hline \hline \end{tabular} \end{table} Table 2: **Human evaluation of the difference between face-selective ROIs. Evaluators compare groups of images corresponding to OFA and FFA; comparisons are done within GT and generated images respectively. Questions are posed as: “Which group of images has more X?”; options are FFA/OFA/Same. Results are in \(\%\). Note that the “Same” responses are not shown; responses across all three options sum to 100.** Figure 6: **Clustering within the food ROI and within OPA. Clustering of encoder model weights for each region is shown for two example subjects on an inflated cortical surface.** uncovered differences in semantic selectivity within pre-existing ROIs, but also reinforces the ability of BrainDiVE to identify and highlight core functional differences across visual cortex regions. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c} \hline \hline Which cluster is more... & \multicolumn{4}{c}{angular/geometric} & \multicolumn{4}{c}{indoor} & \multicolumn{4}{c}{natural} & \multicolumn{4}{c}{far away} \\ \cline{2-13} & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 \\ \hline OPA-1 NSD & **45** & **58** & **49** & **51** & **71** & **88** & **80** & **79** & 14 & 3 & 9 & 10 & 10 & 1 & 6 & 8 \\ OPA-2 NSD & 13 & 12 & 14 & 16 & 7 & 8 & 11 & 14 & 73 & **89** & **71** & **81** & **69** & **93** & **81** & **85** \\ \hline OPA-1 BrainDiVE & **76** & **87** & **88** & **76** & **89** & **90** & **90** & **85** & 6 & 6 & 9 & 6 & 1 & 3 & 3 & 8 \\ OPA-2 BrainDiVE & 12 & 3 & 4 & 10 & 7 & 7 & 5 & 8 & **91** & **91** & **83** & **90** & **97** & **92** & **91** & **88** \\ \hline \hline \end{tabular} \end{table} Table 4: **Human evaluation of the difference between OPA clusters**. Evaluators compare groups of images corresponding to OPA cluster 1 (OPA-1) and OPA cluster 2 (OPA-2), with questions posed as “Which group of images is more X?”. Comparisons are done within NSD and generated images respectively. 
Note that the “Same” responses are not shown; responses across all three options sum to 100. Results are in \(\%\). \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c} \hline \hline Which cluster is more... & \multicolumn{4}{c}{vegetables/fruits} & \multicolumn{4}{c}{healthy} & \multicolumn{4}{c}{colorful} & \multicolumn{4}{c}{far away} \\ \cline{2-13} & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 & S1 & S2 & S5 & S7 \\ \hline Food-1 NSD & 17 & 21 & 27 & 36 & 28 & 22 & 29 & 40 & 19 & 18 & 13 & 27 & 32 & 24 & 23 & 28 \\ Food-2 NSD & **65** & **56** & **56** & **49** & **50** & **47** & **54** & **45** & **42** & **52** & **53** & **42** & **34** & **39** & **36** & **42** \\ \hline Food-1 BrainDiVE & 11 & 10 & 8 & 11 & 15 & 16 & 20 & 17 & 6 & 9 & 11 & 16 & 24 & 18 & 27 & 18 \\ Food-2 BrainDiVE & **80** & **75** & **67** & **64** & **68** & **68** & **46** & **51** & **79** & **82** & **65** & **61** & **39** & **51** & **39** & **40** \\ \hline \hline \end{tabular} \end{table} Table 3: **Human evaluation of the difference between food clusters**. Evaluators compare groups of images corresponding to food cluster 1 (Food-1) and food cluster 2 (Food-2), with questions posed as “Which group of images has/is more X?”. Comparisons are done within NSD and generated images respectively. Note that the “Same” responses are not shown; responses across all three options sum to 100. Results are in \(\%\). Figure 7: **Comparing results across the food clusters.** We visualize top-10 NSD fMRI (out of 10,000) and diffusion images (out of 500) for _each cluster_. While the first cluster largely consists of processed foods, the second cluster has more visible high color saturation foods, and more vegetables/fruit like objects. BrainDiVE helps highlight the differences between clusters. ## 5 Discussion Limitations and Future WorkHere, we show that BrainDiVE generates diverse and realistic images that can probe the human visual pathway. This approach relies on existing large datasets of natural images paired with brain recordings. In that the evaluation of synthesized images is necessarily qualitative, it will be important to validate whether our generated images and candidate features derived from these images indeed maximize responses in their respective brain areas. As such, future work should involve the collection of human fMRI recordings using both our synthesized images and more focused stimuli designed to test our qualitative observations. Future work may also explore the images generated when BrainDiVE is applied to additional sub-region, new ROIs, or mixtures of ROIs. ConclusionWe introduce a novel method for guiding diffusion models using brain activations - BrainDiVE - enabling us to leverage generative models trained on internet-scale image datasets for data driven explorations of the brain. This allows us to better characterize fine-grained preferences across the visual system. We demonstrate that BrainDiVE can accurately capture the semantic selectivity of existing characterized regions. We further show that BrainDiVE can capture subtle differences between ROIs within the face selective network. Finally, we identify and highlight fine-grained subdivisions within existing food and place ROIs, differing in their selectivity for mid-level image features and semantic scene content. We validate our conclusions with extensive human evaluation of the images. 
Figure 8: **Comparing results across the OPA clusters.** We visualize top-10 NSD fMRI (out of 10,000) and diffusion images (out of 500) for _each cluster_. While both consist of scene images, the first cluster has more indoor scenes, while the second has more outdoor scenes. The BrainDiVE images help highlight the differences in semantic properties.
2301.10734
Valuation of the Convertible Bonds under Penalty TF model using Finite Element Method
In this paper, the TF system of two-coupled Black-Scholes equations for pricing the convertible bonds is solved numerically by using the P1 and P2 finite elements with the inequality constraints approximated by the penalty method. The corresponding finite element ODE system is numerically solved by using a modified Crank-Nicolson scheme, in which the non-linear system is solved at each time step by the Newton-Raphson method for non-smooth functions. Moreover, the corresponding Greeks are also calculated by taking advantage of the P1-P2 finite element approximation functions. Numerical solutions by the finite element method compare favorably with the solutions by the finite difference method in literature.
Rakhymzhan Kazbek, Yogi Erlangga, Yerlan Amanbek, Dongming Wei
2023-01-25T17:48:48Z
http://arxiv.org/abs/2301.10734v1
# Valuation of the Convertible Bonds under Penalty TF model using Finite Element Method ###### Abstract In this paper, the TF system of two-coupled Black-Scholes equations for pricing the convertible bonds is solved numerically by using the P1 and P2 finite elements with the inequality constraints approximated by the penalty method. The corresponding finite element ODE system is numerically solved by using a modified Crank-Nicolson scheme, in which the non-linear system is solved at each time step by the Newton-Raphson method for non-smooth functions. Moreover, the corresponding Greeks are also calculated by taking advantage of the P1-P2 finite element approximation functions. Numerical solutions by the finite element method compare favorably with the solutions by the finite difference method in literature. _Keywords--_ TF model, Pricing convertible bonds, Finite element method, Finite difference method, Financial derivatives, Greeks. ## 1 Introduction A convertible bond (CB) is a type of financial derivative used in financial risk management and in trading by investors and issuers [18, 27, 26]. It has become a popular choice for corporations as a stable investment vehicle. Pricing financial derivatives for CBs is, however, a complicated problem. Tree methods (e.g., in [23]), Monte-Carlo simulation (e.g., [4]), and partial differential equations are among the widely used techniques in the literature. One of the pioneering works using PDE modelling for CB pricing is due to Ingersoll [19], who developed a method for the determination of the optimal conversion and call policies for convertible securities using the Black-Scholes [8] methodology. Brennan and Schwartz [9] extended Ingersoll's model to incorporate callability and dividend payments. They later included stochastic interest rates, resulting in a two-factor model for CB pricing with callability and conversion strategies [10]. In the late 1990s, Tsiveriotis and Fernandes [28] proposed an innovative two-factor model for pricing CBs under callability, puttability and conversion provisions with a _hypothetical_ cash-only convertible bond (COCB) part, known as the TF model. Another model for pricing CBs with credit risk and default strategy was developed by Ayache et al. [6] based on a system of triple partial differential equations, called the AFV model. Because of their complexity, the TF and AFV models have to be solved numerically. The review paper of Al Saedi and Tularam [1] provides a recent account of methodologies for option pricing under the Black-Scholes equation, such as the R3C scheme [5], cubic spline wavelets and multiwavelet bases methods [11], finite difference methods (FDM) [14], finite-volume methods (FVM) [22], and finite-element methods (FEM) [17, 7, 13, 20]. Due to their simplicity, FDM have become a popular choice of numerical method in computational finance. In computational bond pricing, the authors of [9, 10, 28, 6] use FDM to solve their proposed models. In this paper, we discuss and expand the existing literature on numerical solutions of the TF model for CB pricing with credit risk and without dividends by implementing finite-element methods. 
The TF model for convertible bond pricing is based on the system of partial differential equations (PDEs) \[\frac{\partial U}{\partial t}+\frac{\sigma^{2}}{2}S^{2}\frac{\partial ^{2}U}{\partial S^{2}}+r_{g}S\frac{\partial U}{\partial S}-r(U-V)-(r+r_{c})V=0, \tag{1}\] \[\frac{\partial V}{\partial t}+\frac{\sigma^{2}}{2}S^{2}\frac{ \partial^{2}V}{\partial S^{2}}+r_{g}S\frac{\partial V}{\partial S}-(r+r_{c})V=0, \tag{2}\] for the time \(t\in(0,T)\) and the underlying stock price \(S\in(0,\infty)\), with \(U\) being the value of the CB, \(V\) the value of hypothetical COCB, \(r\) the risk-free rate, \(r_{g}\) the growth rate, which can be counted as risk-free rate \(r\) (see [18]), \(r_{c}\) the credit spread reflecting payoff default risk, \(\sigma\) be volatility, and \(T\) the maturity time. The terminal condition at the maturity time \(T\) means that once the CB is expired, no one can call or put it back. Holder of the CB will get as much as possible depending on the conversion ratio \(k\) and stock price. There is however a minimum based on the face value \(F\) and the coupon payment \(K\), yielding the terminal condition: \[U(S,T)=\begin{cases}F+K,&\text{if }F+K\geq kS,\\ kS,&\text{otherwise},\end{cases} \tag{3}\] and \[V(S,T)=\begin{cases}F+K,&\text{if }F+K\geq kS,\\ 0,&\text{otherwise}.\end{cases} \tag{4}\] Throughout its lifetime, however, the CB can be converted to the underlying stock at the value \(kS\), and issuer should pay the principal value \(F+K\) to the holder, if the issuer has not converted until maturity time. These rights lead to three conditions that constrain the CB price: 1. Upside constraint due to conversion of bonds: for \(t\in[0,T]\), \[U(S,t)\geq kS,\] (5) \[V(S,t)=0\text{ if }U(S,t)\leq kS;\] (6) 2. Upside constraint due to callability with the call price \(B_{call}\): for \(t\in[T_{call},T]\), with \(T_{call}\) the earliest time the bond issuer is allowed to call back the bond, \[U(S,t)\leq\max(B_{call}(t),kS),\] (7) \[V(S,t)=0\text{ if }U(S,t)\geq B_{call}(t);\] (8) 3. Downside constraint due to putability with the put price \(B_{put}\): for \(t\in[T_{put},T]\), with \(T_{put}\) the earliest time the investor is allowed to put the bond back, \[U(S,t)\geq B_{put}(t),\] (9) \[V(S,t)=B_{put}(t)\text{ if }U(S,t)\leq B_{put}(t).\] (10) Following [6], the call and put price in the callability and puttability constraints include the effect of future's coupon payment and the underlying interest as follows. Let \(\mathcal{T}_{\text{coupon}}=\{t_{i}\}\), the set of the coupon payment time, with \(0<t_{i-1}<t_{i}\leq T\). Then \[B_{put,call}(t)=B_{put,call}^{cl}+AccI(t), \tag{11}\] where \[AccI(t)=K_{i}\frac{t-t_{i-1}}{t_{i}-t_{i-1}}, \tag{12}\] the accrued interest at any time \(t\) between the time of the last coupon payment \(t_{i-1}\) and the time of the next (pending) coupon payment \(t_{i}\). Note that the constraints (5) and (9) can be combined to \(U(S,t)\geq\max\{B_{put},kS\}\). In this way, the constraints for \(U\) can be rewritten as \[U(S,t)\leq\max(B_{call}(t),kS), \tag{13}\] \[U(S,t)\geq\max(B_{put}(t),kS), \tag{14}\] with \[B_{call}(t) =\begin{cases}B_{call}^{cl}+AccI(t),&\text{if }t\in[T_{call},T],\\ +\infty,&\text{otherwise},\end{cases} \tag{15}\] \[B_{put}(t) =\begin{cases}B_{put}^{cl}+AccI(t),&\text{if }t\in[T_{put},T],\\ 0,&\text{otherwise}.\end{cases} \tag{16}\] Two boundary conditions need to be supplemented to the PDEs (1) and (2). 
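Before stating the boundary conditions, a small sketch of the terminal payoffs (3)-(4) and the combined bounds (13)-(16) on a stock-price grid may help fix ideas. The face value and coupon follow the example used later in the paper (\(F=\$100\), \(K=\$4\)), while the conversion ratio, clean call/put prices, and accrued interest below are purely illustrative values of ours.

```python
import numpy as np

def terminal_values(S, F=100.0, K=4.0, k=1.0):
    """Terminal conditions (3)-(4) at maturity."""
    U = np.where(F + K >= k * S, F + K, k * S)      # CB payoff
    V = np.where(F + K >= k * S, F + K, 0.0)        # hypothetical COCB payoff
    return U, V

def call_put_bounds(S, t, k, Bcall_cl, Bput_cl, AccI, T_call, T_put):
    """Combined bounds (13)-(16): lower <= U <= upper."""
    Bcall = Bcall_cl + AccI if t >= T_call else np.inf   # call provision inactive before T_call
    Bput = Bput_cl + AccI if t >= T_put else 0.0         # put provision inactive before T_put
    upper = np.maximum(Bcall, k * S)                     # U <= max(B_call, kS)
    lower = np.maximum(Bput, k * S)                      # U >= max(B_put, kS)
    return lower, upper

S = np.linspace(40.0, 160.0, 7)
U_T, V_T = terminal_values(S)
lo, hi = call_put_bounds(S, t=4.0, k=1.0, Bcall_cl=110.0, Bput_cl=105.0,
                         AccI=2.0, T_call=2.0, T_put=2.0)
print(U_T, lo, hi, sep="\n")
```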
At \(S=0\), the PDEs are reduced to \[\frac{\partial U(0,t)}{\partial t}=rU(0,t)+r_{c}V(0,t), \tag{17}\] and \[\frac{\partial V(0,t)}{\partial t}=(r+r_{c})V(0,t), \tag{18}\] with the terminal conditions \(U(0,T)=V(0,T)=F+K\) (see Eqs. (3) and (4)). The other boundary condition is associated with the situation when the the stock price \(S\) increases unboundedly, under which the CB is converted into stock. Therefore, \[\lim_{S\rightarrow\infty}\begin{cases}U(S,t)=kS,\\ V(S,t)=0.\end{cases} \tag{19}\] After spatial discretization using, for instance, FDM or FEM, the initial-boundary value problems (IBVP) with constraints given by (1)-(19) can be solved by using a time-integration method, such as Crank-Nicolson, in combination with projected SSOR (PSSOR) method to tackle the nonlinearity. In our paper, however, we shall consider a formulation of the above-stated IBVP in a penalty PDE to explicitly include some constraints in the PDEs, and apply FEM and Crank-Nicolson method on the resulting penalty PDE. The paper is organized as follows. In Section 2, transformation of the TF model with penalty in a suitable form for numerical method is presented to implement our numerical schemes. Finite element formulation for the penalty TF model is established in Section 3. We detail the time-integration method for solving the resulting differential algebraic equation in Section 4. Section 5 presents numerical results and discussions on the convergence of the FEM method. We finish up the paper by drawing some conclusion and remarks in Section 6. ## 2 The transformed TF model The standard procedure for solving the Black-Scholes-type PDEs requires transformation of the terminal-boundary value problem to an initial-boundary value problem, on which many numerical methods can be devised. Let \(\tau=T-t\) and \(x=\ln\left(S/S_{\text{int}}\right)\), where \(S_{\text{int}}\) is the stock price at the initial time \(t=0\). With this change of variables, it can be shown that the PDEs (1)-(2) are transformed to \[\frac{\partial U}{\partial\tau} =\frac{\sigma^{2}}{2}\frac{\partial^{2}U}{\partial x^{2}}+\left( r-\frac{\sigma^{2}}{2}\right)\frac{\partial U}{\partial x}-r(U-V)-(r+r_{c})V, \tag{20}\] \[\frac{\partial V}{\partial\tau} =\frac{\sigma^{2}}{2}\frac{\partial^{2}V}{\partial x^{2}}+\left( r-\frac{\sigma^{2}}{2}\right)\frac{\partial V}{\partial x}-(r+r_{c})V, \tag{21}\] with \((\tau,x)\in(0,T)\times(-\infty,\infty)\). The terminal conditions at \(t=T\) now become initial conditions at \(\tau=0\), given by \[U(x,0)=\begin{cases}F+K,&\text{if }F+K\geq kS_{\text{int}}e^{x},\\ kS_{\text{int}}e^{x},&\text{otherwise},\end{cases} \tag{22}\] and \[V(x,0)=\begin{cases}F+K,&\text{if }F+K\geq kS_{\text{int}}e^{x},\\ 0,&\text{otherwise}.\end{cases} \tag{23}\] Furthermore, the constraints for \(U\) are transformed into \[U(x,\tau)\geq\max\{B_{pu}(\tau),kS_{\text{int}}e^{x}\}=:U_{pu}^{ \star}(\tau), \tag{24}\] \[U(x,\tau)\leq\max\{B_{call}(\tau),kS_{\text{int}}e^{x}\}=:U_{ call}^{\star}(\tau), \tag{25}\] where \[B_{call}(\tau)=\begin{cases}B_{call}^{cl}+AccI(\tau),&\text{if }t\in[0,\tau_{ call}],\\ +\infty,&\text{otherwise},\end{cases} \tag{26}\] \[B_{put}(\tau)=\begin{cases}B_{put}^{cl}+AccI(\tau),&\text{if }t\in[0,\tau_{put}],\\ 0,&\text{otherwise},\end{cases} \tag{27}\] with \(\tau_{put,call}=T-T_{put,call}\) and, for \(\tau_{i}\in\mathcal{T}_{\tau,\text{coupon}}=\mathcal{T}_{\text{coupon}}\), \[B_{put,call}(\tau)=B_{put,call}^{cl}+K_{i}\frac{\tau-\tau_{i-1}}{\tau_{i}- \tau_{i-1}}. 
\tag{28}\] The constraints for \(V\) now read 1. for \(\tau\in[0,T]\), \[V(x,\tau)=0,\text{ if }U(\tau)\leq kS_{\text{int}}e^{x},\] (29) 2. for \(\tau\in[0,\tau_{call}]\), \[V(x,\tau)=0,\text{ if }U(\tau)\geq B_{call}(\tau)\] (30) 3. for \(\tau\in[0,\tau_{put}]\), \[V(x,\tau)=B_{put},\text{ if }U(\tau)\leq B_{put}(\tau).\] (31) For the boundary conditions, we note here that \(x\) is not defined at \(S=0\). As in the actual numerical computation we set \(S\) as close as possible to \(0\), we assume that (17) also holds at the proximity of \(S=0\), which corresponds to \(x_{\min}\to-\infty\) in the \(x\)-space. This results in the boundary conditions at \(x_{\min}\) \[\frac{\partial U(x_{\min},\tau)}{\partial\tau} =-rU(x_{\min},\tau)-r_{c}V(x_{\min},\tau), \tag{32}\] \[\frac{\partial V(x_{\min},\tau)}{\partial t} =-(r+r_{c})V(x_{\min},\tau). \tag{33}\] Transformation of the boundary conditions at \(S\to+\infty\) is straightforward: at \(x_{\max}\to+\infty\), \[U(x_{\max},\tau) =kS_{int}e^{x_{\max}}, \tag{34}\] \[V(x_{\max},\tau) =0. \tag{35}\] As stated in Section 1, our focus in this paper is on the the penalty TF model. We therefore need to reformulate the model into a PDE model with penalty terms associated with some constraints. In particular, as in practice we are mainly concerned with the CB price \(U\), not \(V\), we shall reformulate the CB PDE (20) with the associated constraints (24) and (25) into a penalty PDE. To this end, note that the linear complementarity problem (LCP) for (20) with constraints (24) and (25) is given by [6] \[\left(\begin{array}{c}\mathcal{L}U-r_{c}V=0\\ U\geq\max(B_{p},\kappa S_{\text{int}}e^{x})\\ U\leq\max(B_{c},\kappa S_{\text{int}}e^{x})\end{array}\right)\vee\left( \begin{array}{c}\mathcal{L}U-r_{c}V\leq 0\\ U=\max(B_{p},\kappa S_{\text{int}}e^{x})\\ U\leq\max(B_{c},\kappa S_{\text{int}}e^{x})\end{array}\right)\vee\left( \begin{array}{c}\mathcal{L}U-r_{c}V\geq 0\\ U\geq\max(B_{p},\kappa S_{\text{int}}e^{x})\\ U=\max(B_{c},\kappa S_{\text{int}}e^{x})\end{array}\right), \tag{36}\] where \(\mathcal{L}=-\frac{\partial}{\partial\tau}+\frac{\sigma^{2}}{2}\frac{\partial ^{2}}{\partial x^{2}}+\left(r-\frac{\sigma^{2}}{2}\right)\frac{\partial}{ \partial x}-r\). The penalty PDE for the bond valuation can be constructed from the LCP (36): \[\frac{\partial U}{\partial\tau}=\frac{\sigma^{2}}{2}\frac{\partial^{2}U}{ \partial x^{2}}+\left(r-\frac{\sigma^{2}}{2}\right)\frac{\partial U}{\partial x }-rU-r_{c}V+\rho\max(U-U_{call}^{*},0)+\rho\max(U_{put}^{*}-U,0),\] where \(\rho>0\) is the penalty parameter, which typically is set very large. By rewriting \(\max(U-U_{call}^{*},0)=\alpha_{call}(U-U_{call}^{*})\) and \(\max(U_{put}^{*}-U,0)=\alpha_{put}(U-U_{put}^{*})\), with \[\alpha_{call}=\begin{cases}1,&\text{if }U-U_{call}^{*}\geq 0,\\ 0,&\text{otherwise},\end{cases}\quad\text{and}\quad\alpha_{put}=\begin{cases}1,& \text{if }U_{put}^{*}-U\geq 0,\\ 0,&\text{otherwise},\end{cases} \tag{37}\] the CB PDE can be reformulated into \[\frac{\partial U}{\partial\tau}=\frac{\sigma^{2}}{2}\frac{\partial^{2}U}{ \partial x^{2}}+(r-\frac{\sigma^{2}}{2})\frac{\partial U}{\partial x}-rU-r_{c} V+\rho\alpha_{call}(U-U_{call}^{*})+\rho\alpha_{put}(U_{put}^{*}-U). \tag{38}\] Finite element method In this section, we construct finite-element methods to approximately discretize the penalty CB PDE (38) and the COCB PDE (21) in the transformed TF model. 
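Before discretizing, the following small sketch evaluates the penalty source term of (38) on a grid of candidate CB values, using the switches \(\alpha_{call}\) and \(\alpha_{put}\) from (37). The penalty parameter \(\rho=10^{12}\) matches the value used in the numerical examples later; the grid values and bounds are illustrative only.

```python
import numpy as np

def penalty_source(U, U_call_star, U_put_star, rho=1e12):
    """rho*alpha_call*(U - U_call*) + rho*alpha_put*(U_put* - U), cf. (37)-(38)."""
    alpha_call = (U - U_call_star >= 0.0).astype(float)   # switch for the call bound
    alpha_put = (U_put_star - U >= 0.0).astype(float)      # switch for the put bound
    return rho * alpha_call * (U - U_call_star) + rho * alpha_put * (U_put_star - U)

U = np.array([95.0, 104.0, 118.0])                  # candidate CB values
U_call_star = np.array([112.0, 112.0, 112.0])       # upper bound max(B_call, k S_int e^x)
U_put_star = np.array([105.0, 105.0, 105.0])        # lower bound max(B_put, k S_int e^x)
print(penalty_source(U, U_call_star, U_put_star))   # nonzero only where a constraint is violated
```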
To apply finite element spatial discretization to the PDEs, the domain \((0,\infty)\) is approximated by the bounded domain \(\Omega=(x_{\min},x_{\max})\) in the following: Suppose that the solution of the bond PDEs are functions of the following class: \[U(x,\tau),V(x,\tau)\in\mathcal{H}^{1}=\left\{f:\Omega\to\mathbb{R}\mid f, \frac{\partial f}{\partial x}\in L_{2}(\Omega)\text{ and }f\text{ satisfies boundary conditions on }\partial\Omega\right\}.\] Consider two test functions \(w,z\in\mathcal{H}_{0}^{1}\), where \(\mathcal{H}_{0}^{1}=\left\{f:\Omega\to\mathbb{R}\mid f,\frac{\partial f}{ \partial z}\in L_{2}(\Omega)\text{ and }f|_{\partial\Omega}=0\right\}\). The weak formulation of the penalty PDE (38) and (21) reads \[\int\limits_{\Omega}w\frac{\partial U}{\partial\tau} =\frac{\sigma^{2}}{2}\int\limits_{\Omega}w\frac{\partial^{2}U}{ \partial x^{2}}+\left(r-\frac{\sigma^{2}}{2}\right)\int\limits_{\Omega}w \frac{\partial U}{\partial x}-r\int\limits_{\Omega}wU-r_{c}\int\limits_{ \Omega}wV+\rho\int\limits_{\Omega}\alpha_{call}w(U-U_{call}^{*})+\rho\int \limits_{\Omega}\alpha_{put}w(U_{put}^{*}-U),\] \[\int\limits_{\Omega}z\frac{\partial V}{\partial\tau} =\frac{\sigma^{2}}{2}\int\limits_{\Omega}z\frac{\partial^{2}V}{ \partial x^{2}}+\left(r-\frac{\sigma^{2}}{2}\right)\int\limits_{\Omega}z\frac {\partial V}{\partial x}-\left(r+r_{c}\right)\int\limits_{\Omega}zV,\] where the integration is carried out along the \(x\)-direction (\(dx\) is not indicated to save space). Integration by parts and applying the vanishing property of the function \(w\) and \(z\) at the boundary results in the weak formulation \[\frac{\partial}{\partial\tau}\int\limits_{\Omega}wU =-\frac{\sigma^{2}}{2}\int\limits_{\Omega}\frac{\partial w}{ \partial x}\frac{\partial U}{\partial x}-\left(r-\frac{\sigma^{2}}{2}\right) \int\limits_{\Omega}\frac{\partial w}{\partial x}U-r\int\limits_{\Omega}wU-r _{c}\int\limits_{\Omega}wV+\rho\int\limits_{\Omega}\alpha_{call}w(U-U_{call }^{*})+\rho\int\limits_{\Omega}\alpha_{put}w(U_{put}^{*}-U),\] \[\frac{\partial}{\partial\tau}\int\limits_{\Omega}zV =-\frac{\sigma^{2}}{2}\int\limits_{\Omega}\frac{\partial z}{ \partial x}\frac{\partial V}{\partial x}-\left(r-\frac{\sigma^{2}}{2}\right) \int\limits_{\Omega}\frac{\partial z}{\partial x}V-\left(r+r_{c}\right)\int \limits_{\Omega}zV.\] To build our finite-element approximation, we consider the finite dimensional subspace \(S_{0}^{h}\subset\mathcal{H}_{0}^{1}\), spanned by the basis \(\{\psi_{1},\psi_{2},\ldots,\psi_{n}\}\). The finite-element approximation to the solution \(U\) is the function \[U_{h}=\sum_{i=1}^{n}u_{i}\psi_{i}+\sum_{i\in\mathcal{I}_{\partial}}u_{i}\psi_{i }\simeq U,\quad u_{i}\in\mathbb{R},\] where \(\psi_{i\in\mathcal{I}_{\partial}}\) are additional functions needed to interpolate the given solutions at the boundaries. 
A similar form of approximation to \(V\) is devised, namely \[V_{h}=\sum_{i=1}^{n}v_{i}\psi_{i}+\sum_{i\in\mathcal{I}_{\partial}}v_{i}\psi_{i }\simeq V,\quad v_{i}\in\mathbb{R}.\] The use of the above approximations results in the weak formulation in the finite-dimensional space: \[\frac{\partial}{\partial\tau}\left(\sum_{i=1}^{n}u_{i}\int\limits_ {\Omega}w\psi_{i}+\sum_{i\in\mathcal{I}_{\partial}}u_{i}\int\limits_{\Omega}w \psi_{i}\right) =-\frac{\sigma^{2}}{2}\left(\sum_{i=1}^{n}u_{i}\int\limits_{ \Omega}\frac{\partial w}{\partial x}\frac{\partial\psi_{i}}{\partial x}+\sum_{ i\in\mathcal{I}_{\partial}}u_{i}\int\limits_{\Omega}\frac{\partial w}{ \partial x}\frac{\partial\psi_{i}}{\partial x}\right)\] \[-\left(r-\frac{\sigma^{2}}{2}\right)\left(\sum_{i=1}^{n}u_{i}\int \limits_{\Omega}\frac{\partial w}{\partial x}\psi_{i}+\sum_{i\in\mathcal{I}_{ \partial}}^{n}u_{i}\int\limits_{\Omega}\frac{\partial w}{\partial x}\psi_{i}\right)\] \[-r\left(\sum_{i=1}^{n}u_{i}\int\limits_{\Omega}w\psi_{i}+\sum_{i \in\mathcal{I}_{\partial}}u_{i}\int\limits_{\Omega}w\psi_{i}\right)-r_{c} \left(\sum_{i=1}^{n}v_{i}\int\limits_{\Omega}w\psi_{i}+\sum_{i\in\mathcal{I}_{ \partial}}v_{i}\int\limits_{\Omega}w\psi_{i}\right)\] \[+\mathcal{P}_{call}+\mathcal{P}_{put}, \tag{39}\] where \(\mathcal{P}_{call}=\rho\int\limits_{\Omega}\alpha_{call}w\left(U-U_{call}^{*} \right)dx\) and \(\mathcal{P}_{put}=\rho\int\limits_{\Omega}\alpha_{put}w\left(U_{put}^{*}-U \right)dx\), and \[\frac{\partial}{\partial\tau}\left(\sum\limits_{i=1}^{n}v_{i}\int \limits_{\Omega}z\psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i}\int \limits_{\Omega}z\psi_{i}\right) =-\frac{\sigma^{2}}{2}\left(\sum\limits_{i=1}^{n}v_{i}\int \limits_{\Omega}\frac{\partial z}{\partial x}\frac{\partial\psi_{i}}{ \partial x}+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i}\int\limits_{\Omega} \frac{\partial z}{\partial x}\frac{\partial\psi_{i}}{\partial x}\right) \tag{40}\] \[-\left(r-\frac{\sigma^{2}}{2}\right)\left(\sum\limits_{i=1}^{n}v_ {i}\int\limits_{\Omega}\frac{\partial z}{\partial x}\psi_{i}+\sum\limits_{i \in\mathcal{I}_{\partial}}^{n}v_{i}\int\limits_{\Omega}\frac{\partial z}{ \partial x}\psi_{i}\right)\] \[-(r+r_{c})\left(\sum\limits_{i=1}^{n}v_{i}\int\limits_{\Omega}z \psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i}\int\limits_{\Omega}z \psi_{i}\right).\] In the Galerkin method [21], the test function \(w\) and \(z\) are chosen to coincide with the basis function \(\psi_{i}\). 
Imposing this condition for \(w,z=\psi_{j}\), \(j=1,\ldots,n\) results in the system of equations \[\frac{\partial}{\partial\tau}\left(\sum\limits_{i=1}^{n}u_{i}\int \limits_{\Omega}\psi_{j}\psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}u_{i }\int\limits_{\Omega}\psi_{j}\psi_{i}\right) =-\frac{\sigma^{2}}{2}\left(\sum\limits_{i=1}^{n}u_{i}\int \limits_{\Omega}\frac{\partial\psi_{j}}{\partial x}\frac{\partial\psi_{i}}{ \partial x}+\sum\limits_{i\in\mathcal{I}_{\partial}}u_{i}\int\limits_{\Omega }\frac{\partial\psi_{j}}{\partial x}\frac{\partial\psi_{i}}{\partial x}\right)\] \[-\left(r-\frac{\sigma^{2}}{2}\right)\left(\sum\limits_{i=1}^{n}u_ {i}\int\limits_{\Omega}\frac{\partial\psi_{j}}{\partial x}\psi_{i}+\sum\limits _{i\in\mathcal{I}_{\partial}}^{n}u_{i}\int\limits_{\Omega}\frac{\partial\psi_ {j}}{\partial x}\psi_{i}\right)\] \[-r\left(\sum\limits_{i=1}^{n}u_{i}\int\limits_{\Omega}\psi_{j} \psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}u_{i}\int\limits_{\Omega} \psi_{j}\psi_{i}\right)-r_{c}\left(\sum\limits_{i=1}^{n}v_{i}\int\limits_{ \Omega}\psi_{j}\psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i}\int \limits_{\Omega}\psi_{j}\psi_{i}\right)\] \[+\mathcal{P}_{call,j}+\mathcal{P}_{put,j}, \tag{41}\] where \(\mathcal{P}_{call,j}\) is \(\mathcal{P}_{call}\) with \(w\) be replaced by \(\psi_{j}\) and similarly for \(\mathcal{P}_{put,j}\), and \[\frac{\partial}{\partial\tau}\left(\sum\limits_{i=1}^{n}v_{i}\int \limits_{\Omega}\psi_{j}\psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i} \int\limits_{\Omega}\psi_{j}\psi_{i}\right) =-\frac{\sigma^{2}}{2}\left(\sum\limits_{i=1}^{n}v_{i}\int\limits _{\Omega}\frac{\partial\psi_{j}}{\partial x}\frac{\partial\psi_{i}}{\partial x }+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i}\int\limits_{\Omega}\frac{ \partial\psi_{j}}{\partial x}\frac{\partial\psi_{i}}{\partial x}\right)\] \[-\left(r-\frac{\sigma^{2}}{2}\right)\left(\sum\limits_{i=1}^{n}v_{i }\int\limits_{\Omega}\frac{\partial\psi_{j}}{\partial x}\psi_{i}+\sum\limits_{i \in\mathcal{I}_{\partial}}^{n}v_{i}\int\limits_{\Omega}\frac{\partial\psi_{j}}{ \partial x}\psi_{i}\right) \tag{42}\] \[-(r+r_{c})\left(\sum\limits_{i=1}^{n}v_{i}\int\limits_{\Omega}\psi _{j}\psi_{i}+\sum\limits_{i\in\mathcal{I}_{\partial}}v_{i}\int\limits_{\Omega }\psi_{j}\psi_{i}\right).\] In practice, systems of equations (41) and (42) are constructed via an assembly process using (local) element matrices, whose structures depend on the choice of the basis functions \(\psi_{i}\). The common choice for the basis functions is a class of functions satisfying the nodal condition \[\psi_{i}(x_{j})=\begin{cases}1,&i=j,\\ 0,&\text{otherwise},\end{cases} \tag{43}\] where \(x_{j}\) is the nodal point. This choice leads to global systems of linear equations with sparse and banded coefficient matrices. ### Linear polynomial bases Consider partition of the spatial domain \(\Omega\) into \(n_{E}\) non-overlapping elements \(\Omega_{j}=[x_{j-1},x_{j}]\), with \(|\Omega_{j}|=x_{j}-x_{j-1}=h\), \(x_{j}\), \(j=0,\ldots,n_{E}\), the nodal points, \(x_{0}=x_{\min}\) and \(x_{n_{E}}=x_{\max}\). In the basic element \(\Omega_{j}=[x_{j-1},x_{j}]\), we define two linear interpolation basis functions \[\psi_{j-1}(x) =(x-x_{j})/(x_{j-1}-x_{j})=-(x-x_{j})/h,\] \[\psi_{j}(x) =(x-x_{j-1})/(x_{j}-x_{j-1})=(x-x_{j-1})/h,\] resulting in the P1 finite element (P1-FEM). 
Evaluating the integrals in (41) and (42) using the above-stated basis functions over the element \(\Omega_{j}\) results in the following local (element) matrices: * For the \(\int\psi_{j}\psi_{i}dx\) term, the element matrix reads \[M_{j}=\begin{bmatrix}\int\limits_{\Omega_{j}}\psi_{j-1}\psi_{j-1}dx&\int\limits_ {\Omega_{j}}\psi_{j-1}\psi_{j}dx\\ \int\limits_{\Omega_{j}}\psi_{j}\psi_{j-1}dx&\int\limits_{\Omega_{j}}\psi_{j} \psi_{j}dx\end{bmatrix}=\frac{h}{6}\begin{bmatrix}2&1\\ 1&2\end{bmatrix}.\] * For the \(-\int\psi_{j,x}\psi_{i,x}dx\) term, the element matrix reads \[K_{j}=-\begin{bmatrix}\int\limits_{\Omega_{j}}\psi_{j-1,x}\psi_{j-1,x}dx&\int \limits_{\Omega_{j}}\psi_{j-1,x}\psi_{j,x}dx\\ \int\limits_{\Omega_{j}}\psi_{j,x}\psi_{j-1,x}dx&\int\limits_{\Omega_{j}}\psi _{j,x}\psi_{j,x}dx\end{bmatrix}=-\frac{1}{h}\begin{bmatrix}1&-1\\ -1&1\end{bmatrix}.\] * For the \(\int\psi_{j}\psi_{i,x}dx\) term, the element matrix reads \[N_{j}=\begin{bmatrix}\int\limits_{\Omega_{j}}\psi_{j-1}\psi_{j-1,x}dx&\int \limits_{\Omega_{j}}\psi_{j-1}\psi_{j,x}dx\\ \int\limits_{\Omega_{j}}\psi_{j}\psi_{j-1,x}dx&\int\limits_{\Omega_{j}}\psi _{j}\psi_{j,x}dx\end{bmatrix}=\frac{1}{2}\begin{bmatrix}-1&-1\\ 1&1\end{bmatrix}.\] ### Quadratic polynomial bases In this approach, we add a midpoint \(x_{j-\frac{1}{2}}=(x_{j-1}+x_{j})/2\) in the basic element \(\Omega_{j}\), giving three nodal points: \(x_{j-1}\), \(x_{j-1/2}\), and \(x_{j}\), and define three quadratic interpolation polynomials satisfying the nodal condition (43): \[\psi_{j-1}(x) =\frac{(x-x_{j-\frac{1}{2}})(x-x_{j})}{(x_{j-1}-x_{j-\frac{1}{2}} )(x_{j-1}-x_{j})}=2(x-x_{j-\frac{1}{2}})(x-x_{j})/h^{2}, \tag{44}\] \[\psi_{j-\frac{1}{2}}(x) =\frac{(x-x_{j-1})(x-x_{j})}{(x_{j-\frac{1}{2}}-x_{j-1})(x_{j- \frac{1}{2}}-x_{j})}=-4(x-x_{j-1})(x-x_{j})/h^{2},\] (45) \[\psi_{j}(x) =\frac{(x-x_{j-1})(x-x_{j-\frac{1}{2}})}{(x_{j}-x_{j-1})(x_{j}-x_ {j-\frac{1}{2}})}=2(x-x_{j-1})(x-x_{j-\frac{1}{2}})/h^{2}, \tag{46}\] resulting in P2-FEM. The local element matrices are as follows: * the \(-\int\psi_{j,x}\psi_{i,x}dx\) term: \[K_{j}=-\begin{bmatrix}\int\limits_{\Omega_{j}}\psi_{j-1,x}\psi_{j-1,x}dx&\int \limits_{\Omega_{j}}\psi_{j-1,x}\psi_{j-\frac{1}{2},x}dx&\int\limits_{ \Omega_{j}}\psi_{j-1,x}\psi_{j,x}dx\\ \int\limits_{\Omega_{j}}\psi_{j-\frac{1}{2},x}\psi_{j-1,x}dx&\int \limits_{\Omega_{j}}\psi_{j-\frac{1}{2},x}\psi_{j-\frac{1}{2},x}dx&\int \limits_{\Omega_{j}}\psi_{j-\frac{1}{2},x}\psi_{j,x}dx\\ \int\limits_{\Omega_{j}}\psi_{j,x}\psi_{j-1,x}dx&\int \limits_{\Omega_{j}}\psi_{j,x}\psi_{j-\frac{1}{2},x}dx&\int \limits_{\Omega_{j}}\psi_{j,x}dx\end{bmatrix}=-\frac{1}{3h}\begin{bmatrix}7&-8&1 \\ -8&16&-8\\ 1&-8&7\end{bmatrix}.\] * the \(\int\psi_{j}\psi_{i,x}dx\) term: \[N_{j}=\begin{bmatrix}\int\limits_{\Omega_{j}}\psi_{j-1}\psi_{j-1,x}dx& \int\limits_{\Omega_{j}}\psi_{j-1}\psi_{j-\frac{1}{2},x}dx&\int\limits_{\Omega _{j}}\psi_{j-1}\psi_{j,x}dx\\ \int\limits_{\Omega_{j}}\psi_{j-\frac{1}{2}}\psi_{j-1,x}dx&\int\limits_{\Omega _{j}}\psi_{j-\frac{1}{2}}\psi_{j-\frac{1}{2},x}dx&\int\limits_{\Omega_{j}}\psi _{j-\frac{1}{2}}\psi_{j,x}dx\\ \int\limits_{\Omega_{j}}\psi_{j}\psi_{j-1,x}dx&\int\limits_{\Omega_{j}}\psi_{j }\psi_{j-\frac{1}{2},x}dx&\int\limits_{\Omega_{j}}\psi_{j}\psi_{j,x}dx\end{bmatrix} =\frac{1}{6}\begin{bmatrix}-3&-4&1\\ 4&0&-4\\ -1&4&3\end{bmatrix}.\] ### Treating the constraints by Penalty method We now turn to the two nonlinear penalty terms in (38) and construct a finite-element approximation to them. 
We in particular apply _group_ finite element [15] to deal with the nonlinearity. We shall discuss the construction for \(\mathcal{P}_{call,j}\); construction of finite element approximation for \(\mathcal{P}_{put,j}\) is done in the same way. We assume that the term \(\zeta_{call}:=\alpha_{call}(U-U_{call}^{*})\) is approximated by \[\zeta_{call}=\sum_{i=1}^{n}\zeta_{i}\psi_{i}+\sum_{\mathcal{I}_{\partial}} \zeta_{i}\psi_{i},\] where \(\zeta_{call,i}=\alpha_{call}(x_{i})(U(x_{i})-U_{call}^{*}(x_{i}))=:\alpha_{ call,i}(u_{i}-u_{call,i}^{*})\). Therefore, for \(w=\psi_{j}\), \(j=1,\ldots,n\), we have \[\mathcal{P}_{call,j} =\rho\int\limits_{\Omega}\psi_{j}\left(\sum_{i=1}^{n}\zeta_{i} \psi_{i}+\sum_{\mathcal{I}_{\partial}}\zeta_{i}\psi_{i}\right)dx=\rho\left( \sum_{i=1}^{n}\zeta_{i}\int\limits_{\Omega}\psi_{j}\psi_{i}dx+\sum_{\mathcal{I }_{\partial}}\zeta_{i}\int\limits_{\Omega}\psi_{j}\psi_{i}dx\right)\] \[=\rho\left(\sum_{i=1}^{n}\alpha_{call,i}(u_{i}-u_{call,i}^{*}) \int\limits_{\Omega}\psi_{j}\psi_{i}dx+\sum_{\mathcal{I}_{\partial}}\alpha_{ call,i}(u_{i}-u_{call,i}^{*})\int\limits_{\Omega}\psi_{j}\psi_{i}dx\right). \tag{47}\] Each integral in the above-equation is evaluated element-wise, resulting in the local element matrices \(M_{j}\), \(K_{j}\), and \(N_{j}\), given in Sections 3.1 and 3.2. By using the same argument, \[\mathcal{P}_{put,j}=\rho\left(\sum_{i=1}^{n}\alpha_{put,i}(u_{i}-u_{put,i}^{*} )\int\limits_{\Omega}\psi_{j}\psi_{i}dx+\sum_{\mathcal{I}_{\partial}}\alpha_{ put,i}(u_{i}-u_{put,i}^{*})\int\limits_{\Omega}\psi_{j}\psi_{i}dx\right). \tag{48}\] ## 4 Time integration scheme The global finite-element system obtained from assembling the local finite-element matrices can be represented by the differential algebraic equations (DAEs): \[\frac{\partial}{\partial\tau}(M\mathbf{u}+\hat{\mathbf{b}}_{M,u})= -\frac{\sigma^{2}}{2}K\mathbf{u}-\left(r-\frac{\sigma^{2}}{2}\right)N \mathbf{u}-rM\mathbf{u}-r_{c}M\mathbf{v}-\mathbf{\beta}_{1}(\mathbf{u},\mathbf{v})\] \[+\rho MP_{put}(\mathbf{u}_{put}^{*}-\mathbf{u})+\rho MP_{call}(\mathbf{u}-\bm {u}_{call}^{*})+\rho\mathbf{b}_{put}+\rho\mathbf{b}_{call}:=F_{1}(\mathbf{u},\mathbf{v}), \tag{49}\] \[\frac{\partial}{\partial\tau}(M\mathbf{v}+\hat{\mathbf{b}}_{M,v})= -\frac{\sigma^{2}}{2}K\mathbf{v}-\left(r-\frac{\sigma^{2}}{2}\right)N\mathbf{v}-(r+ r_{c})M\mathbf{v}-\mathbf{\beta}_{2}(\mathbf{v}):=F_{2}(\mathbf{v}), \tag{50}\] where \(P_{put}=\text{diag}(\alpha_{put,j})\), \(P_{call}=\text{diag}(\alpha_{call,j})\), and \[\mathbf{\beta}_{1}(\mathbf{u},\mathbf{v}) =\frac{\sigma^{2}}{2}\mathbf{b}_{K,u}+\left(r-\frac{\sigma^{2}}{2} \right)\mathbf{b}_{N,u}+r\mathbf{b}_{M,u}+r_{c}\mathbf{b}_{M,v}, \tag{51}\] \[\mathbf{\beta}_{2}(\mathbf{v}) =\frac{\sigma^{2}}{2}\mathbf{b}_{K,v}+\left(r-\frac{\sigma^{2}}{2} \right)\mathbf{b}_{N,v}+(r+r_{c})\mathbf{b}_{M,v}, \tag{52}\] are the boundary condition vectors. 
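To make the assembly behind (49)-(50) concrete, here is a minimal sketch that builds the global matrices \(M\), \(K\), and \(N\) for P1 elements on a uniform mesh from the element matrices of Section 3.1. The sketch assembles over all nodes on \(x\in[-6,2]\) (the computational interval used in Section 5); the handling of boundary nodes and of the penalty and boundary vectors is omitted for brevity, and the function and variable names are ours.

```python
import numpy as np

def assemble_p1(x_min=-6.0, x_max=2.0, n_e=100):
    """Assemble global M (mass), K, N for P1 elements on a uniform mesh."""
    h = (x_max - x_min) / n_e
    n_nodes = n_e + 1
    M = np.zeros((n_nodes, n_nodes))
    K = np.zeros((n_nodes, n_nodes))
    N = np.zeros((n_nodes, n_nodes))
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])      # local  int psi_j psi_i
    Ke = -1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # local -int psi_j' psi_i'
    Ne = 0.5 * np.array([[-1.0, -1.0], [1.0, 1.0]])        # local  int psi_j psi_i'
    for e in range(n_e):
        idx = [e, e + 1]                                   # the two nodes of element e
        M[np.ix_(idx, idx)] += Me
        K[np.ix_(idx, idx)] += Ke
        N[np.ix_(idx, idx)] += Ne
    return M, K, N, h

M, K, N, h = assemble_p1(n_e=8)
print(M.shape, np.allclose(M, M.T), np.allclose(K, K.T))    # M and K are symmetric
```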
Time integration of the DAEs (49) and (50) is carried out by applying the \(\theta\)-scheme on both equations, which results in the systems, with \(\theta\in[0,1]\), \(\Delta\tau=T/n_{\tau}\), and \(n_{\tau}\) the number of time steps, \[M\mathbf{u}^{m+1}+\hat{\mathbf{b}}_{M,u}^{m+1}-M\mathbf{u}^{m}-\hat{\mathbf{b}}_ {M,u}^{m}=\theta\Delta\tau F_{1}(\mathbf{u}^{m+1},\mathbf{v}^{m+1})+(1-\theta)\Delta \tau F_{1}(\mathbf{u}^{m},\mathbf{v}^{m}),\] \[M\mathbf{v}^{m+1}+\hat{\mathbf{b}}_{M,v}^{m+1}-M\mathbf{v}^{m}-\hat{\mathbf{b}}_{M,v}^{m}=\theta\Delta\tau F_{2}(\mathbf{v}^{m+1})+(1-\theta)\Delta\tau F_{2}(\mathbf{v}^ {m}),\] or \[A_{11}\mathbf{u}^{m+1} +A_{12}\mathbf{v}^{m+1}-\rho\theta\Delta\tau M\left(P_{put}^{m+1}(\mathbf{u} _{put}^{*,m+1}-\mathbf{u}^{m+1})+P_{ell}^{m+1}(\mathbf{u}^{m+1}-\mathbf{u}_{coll}^{*,m+1})\right)\] \[=\widetilde{A}_{11}\mathbf{u}^{m}+\widetilde{A}_{12}\mathbf{v}^{m}+\rho(1- \theta)\Delta\tau M\left(P_{put}^{m}(\mathbf{u}_{put}^{*,m}-\mathbf{u}^{m})+P_{ell}^{m} (\mathbf{u}^{m}-\mathbf{u}_{coll}^{*,m})\right)\] \[+\theta\Delta\tau\mathbf{\beta}_{11}^{m+1}+(1-\theta)\Delta\tau\mathbf{ \beta}_{1}^{m}+\mathbf{\hat{b}}_{M,u}^{m+1}-\mathbf{\hat{b}}_{M,u}^{m+1}+\theta\rho \Delta\tau(\mathbf{b}_{put}^{m+1}+\mathbf{b}_{coll}^{m+1})+(1-\theta)\rho\Delta\tau( \mathbf{b}_{put}^{m}+\mathbf{b}_{coll}^{m}), \tag{53}\] \[A_{22}\mathbf{v}^{m+1} =\widetilde{A}_{22}\mathbf{v}^{m}+\theta\Delta\tau\mathbf{\beta}_{2}^{m+1 }+(1-\theta)\Delta\tau\mathbf{\beta}_{2}^{m}+\mathbf{\hat{b}}_{M,v}^{m+1}\,, \tag{54}\] where \[A_{11} =M+\theta\Delta\tau\left(\frac{\sigma^{2}}{2}K+\left(r-\frac{ \sigma^{2}}{2}\right)N+rM\right),\] \[A_{12} =\theta\Delta\tau r_{c}M,\] \[A_{22} =M+\theta\Delta\tau\left(\frac{\sigma^{2}}{2}K+\left(r-\frac{ \sigma^{2}}{2}\right)N+(r+r_{c})M\right)\] \[\widetilde{A}_{11} =M-(1-\theta)\Delta\tau\left(\frac{\sigma^{2}}{2}K+\left(r-\frac {\sigma^{2}}{2}\right)N+rM\right),\] \[\widetilde{A}_{12} =-(1-\theta)\Delta\tau r_{c}M,\] \[\widetilde{A}_{22} =M-(1-\theta)\Delta\tau\left(\frac{\sigma^{2}}{2}K+\left(r-\frac {\sigma^{2}}{2}\right)N+(r+r_{c})M\right).\] Let the solutions \(\mathbf{u}^{m}\) and \(\mathbf{v}^{m}\) be known. The solutions at the next time level \(m+1\) can in principle be computed by first solving (54) for \(\mathbf{v}^{m+1}\). The solution \(\mathbf{u}^{m+1}\) is then computed via (53) using the known \(\mathbf{u}^{m}\), \(\mathbf{v}^{m}\), and \(\mathbf{v}^{m+1}\). This procedure however requires knowledge of the solutions at the boundaries at the time level \(m+1\). 
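As an illustration of a single step, here is a minimal sketch of the linear update (54) for the COCB values, with the boundary vectors \(\boldsymbol{\beta}_{2}\) and \(\hat{\boldsymbol{b}}_{M,v}\) set to zero purely for brevity. The parameter values follow the example used later (\(r=0.05\), \(r_{c}=0.02\), \(\sigma=0.2\), \(T=5\)); the tiny mesh below is self-contained and uses the same P1 element matrices as the previous sketch.

```python
import numpy as np

def cocb_theta_step(v_m, M, K, N, dt, r=0.05, r_c=0.02, sigma=0.2, theta=0.5):
    """One theta-scheme step of (54): A22 v^{m+1} = A22_tilde v^m (boundary terms dropped)."""
    L = 0.5 * sigma**2 * K + (r - 0.5 * sigma**2) * N + (r + r_c) * M
    A22 = M + theta * dt * L
    A22_tilde = M - (1.0 - theta) * dt * L
    return np.linalg.solve(A22, A22_tilde @ v_m)

# Tiny uniform P1 mesh so the example runs on its own.
n_e, h = 8, 1.0
M = np.zeros((n_e + 1, n_e + 1))
K = np.zeros_like(M)
N = np.zeros_like(M)
for e in range(n_e):
    ix = np.ix_([e, e + 1], [e, e + 1])
    M[ix] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    K[ix] += -1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    N[ix] += 0.5 * np.array([[-1.0, -1.0], [1.0, 1.0]])

v0 = np.full(n_e + 1, 104.0)                   # F + K = $104 at tau = 0
print(cocb_theta_step(v0, M, K, N, dt=5.0 / 200)[:3])
```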
### Boundary solutions At \(x_{\min}\), with \(u_{0}(\tau):=U(x_{\min},\tau)\), \(v_{0}(\tau):=V(x_{\min},\tau)\), etc., the boundary conditions with penalty in \(U\) can be written as follows: \[\begin{cases}\frac{\partial u_{0}(\tau)}{\partial\tau}=-ru_{0}(\tau)-r_{c}v_{0}(\tau)+\rho\max\left(u_{0}(\tau)-u_{call,0}^{*}(\tau),0\right)+\rho\max\left(u_{put,0}^{*}(\tau)-u_{0}(\tau),0\right),\\ \frac{\partial v_{0}(\tau)}{\partial\tau}=-(r+r_{c})v_{0}(\tau).\end{cases} \tag{55}\] Note that \(\max\left(u_{0}(\tau)-u_{call,0}^{*}(\tau),0\right)=p_{call,0}(\tau)(u_{0}(\tau)-u_{call,0}^{*}(\tau))\), with \[p_{call,0}=\begin{cases}1,&\text{if }u_{0}(\tau)>u_{call,0}^{*}(\tau),\\ 0,&\text{otherwise,}\end{cases}\] and \(\max\left(u_{put,0}^{*}(\tau)-u_{0}(\tau),0\right)=p_{put,0}(\tau)(u_{put,0}^{*}(\tau)-u_{0}(\tau))\), with \[p_{put,0}=\begin{cases}1,&\text{if }u_{0}(\tau)<u_{put,0}^{*}(\tau),\\ 0,&\text{otherwise.}\end{cases}\] Application of the \(\theta\)-scheme on (55) leads to the discrete equations: \[u_{0}^{m+1}+\theta\Delta\tau\left(ru_{0}^{m+1}+r_{c}v_{0}^{m+1}+\rho\left(p_{call,0}^{m+1}(u_{0}^{m+1}-u_{call,0}^{*,m+1})+p_{put,0}^{m+1}(u_{put,0}^{*,m+1}-u_{0}^{m+1})\right)\right)=u_{0}^{m}-(1-\theta)\Delta\tau\left(ru_{0}^{m}+r_{c}v_{0}^{m}+\rho\left(p_{call,0}^{m}(u_{0}^{m}-u_{call,0}^{*,m})+p_{put,0}^{m}(u_{put,0}^{*,m}-u_{0}^{m})\right)\right), \tag{56}\] \[\left[1+\theta\Delta\tau(r+r_{c})\right]v_{0}^{m+1}=\left[1-(1-\theta)\Delta\tau(r+r_{c})\right]v_{0}^{m}. \tag{57}\] Let the boundary solution \(v_{0}^{m}\) be known. Then \(v_{0}^{m+1}\) can be computed from (57). With \(u_{0}^{m}\), \(v_{0}^{m}\), and \(v_{0}^{m+1}\) now known, (56) becomes a nonlinear equation for \(u_{0}^{m+1}\), which can be solved approximately using Newton's method. First of all, we assume that the penalty term in (56) is approximated in a fully implicit way at the new time level \(m+1\). This results in the equation \[0=(1+\theta\Delta\tau r)u_{0}^{m+1}+\Delta\tau\rho\left(p_{call,0}^{m+1}(u_{0}^{m+1}-u_{call,0}^{*,m+1})+p_{put,0}^{m+1}(u_{put,0}^{*,m+1}-u_{0}^{m+1})\right)-\phi_{0}=:f(u_{0}^{m+1}), \tag{58}\] where \[\phi_{0}=u_{0}^{m}-(1-\theta)\Delta\tau\left(ru_{0}^{m}+r_{c}v_{0}^{m}\right)-\theta\Delta\tau r_{c}v_{0}^{m+1}.\] With \[f^{\prime}(u_{0}^{m+1})=1+\theta\Delta\tau r+\Delta\tau\rho\left(p_{call,0}^{m+1}-p_{put,0}^{m+1}\right), \tag{59}\] Newton's method for finding \(u_{0}^{m+1}\) satisfying (58) can be written as follows: with an initial guess \(u_{0}^{m+1,0}\), compute \(u_{0}^{m+1,k}=u_{0}^{m+1,k-1}-f(u_{0}^{m+1,k-1})/f^{\prime}(u_{0}^{m+1,k-1})\), for \(k=1,2,\dots\). In this paper, \(u_{0}^{m+1,0}\) is chosen to be the solution of the unconstrained boundary problem given by (17). Since we expect that \(u_{0}^{m+1,0}\) computed in this way is a better approximation than, e.g., \(u_{0}^{m}\), we can use this value as well to evaluate the conditions used to constrain \(v\). The complete algorithm for computing \(u_{0}^{m+1}\) and \(v_{0}^{m+1}\) is as follows: Algorithm 1: Computing the boundary solutions 1. input \(u_{0}^{m}\), \(v_{0}^{m}\); 2. compute \(B_{p}(\tau^{m+1})\) and \(B_{c}(\tau^{m+1})\); 3. compute \(v_{0}^{m+1}\) from (57); 4. compute \(u_{0}^{m+1}\) from (56) without penalty terms; 5. apply constraints on \(v_{0}^{m+1}\) using \(u_{0}^{m+1}\); 6. set \(u_{0}^{m+1,0}\gets u_{0}^{m+1}\); 7. for \(k=1,2,\dots\) until convergence 7.1. compute \(f(u_{0}^{m+1,k-1})\) using (58); 7.2. compute \(f^{\prime}(u_{0}^{m+1,k-1})\) using (59); 7.3. \(u_{0}^{m+1,k}\leftarrow u_{0}^{m+1,k-1}-f(u_{0}^{m+1,k-1})/f^{\prime}(u_{0}^{m+1,k-1})\); 8. apply constraints on \(v_{0}^{m+1}\) using \(u_{0}^{m+1,k}\); 9. if \(\tau^{m+1}\in\mathcal{T}_{\text{coupon}}\) 9.1. \(u_{0}^{m+1}\gets u_{0}^{m+1}+K\); 9.2. \(v_{0}^{m+1}\gets v_{0}^{m+1}+K\); 
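A minimal sketch of the Newton iteration in Algorithm 1 for the scalar boundary value follows, with \(f\) and \(f^{\prime}\) taken from (58)-(59) and the unconstrained solve of (56) as the initial guess. The numerical parameter values and the stand-in bound values \(u_{call,0}^{*}\) and \(u_{put,0}^{*}\) are illustrative choices of ours.

```python
def boundary_newton(u_prev, v_prev, v_new, u_call, u_put, dt, r=0.05, r_c=0.02,
                    theta=0.5, rho=1e12, tol=1e-10, max_iter=50):
    """Solve f(u) = 0 from (58) for u_0^{m+1} by Newton's method; f' given by (59)."""
    phi0 = u_prev - (1.0 - theta) * dt * (r * u_prev + r_c * v_prev) - theta * dt * r_c * v_new
    u = phi0 / (1.0 + theta * dt * r)              # unconstrained solve of (56) as initial guess
    for _ in range(max_iter):
        p_call = 1.0 if u - u_call >= 0.0 else 0.0
        p_put = 1.0 if u_put - u >= 0.0 else 0.0
        f = (1.0 + theta * dt * r) * u + dt * rho * (p_call * (u - u_call)
                                                     + p_put * (u_put - u)) - phi0
        fp = 1.0 + theta * dt * r + dt * rho * (p_call - p_put)
        du = f / fp
        u -= du
        if abs(du) < tol * max(1.0, abs(u)):
            break
    return u

print(boundary_newton(u_prev=104.0, v_prev=104.0, v_new=103.9,
                      u_call=110.0, u_put=0.0, dt=0.025))
```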
Starting from an initial guess of the solution \(\mathbf{u}^{m+1,0}\), the solution \(\mathbf{u}^{m+1}\) is approximated using the iterands \[\mathbf{u}^{m+1,k}\leftarrow\mathbf{u}^{m+1,k-1}-\left(\nabla\mathbf{f}(\mathbf{u}^{m+1,k-1}) \right)^{-1}\mathbf{f}(\mathbf{u}^{m+1,k-1}),\quad k=1,2,\ldots\] where \(\nabla\mathbf{f}(\mathbf{u}^{m+1,k-1})\) is the Jacobian of \(\mathbf{f}\), given by \[\nabla\mathbf{f}(\mathbf{u}^{m+1,k-1})=A_{11}+\rho\Delta\tau M\left(P_{ call}^{m+1,k-1}-P_{put}^{m+1,k-1}\right). \tag{62}\] The initial guess \(\mathbf{u}^{m+1,0}\) is chosen such that it solves unconstrained CB PDE, which is equivalent to solving (53) without penalty terms. We shall also use this unconstrained solution \(\mathbf{u}^{m+1,0}\) to constrain the initially computed \(\mathbf{v}^{m+1}\) prior to the start of Newton's iterations. The procedure for computing the solutions in the interior after one \(\theta\)-scheme step is summarized in the following algorithm: Algorithm 2: Computing the interior solutions 1. input \(\mathbf{u}^{m}\), \(\mathbf{v}^{m}\); 2. compute \(B_{\rho}(\tau^{m+1})\) and \(B_{c}(\tau^{m+1})\); 3. compute \(\mathbf{v}^{m+1}\) from (54); 4. compute \(\mathbf{u}^{m+1}\) from (53) without penalty terms; 5. apply constraints on \(\mathbf{v}^{m+1}\) using \(\mathbf{u}^{m+1}\); 6. set \(\mathbf{u}^{m+1,0}\leftarrow\mathbf{u}^{m+1}\); 7. for \(k=1,2,\ldots\) until convergence 1. compute \(\mathbf{f}(\mathbf{u}^{m+1,k-1})\) using (60); 2. compute \(\nabla\mathbf{f}(\mathbf{u}^{m+1,k-1})\) using (62); 3. \(\mathbf{u}^{m+1,k}\leftarrow\mathbf{u}^{m+1,k-1}-\left(\nabla\mathbf{f}(\mathbf{u}^{m+1,k-1}) \right)^{-1}\mathbf{f}(\mathbf{u}^{m+1,k-1})\); 8. apply constraints on \(\mathbf{v}^{m+1}\) using \(\mathbf{u}^{m+1,k}\); 9. if \(\tau^{m+1}\in\mathcal{T}_{\text{coupon}}\) 1. \(\mathbf{u}^{m+1}\leftarrow\mathbf{u}^{m+1}+K\); 2. \(\mathbf{v}^{m+1}\leftarrow\mathbf{v}^{m+1}+K\); The existence of a solution of Newton's method requires nonsingularity of the Jacobian \(\nabla f\). Like in the case with Newton's method for computing boundary solutions, there may exist values of parameters \(r\), \(h\), \(\sigma\), \(\theta\), \(\Delta\tau\), and \(\rho\) such that the Jacobian is singular. Quantification of such conditions requires analysis, which is beyond the scope of this paper. Numerical tests using various realistic values of parameters exhibit no convergence issues with the methods, suggesting nonsingularity of \(\nabla f\). ### Summary of the time integration method By including Algorithm 1 and 2 in the \(\theta\)-scheme, the complete time stepping procedure to compute the solutions \(\mathbf{u}(x,\tau)\) and \(\mathbf{v}(x,\tau)\) is shown in Algorithm 3. Algorithm 3: \(\theta\)-scheme for time integration with constraints 1. input the initial solution at \(\tau^{0}=0\): \(\mathbf{u}^{0}\), \(\mathbf{v}^{0}\), \(u_{0}^{0}\), \(u_{n+1}^{0}\), \(v_{0}^{0}\), and \(v_{n+1}^{0}\); 2. for \(m=0,1,\ldots,n_{\tau}-1\) 1. Compute \(B_{\rho}(\tau^{m+1})\) and \(B_{c}(\tau^{m+1})\); 2. Compute \(u_{0}^{m+1}\) and \(v_{0}^{m+1}\) by performing Step 3-9 of Algorithm 1; 2. Compute \(u_{n+1}^{m+1}=\kappa S_{\text{int}}e^{x_{max}}\) and \(v_{n+1}^{m+1}=0\); 2. Apply the conversion-callability-puttability constraints on \(u_{n+1}^{m+1}\) and \(v_{n+1}^{m+1}\). 2. if \(\tau^{m+1}\in\mathcal{T}_{\text{coupon}}\) 2.1. \(u_{n+1}^{m+1}\gets u_{n+1}^{m+1}+K\); 2.1. \(v_{n+1}^{m+1}\gets v_{n+1}^{m+1}+K\); 2.1. 
   - 2.6. compute \(\mathbf{u}^{m+1}\) and \(\mathbf{v}^{m+1}\) by performing Steps 3-9 of Algorithm 2.

For increased stability, it is possible to initiate the time integration using Rannacher's step [24]. In this case, for \(m=0\), the solutions \(\mathbf{u}^{1}\), \(\mathbf{v}^{1}\), \(u_{0}^{1}\), \(u_{n+1}^{1}\), \(v_{0}^{1}\), and \(v_{n+1}^{1}\) are computed from the initial conditions using Steps 2.1-2.6, but with a smaller time step than \(\Delta\tau\) (e.g., \(\Delta\tau_{rann}=\Delta\tau/n_{rann}\), where \(1<n_{rann}\in\mathbb{N}\)). Since we did not observe stability issues with \(\theta=1/2\), we did not implement Rannacher's step for the numerical results presented in Section 5.

## 5 Numerical solution of the TF model

### Comparison with FDM

In this section, we present numerical results obtained with the FEM and the time integration method discussed in Sections 3 and 4. The modeling and computational parameters are summarized in Table 1, taken from [6, 16]. The numerical solution is computed for \(x\in[-6,2]\), so that the left boundary is sufficiently close to \(0\) in the \(S\)-space. We compare the numerical results with a second-order finite difference method (FDM), combined with the \(\theta\)-scheme for time integration. To have a fair comparison, we set the number of unknowns in both methods to be equal. For instance, the same meshing can be used for P1-FEM and FDM. For P2-FEM, due to the additional unknowns at the midpoint of each element, we double the number of gridpoints in the FDM meshing. Throughout, \(n_{E}\) and \(n_{t}=n_{\tau}\) denote the number of elements and time steps, respectively. Thus, for P1-FEM, the number of unknowns (nodal points) is \(n=n_{E}-1\) and, for P2-FEM, \(n=2n_{E}-1\). For the \(\theta\)-scheme, we set \(\theta=\frac{1}{2}\) (Crank-Nicolson).

Figures 1, 2, and 3 show surfaces of the CB price \(U\) for \(40\leq S\leq 160\), computed using \(n_{E}=100\) and \(200\) and with \(n_{t}=200\). We note here that the steps in the solution surfaces correspond to the coupon payment times. Qualitatively, the surfaces indicate no noticeable difference between the solutions of FDM and P1- or P2-FEM. That the FDM and FEM solutions are qualitatively indistinguishable can also be seen from Figures 4 and 5, where the solutions at \(t=0\) are plotted.

Figure 3: P2-FEM solution of the penalty TF model at \(t\in[0,5]\), with \(r=0.05\), \(r_{c}=0.02\), \(\sigma=0.2\), \(F=\$100\), \(K=\$4\), \(\rho=10^{12}\); Left figure: \(n_{E}=100\), \(n_{t}=100\); Right figure: \(n_{E}=200\), \(n_{t}=200\).

For convergence (mesh independence) and comparison of the FE solutions with the FD solutions obtained using various spatial meshes, we present the bond price \(U\) at the initial time \(t=0\) and \(S=100\) in Table 2, computed using FDM, P1-FEM, and P2-FEM, with \(n_{E}=n_{t}\) to keep the ratio \(\Delta\tau/h\) constant (\(\Delta\tau/h=0.625<1\)). The standard stability condition for the convection-diffusion equation discretized by FDM with the explicit scheme (\(\theta=0\)) requires that
\[\Delta\tau\leq C_{\theta=0}\min\left(\frac{h}{|a|},\frac{h^{2}}{2\epsilon}\right),\quad 0<C_{\theta=0}\leq 1,\]
where, for the TF model, \(a=r-\sigma^{2}/2\) and \(\epsilon=\sigma^{2}/2\). With \(n_{E}=n_{t}\), there exists however some constant \(n_{E}^{*}\) such that the above stability condition is violated for \(n_{E}>n_{E}^{*}\). Instability, however, was not observed in our numerical simulations with \(\theta=1/2\).
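As a small illustration of this bound, the Matlab lines below evaluate the explicit-scheme restriction with \(a=r-\sigma^{2}/2\), \(\epsilon=\sigma^{2}/2\), \(h=8/n_{E}\) (the \(x\)-domain \([-6,2]\) has length 8) and \(\Delta\tau=0.625\,h\); the constant \(C_{\theta=0}=1\) and the range of \(n_{E}\) values are assumptions made for the sketch.

```matlab
% Explicit-scheme (theta = 0) stability restriction for the TF model:
%   dtau <= min(h/|a|, h^2/(2*eps)),   a = r - sigma^2/2,  eps = sigma^2/2.
r = 0.05; sigma = 0.2;
a    = r - sigma^2/2;                 % convection coefficient
eps_ = sigma^2/2;                     % diffusion coefficient
nE   = (50:50:2000)';                 % candidate numbers of elements (assumed grid)
h    = 8./nE;                         % x-domain [-6,2] has length 8
dtau = 0.625*h;                       % n_E = n_t keeps dtau/h = 0.625
stable  = dtau <= min(h/abs(a), h.^2/(2*eps_));
nE_star = max(nE(stable));            % largest n_E still satisfying the bound
fprintf('explicit scheme stable up to n_E = %d\n', nE_star);
```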
Results in Table 2 suggest that both the P2-FEM and FDM solutions converge to a value around 124.78 (up to 2 decimal places) as the time-spatial mesh is refined. The P1-FEM solution exhibits a similar convergence towards 124.78, but at a lower rate. As can be observed, our FE solutions compare favorably with the FD solutions across these spatial meshes.

### The Greeks

In the literature, because FDM is mostly used to compute the solutions, the calculation of the Greeks is done approximately using finite differencing [12, 16] at grid points. Finite difference approaches can also in principle be used to approximate the Greeks once the solutions are available via FEM. In this paper, we shall compute some of the Greeks, i.e., the Delta (\(\Delta\)) and the Gamma (\(\Gamma\)), directly from the finite-element approximation functions. We note, however, that since the approximating P1 FE function is continuous but non-differentiable at the element boundaries, we can in this case only compute the Greeks at points inside the elements.

For P1-FEM, the approximating function is piecewise-linear and continuous. Therefore, \(\Delta\) can be computed explicitly by differentiating the finite element solution. In the element \(\Omega_{j}\), the FEM solution is given by
\[U_{h}(x)=-u_{j-1}(x-x_{j})/h+u_{j}(x-x_{j-1})/h,\quad x\in\Omega_{j}. \tag{63}\]
Therefore, at any instant \(t\),
\[\Delta(S_{j-1/2})=\left.\frac{\partial U_{h}}{\partial S}\right|_{S_{j-1/2}}=\frac{S_{\text{int}}}{S_{j-1/2}}\left.\frac{\partial U_{h}}{\partial x}\right|_{x_{j-1/2}}=\frac{S_{\text{int}}}{hS_{j-1/2}}(u_{j}-u_{j-1})\]
after using the change of variables \(S_{j-1/2}=S_{\text{int}}e^{x_{j-1/2}}\), \(x_{j-1/2}=(x_{j-1}+x_{j})/2\). The Greeks corresponding to higher-order derivatives, however, vanish, since the second derivative of the piecewise-linear approximation is zero inside each element. In this case, we have to resort to finite differencing, using the FEM solutions at nodal points. For instance,
\[\left.\frac{\partial^{2}U_{h}}{\partial x^{2}}\right|_{x_{j-1}}\simeq(u_{j}-2u_{j-1}+u_{j-2})/h^{2}.\]
Hence,
\[\Gamma(S_{j-1})=\left.\frac{\partial^{2}U_{h}}{\partial S^{2}}\right|_{S_{j-1}}=\frac{S_{\text{int}}^{2}}{S^{2}}\left.\frac{\partial^{2}U_{h}}{\partial x^{2}}\right|_{x_{j-1}}=\frac{S_{\text{int}}^{2}}{h^{2}S^{2}}(u_{j}-2u_{j-1}+u_{j-2}). \tag{64}\]
For P2-FEM, the approximating FE function is differentiable, and in the element \(\Omega_{j}\) it reads
\[U_{h}(x)=2u_{j-1}(x-x_{j-\frac{1}{2}})(x-x_{j})/h^{2}-4u_{j-1/2}(x-x_{j-1})(x-x_{j})/h^{2}+2u_{j}(x-x_{j-1})(x-x_{j-\frac{1}{2}})/h^{2},\]
which, after differentiation at \(x_{j-1/2}\), yields
\[\left.\frac{\partial U_{h}}{\partial x}\right|_{x_{j-1/2}}=(u_{j}-u_{j-1})/h\quad\text{and}\quad\left.\frac{\partial^{2}U_{h}}{\partial x^{2}}\right|_{x_{j-1/2}}=4(u_{j}-2u_{j-1/2}+u_{j-1})/h^{2}.\]
This in turn results in the same expression for \(\Delta(S_{j-1/2})\) as above and
\[\Gamma(S_{j-1/2})=\frac{4S_{\text{int}}^{2}}{h^{2}S^{2}}(u_{j}-2u_{j-1/2}+u_{j-1}). \tag{65}\]
An example of computed Greeks is shown in Figure 6 for \(\Delta\) and \(\Gamma\) during the first two years (\(t\in[0,2]\)), evaluated using P2-FEM at \(S_{j-1/2}\). Visible peaks and spikes in the solution surfaces are due to the constraints and coupon payments, at which the solution is not differentiable; Delta and Gamma, however, are smooth at the locations where the inequality constraints are not imposed.
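The formulas above translate directly into code. The Matlab sketch below evaluates \(\Delta\) and \(\Gamma\) at the element midpoints from P2-FEM nodal values, following (65); the nodal data used here are a synthetic placeholder standing in for one time level of the computed solution.

```matlab
% Delta and Gamma at element midpoints from P2-FEM nodal values, cf. (65).
S_int = 100;                          % reference price used in S = S_int*exp(x)
x  = linspace(-6, 2, 201);            % P1 nodes plus midpoints (n_E = 100 elements)
h  = x(3) - x(1);                     % element width (two P2 sub-intervals per element)
u  = max(S_int*exp(x) - 100, 0) + 90; % placeholder nodal values u_j

jL = 1:2:numel(x)-2;                  % left node of each element
jM = jL + 1;  jR = jL + 2;            % midpoint and right node
xm = x(jM);  Sm = S_int*exp(xm);      % element midpoints in S-space

Delta = S_int ./ (h*Sm) .* (u(jR) - u(jL));                    % first derivative
Gamma = 4*S_int^2 ./ (h^2*Sm.^2) .* (u(jR) - 2*u(jM) + u(jL)); % second derivative
plot(Sm, Delta, Sm, Gamma);  legend('\Delta', '\Gamma');
```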
\begin{table}
\begin{tabular}{c c c c} \hline \(n_{E}\) & P1-FEM & P2-FEM & FDM \\ \hline \hline 100 & 124.422 & 124.846 & 124.991 \\ 200 & 124.653 & 124.820 & 124.848 \\ 400 & 124.740 & 124.814 & 124.814 \\ 600 & 124.754 & 124.781 & 124.792 \\ 800 & 124.742 & 124.781 & 124.790 \\ 1000 & 124.746 & 124.775 & 124.780 \\ 1200 & 124.758 & 124.777 & 124.781 \\ \hline \end{tabular}
\end{table}
Table 2: Convertible bond price at \(t=0\) and \(S=100\) computed by P1-FEM, P2-FEM, and FDM, with \(n_{E}=n_{t}\)

### Accuracy of the approximations

As a way to measure the accuracy of our FEM approach, we consider a linear model problem [25] without constraints (and hence without penalty term) and without coupon payments, corresponding to the exact solution
\[U(x,\tau)=S_{int}^{2}e^{2x}\sqrt{S_{int}e^{x}}-Fe^{-r\tau}\sqrt{S_{int}e^{x}},\qquad V(x,\tau)=S_{int}^{2}e^{2x}\sqrt{S_{int}e^{x}}-Fe^{-r\tau}\sqrt{S_{int}e^{x}}+x^{2}\tau,\]
where \(\tau\in(0,1)\). This manufactured solution corresponds to the initial conditions
\[\begin{cases}U(x,0)=S_{int}^{2}e^{2x}\sqrt{S_{int}e^{x}}-F\sqrt{S_{int}e^{x}},\\ V(x,0)=S_{int}^{2}e^{2x}\sqrt{S_{int}e^{x}}-F\sqrt{S_{int}e^{x}},\end{cases}\]
the boundary conditions
\[U(0,\tau)=S_{int}^{2}\sqrt{S_{int}}-Fe^{-r\tau}\sqrt{S_{int}},\qquad U(1,\tau)=S_{int}^{2}e^{2}\sqrt{S_{int}e}-Fe^{-r\tau}\sqrt{S_{int}e},\]
\[V(0,\tau)=S_{int}^{2}\sqrt{S_{int}}-Fe^{-r\tau}\sqrt{S_{int}},\qquad V(1,\tau)=S_{int}^{2}e^{2}\sqrt{S_{int}e}-Fe^{-r\tau}\sqrt{S_{int}e}+\tau,\]
and additional forcing terms
\[f_{1}=U_{\tau}-\frac{\sigma^{2}}{2}U_{xx}-(r-\frac{\sigma^{2}}{2})U_{x}+rU+r_{c}V,\]
\[f_{2}=V_{\tau}-\frac{\sigma^{2}}{2}V_{xx}-(r-\frac{\sigma^{2}}{2})V_{x}+(r+r_{c})V,\]
in the CB and COCB PDE, respectively. Applying FEM and the \(\theta\)-scheme leads to the numerical procedure
\[A_{11}\mathbf{u}^{m+1}=\widetilde{A}_{11}\mathbf{u}^{m}-A_{12}\mathbf{v}^{m+1}+\widetilde{A}_{12}\mathbf{v}^{m}+\theta\Delta\tau\mathbf{\beta}_{1}^{m+1}+(1-\theta)\Delta\tau\mathbf{\beta}_{1}^{m}+\hat{\mathbf{b}}_{M,u}^{m}-\hat{\mathbf{b}}_{M,u}^{m+1}+\theta\Delta\tau f_{1}^{m+1}+(1-\theta)\Delta\tau f_{1}^{m},\]
\[A_{22}\mathbf{v}^{m+1}=\widetilde{A}_{22}\mathbf{v}^{m}+\theta\Delta\tau\mathbf{\beta}_{2}^{m+1}+(1-\theta)\Delta\tau\mathbf{\beta}_{2}^{m}+\hat{\mathbf{b}}_{M,v}^{m}-\hat{\mathbf{b}}_{M,v}^{m+1}+\theta\Delta\tau f_{2}^{m+1}+(1-\theta)\Delta\tau f_{2}^{m}.\]
Calculated errors are presented in Figures 7 and 8 using two measures [3, 2]:
\[\|Error\|_{L^{2}}=\|U(x,1)-\mathbf{u}^{n_{t}}\|_{L^{2}},\]
\[\|Error\|_{L^{\infty}(L^{2})}=\max_{1\leq m\leq n_{t}}\left(\|U(x,m\Delta\tau)-\mathbf{u}^{m}\|_{L^{2}}\right),\]
where \(\mathbf{u}^{m}\) is the solution of the model problem at \(\tau=m\Delta\tau\), computed by P1-FEM or P2-FEM. In Figure 7, the errors are calculated for varying \(\Delta\tau\) and a fixed value of \(h\). The errors decrease as \(\Delta\tau\) is reduced to \(0\), at a rate proportional to \(\Delta\tau\) (first-order convergence). This first-order convergence is consistent with the result in [6]. In Figure 8, the errors are calculated for varying \(h\) and fixed \(\Delta\tau\). The plots suggest convergence of P1-FEM and P2-FEM at rates proportional to \(h\) and \(h^{2}\), respectively (theoretically, for P1-FEM the convergence rate is given by the relation \(\|Error\|_{L^{\infty}(L^{2})}\leq C(h+\Delta\tau)\)). This convergence rate is as expected for the linear model used but cannot, however, be expected when nonlinear (e.g., penalty) terms are added.
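For completeness, the following Matlab sketch shows how such error measures and an observed convergence order can be evaluated; since the FE solver itself is not reproduced here, the piecewise-linear interpolant of the exact solution \(U(x,1)\) is used as a stand-in for the computed solution, so the reported rate is that of the interpolation error.

```matlab
% Discrete error measures from above, illustrated with the piecewise-linear
% interpolant of the exact U(x,1) standing in for a computed FE solution.
S_int = 100; F = 100; r = 0.05; tau = 1;
Uex = @(x) S_int^2*exp(2*x).*sqrt(S_int*exp(x)) - F*exp(-r*tau)*sqrt(S_int*exp(x));

errL2 = zeros(2,1);
for level = 1:2
    nE = 100*2^(level-1);              % refine the spatial mesh once
    x  = linspace(0, 1, nE+1);         % nodes of the model problem on [0,1]
    xf = linspace(0, 1, 10*nE+1);      % fine grid for evaluating the L2 norm
    uh = interp1(x, Uex(x), xf);       % piecewise-linear stand-in for u^{n_t}
    errL2(level) = sqrt(trapz(xf, (Uex(xf) - uh).^2));   % ||Error||_{L^2}
end
order = log2(errL2(1)/errL2(2));       % observed rate when h is halved
fprintf('L2 errors: %.3e, %.3e; observed order: %.2f\n', errL2(1), errL2(2), order);
```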
## 6 Conclusion

We presented numerical methods based on P1 and P2 finite elements for the TF model of convertible bond pricing, reformulated as a penalty PDE. The resulting differential-algebraic equations were solved with the \(\theta\)-scheme, with the nonlinearity due to the penalty term handled by Newton's method. We reported results from numerical simulations using the two finite element approximations and compared them with a finite difference method; the two approaches produced closely matching solutions. Furthermore, we verified the advantage of P2-FEM over P1-FEM: it attains comparable accuracy with fewer elements. The finite-element method described in this paper can also be applied to other convertible-bond pricing models, such as the AFV model [6]. Results on this will be reported in the future.
2302.06406
On generalized preconditioners for time-parallel parabolic optimal control
The ParaDiag family of algorithms solves differential equations by using preconditioners that can be inverted in parallel through diagonalization. In the context of optimal control of linear parabolic PDEs, the state-of-the-art ParaDiag method is limited to solving self-adjoint problems with a tracking objective. We propose three improvements to the ParaDiag method: the use of alpha-circulant matrices to construct an alternative preconditioner, a generalization of the algorithm for solving non-self-adjoint equations, and the formulation of an algorithm for terminal-cost objectives. We present novel analytic results about the eigenvalues of the preconditioned systems for all discussed ParaDiag algorithms in the case of self-adjoint equations, which prove the favorable properties of the alpha-circulant preconditioner. We use these results to perform a theoretical parallel-scaling analysis of ParaDiag for self-adjoint problems. Numerical tests confirm our findings and suggest that the self-adjoint behavior, which is backed by theory, generalizes to the non-self-adjoint case. We provide a sequential, open-source reference solver in Matlab for all discussed algorithms.
Arne Bouillon, Giovanni Samaey, Karl Meerbergen
2023-02-13T14:45:00Z
http://arxiv.org/abs/2302.06406v2
# On generalized preconditioners for time-parallel parabolic optimal control

###### Abstract

The ParaDiag family of algorithms solves differential equations by using preconditioners that can be inverted in parallel through diagonalization. In the context of optimal control of linear parabolic PDEs, the state-of-the-art ParaDiag method is limited to solving self-adjoint problems with a tracking objective. We propose three improvements to the ParaDiag method: the use of alpha-circulant matrices to construct an alternative preconditioner, a generalization of the algorithm for solving non-self-adjoint equations, and the formulation of an algorithm for terminal-cost objectives. We present novel analytic results about the eigenvalues of the preconditioned systems for all discussed ParaDiag algorithms in the case of self-adjoint equations, which prove the favorable properties of the alpha-circulant preconditioner. We use these results to perform a theoretical parallel-scaling analysis of ParaDiag for self-adjoint problems. Numerical tests confirm our findings and suggest that the self-adjoint behavior, which is backed by theory, generalizes to the non-self-adjoint case. We provide a sequential, open-source reference solver in Matlab for all discussed algorithms.

MSC codes: 49M05, 65F08, 65K10, 65Y05

## 1 Introduction

We are interested in optimal-control problems of the form
\[\min_{y,u}J(y,u)\quad\text{s.t.}\quad y_{t}=g(y)+u,\quad y(0)=y_{\text{init}} \tag{1}\]
over time \([0,T]\). Here, \(y\) represents a space- and time-dependent state variable with initial condition \(y_{\text{init}}\) evolving under the influence of a (potentially non-linear) function \(g\), while \(u\) is a control input with which \(y\) is steered. We want to choose \(u\) to minimize \(J\), which is either a _tracking_ or a _terminal-cost_ objective function
\[J(y,u)=\begin{cases}\text{Tracking:}&\frac{1}{2}\int_{0}^{T}\bigl{\|}y(t)-y_{\text{d}}(t)\bigr{\|}_{2}^{2}\,\text{d}t+\frac{\gamma}{2}\int_{0}^{T}\bigl{\|}u(t)\bigr{\|}_{2}^{2}\,\text{d}t\\ \text{Terminal cost:}&\frac{1}{2}\bigl{\|}y(T)-y_{\text{target}}\bigr{\|}_{2}^{2}+\frac{\gamma}{2}\int_{0}^{T}\bigl{\|}u(t)\bigr{\|}_{2}^{2}\,\text{d}t.\end{cases} \tag{2}\]
Tracking objectives aim to keep \(y\) as close as possible to a trajectory \(y_{\text{d}}\), while terminal cost only requires the final position to be close to some \(y_{\text{target}}\). The factor \(\gamma>0\) regularizes the control term and may also model the practical cost of control. We space-discretize this problem (using bold-faced vectors) and consider only linear functions \(\boldsymbol{g}(\boldsymbol{y})=-K\boldsymbol{y}\) for some \(K\in\mathbb{C}^{M\times M}\). A solution to (1) satisfies the boundary value problem (bvp)
\[\boldsymbol{y}^{\prime}(t)=-K\boldsymbol{y}(t)-\boldsymbol{\lambda}(t)/\gamma,\quad\boldsymbol{y}(0)=\boldsymbol{y_{\text{init}}} \tag{3a}\]
\[\begin{cases}\text{Tracking:}&\boldsymbol{\lambda}^{\prime}(t)=K^{*}\boldsymbol{\lambda}(t)+\boldsymbol{y}_{\text{d}}(t)-\boldsymbol{y}(t),\quad\boldsymbol{\lambda}(T)=\boldsymbol{0}\\ \text{Terminal cost:}&\boldsymbol{\lambda}^{\prime}(t)=K^{*}\boldsymbol{\lambda}(t),\quad\boldsymbol{\lambda}(T)=\boldsymbol{y}(T)-\boldsymbol{y_{\text{target}}}\end{cases} \tag{3b}\]
consisting of coupled equations in the state \(\boldsymbol{y}(t)\) and the _adjoint_ state \(\boldsymbol{\lambda}(t)\coloneqq-\gamma\boldsymbol{u}(t)\), with one initial and one terminal condition [10, 33, 17]. To solve (3), we consider _parallel-in-time_ methods for optimal control.
These are inspired by time-parallel initial-value problem (ivp) solvers, which overcome the inherently serial nature of time integration. Leveraging these techniques enables the construction of algorithms for optimal control which scale well in parallel when increasing the amount of work in the time dimension. Some parallel-in-time approaches for the optimal-control problem (1) use the _direct-adjoint_ optimization loop [14, 28], where all embedded ivp solves are tackled using time-parallel methods such as pfast[7] or the well-known Parareal algorithm [20]. Others use the system (3); an example is ParaOpt [10], inspired by Parareal. In this paper, we will expand on the time-parallel algorithm proposed for self-adjoint tracking problems in [33]. Belonging to the ParaDiag family [23, 12, 22, 11, 32], this algorithm constructs a discretized _all-at-once_ system of (3) and solves it iteratively, using a preconditioner that is invertible in parallel. This paper is organized as follows. We start from the method in [33], whose current preconditioner \(P\) is limited as we will see; generalizations to new situations are found in sections 2 and 3. Section 2 examines the tracking case, containing an updated _alpha-circulant_ preconditioner \(P(\alpha)\), analytic expressions for the preconditioned eigenvalues of ParaDiag and an extension of the method to non-self-adjoint problems. Section 3 introduces a novel ParaDiag method for terminal-cost objective functions, again featuring an analytic eigenvalue analysis. In section 4, we use these results as a theoretical basis to predict weak scalability of both ParaDiag methods for self-adjoint, dissipative equations. The numerical results in section 5 confirm this scalability for both self-adjoint and non-self-adjoint equations. In section 6, we conclude and propose further research directions. As a final note, we mention the very recent paper [19], which also constructs alpha-circulant preconditioners for self-adjoint tracking problems. Our approach is very different, offering a more direct generalization of [33]. The analysis presented here results in exact analytical eigenvalues (both for our method and for [33]) instead of a bound and our method straightforwardly generalizes to non-self-adjoint equations. ## 2 ParaDiag for tracking objectives This section considers the tracking objective in (2), for which a ParaDiag procedure (limited to problems with self-adjoint \(K=K^{*}\)) is described in [33]. We review this method in subsection 2.1. Subsequently, subsection 2.2 looks at the limiting case \(T\to 0\), in which ParaDiag is discovered to lack robustness. We counteract this with an improvement to the preconditioner, using novel analytic results in subsections 2.3 and 2.4 to prove its more favorable properties. Subsection 2.5 concludes by proposing a generalization to problems where \(K\neq K^{*}\). For ease of exposition, we use an implicit-Euler time discretization with time step \(\tau\) throughout this paper, although other discretizations can be treated similarly. ### Existing method The existing algorithm requires a self-adjoint \(K\) - that is, \(K=K^{*}\). 
The all-at-once system for (3) then reads [33] \[A\begin{bmatrix}\boldsymbol{y}\\ \boldsymbol{\lambda}\end{bmatrix}\coloneqq\begin{pmatrix}B&\frac{\tau I}{ \gamma}\\ -\tau I&B^{\top}\end{pmatrix}\otimes I+\tau\begin{bmatrix}I&\\ &I\end{pmatrix}\otimes K\end{pmatrix}\begin{bmatrix}\boldsymbol{y}\\ \boldsymbol{\lambda}\end{bmatrix}=\begin{bmatrix}\boldsymbol{b_{1}}\\ \boldsymbol{b_{2}}\end{bmatrix}, \tag{4}\] where \(I\) is used to denote identity matrices - each of appropriate size - and \[B=\begin{bmatrix}\begin{smallmatrix}1&\\ -1&1&\\ &\ddots&\ddots&\\ &&-1&1\end{smallmatrix}\end{bmatrix},\quad\boldsymbol{b_{1}}=\begin{bmatrix} \boldsymbol{y_{\mathrm{init}}}^{\top}&0&\ldots&0\end{bmatrix}^{\top},\quad \text{and}\quad\boldsymbol{b_{2}}=-\tau\boldsymbol{y_{\mathrm{d}}}. \tag{5}\] In (4), we have grouped the discretized unknowns \(\boldsymbol{y}=\begin{bmatrix}\boldsymbol{y}_{1}^{\top}&\boldsymbol{y}_{2}^{ \top}&\cdots&\boldsymbol{y}_{L-1}^{\top}\end{bmatrix}^{\top}\) and \(\boldsymbol{\lambda}=\begin{bmatrix}\boldsymbol{\lambda}_{1}^{\top}& \boldsymbol{\lambda}_{2}^{\top}&\cdots&\boldsymbol{\lambda}_{L-1}^{\top} \end{bmatrix}^{\top}\), where \(\boldsymbol{y}_{l}\) and \(\boldsymbol{\lambda}_{l}\) are approximations to \(\boldsymbol{y}(t=l\tau)\) and \(\boldsymbol{\lambda}(t=l\tau)\). The relation of \(\boldsymbol{y_{\mathrm{d}}}\) to \(\boldsymbol{y_{\mathrm{d}}}(t)\) is analogous. \(\boldsymbol{y}_{0}\) and \(\boldsymbol{\lambda}_{L}\) are known from (3), while \(\boldsymbol{y}_{L}\) and \(\boldsymbol{\lambda}_{0}\) are not required to compute any other quantities in this discretization - hence they are left out of (4). We introduce \(\widehat{L}\coloneqq L-1\) such that \(B\) is \(\widehat{L}\times\widehat{L}\). Next, a rescaling is applied1: \(\widehat{\boldsymbol{\lambda}}\coloneqq\boldsymbol{\lambda}/\sqrt{\gamma}\) and \(\widehat{\boldsymbol{b}}_{\boldsymbol{2}}\coloneqq\boldsymbol{b_{2}}/\sqrt{\gamma}\). Solving Footnote 1: In [33], \(\boldsymbol{y}\) and \(\boldsymbol{b_{1}}\) are rescaled, but this is completely equivalent. \[\widehat{A}\begin{bmatrix}\boldsymbol{y}\\ \widehat{\boldsymbol{\lambda}}\end{bmatrix}\coloneqq\left(\begin{bmatrix}B& \frac{\tau I}{\sqrt{\gamma}}\\ -\frac{\tau I}{\sqrt{\gamma}}&B^{\gamma}\end{bmatrix}\otimes I+\tau\begin{bmatrix} I&\\ &I\end{bmatrix}\otimes K\right)\begin{bmatrix}\boldsymbol{y}\\ \widehat{\boldsymbol{\lambda}}\end{bmatrix}=\begin{bmatrix}\boldsymbol{b_{1} }\\ \widehat{\boldsymbol{b}}_{\boldsymbol{2}}\end{bmatrix} \tag{2.3}\] for \(\boldsymbol{y}\) and \(\widehat{\boldsymbol{\lambda}}\) is done iteratively, with a solver such as gmres[25]. By replacing \(B\) with a circulant approximation \(C\), we construct the preconditioner \[P=\begin{bmatrix}C&\frac{\tau I}{\sqrt{\gamma}}\\ -\frac{\tau I}{\sqrt{\gamma}}&C^{\top}\end{bmatrix}\otimes I+\tau\begin{bmatrix} I&\\ &I\end{bmatrix}\otimes K\quad\text{with}\quad C=\begin{bmatrix}\begin{smallmatrix}1 &&-1\\ -1&1&\\ &\ddots&\ddots&\\ &\ddots&-1&1\end{smallmatrix}\end{bmatrix}. \tag{2.4}\] As shown later, \(P\) is invertible in parallel. Any circulant \(C\in\mathbb{R}^{\widehat{L}\times\widehat{L}}\) diagonalizes as \[C=\mathbb{F}^{*}D\mathbb{F}\quad\text{with}\quad D=\operatorname{diag}(\sqrt {\widehat{L}}\mathbb{F}\boldsymbol{c_{1}}), \tag{2.5}\] as can be found in [3]. Here, \(\mathbb{F}=\{\mathrm{e}^{2\pi\mathrm{i}jk/\widehat{L}}/\sqrt{\widehat{L}}\}_{ j,k=0}^{\widehat{L}-1}\) is the discrete Fourier matrix and \(\boldsymbol{c_{1}}\) is \(C\)'s first column. 
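The identity (2.5) is easy to verify numerically. The Matlab sketch below builds \(B\) and its circulant approximation \(C\) for a small, illustrative number of time steps and checks the factorization; the Fourier matrix is formed explicitly with the sign convention used here.

```matlab
% Numerical check of the diagonalization (2.5): C = F'*D*F with
% D = diag(sqrt(Lh)*F*c1).  Small size, for illustration only.
Lh = 8;                                      % number of unknown time levels, L-1
B  = eye(Lh) - diag(ones(Lh-1,1), -1);       % implicit-Euler difference matrix B
C  = B;  C(1,Lh) = -1;                       % circulant approximation: extra corner entry
j  = 0:Lh-1;
F  = exp(2i*pi*(j'*j)/Lh)/sqrt(Lh);          % discrete Fourier matrix as defined in the text
D  = diag(sqrt(Lh)*F*C(:,1));                % eigenvalues of C from its first column
disp(norm(C - F'*D*F))                       % should be at rounding-error level
```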
The authors in [33] first factorize \[P=\left(\begin{bmatrix}\mathbb{F}^{*}&\\ &\mathbb{F}^{*}\end{bmatrix}\otimes I\right)\left(\begin{bmatrix}D&\frac{ \tau I}{\sqrt{\gamma}}\\ -\frac{\tau I}{\sqrt{\gamma}}&D^{*}\end{bmatrix}\otimes I+\tau\begin{bmatrix} I&\\ &I\end{bmatrix}\otimes K\right)\left(\begin{bmatrix}\mathbb{F}&\\ &\mathbb{F}\end{bmatrix}\otimes I\right) \tag{2.6}\] and then argue for a diagonalization \(\begin{bmatrix}D&\tau I/\sqrt{\gamma}\\ -\tau I/\sqrt{\gamma}&D^{*}\end{bmatrix}=WHW^{-1}\) with \(W=\begin{bmatrix}I&S_{2}\\ S_{1}&I\end{bmatrix}\), where \(H\) and \(S_{\{1,2\}}\) are diagonal (see [33] for details). Defining \(V\coloneqq\begin{bmatrix}\mathbb{F}^{*}&\mathbb{F}^{*}\end{bmatrix}W\), \[P^{-1}=(V\otimes I)(H\otimes I+\tau I\otimes K)^{-1}(V^{-1}\otimes I). \tag{2.7}\] Algorithm 2.1 summarizes the procedure for solving (2.1), including how to multiply by \(P^{-1}\) in parallel. ``` 1:Vectors \(\boldsymbol{b_{1}}\) and \(\boldsymbol{b_{2}}\) defined by (2.2) 2:Self-adjoint matrix \(K\) characterising the problem by (1.3) 3:Matrices \(H\) and \(W\) following from the time discretization 4:The vectors \(\boldsymbol{y}\) and \(\boldsymbol{\lambda}\) that solve (2.1) 5:Rescale \(\widehat{\boldsymbol{b}}_{\boldsymbol{2}}=\boldsymbol{b_{2}}/\sqrt{\gamma}\). 6:Solve (2.3) for \(\boldsymbol{y}\) and \(\widehat{\boldsymbol{\lambda}}\) using an iterative method, with preconditioner \(P\) from (2.4). When asked to compute \(\begin{bmatrix}\boldsymbol{x}\\ \boldsymbol{z}\end{bmatrix}=P^{-1}\begin{bmatrix}\boldsymbol{v}\\ \boldsymbol{w}\end{bmatrix}\): 7:Calculate \(\boldsymbol{r_{1}}\coloneqq(\mathbb{F}\otimes I)\boldsymbol{v}\) and \(\boldsymbol{s_{1}}\coloneqq(\mathbb{F}\otimes I)\boldsymbol{w}\) using the (parallel) fft. 8:Calculate \(\boldsymbol{q_{2}}\coloneqq(W^{-1}\otimes I)\big{[}\begin{smallmatrix} \boldsymbol{r_{1}}\\ \boldsymbol{s_{1}}\end{smallmatrix}\big{]}\). 9:For \(l=\{1,\ldots,2\widehat{L}\}\), solve (in parallel) (2.8) \[\boldsymbol{q_{3,l}}\coloneqq(h_{l,l}I+\tau K)^{-1}\boldsymbol{q_{2,l}}\] and partition the variables as \(\begin{bmatrix}\boldsymbol{r_{3}}\\ \boldsymbol{s_{3}}\end{bmatrix}\coloneqq\boldsymbol{q_{3}}\). 10:Calculate \(\begin{bmatrix}\boldsymbol{r_{4}}\\ \boldsymbol{s_{4}}\end{bmatrix}\coloneqq(W\otimes I)\big{[}\begin{smallmatrix} \boldsymbol{r_{3}}\\ \boldsymbol{s_{3}}\end{smallmatrix}\big{]}\). 11:Calculate \(\boldsymbol{x}=(\mathbb{F}^{*}\otimes I)\boldsymbol{r_{4}}\) and \(\boldsymbol{z}=(\mathbb{F}^{*}\otimes I)\boldsymbol{s_{4}}\) using the (parallel) fft. 12:Rescale \(\boldsymbol{\lambda}=\sqrt{\gamma}\widehat{\boldsymbol{\lambda}}\). ``` **Algorithm 2.1** ParaDiag for solving the tracking problem (2.1), based on [33] ### The small-\(T\) limit and alpha-circulants When using iterative linear-system solvers, convergence speed often depends substantially on the distribution of the eigenvalues of the preconditioned matrix [29] - in our case, of \(P^{-1}\widehat{A}\) (while there are exceptions such as cgn, which relies on singular values instead, the rest of this paper will assume an iterative solver whose behavior is mainly determined by the eigenvalues). Specifically, eigenvalues that are clustered together and lie far enough from \(0\) are beneficial. While we stress that convergence is not exclusively determined by eigenvalues (for an extreme example, see [15]), they do play an important role, and studying them to make an educated guess about convergence is commonplace [24]. 
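As a concrete way of carrying out such a study, the following Matlab sketch assembles the rescaled system matrix (2.3) and the preconditioner (2.4) with Kronecker products for a small self-adjoint \(K\) and computes the spectrum of \(P^{-1}\widehat{A}\); the sizes and parameter values are illustrative placeholders chosen so that a dense eigenvalue computation stays cheap.

```matlab
% Spectrum of the preconditioned matrix P^{-1}*Ahat for a small test problem.
M = 8; Lh = 16;                        % illustrative sizes; Lh = L-1
T = 1; gamma = 1e-3; tau = T/(Lh+1);

K = (M^2)*(2*eye(M) - diag(ones(M-1,1),1) - diag(ones(M-1,1),-1));
K(1,1) = M^2; K(M,M) = M^2;            % 1D Laplacian with isolated (Neumann) boundary

B = eye(Lh) - diag(ones(Lh-1,1), -1);  % implicit-Euler difference matrix
C = B;  C(1,Lh) = -1;                  % circulant approximation

c = tau/sqrt(gamma);
Ahat = kron([B, c*eye(Lh); -c*eye(Lh), B'], eye(M)) + tau*kron(eye(2*Lh), K);
P    = kron([C, c*eye(Lh); -c*eye(Lh), C'], eye(M)) + tau*kron(eye(2*Lh), K);

theta = eig(P\Ahat);                   % preconditioned eigenvalues
fprintf('real parts in [%.3f, %.3f], imaginary parts in [%.3f, %.3f]\n', ...
        min(real(theta)), max(real(theta)), min(imag(theta)), max(imag(theta)));
```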
In particular, the authors of [33] performed an empirical eigenvalue study for the method in subsection 2.1 and also compared the gmres and BiCGStab [30] iterative solvers, with the former proving to be faster. To illustrate the importance of eigenvalues, we choose gmres and follow [33] in considering the discretized Laplacian on spatial domain \(\Omega=[0,1]\) with isolated boundary \[K=\frac{1}{\Delta x^{2}}\left[\begin{smallmatrix}1&-1&&\\ -1&2&\ddots&&\\ &\ddots&\ddots&\ddots&\\ &&\ddots&2&-1\\ &&&-1&1\end{smallmatrix}\right]\in\mathbb{R}^{M\times M}, \tag{9}\] where \(\Delta x=1/M\). We use \(M=16\), \(L=128\) and \(\gamma=10^{-5}\) as in [33] and set \(y_{\mathrm{d}}(x,t)=y_{\mathrm{init}}(x)=\exp(-100(x-0.5)^{2})\) - these do not impact the preconditioned eigenvalues, but may still influence the iteration count. We study this in two regimes. Figure (a)a uses [33]'s time horizon \(T=1\). The eigenvalues are clustered around unity and the gmres2 iteration count \(k_{g}\) is low. When reducing the time interval by setting \(T=10^{-4}\), however, Figure (b)b reveals large variations in the eigenvalues and an increased iteration count. All this happens even though this case is well-behaved: since the time steps are smaller, two subsequent discretization points barely differ. Footnote 2: We use a relative gmres tolerance of \(10^{-6}\), which is Matlab’s default, throughout this paper. Later, in subsection 2.3, we discover the origin of this behavior for different \(T\)s. We first propose an altered preconditioner \(P(\alpha)\) with a parameter \(\alpha\in\mathbb{C}\). Let \[P(\alpha)=\begin{bmatrix}C(\alpha)&\tau\frac{I}{\sqrt{\gamma}}\\ -\tau\frac{I}{\sqrt{\gamma}}&C^{*}(\alpha)\end{bmatrix}\otimes I+\tau\begin{bmatrix} I&\\ &I\end{bmatrix}\otimes K\quad\text{with}\quad C(\alpha)=\begin{bmatrix}1&- \alpha\\ -1&1&\\ &\ddots&\ddots&\\ &\ddots&-1&1\end{bmatrix} \tag{10}\] Figure 1: Eigenvalues \(\theta\) of \(P^{-1}\widehat{A}\) and iteration count \(k_{g}\) of ParaDiag for the example in subsection 2.2, using gmres with relative tolerance \(10^{-6}\). Figure (a)a mimics [33, Figure 6], but Figure (b)b discovers issues when \(T\) is small. where \(C(\alpha)\) is not circulant, but _alpha-circulant_ (circulant except that super-diagonal entries have been multiplied by some \(\alpha\neq 0\)). Alpha-circulants diagonalize as [3] \[C(\alpha)=VD(\alpha)V^{-1}\quad\text{with}\quad V=\Gamma_{\alpha}^{-1}\mathbb{F }^{*}\quad\text{and}\quad D(\alpha)=\text{diag}(\sqrt{\widehat{L}}\mathbb{F} \Gamma_{\alpha}\mathbf{c}_{1}(\alpha)), \tag{11}\] where \(\Gamma_{\alpha}=\text{diag}(1,\alpha^{1/\widehat{L}},\cdots,\alpha^{(\widehat {L}-1)/\widehat{L}})\). The idea of using alpha-circulants occurs in the ivp literature [12, 22], but while ivps can use \(\alpha\approx 0\) such that \(P(\alpha)\approx\widehat{A}\), our \(P(\alpha)\) comes with severe limitations. To successfully perform a factorization like (6), \(C(\alpha)\) and \(C^{*}(\alpha)\) must be simultaneously diagonalizable. This is only the case when \[\Gamma_{\alpha}^{-1}=\Gamma_{\alpha}^{*}\Leftrightarrow\lvert\alpha\rvert=1. \tag{12}\] Contrary to the ivp situation, under the constraint (12) it is far less clear that setting \(\alpha\neq 1\) is advantageous. We choose3\(\alpha=-1\) to reiterate our previous experiment. 
Figure 2 shows that \(P(-1)\) does not display the same defects for small \(T\) as did \(P=P(1)\): the case \(T=1\) looks identical, but for \(T=10^{-4}\) the eigenvalues cluster instead of dispersing and the gmres iteration count remains low. In the next subsection, we unravel this fact analytically.

Footnote 3: \((-1)\)-circulants have also been called _skew-circulant_ or _negacyclic_ matrices [6].

### Analytic eigenvalue expressions

We study the preconditioner \(P(\alpha)\) for the significant cases \(\alpha=\pm 1\). We start by rescaling the system (2.3) by multiplying by \(I\otimes(b_{0,0}I+\tau K)^{-1}\), where \(b_{0,0}\) is \(B\)'s top-left element (for implicit Euler, \(b_{0,0}=1\)). This yields a system with matrix \(\widehat{A}_{\mathrm{p}}\) in which the diagonal blocks are \(I\), the subdiagonal blocks of the state part and the superdiagonal blocks of the adjoint part are \(-\Phi\) with \(\Phi\coloneqq(I+\tau K)^{-1}\), and the coupling blocks are \(\pm\Psi\) with \(\Psi\coloneqq\frac{\tau}{\sqrt{\gamma}}(I+\tau K)^{-1}\); the preconditioner \(P(\alpha)\) is rescaled into \(P_{\mathrm{p}}(\alpha)\) in the same way, with additional corner blocks \(-\alpha\Phi\) coming from \(C(\alpha)\).
The matrices \(P_{\sigma}^{-1}(\alpha)\widehat{A}_{\sigma}\) can be rewritten as the identity plus a low-rank term: \[P_{\sigma}^{-1}(\alpha)\widehat{A}_{\sigma} =P_{\sigma}^{-1}(\alpha)(P_{\sigma}(\alpha)+(\widehat{A}_{\sigma }-P_{\sigma}(\alpha))) \tag{16}\] \[=I+P_{\sigma}^{-1}(\alpha)(\widehat{A}_{\sigma}-P_{\sigma}( \alpha))=:I+P_{\sigma}^{-1}(\alpha)R_{\sigma},\] from which it follows that the eigenvalues \(\theta\) of \(P_{\sigma}^{-1}(\alpha)\widehat{A}_{\sigma}\) are equal to one plus the eigenvalues \(\omega\) of \(P_{\sigma}^{-1}(\alpha)R_{\sigma}\). The latter are characterized by Theorem 2.1. **Theorem 2.1**: _Let \(\widehat{L}>3\), \(\alpha=\pm 1\) and \(\varphi,\psi\in\mathbb{R}\backslash\{0\}\). The \(2\widehat{L}\times 2\widehat{L}\) matrix_ \[M=\overbrace{\left[\begin{smallmatrix}1&&-\alpha\varphi&&\psi&&\\ -\varphi&1&&\psi&&\\ &\ddots&&&&\ddots&\\ &\ddots&&&&&\ddots&\\ -\psi&&&&&\ddots&\\ &&-\psi&&-\alpha\varphi&&1^{-\varphi}\end{smallmatrix}\right]^{-1}\overbrace{ \left[\begin{smallmatrix}\alpha\varphi&&\\ &\alpha\varphi&&\\ &\alpha\varphi&&\\ \hline&&\\ -\psi&&\\ &\ddots&&\\ &&-\psi&&\\ &&-\psi&&\\ &&-\alpha\varphi&&1^{-\varphi}\end{smallmatrix}\right]^{-1}}^{-1}\overbrace{ \left[\begin{smallmatrix}\alpha\varphi&&\\ &\alpha\varphi&&\\ &\alpha\varphi&&\\ \hline&&\\ &\alpha\varphi&&\end{smallmatrix}\right]}^{\alpha\varphi}} \tag{17}\] _has only two potentially non-zero eigenvalues \(\omega_{\{1,2\}}\). Specifically,_ \[\omega_{\{1,2\}}=\frac{1}{z_{2}-z_{1}}\bigg{(}\frac{z_{1}-\varphi\pm\psi \mathrm{i}}{1-\alpha z_{1}^{\widehat{L}}}-\frac{z_{2}-\varphi\pm\psi\mathrm{i} }{1-\alpha z_{2}^{\widehat{L}}}\bigg{)} \tag{18}\] _where_ \[z_{\{1,2\}}=\frac{1+\varphi^{2}+\psi^{2}\pm\sqrt{(1+\varphi^{2}+\psi^{2})^{2}- 4\varphi^{2}}}{2\varphi}. \tag{19}\] The proof of this theorem is given in Appendix A. **Corollary 2.2**: _Consider a tracking-type all-at-once system with implicit-Euler time discretization (3) where \(L>4\) and \(K\) is self-adjoint with eigenvalues \(\left\{\sigma_{m}\right\}_{m=1}^{M}\). When using ParaDiag with preconditioner \(P(\alpha=\pm 1)\) (see (10)), the eigenvalues of the preconditioned system matrix \(P(\alpha)^{-1}\widehat{A}\) are all either unity or equal to_ \[\theta_{m,\{1,2\}}=1+\omega_{m,\{1,2\}} \tag{20}\] _where \(\omega_{m,\{1,2\}}\) are given by the formula (18), having filled in \(\varphi=(1+\tau\sigma_{m})^{-1}\) and \(\psi=\frac{\tau}{\sqrt{\gamma}}(1+\tau\sigma_{m})^{-1}\). \({}_{\Box}\)_ **Corollary 2.3**: _Consider a tracking-type all-at-once system with implicit-Euler time discretization (3) where \(L>4\) and \(K\) is self-adjoint. Denote by \(\mathcal{D}_{0.5,+}\) the right half of a disk in the complex plane, centered at \(0.5\) and with radius \(0.5\). When using ParaDiag with preconditioner \(P(\alpha=-1)\) (see (10)), if \(0<\varphi<1\) for all \(\varphi\) (which occurs whenever \(K\) is positive definite), all eigenvalues of the preconditioned matrix \(P^{-1}(\alpha)\widehat{A}\) lie within \(\mathcal{D}_{0.5,+}\). \({}_{\Box}\)_ Proof: Lemma C.2(c) shows that the real parts of these eigenvalues are larger than \(0.5\), while Lemma C.2(d) proves that their distances from the point \(0.5\) are less than \(0.5\). Together, these bounds delineate the region \(\mathcal{D}_{0.5,+}\). \({}_{\blacksquare}\) ### Interpreting the eigenvalue results It is possible to visualize Corollary 2.2 to gain more insight into how the two preconditioners (that is, \(\alpha=1\) and \(\alpha=-1\)) perform, as well as how they compare. 
Recall from subsection 2.3 that every eigenvalue \(\sigma_{m}\) of \(K\) corresponds to two non-unity eigenvalues \(\theta_{m,\{1,2\}}\) of the preconditioned system matrix, which are complex conjugates of each other. Figure 3 plots the \(\theta\) with positive imaginary part for the cases \(\alpha=\pm 1\) in two distinct ways. This section considers dissipative problems, where \(K\) is positive definite and hence \(\sigma>0\). Figures 3a and 3b are based on the view that \(\widehat{\gamma}\) is typically known (one can set the regularization parameter \(\gamma\) and the time step \(\tau\)), while the eigenvalues \(\sigma\) (and thus \(\widehat{\sigma}=\tau\sigma\)) could lie anywhere. For each \(\widehat{\gamma}\) value, these figures mark the preconditioned eigenvalues for a whole range of \(\widehat{\sigma}>0\) options. We can see Corollary 2.3 in action: for \(\alpha=-1\) the eigenvalues lie inside \(\mathcal{D}_{0.5,+}\) while, for \(\alpha=1\), they lie outside it. Figures 3c and 3d add \(\widehat{\sigma}\) as a dimension to gain more insight into its influence. We stress that from these figures, little if anything can be said about how ParaDiag scales when increasing \(L\), the topic of section 4. Instead, we conclude that _for a fixed number of time steps_, ParaDiag can be expected to converge quickly unless * the equation in the absence of control evolves slowly _relatively to the size of the time interval_ (\(\widehat{\sigma}\approx 0\)); and * there is little control _relatively to the size of the time interval_ (\(\widehat{\gamma}\approx 0\)). If these conditions for potentially slow convergence are met, the difference between \(\alpha\) values becomes important. * When \(\alpha=1\), the smaller \(\widehat{\sigma}\) and \(\widehat{\gamma}\) become, the more \(\theta\) blows up. This will lead to preconditioned eigenvalues that lie far away from each other, resulting in slow convergence. * When \(\alpha=-1\), small values of \(\widehat{\sigma}\) and \(\widehat{\gamma}\) slightly pull the eigenvalues away from unity. However, they always stay relatively close due to Corollary 2.3. In addition, clustering may even become better for very small \(\widehat{\sigma}\) and \(\widehat{\gamma}\): Figure 3a shows that the worst clustering occurs at intermediate \(\widehat{\gamma}\)s. This analysis explains our observations in Figures 3 and 3. When \(T=1\), \(\widehat{\sigma}\) and \(\widehat{\gamma}\) are large enough that the eigenvalues lie close to the edge of \(\mathcal{D}_{0.5,+}\) for both \(\alpha=\pm 1\). When \(T=10^{-4}\), \(\widehat{\sigma}\) and \(\widehat{\gamma}\) are very small and \(\alpha=-1\) clusters, while \(\alpha=1\) disperses. Specifically for the gmres method, it is possible to harness Corollary 2.3 into an upper bound on the convergence of the iterative method. **Theorem 2.4**: _Consider the tracking-type all-at-once system with implicit-Euler time discretization (3) where \(L>4\) and \(K\) is self-adjoint. When using ParaDiag with gmres preconditioned by \(P(-1)\) (see (10)) and if \(0<\varphi<1\) for all \(\varphi\) (which occurs whenever \(K\) is positive definite), the following holds. For any \(0<\rho<2\), there exists a \(\kappa_{\rho}>0\) such that the residual \(\mathbf{r}^{k}\) at gmres iteration \(k\) satisfies_ \[\left\|\mathbf{r}^{k}\right\|_{2}/\left\|\mathbf{r}^{0}\right\|_{2}\leq\kappa(V)\kappa _{\rho}\rho^{-k}. \tag{21}\] _Here, \(\kappa(V)\) is the condition number of the eigenvector matrix \(V\) of \(P(-1)^{-1}\widehat{A}\). 
The gmres residual decreases exponentially with a mesh- and problem-independent factor._ From [29], we get the formula \[\left\|\mathbf{r}^{k}\right\|_{2}/\left\|\mathbf{r}^{0}\right\|_{2}\leq\kappa(V)\inf_{ p_{k}\in\mathbb{P}_{k}}\sup_{\sigma\in\Sigma}\bigl{|}p_{k}(\sigma)\bigr{|} \tag{22}\] which asks us to solve a polynomial-approximation problem: find a degree-\(k\) polynomial that takes the value \(1\) at the origin and, yet, is as small as possible on all the eigenvalues of the preconditioned matrix. However, if we want a generally applicable bound, we do not know these eigenvalues. Luckily, we have Corollary 2.3: if we can find a polynomial that is small on the _entire_ semi-disk \(\mathcal{D}_{0.5,+}\), then a fortiori, it must also be small on whatever eigenvalues a specific problem happens to generate. Figure 3: Non-unity eigenvalue \(\theta\) of \(P^{-1}(\alpha)\widehat{A}\) with \(\Im(\theta)\geq 0\) for \(L=1000\). Note the scale of the axes in the figures on the right, due to the effects of \(\alpha=1\). The color maps used throughout this text are designed by [5] to be color-vision–deficiency friendly. To eliminate the explicit condition \(p(0)=1\), we take the following steps. If we can find a degree-\(k\) polynomial \(\widehat{p}\) that satisfies \(\widehat{p}(0)=0\) and approximates \(1\) on \(\mathcal{D}_{0.5,+}\), it has the same error as \(p(z)=1-\widehat{p}(z)\). Since \(\widehat{p}(0)=0\), it must be that \(\widehat{p}(z)=zq(z)\), where \(q\) is of degree \(k-1\). If we find a polynomial \(q\) that approximates \(1/z\) on \(\mathcal{D}_{0.5,+}\), we have a function \(\widehat{p}(z)=zq(z)\) which approximates \(1\) with no higher error than that of \(q\) in approximating \(1/z\) (this follows easily from the fact that \(|z|\leq 1\)). In summary, (22)'s rightmost factor is bounded by the best degree-\((k-1)\) polynomial-approximation error to \(1/z\) on \(\mathcal{D}_{0.5,+}\), which is bounded by Lemma C.2. ### Generalizing past self-adjoint problems We now extend ParaDiag to the more general, non-self-adjoint setting where \(K\) and \(K^{*}\) are not necessarily equal, resulting in the optimality system (3). When self-adjointness is not assumed, (1) becomes \[A\begin{bmatrix}\boldsymbol{y}\\ \boldsymbol{\lambda}\end{bmatrix}\coloneqq\begin{pmatrix}\begin{bmatrix}B& \tau\frac{I}{\gamma}\\ -\tau I&B^{\top}\end{bmatrix}\otimes I+\tau\begin{bmatrix}I\otimes K\\ &I\otimes K^{*}\end{bmatrix}\end{pmatrix}\begin{bmatrix}\boldsymbol{y}\\ \boldsymbol{\lambda}\end{bmatrix}=\begin{bmatrix}\boldsymbol{b_{1}}\\ \boldsymbol{b_{2}}\end{bmatrix} \tag{23}\] and, after rescaling, \[\widehat{A}\begin{bmatrix}\boldsymbol{y}\\ \widehat{\boldsymbol{\lambda}}\end{bmatrix}\coloneqq\begin{pmatrix}\begin{bmatrix} B&\tau\frac{I}{\sqrt{\gamma}}\\ -\tau\frac{I}{\sqrt{\gamma}}&B^{\top}\end{bmatrix}\otimes I+\tau\begin{bmatrix} I\otimes K\\ &I\otimes K^{*}\end{bmatrix}\end{pmatrix}\begin{bmatrix}\boldsymbol{y}\\ \widehat{\boldsymbol{\lambda}}\end{bmatrix}=\begin{bmatrix}\boldsymbol{b_{1} }\\ \boldsymbol{b_{2}}\end{bmatrix}. \tag{24}\] By a similar procedure as before, we suggest an alpha-circulant preconditioner \[P(\alpha)=\begin{bmatrix}C(\alpha)&\frac{\tau I}{\sqrt{\gamma}}\\ -\frac{\tau I}{\sqrt{\gamma}}&C^{*}(\alpha)\end{bmatrix}\otimes I+\tau\begin{bmatrix} I\otimes K\\ &I\otimes K^{*}\end{bmatrix}. \tag{25}\] As \(K\neq K^{*}\) in this generalized case, the step from (6) to (7) cannot be reproduced with this preconditioner. Luckily, it does not need to be! 
The middle factor in (6) can already be inverted in parallel since quantities at different time indices have been decoupled. The further diagonalization in the self-adjoint case also decoupled the state and adjoint equations; skipping this step here only decreases the available parallelism by a factor of \(2\). Algorithm 2 incorporates the alpha-circulant improvement from subsection 2.2, as well as the above generalization. It can be compared to Algorithm 1. ## 3 ParaDiag for terminal-cost objectives Both the literature on ParaDiag and this paper have thus far focused on the tracking objective in (2). We next develop a ParaDiag-type preconditioner for problems with the terminal-cost objective function, without requiring self-adjointness. The method is designed in subsection 3.1, after which it is analyzed for self-adjoint problems in subsections 3.2 and 3.3. ### A new preconditioner The optimality system in the terminal-cost case can be discretized with time step \(\tau\) to form the all-at-once system \[A\begin{bmatrix}\boldsymbol{y}\\ \boldsymbol{\lambda}\end{bmatrix}\coloneqq\begin{pmatrix}\begin{bmatrix}B& \frac{\tau}{\gamma}I\\ -b_{0,0}E&B^{\top}\end{bmatrix}\otimes I+\tau\begin{bmatrix}I\otimes K\\ -E\otimes K^{*}&I\otimes K^{*}\end{bmatrix}\end{pmatrix}\begin{bmatrix} \boldsymbol{y}\\ \boldsymbol{\lambda}\end{bmatrix}=\boldsymbol{b} \tag{26}\] where \(E\) is a matrix with as only non-zero a one in the bottom right corner and \(b_{0,0}\) is \(B\)'s top-left element. Focusing once again on implicit Euler, we have that \[B=\begin{bmatrix}\begin{smallmatrix}\begin{smallmatrix}\begin{smallmatrix}1&1&1\\ -1&1&\\ &\ddots&\ddots&\\ &-1&1\end{smallmatrix}\end{smallmatrix}\end{bmatrix}\quad\text{and}\quad\boldsymbol{ b}=\begin{bmatrix}\boldsymbol{y_{\text{init}}}^{\top}&0&\ldots&0&-((I+\tau K^{*}) \boldsymbol{y_{\text{target}}})^{\top}\end{bmatrix}^{\top}. \tag{27}\] In contrast to the tracking situation, the discretization point at time \(t=T\) cannot be eliminated due to the more complex terminal condition in (3). Thus \(B\) is \(L\times L\). ParaDiag methods are fully reliant on the presence of good preconditioners, preferably with a mesh-independent convergence rate. Such a preconditioner must be invertible efficiently and in parallel. Leaving the bottom-left block of (13) out of the preconditioner makes this task significantly easier. Indeed, it allows replacing the \(B\) blocks by alpha-circulant \(C(\alpha)\) blocks to form the preconditioner \[P(\alpha)=\begin{bmatrix}C(\alpha)&\frac{\tau}{\gamma}I\\ &C^{*}(\alpha)\end{bmatrix}\otimes I+\tau\begin{bmatrix}I\otimes K\\ &I\otimes K^{*}\end{bmatrix}, \tag{14}\] which is block-triangular. Thus multiplication by \(P^{-1}(\alpha)\) is possible by first inverting the bottom-right block of (14) (which pertains to the adjoint variable \(\lambda\)) and only then solving a second system to find the state \(y\). Due to this procedure, \(C(\alpha)\) and \(C^{*}(\alpha)\) no longer need to be simultaneously diagonalizable, and \(|\alpha|\) can be smaller than 1, in contrast to the tracking method. Algorithm 1 spells out how to solve (13) using the ParaDiag method this subsection proposes. ``` 0: Vectors \(\mathbf{b_{1}}\) and \(\mathbf{b_{2}}\) defined by (2) 0: Arbitrary matrix \(K\) characterising the problem by (3) 0: Matrix \(D(\alpha)\) following from the time discretization by (11) (\(|\alpha|=1\)) 0: The vectors \(\mathbf{y}\) and \(\mathbf{\lambda}\) that solve (23) 1: Rescale \(\widehat{\mathbf{b}_{2}}=\mathbf{b}_{2}/\sqrt{\gamma}\). 
2: Solve (24) for \(\mathbf{y}\) and \(\widehat{\mathbf{\lambda}}\) using an iterative method, with preconditioner \(P(\alpha)\) from (25). When asked to compute \(\begin{bmatrix}\mathbf{x}\\ \mathbf{z}\end{bmatrix}=P^{-1}(\alpha)\begin{bmatrix}\mathbf{v}\\ \mathbf{w}\end{bmatrix}\): 3: Calculate \(\mathbf{r_{1}}\coloneqq(\mathbb{F}\Gamma_{\alpha}\otimes I)\mathbf{v}\), \(\mathbf{s_{1}}\coloneqq(\mathbb{F}\Gamma_{\alpha}\otimes I)\mathbf{w}\) using the (parallel)fft. 4: For \(l=\{1,\dots,\widehat{L}\}\), solve (in parallel) (26) \[\begin{bmatrix}\mathbf{r_{2,l}}\\ \mathbf{s_{2,l}}\end{bmatrix}\coloneqq\begin{bmatrix}d_{l,l}(\alpha)I+\tau K& \frac{\tau}{\sqrt{\gamma}}I\\ -\frac{\tau}{\sqrt{\gamma}}I&d_{l,l}^{*}(\alpha)I+\tau K^{*}\end{bmatrix}^{-1} \begin{bmatrix}\mathbf{r_{1,l}}\\ \mathbf{s_{1,l}}\end{bmatrix}.\] 5: Calculate \(\mathbf{x}=(\Gamma_{\alpha}^{-1}\mathbb{F}^{*}\otimes I)\mathbf{r_{2}}\), \(\mathbf{z}=(\Gamma_{\alpha}^{-1}\mathbb{F}^{*}\otimes I)\mathbf{s_{2}}\) using the (parallel)fft. 6: Rescale \(\mathbf{\lambda}=\sqrt{\gamma}\widehat{\mathbf{\lambda}}\). ``` **Algorithm 2** ParaDiag for solving the generalized tracking problem (23) ### Analytic eigenvalue expressions As was the case for tracking, we will formulate analytic eigenvalue results for the special case of a self-adjoint matrix \(K=K^{*}\). The preparatory steps from subsection 2.3 are straightforward to repeat: we perform the same rescaling, resulting in \[A_{\mathrm{p}}\coloneqq\begin{bmatrix}\begin{smallmatrix}I\\ -\Phi&I\\ &\ddots&\\ &&\ddots\\ &&\ddots&\\ &&-\dot{\Phi}&I\end{smallmatrix}&\begin{bmatrix}\Psi\\ &\ddots&\\ &&\Psi\\ \end{smallmatrix}&P_{\mathrm{p}}(\alpha)\coloneqq\begin{bmatrix}I\\ -\Phi\\ -\Phi&I\\ &\ddots&\\ &&\ddots\\ &&-\alpha\Phi\end{bmatrix}&\begin{bmatrix}\Psi\\ &\Psi\\ &\ddots&\\ &&\ddots&\\ &&-\alpha\Phi\end{bmatrix}&\begin{bmatrix}\Psi\\ &\Psi\\ &\ddots&\\ &&\Psi\\ \end{bmatrix}\\ \hline&&\begin{array}{ccc}I&-\Phi\\ &\ddots&\\ &&\ddots&\\ &&-\alpha\Phi\end{array}&\begin{bmatrix}-\Phi\\ &\Psi\\ &\Psi\\ \end{array}\\ \hline&&\begin{array}{ccc}I&-\Phi\\ &\ddots&\ddots\\ &&\ddots&\\ &&-\alpha\Phi\end{array}&\begin{bmatrix}-\Phi\\ &I\end{array}\end{bmatrix} \tag{17}\] where \(\alpha\in\mathbb{R}\) was assumed. This time, \(\Phi=(I+\tau K)^{-1}\) and \(\Psi=\frac{\tau}{\gamma}(I+\tau K)^{-1}\). 
We again perform a decomposition to the scalar case, such that the eigenvalues of \(P^{-1}(\alpha)A=P_{\mathrm{p}}^{-1}(\alpha)A_{\mathrm{p}}\) are the union of those of the scalar matrices \(P_{\sigma}^{-1}(\alpha)A_{\sigma}\), obtained by replacing \(\Phi\) and \(\Psi\) above by the scalars \(\varphi=(1+\tau\sigma)^{-1}\) and \(\psi=\frac{\tau}{\gamma}(1+\tau\sigma)^{-1}\) for each eigenvalue \(\sigma\) of \(K\). As in the tracking case, each such matrix equals the identity plus a low-rank term, so all of its eigenvalues are unity except for at most two; these are characterized by the following theorem.
Based on this figure, _for a fixed number of time steps_, poor convergence can be expected when * the equation in the absence of control evolves slowly _relatively to the size of the time interval_ (\(\widehat{\sigma}\approx 0\)); and * there is a lot of control _relatively to the size of the time interval_ (\(\widehat{\gamma}\gg 0\)). The first condition is the same as for the tracking preconditioner, but the second is different. Interestingly, the ParaOpt algorithm [10], which also treats terminal-cost objectives, struggles in the high-\(\widehat{\gamma}\) regime as well. Figure 1: The non-unity preconditioned eigenvalues \(\theta\) of \(P^{-1}(0)A\) with \(L=1000\) ## 4 Parallel-scaling analysis for self-adjoint problems An oft-used metric in the context of parallel algorithms is _weak scalability_ (for time-parallel methods, it was studied in e.g. [1, 4]). The aim is that a program's execution time stays constant when increasing the problem size (in our case the number of time steps \(L\), as we investigate _time_-parallelism) in tandem with the number of processors, keeping their ratio constant. For our optimal-control problem (1), we identify two regimes [10]. * If we increase the time horizon \(T\) together with \(L\), the time step \(\tau\) stays constant. In this regime, \(\widehat{\sigma}\) and \(\widehat{\gamma}\) (from (15) or (18), depending on the objective function) do not change. The amount of work is increased by an expanding time scope, not by using a more accurate discretization. * We can also keep \(T\) constant but instead increase the amount of time steps \(L\) by lowering \(\tau\). Then \(\widehat{\sigma}\) and \(\widehat{\gamma}\) increase with it. The amount of work is increased by using a more fine-grained mesh for the same problem. This section will use the analytic results from subsections 2.3 and 3.2 to perform a theoretical analysis of ParaDiag's weak scaling, which section 5 later verifies in practice. The approach is to assume the inversion of our preconditioners scales well in all regimes, as attested to by previous ParaDiag algorithms that use similar preconditioners [13, 33, 32]. Then, all that needs to be analyzed is the number of such inversions: if the iterative solver's iteration count stays constant when increasing the problem size, we have achieved good weak scalability. As a proxy for the actual iteration count, we will use the distribution of the preconditioned eigenvalues - if they converge when increasing time parallelism, we will assume for the iteration count to do the same. This section is limited to implicit Euler and self-adjoint, dissipative equations. We thus have \(\sigma>0\) and the obvious \(\gamma,T>0\). We aim to show that each of the eigenvalues \(\theta\) converges to some finite, non-zero value in the relevant scaling limit. ### Increasing the time horizon Increasing \(T\) while keeping \(\tau\) constant does not affect \(\widehat{\sigma}\) or \(\widehat{\gamma}\). As a result, the only change in (18) and (12) is that of the variable \(L\) (and, therefore, of \(\widehat{L}=L-1\)). _Tracking._ In (18), we have \(0<z_{2}<1<z_{1}\) (see Lemma C.1(a)). As a result, in the limit for large \(T\), \(z_{1}^{\widehat{L}}\to\infty\) and \(z_{2}^{\widehat{L}}\to 0\). That means that, for both \(\alpha=\pm 1\), the eigenvalues of the preconditioned matrix converge to \[\lim_{L\to\infty,T\to\infty,T=\tau L}\theta_{\{1,2\}}=1+\frac{1}{z_{2}-z_{1}}( -(z_{2}-\varphi\pm\psi\mathrm{i}))=\frac{z_{1}-\varphi\pm\psi\mathrm{i}}{z_{1 }-z_{2}}. 
\tag{4.1}\] This is finite and non-zero; as assumed in the intro to section 4, weak scalability can be expected. Figures 4.1a and 4.1b start from finite \(L=10^{3}\) and show that \(|\theta|\) does not increase significantly when scaling \(L\) to \(10^{4}\). It even decreases when \(\widehat{\sigma},\widehat{\gamma}\gtrsim 0\), where Figure 4.1b shows \(|\theta|\) is high to start with.
Figure 4.1: Ratio \(\left|\theta(L=10^{4})\right|/\left|\theta(L=10^{3})\right|\) of the preconditioned-eigenvalue magnitudes when scaling \(L\) from \(10^{3}\) to \(10^{4}\) through \(T\), for different preconditioners
_Terminal cost._ Something very similar occurs in (3.12). From \(0<\varphi<1\), it follows that \(\varphi^{2L}\to 0\) and the non-zero eigenvalue \(\theta_{1}\) approaches \[\lim_{L\to\infty,T\to\infty,T=\tau L,\alpha\to 0}\theta_{1}=\psi/(1-\varphi^{2}), \tag{4.2}\] which is finite and non-zero since \(\sigma,\gamma,T>0\). In the limit \(L\to\infty\), weak scalability is expected. Figure 4.1c confirms that \(\theta\) scales well even when \(L\) is finite, except for very low \(\widehat{\sigma}\) values, where the asymptotic region is not yet reached.
### Decreasing the time step
Keeping \(T\) constant and scaling \(\tau\) instead slightly complicates matters, as it changes not only \(L\) but also \(\widehat{\sigma}\) and \(\widehat{\gamma}\). _Tracking._ Using Matlab's symbolic toolbox allows us to solve the limit \[\lim_{L\to\infty,\tau\to 0,\tau L=T,\alpha=\pm 1}\theta_{\{1,2\}}=\frac{1}{2}+\frac{\tanh\bigl{(}\frac{T\sqrt{\gamma\sigma^{2}+1}}{2\sqrt{\gamma}}\bigr{)}^{-\alpha}(\sqrt{\gamma}\sigma\pm\mathrm{i})}{2\sqrt{\gamma\sigma^{2}+1}}. \tag{4.3}\] This is a finite expression for both \(\alpha=\pm 1\) (the denominator cannot reach zero) and is non-zero as well (the real part of the numerator is always positive). Figures 4.2a and 4.2b show that the eigenvalues stay almost constant when scaling a finite \(L\) from \(10^{3}\) to \(10^{4}\), which we assumed implies weak scalability. _Terminal cost._ This case is slightly simpler and can be computed by hand. \[\lim_{L\to\infty,\tau\to 0,\tau L=T,\alpha\to 0}\theta_{1}=1+(1-\exp(-2\sigma T))/(\gamma\sigma), \tag{4.4}\] which is again finite. It also cannot reach zero: the exponential has a negative argument (because \(\sigma>0\)), so both terms of the sum are positive. Figure 4.2c illustrates that the scaling translates well to finite \(L\) values, again implying weak scalability.
## 5 Numerical results
This section presents the results of numerical tests assessing the performance of the ParaDiag methods discussed or designed in this paper. Subsection 5.1 first discusses tracking ParaDiag (Algorithm 2.2), considering both \(\alpha=1\) and the novel \(\alpha=-1\) variant. Subsection 5.2 then moves on to the new terminal-cost method (Algorithm 3.1). To perform these tests, we have implemented our ParaDiag algorithms in a Matlab code we call pintopt. The implementation is sequential and not optimized for speed, but rather serves as a readable and well-documented reference implementation that can be used to study iteration counts. The code is publicly available4. Footnote 4: The pintopt Matlab package is located at [https://gitlab.kuleuven.be/numa/public/pintopt](https://gitlab.kuleuven.be/numa/public/pintopt).
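As a complement to the gmres experiments below, the eigenvalue-based scaling proxy of section 4 can also be evaluated directly for the terminal-cost case. The following sketch (in Python, separate from pintopt, with purely illustrative values for \(\sigma\), \(\gamma\) and the reference horizon) fills Corollary 3.2's \(\varphi\) and \(\psi\) into the small-\(\alpha\) limit (12)-(13) and tracks the non-unity eigenvalue \(\theta_{1}\) in both scaling regimes.

```python
# Terminal-cost, small-alpha proxy: theta_1 = 1 + psi*(1 - phi^(2L))/(1 - phi^2),
# with phi and psi filled in as in Corollary 3.2. Example parameters only.
def theta1(sigma, gamma, tau, L):
    phi = 1.0 / (1.0 + tau * sigma)
    psi = (tau / gamma) * phi
    return 1.0 + psi * (1.0 - phi**(2 * L)) / (1.0 - phi**2)

sigma, gamma, T_ref = 1.0, 0.05, 2.0
for L in (10**3, 10**4):
    grow_T = theta1(sigma, gamma, tau=T_ref / 10**3, L=L)      # tau fixed, T = tau*L grows
    shrink_tau = theta1(sigma, gamma, tau=T_ref / L, L=L)      # T = T_ref fixed, tau shrinks
    print(L, grow_T, shrink_tau)
# In both regimes theta_1 settles to a finite value as L grows, which is the
# behaviour that the eigenvalue-ratio figures in this section quantify.
```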
Figure 4.2: Ratio \(\bigl{|}\theta(L=10^{4})\bigr{|}\,/\bigl{|}\theta(L=10^{3})\bigr{|}\) of the preconditioned-eigenvalue magnitudes when scaling \(L\) from \(10^{3}\) to \(10^{4}\) through \(\tau\), for different preconditioners All results use gmres as the iterative solver and are displayed in tables detailing the iteration counts for different parameter configurations. In each table, the rows investigate weak scaling, while the columns vary a different parameter such as the end time \(T\) or the regularization parameter \(\gamma\). The tables at the left perform scaling of \(L\) by increasing \(T\) - those on the right by decreasing \(\tau\). As a base problem, we study a parabolic diffusion equation, which is self-adjoint and dissipative such that our theoretical results are directly applicable. The problem involves a heat equation in two dimensions on the spatial domain \(\Omega=[0,1]^{2}\). It reads \[\partial_{t}y=\Delta y+u \tag{10}\] with periodic boundary conditions and, in the case of tracking, a target trajectory \[y_{\rm d}(t,x)=\Big{(}\Big{(}12\pi^{2}+\frac{1}{12\pi^{2}\gamma}\Big{)}(t-T)- \Big{(}1+\frac{1}{(12\pi^{2})^{2}\gamma}\Big{)}\Big{)}\sin(2\pi x_{1})\sin(2 \pi x_{2}) \tag{11}\] or, in the case of terminal cost, a target state \[y_{\rm target}(x)=\sin(2\pi x_{1})\sin(2\pi x_{2}). \tag{12}\] This is the two-dimensional version of a problem studied in [14]. In contrast to that paper, we use a non-smooth initial condition \[y_{\rm init}(x)=\frac{1}{12\pi^{2}\gamma}(1-T){\rm sign}(\sin(2\pi x_{1}))\sin ^{2}(2\pi x_{2}), \tag{13}\] shown in Figure 1. The choice for a non-smooth \(y_{\rm init}\) is important, as a smooth initial condition leads to very fast convergence, as noticed in [13, 32]. We want to test our algorithms with a more challenging, non-smooth case. Next to the self-adjoint equation (10) covered fully by this paper's analysis, we also consider a non-self-adjoint advection-diffusion equation that our new algorithms can solve, but for which we do not have theoretical results. Extending the previous equation with an advection term, consider \[\partial_{t}y=d\Delta y-\partial_{x_{1}}y-\partial_{x_{2}}y+u \tag{14}\] where \(d\in\mathbb{R}\) controls the amount of diffusion and may vary. For this equation, we use the same \(y_{\rm d}\), \(y_{\rm target}\) and \(y_{\rm init}\), given in (11)-(13), as for the diffusion equation. Both (10) and (14) are discretized with \(M=32\times 32\) points in space and all spatial derivatives are discretized with central differences. Figure 1: Initial condition \(y_{\rm init}\) from (13) \begin{table} \end{table} Table 1: gmres iteration counts (\(\alpha=1/\alpha=-1\)) for tracking ParaDiag applied to the diffusion equation (10) or the advection-diffusion equation (10). The symbol \(\varnothing\) indicates a failure to converge within 25 iterations. When one \(\alpha\) value outperforms the other, it is bold-faced. By default, \(T_{\mathrm{ref}}=2\), \(\gamma=0.05\), and \(d=0.1\) when applicable. ### Terminal cost The same experiments were done for terminal-cost objectives in Table 2, although \(\alpha=10^{-4}\) was chosen here. The results are very promising. For the diffusion equation, iteration counts are low across the board, with no scenario surpassing 4 iterations. The only exception is that of a _very_ small \(\gamma\) (that is, a large \(\widehat{\gamma}\)), which was indeed theorized to work poorly in subsection 3.3. When adding advection, slightly more iterations are needed, but the increase stays reasonable. 
Again, the qualitative insights from the self-adjoint case carry over: a small regularization parameter \(\gamma\) can cause slow convergence. Small \(d\) values also seem to result in an increased iteration count. However, scaling is excellent in all scenarios, confirming the theoretical conclusions from section 4. \begin{table} \end{table} Table 2: gmres iteration counts for terminal-cost ParaDiag applied to the diffusion equation (10) or the advection-diffusion equation (11). The symbol \(\varnothing\) indicates a failure to converge within 25 iterations. By default, \(T_{\mathrm{ref}}=2\), \(\gamma=0.05\), and \(d=0.1\) when applicable. All results use \(\alpha=10^{-4}\). ## 6 Conclusions This paper has extended optimization ParaDiag in three ways. For the existing algorithm [33], aimed at tracking objectives, we proposed an alpha-circulant extension to improve the small-\(T\) regime and a generalization to non-self-adjoint problems. We also designed a new algorithm to treat terminal-cost objectives. In doing so, we greatly expanded the range of problems for which efficient ParaDiag algorithms are available. Secondly, we were able to formulate a precise expression for the preconditioned eigenvalues of all optimization ParaDiag methods, in the self-adjoint case. This significantly improves our understanding of these algorithms, for which very little theory was available before. We used this knowledge for two purposes. * For dissipative, self-adjoint equations with a tracking objective, and when using the new parameter \(\alpha=-1\) to construct a preconditioner, we were able to prove a guaranteed gmres convergence factor of \(1/2\). * In a theoretical parallel-scaling analysis, we conjectured good weak scalability of all ParaDiag variants in the limit for many time steps. This scalability was confirmed by numerical experiments that used gmres iteration counts as an indicator of performance. In addition, these tests suggested the theoretical conclusions carry over to the non-self-adjoint case, even though our analysis does not apply there. As a third contribution, our progress clears the way for exciting research in the future. With a more general method, into which some theoretical insight is available, potential next steps include non-linear ParaDiag algorithms, such as those already proposed for ivp ParaDiag [8, 22]. Other interesting avenues for future work include the study of different time-discretization methods - especially if they can be written as (13) or (6) since then, our theoretical results apply - or improvements to ParaDiag for hyperbolic equations comparable to those in this paper. Different techniques have also been applied to solve the smaller systems in the inversion procedure for ivp ParaDiag more efficiently [21, 16]. Adapting these methods to optimization ParaDiag could substantially improve performance. Lastly, a very interesting recent result [18] in the domain of ivp ParaDiag suggests using alpha-circulant approximations, but not as preconditioners. Instead, it is noted that in the ivp situation, the exact system matrix is \(P(\alpha)\) with \(\alpha=0\) and its inversion is seen as an interpolation problem, with as data points several inversions with \(\alpha_{j}\neq 0\). Our terminal-cost preconditioner (3) is not suitable for this, as it has \(P(0)\neq A\). However, for the tracking preconditioner (10), \(P(0)=\widehat{A}\) does hold. 
As this text makes \(\alpha\neq 1\) feasible for tracking, we can use different \(\alpha_{j}\) with magnitude \(1\) as data points and [18]'s technique could now apply to tracking-type optimal-control ParaDiag. ## Appendix A Proof of Theorem 2.1 We denote by \(C(\alpha)\) the top-left block of the matrix inverted in (2.17). We start by switching the top and bottom halves of the rows of both \(R\) and the inverted matrix - which does not change \(M\) - and applying the block matrix inversion property from [2, page 44], giving \[M =\begin{bmatrix}-\psi I&C^{\top}(\alpha)\\ C(\alpha)&\psi I\end{bmatrix}^{-1}\left[\begin{array}{c|c}&\alpha\varphi\\ \hline\alpha\varphi&\end{array}\right]\] \[=\begin{bmatrix}\left(-\psi I-\frac{1}{\psi}C^{\top}(\alpha)C( \alpha)\right)^{-1}\\ \left(\psi I+\frac{1}{\psi}C(\alpha)C^{\top}(\alpha)\right)^{-1}\end{bmatrix} \begin{bmatrix}I&-\frac{1}{\psi}C^{\top}(\alpha)\\ \frac{1}{\psi}C(\alpha)&I\end{bmatrix}\left[\begin{array}{c|c}&\alpha\varphi \\ \hline\alpha\varphi&\end{array}\right]\] \[=\begin{bmatrix}\overbrace{\left(-\psi I-\frac{1}{\psi}C^{\top}( \alpha)C(\alpha)\right)^{-1}}^{=-H}\\ \underbrace{\left(\psi I+\frac{1}{\psi}C(\alpha)C^{\top}(\alpha) \right)^{-1}}_{=:H}\end{bmatrix}\left[\begin{array}{c|c}&-\frac{\alpha \varphi}{\psi}\\ \hline\frac{\varphi^{2}}{\psi}\alpha^{2}&\alpha\varphi&-\frac{\varphi^{2}}{ \psi}\alpha^{2}\\ &\frac{\alpha\varphi}{\psi}\end{array}\right],\] where we know \(\alpha^{2}=1\). The fact that \(M\) has only two potentially non-zero eigenvalues is clear: the second matrix in the product above has rank 2, such that the result of a multiplication by it cannot have any higher rank. To find out more about these non-zero eigenvalues, first observe that \[C^{\top}(\alpha)C(\alpha)=C(\alpha)C^{\top}(\alpha),\] (A.1) as can be easily checked. This justifies the use of the variable \(H\) for both blocks above. We will try to calculate \(H\) later, but in the spirit of not doing excess work, let us first see which parts of \(H\) we need at all. \[M=\begin{bmatrix}-H\\ &H\end{bmatrix}\left[\begin{array}{c|c}-\frac{\alpha\varphi}{\psi}\\ \hline\frac{\varphi^{2}}{\psi}\\ \hline\alpha\varphi&-\frac{\varphi^{2}}{\psi}\\ &\frac{\alpha\varphi}{\psi}\end{array}\right]=:\left[\begin{array}{c|c}a_{1} &b_{1}\\ \vdots&\vdots\\ a_{\widetilde{L}}&b_{\widetilde{L}}\\ \hline a_{\widetilde{L}+1}&b_{\widetilde{L}+1}\\ \vdots&\vdots\\ a_{2\widetilde{L}}&b_{2\widetilde{L}}\end{array}\right],\] (A.2) which means that \(M\)'s non-zero eigenvalues are the same as those of its middle block \[M_{\text{red}}=\begin{bmatrix}a_{\widetilde{L}}&b_{\widetilde{L}}\\ a_{\widetilde{L}+1}&b_{\widetilde{L}+1}\end{bmatrix}=\begin{bmatrix}\frac{ \alpha\varphi}{\psi}h_{\text{end},0}-\frac{\varphi^{2}}{\psi}h_{\text{end}, \text{end}}-\alpha\varphi h_{\text{end},\text{end}}\] (A.3) Thus, it suffices to find the corner values of \[H =(\psi I+\frac{1}{\psi}C(\alpha)C^{\top}(\alpha))^{-1}=\psi(\underbrace {\psi^{2}I+C(\alpha)C^{\top}(\alpha)}_{=:G})^{-1} \tag{10}\] \[=\psi\begin{bmatrix}1+\varphi^{2}+\psi^{2}&-\varphi&-\alpha\varphi \\ -\varphi&1+\varphi^{2}+\psi^{2}&\ddots&\\ &\ddots&\ddots&-\varphi\\ -\alpha\varphi&&-\varphi&1+\varphi^{2}+\psi^{2}\end{bmatrix}^{-1}.\] The matrix \(G\) being inverted is \(\alpha\)-circulant and symmetric - qualities that are maintained by the inversion. Then \(h_{0,0}=h_{\text{end,end}}\) and \(h_{0,\text{end}}=h_{\text{end,0}}\). 
Hence \[a_{\widehat{L}}=b_{\widehat{L}+1}=-\frac{\varphi^{2}}{\psi}h_{0,0}+\frac{ \alpha\varphi}{\psi}h_{0,\text{end}}\quad\text{and}\quad a_{\widehat{L}+1}=- b_{\widehat{L}}=\alpha\varphi h_{0,0}. \tag{11}\] This means that \[\operatorname{eig}(M_{\text{red}})=a_{\widehat{L}}\pm b_{\widehat{L}}\mathrm{ i}. \tag{12}\] All this assumes we have inverted \(G\). Alpha-circulant matrices can be inverted using their spectral decomposition (11). Up until now, we used this as a computational tool; extracting useful analytical expressions is not a trivial feat. The diagonalization reads \(G=\Gamma_{\alpha}^{-1}\mathbb{F}^{*}D\mathbb{F}\Gamma_{\alpha}\Leftrightarrow G ^{-1}=\Gamma_{\alpha}^{-1}\mathbb{F}^{*}D^{-1}\mathbb{F}\Gamma_{\alpha}\). Here, \[D =\operatorname{diag}(\sqrt{\widehat{L}}\mathbb{F}\Gamma_{\alpha }\boldsymbol{g_{1}}) \tag{13}\] \[=\operatorname{diag}\{1+\varphi^{2}+\psi^{2}-\operatorname{e}^{( j/\widehat{L})2\pi\mathrm{i}}\alpha^{1/\widehat{L}}\varphi-\operatorname{e}^{-(j/ \widehat{L})2\pi\mathrm{i}}\alpha^{(\widehat{L}-1)/\widehat{L}}\alpha\varphi \}_{j=0}^{\widehat{L}-1}\] \[=\operatorname{diag}\left\{d(\beta_{j}(\alpha,\widehat{L})) \right\}_{j=0}^{\widehat{L}-1}\] where \(\boldsymbol{g_{1}}\) denotes \(G\)'s first column and where we defined \[d(\beta)=1+\varphi^{2}+\psi^{2}-2\varphi\cos\beta\quad\text{and}\quad\beta_{ j}(\alpha,\widehat{L})=\begin{cases}2j\pi/\widehat{L}&\text{if }\alpha=1\\ 2(j+\frac{1}{2})\pi/\widehat{L}&\text{if }\alpha=-1.\end{cases} \tag{14}\] Then from \(H=\psi G^{-1}\) follows, noting the definitions of \(\Gamma_{\alpha}\) and \(\mathbb{F}\), \[h_{0,0}(\alpha,\widehat{L})=h_{\text{end,end}}(\alpha,\widehat{L}) =\frac{\psi}{\widehat{L}}\sum_{j=0}^{\widehat{L}-1}d(\beta_{j}( \alpha,\widehat{L}))^{-1}, \tag{15a}\] \[h_{0,\text{end}}(\alpha,\widehat{L})=h_{\text{end,0}}(\alpha, \widehat{L}) =\frac{\psi}{\widehat{L}}\sum_{j=0}^{\widehat{L}-1}\alpha^{( \widehat{L}-1)/\widehat{L}}\mathrm{e}^{-(j/\widehat{L})2\pi\mathrm{i}}d( \beta_{j}(\alpha,\widehat{L}))^{-1}\] (15b) \[=\alpha\frac{\psi}{\widehat{L}}\sum_{j=0}^{\widehat{L}-1} \mathrm{e}^{-\beta_{j}(\alpha,\widehat{L})\mathrm{i}}d(\beta_{j}(\alpha, \widehat{L}))^{-1}.\] These analytic expressions are not insightful. Luckily, we have yet another avenue to find \(h_{0,0}\) and \(h_{0,\text{end}}\): [27, Theorem 1(a)] offers explicit formulas for inverting certain three-element circulant matrices. For \(\alpha=1\), those formulas mean that \[h_{0,0}(1,\widehat{L})=h_{\text{end},\text{end}}(1,\widehat{L}) =\psi\frac{z_{1}z_{2}}{\varphi(z_{2}-z_{1})}\biggl{(}\frac{1}{1-z _{1}^{\widehat{L}}}-\frac{1}{1-z_{2}^{\widehat{L}}}\biggr{)}, \tag{10a}\] \[h_{0,\text{end}}(1,\widehat{L})=h_{\text{end},0}(1,\widehat{L}) =\psi\frac{z_{1}z_{2}}{\varphi(z_{2}-z_{1})}\biggl{(}\frac{z_{1}}{1-z_{1}^{ \widehat{L}}}-\frac{z_{2}}{1-z_{2}^{\widehat{L}}}\biggr{)} \tag{10b}\] with \(z_{\{1,2\}}=(1+\varphi^{2}+\psi^{2}\pm\sqrt{(1+\varphi^{2}+\psi^{2})^{2}-4 \varphi^{2}})/(2\varphi)\). However, [27] tells us nothing about the case \(\alpha=-1\). Now, we can utilize the expressions (10). Figure 1 shows the spacing of the \(\beta_{j}\)s when \(\widehat{L}=3\) for \(\alpha=1\) (red) and \(\alpha=-1\) (blue). Combined with (10a), it is clear that \(h_{0,0}(1,2\widehat{L})\) sums over the same \(\beta\)s as \(h_{0,0}(1,\widehat{L})\) and \(h_{0,0}(-1,\widehat{L})\) combined. 
After correcting for the scaling by \(\widehat{L}\) in (10a), we get \[h_{0,0}(-1,\widehat{L})=2h_{0,0}(1,2\widehat{L})-h_{0,0}(1,\widehat{L})=\psi \frac{z_{1}z_{2}}{\varphi(z_{2}-z_{1})}\left(\frac{1}{1+z_{1}^{\widehat{L}}}- \frac{1}{1+z_{2}^{\widehat{L}}}\right). \tag{11}\] A similar technique can be used for \(h_{0,\text{end}}\), yielding \[h_{0,\text{end}}(-1,\widehat{L}) =h_{0,\text{end}}(1,\widehat{L})-2h_{0,\text{end}}(1,2\widehat{L}) \tag{12}\] \[=-\psi\frac{z_{1}z_{2}}{\varphi(z_{2}-z_{1})}\left(\frac{z_{1}}{1 +z_{1}^{\widehat{L}}}-\frac{z_{2}}{1+z_{2}^{\widehat{L}}}\right).\] As a last step, we have that \(z_{1}z_{2}=1\). To see this, note that \[z_{1}z_{2} =\frac{1}{4\varphi^{2}}\left((1+\varphi^{2}+\psi^{2})^{2}-(\sqrt {(1+\varphi^{2}+\psi^{2})^{2}-4\varphi^{2}})^{2}\right) \tag{13}\] \[=\frac{1}{4\varphi^{2}}\left((1+\varphi^{2}+\psi^{2})^{2}-(1+ \varphi^{2}+\psi^{2})^{2}+4\varphi^{2}\right)=1.\] Eliminating the square root is allowed due to its contents always being non-negative. Indeed, \((1+\varphi^{2}+\psi^{2})^{2}-4\varphi^{2}\) reaches a minimum for \(\psi=0\), where we get \(1+2\varphi^{2}+\varphi^{4}-4\varphi^{2}=(1-\varphi^{2})^{2}\), which cannot be negative. Filling the expressions for the \(h\)s into (11) and (12) proves the theorem. ## Appendix B Proof of Theorem 3.1 We use the notation \(C(\alpha)\) for the top-left block of the matrix inverted in (3.10). We start the proof similarly to Appendix A. _Finding \(M_{\mathrm{red}}\)._ The inverse of the block-triangular matrix can be rewritten as \[M=\left[\begin{matrix}\overset{=:H}{C^{-1}(\alpha)}&-\psi\overbrace{C^{-1}( \alpha)C^{-\top}(\alpha)}^{\Xi:G}\\ &\underbrace{C^{-\top}(\alpha)}_{=H^{\top}}\end{matrix}\right]\left[ \begin{matrix}\alpha\varphi\\ \hline\hline-1\end{matrix}\right]\left[\begin{matrix}\alpha\varphi\\ \hline\hline\alpha\varphi\end{matrix}\right]\rightleftharpoons:\left[\begin{matrix} \begin{matrix}a_{1}&b_{1}\\ \vdots\\ \frac{a_{L}}{a_{L+1}}&\frac{b_{L}}{b_{L+1}}\\ \vdots&\vdots\\ a_{2L}&b_{2L}\end{matrix}\end{matrix}\right]\] (B.1) such that the potentially non-zero eigenvalues of \(M\) are the same as those of \[M_{\mathrm{red}}=\begin{bmatrix}a_{L}&b_{L}\\ a_{L+1}&b_{L+1}\end{bmatrix}=\begin{bmatrix}\alpha\varphi h_{\mathrm{end},0}+ \psi g_{\mathrm{end},\mathrm{end},\mathrm{end}\quad-\alpha\varphi\psi g_{ \mathrm{end},\mathrm{end},\mathrm{end}\quad.\] (B.2) _Reducing the unknowns to \(H\)._ It seems that we need the values \(h_{\mathrm{end},0}\) and \(g_{\mathrm{end},\mathrm{end}}\) to make further progress. First, let us consider \[G=C^{-1}(\alpha)C^{-\top}(\alpha)=(C^{\top}(\alpha)C(\alpha))^{-1}.\] (B.3) To solve a similar problem in Theorem 2.1's proof, we noted that the matrix being inverted was alpha-circulant and acted on that knowledge. However, if \(|\alpha|\neq 1\), this is not the case anymore (as can easily be checked), so another method needs to be found. We can first express \(g_{\mathrm{end},\mathrm{end}}\) in terms of \(H\) as \[g_{\mathrm{end},\mathrm{end}}=\boldsymbol{H}_{\mathbf{end},\mathbf{i}}( \boldsymbol{H}^{\top})_{\mathbf{i},\mathbf{end}}=\left\lVert\boldsymbol{H}_ {\mathbf{end},\mathbf{i}}\right\rVert_{2}^{2}=\sum\nolimits_{j=0}^{L-1}h_{ \mathrm{end},j}^{2}.\] (B.4) _Inverting \(C(\alpha)\)._ Let us now work on the problem of finding \(H=C^{-1}(\alpha)\). If \(C(\alpha)\) were fully circulant, [27, Theorem 1(d)] would offer a relatively simple analytical expression for its inverse; unfortunately, it is alpha-circulant. 
Even the technique to invert \((-1)\)-circulant matrices from Theorem 2.1's proof does not suffice here. Luckily, we have yet another trick up our sleeves. Again, the key is inverting the diagonalization in (2.11). Consider doing so for \(C(\alpha)\), as well as for an _actually_ circulant matrix \(\widehat{C}\) - defined later - giving \[C^{-1}(\alpha) =\Gamma_{\alpha}^{-1}\mathbb{F}^{*}\operatorname{diag}(\sqrt{L} \mathbb{F}\Gamma_{\alpha}\boldsymbol{c_{1}}(\alpha))^{-1}\mathbb{F}\Gamma_{ \alpha},\] (B.5a) \[\widehat{C}^{-1} =\mathbb{F}^{*}\operatorname{diag}(\sqrt{L}\mathbb{F}\widehat{ \boldsymbol{c_{1}}})^{-1}\mathbb{F}.\] (B.5b) If we now require \(\widehat{\boldsymbol{c_{1}}}=\Gamma_{\alpha}\boldsymbol{c_{1}}(\alpha)\), this fully defines \(\widehat{C}\). But then (B.5) means that \(H=C^{-1}(\alpha)=\Gamma_{\alpha}^{-1}\widehat{C}^{-1}\Gamma_{\alpha}\), which allows computing \(H\). By [27, Theorem 1(d)], \[h_{\mathrm{end},j}=\alpha^{(j-(L-1))/L}\alpha^{(L-j-1)/L}\frac{\varphi^{L-j-1 }}{1-\alpha\varphi^{L}}=\frac{\varphi^{L-j-1}}{1-\alpha\varphi^{L}}\] (B.6) from which immediately follow \(h_{\mathrm{end},0}=\frac{\varphi^{L-1}}{1-\alpha\varphi^{L}}\) and \[g_{\mathrm{end},\mathrm{end}} =\sum_{j=0}^{L-1}h_{\mathrm{end},j}^{2}=\sum_{j=0}^{L-1}\left( \frac{\varphi^{j}}{1-\alpha\varphi^{L}}\right)^{2}=\frac{1}{(1-\alpha\varphi ^{L})^{2}}\sum_{j=0}^{L-1}(\varphi^{2})^{j}\] \[=\frac{1}{(1-\alpha\varphi^{L})^{2}}\frac{1-\varphi^{2L}}{1- \varphi^{2}}.\] Filling these into (B.2) gives (3.11), while the limits (3.12) are then trivial as the entire second column of \(M_{\mathrm{red}}\) goes to zero when \(\alpha\to 0\). ## Appendix C Proofs of auxiliary lemmas **Lemma C.1** (Some properties of Theorem 2.1's \(z_{1}\) and \(z_{2}\)).: _Defining_ \[z_{\{1,2\}}=(1+\varphi^{2}+\psi^{2}\pm\sqrt{(1+\varphi^{2}+\psi^{2})^{2}-4 \varphi^{2}})/(2\varphi),\] (C.1) _the following properties hold._ 1. _If_ \(0<\varphi<1\)_, both_ \(z_{1}\) _and_ \(z_{2}\) _are real-valued and it holds that_ \(0<z_{2}<1<z_{1}\)_._ 2. _If_ \(0<\varphi<1\)_, it holds that_ \(z_{2}\leq\varphi\)_._ Proof.: We prove these claims one by one. 1. The quantity in (C.1)'s square root reads (C.2) \[(1+\varphi^{2}+\psi^{2})^{2}-4\varphi^{2}=(1+\varphi^{2}+\psi^{2}+2\varphi) \underbrace{(1+\varphi^{2}+\psi^{2}-2\varphi)}_{=(1-\varphi)^{2}+\psi^{2}},\] which is positive, such that the \(z\)s are real numbers. They are also positive, as follows from \(4\varphi^{2}>0\) and \(\varphi>0\). Furthermore, \(z_{1}>z_{2}\). Since their product is \(1\) (see Appendix A), \(z_{1}\) must be larger than \(1\) while \(z_{2}\) is smaller. 2. We write \[z_{2} \leq\varphi\] \[\Leftrightarrow 1+\varphi^{2}+\psi^{2}-\sqrt{(1+\varphi^{2}+\psi^{2})^{2}-4 \varphi^{2}} \leq 2\varphi^{2}\] \[\Leftrightarrow 1-\varphi^{2}+\psi^{2} \leq\sqrt{(1+\varphi^{2}+\psi^{2})^{2}-4\varphi^{2}}\] \[\Leftrightarrow (1-\varphi^{2}+\psi^{2})^{2} \leq(1+\varphi^{2}+\psi^{2})^{2}-4\varphi^{2}\] \[\Leftrightarrow 4\varphi^{2} \leq 4\varphi^{2}(1+\psi^{2}),\] which is clearly true. **Lemma C.2** (Some properties of Theorem 2.1's \(\omega_{1}\) and \(\omega_{2}\)).: _Define_ \[\omega_{\{1,2\}}=\frac{1}{z_{2}-z_{1}}\bigg{(}\frac{z_{1}-\varphi\pm\psi{\rm i }}{1+z_{1}^{\widehat{L}}}-\frac{z_{2}-\varphi\pm\psi{\rm i}}{1+z_{2}^{\widehat {L}}}\bigg{)}\] (C.3) _with \(z_{1}\) and \(z_{2}\) as in (C.1), where \(\widehat{L}\geq 1\) and \(0<\varphi<1\). 
Then denote by \(\Re(\omega)=\Re(\omega_{1})=\Re(\omega_{2})\) the real part characterising the \(\omega s\) and by \(\Im(\omega)=\Im(\omega_{1})=-\Im(\omega_{2})\) the imaginary part._ 1. _It holds that_ \(\Re(\omega)<0\) _increases monotonically with increasing_ \(\widehat{L}\)_._ 2. _It holds that_ \(\Im(\Omega)>0\) _increases monotonically with increasing_ \(\widehat{L}\)_._ 3. _Following (a), it holds that_ \(-\frac{1}{2}<\Re(\omega_{\{1,2\}})\)_._ 4. _Following (a) and (b), it holds that_ \(\left|\frac{1}{2}+\omega_{\{1,2\}}\right|<\frac{1}{2}\)_._ Proof.: Once again, the claims are addressed one by one. 1. We rewrite \(\Re(\omega)=-\frac{1}{z_{1}-z_{2}}\big{(}\frac{z_{1}-\varphi}{1+z_{1}^{ \widehat{L}}}+\frac{\varphi-z_{2}}{1+z_{2}^{\widehat{L}}}\big{)}\) where, due to Lemma C.1(a) and Lemma C.1(b), all numerators and denominators are positive. Appendix A showed \(z_{1}z_{2}=1\Leftrightarrow z_{2}=1/z_{1}\) - filling this in, we get (C.4) \[\frac{{\rm d}}{{\rm d}\widehat{L}}\Re(\omega)=\frac{1}{z_{1}-1/z_{1}}\frac{z_{1 }^{\widehat{L}-1}(-2\varphi z_{1}+z_{1}^{2}+1)\log z_{1}}{(z_{1}^{\widehat{L} }+1)^{2}}.\] This is always positive (recall that \(z_{1}>1>\varphi\)), such that the claim holds. 2. A similar technique works for \(\Im(\omega)=\frac{\psi}{z_{1}-1/z_{1}}\big{(}\frac{1}{1+1/z_{1}^{L}}-\frac{1}{1+z _{1}^{L}}\big{)}\). We find \[\frac{\mathrm{d}}{\mathrm{d}\widehat{L}}\Im(\omega)=\frac{\psi}{z_{1}-1/z_{1}} \frac{2z_{1}^{\widehat{L}}\log z_{1}}{(z_{1}^{\widehat{L}}+1)^{2}},\] which is a positive quantity, confirming the claim. 3. From Lemma C.2(a), it follows that \[\Re(\omega) \geq-\frac{1}{z_{1}-z_{2}}\bigg{(}\frac{z_{1}-\varphi}{1+z_{1}}+ \frac{\varphi-z_{2}}{1+z_{2}}\bigg{)}=\frac{(z_{1}-\varphi)(1+z_{2})+(\varphi- z_{2})(1+z_{1})}{(z_{2}-z_{1})(1+z_{1})(1+z_{2})}\] \[=-\frac{z_{1}-z_{2}+\varphi(z_{1}-z_{2})}{(z_{1}-z_{2})(2+z_{1}+z _{2})}=-\frac{1+\varphi}{2+z_{1}+z_{2}}\] \[=-\frac{1+\varphi}{2+(1+\varphi^{2}+\psi^{2})/\varphi}=-\frac{ \varphi(\varphi+1)}{(\varphi+1)^{2}+\psi^{2}}.\] Thus \[-\frac{1}{2}<\Re(\omega)\Leftarrow\frac{\varphi(\varphi+1)}{(\varphi+1)^{2} +\psi^{2}}<\frac{1}{2}\Leftrightarrow\varphi^{2}+\varphi<\frac{\varphi^{2}}{2 }+\varphi+\frac{1}{2}+\frac{\psi^{2}}{2},\] the latter of which is true from the condition \(0<\varphi<1\). 4. Since \(-1/2<\Re(\omega)<0\), it holds that \(\left|1/2+\omega_{\{1,2\}}\right|^{2}\) is bounded above by the squares of \(\Re(1/2+\omega_{\{1,2\}})\) maximized over \(\widehat{L}\) and \(\Im(1/2+\omega)\) maximized over \(\widehat{L}\). Thus, using the fact that these maxima are attained for \(\widehat{L}\to\infty\) (where \(z_{1}^{\widehat{L}}\to\infty\) and \(z_{2}^{\widehat{L}}\to 0\)), \[\left|\frac{1}{2}+\omega_{\{1,2\}}\right|^{2} \leq\left(\frac{1}{2}-\frac{\varphi-z_{2}}{z_{1}-z_{2}}\right)^{2 }+\left(\frac{\psi}{z_{1}-z_{2}}\right)^{2}\] \[=\left(\frac{(1+\varphi^{2}+\psi^{2})/(2\varphi)-\varphi}{z_{1}- z_{2}}\right)^{2}+\left(\frac{\psi}{z_{1}-z_{2}}\right)^{2}\] \[=\frac{(1-\varphi^{2}+\psi^{2})^{2}+(2\varphi\psi)^{2}}{\left(2 \sqrt{(1+\varphi^{2}+\psi^{2})^{2}-4\varphi^{2}}\right)^{2}}=\frac{1}{4}.\] This proves the claim. **Lemma C.3** (Approximation of \(1/z\) on a semi-disk).: _Denote by \(\mathcal{D}_{0.5,+}\) the right half of a disk in the complex plane, centered at \(0.5\) and with radius \(0.5\). Define \(R=2\). 
Then, for any \(0<\rho<R\), there exists some constant \(\kappa_{\rho}\) such that, for any integer \(k\geq 0\), there exists a degree-\(k\) polynomial that approximates \(f(z)=1/z\) on \(\mathcal{D}_{0.5,+}\) with an infinity-norm error of at most \(\kappa_{\rho}\rho^{-k}\)._ Proof.: \(f\) is analytic in the complex plane, except for the origin \(z=0\). According to [26, Theorem 4.1], we must find the unique Riemann (conformal) mapping \(z\to w(z)\) of the exterior of \(\mathcal{D}_{0.5,+}\) to the exterior of the unit disk \(\mathcal{D}\) for which \(w(\infty)=\infty\) and \(w^{\prime}(\infty)>0\). The lemma then holds for any \(R\) for which \(f\) can be analytically extended to the interior of the \(w\)-preimage \(\Gamma_{R}\) of the radius-\(R\) origin-centered circle. In essence, we must find a conformal mapping \(w\) from \(\mathbb{C}\backslash\mathcal{D}_{0.5,+}\) to \(\mathbb{C}\backslash\mathcal{D}\) for which \(w(\infty)=\infty\) and \(w^{\prime}(\infty)>0\), checking how far \(w(0)\) is from the origin. We construct \[w(z)=w_{5}(w_{4}(w_{3}(w_{2}(w_{1}(z))))).\] (C.6) First, \(w_{1}\) takes \(\mathcal{D}_{0.5,+}\), moves it with its bottom corner to the origin and magnifies it by a factor of two. The mapping that accomplishes this is \(w_{1}=2z-1+\mathrm{i}\). Then \(w_{2}\) maps the semi-disk onto a quadrant - or, more to the point, the exterior of the semi-disk into three quadrants. This can be done by the mapping \(w_{2}=1/w_{1}+\mathrm{i}/2\). Next, \(w_{3}\) collapses three quadrants into a half-plane with the mapping \(w_{3}=w_{2}^{2/3}\). We can then turn a half-plane into the exterior of the unit disk through a Mobius transformation of the form \(w_{4}=\frac{w_{3}-\beta^{7}}{w_{3}-\beta}\) for some \(\beta\). Recall that \(w\) should map \(\infty\) to \(\infty\); this can be done by taking \(\beta\) to be the image of \(\infty\) up until now. If \(z=\infty\), we get \(w_{1}=\infty\), \(w_{2}=\mathrm{i}/2\) and \(w_{3}=(\mathrm{i}/2)^{2/3}\). So setting \(\beta=(\mathrm{i}/2)^{2/3}\), \(w_{4}\) is now determined. Finally, we find \(w_{4}^{\prime}(z=\infty)=(-3\sqrt{3}+9\mathrm{i})/4\), so with \(w_{5}=\exp(-2\pi\mathrm{i}/3)w_{4}\) we get \(w^{\prime}(\infty)=(3\sqrt{3})/2>0\). Figure 1 illustrates the mapping \(w\). The pole at \(z=0\) maps to \(w(0)=-2\), which is at distance \(R=2\) from the origin. This concludes the proof. **Acknowledgments.** The authors are grateful to Ignace Bossuyt, Giovanni Conni, Toon Ingelaere, and Vince Maes for their thorough reviews and helpful comments. This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955701. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Belgium, France, Germany, and Switzerland. The work by Karl Meerbergen is partly supported by the Research Foundation Flanders (FWO) grants G0B7818N and G088622N, as well as by the KULeuven Research Council.
2308.13736
A Comprehensive Survey for Evaluation Methodologies of AI-Generated Music
In recent years, AI-generated music has made significant progress, with several models performing well in multimodal and complex musical genres and scenes. While objective metrics can be used to evaluate generative music, they often lack interpretability for musical evaluation. Therefore, researchers often resort to subjective user studies to assess the quality of the generated works, which can be resource-intensive and less reproducible than objective metrics. This study aims to comprehensively evaluate the subjective, objective, and combined methodologies for assessing AI-generated music, highlighting the advantages and disadvantages of each approach. Ultimately, this study provides a valuable reference for unifying generative AI in the field of music evaluation.
Zeyu Xiong, Weitao Wang, Jing Yu, Yue Lin, Ziyan Wang
2023-08-26T02:44:33Z
http://arxiv.org/abs/2308.13736v1
# A Comprehensive Survey for Evaluation Methodologies of AI-Generated Music ###### Abstract In recent years, AI-generated music has made significant progress, with several models performing well in multimodal and complex musical genres and scenes. While objective metrics can be used to evaluate generative music, they often lack interpretability for musical evaluation. Therefore, researchers often resort to subjective user studies to assess the quality of the generated works, which can be resource-intensive and less reproducible than objective metrics. This study aims to comprehensively evaluate the subjective, objective, and combined methodologies for assessing AI-generated music, highlighting the advantages and disadvantages of each approach. Ultimately, this study provides a valuable reference for unifying generative AI in the field of music evaluation. copyright: ©2023 Zeyu Xiong et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. ## 1 Introduction With the development of artificial intelligence generation technology, a large amount of work and applications have been generated for intelligent music generation [1, 2, 3, 4]. In particular, Music generation can be further divided into two types: the symbolic domain and the audio domain. Music generation in the symbolic domain is stored in MIDI format, and its textual and sequential data nature facilitates its applications (e.g., MidNet [5], MuseGAN [6], BandNet [7] and Telekoldy [8]) in major deep learning models (e.g., LSTM [9, 10], autoencoder [11], RBM [12], and GAN [13]). For the audio domain, it is also possible for the analysis of the different bands according to the characteristics of the audio to obtain vectorized data for model training (e.g., Jukebox [14], WaveNet [15]). In addition to generating music from MIDI datasets or audio datasets, many works have started to look for connections between multimedia. For example, MusicLM [16] generates music from text, and BGT-G2G [17] generates music from images. All the above-mentioned works reach a certain level of accepted musicality. However, these ratings are either entirely referenced to parameters such as the accuracy of model training or are subjective ratings that rely entirely on user study. Due to the different experimental processes and judgment criteria for subjective ratings, the objective model training metrics do not directly represent subjective feelings. There has not been a broad consensus on the evaluation of such generative models for a long time, resulting in a great challenge for music generation models in determining evaluation criteria [18, 19]. While subjective evaluation is usually better suited for evaluating generative models, it can be resource-intensive, and there are no uniform criteria. In contrast, objective methods, even if easy to implement, are usually less explanatory. To this end, we are dedicated to performing a survey for evaluation methodologies of AI-generated music and providing reference values for designing more scientific and effective evaluation methods in the future. Figure 1 shows the overview structure of this survey. We separate our survey into three main categories: (1) subjective evaluation, (2) objective evaluation, and (3) combined evaluation. Our contributions to this work are listed below: 1. 
We provide a classification reference scheme of evaluation for creators in the field of AI music. 2. We provide a reference value for the unification of generative AI in the field of music evaluation. ## 2 Related Work In recent years, artificial intelligence (AI) systems have been increasingly involved in various applications of music composition, ranging from entertainment to therapeu tic uses. However, as the popularity of AI-generated music grows, an increasing number of issues have come to light (e.g., concerns about plagiarism in AI-generated music content [20], etc.). Therefore, to ensure the success of these applications, it is crucial to establish effective evaluation metrics. Existing surveys on the evaluation method of AI-generated music predominantly categorize the evaluation methodologies into two primary classifications: subjective evaluations and objective evaluations. Subjective evaluation methods entail the solicitation of human listeners to provide ratings based on specific criteria, such as musicality, novelty, or emotional impact. Scholars such as Ji et al. [2] and Zhao et al. [21] have notably emphasized the use of listening tests as a widely employed approach to assess the melodic output of AI-generated music. Given the inherent subjectivity inherent in music appreciation, strict standardization of such evaluations remains a challenging endeavor, as noted by Yamshchikov et al. [22]. On the other hand, objective evaluation methods center on quantitative measurements applied to both the generated music and the underlying generative models. Ji et al. [2] elaborate on the objective evaluation as the quantitative measurement for both the generated music and the generative model. Notably, Theis et al. [23] conducted evaluations focusing on generative model performance from the perspective of log-likelihood, while Civit et al. [24] extended this evaluation to encompass considerations of dataset, code integration, and system structure. The selection of subjective or objective assessment methods often hinges on the specific purposes and criteria of measurement. Objective evaluation methods are commonly utilized in tasks such as classification, prediction, and recommendation, whereas subjective evaluations are more frequently employed to assess the quality of the generated musical content. In the following subsections, we delve into existing subjective and objective evaluation methods, and we also explore the amalgamation of these approaches. Furthermore, we analyze the application scenarios wherein each evaluation methodology finds relevance and effectiveness in the context of AI-generated music assessment. ## 3 Subjective Evaluation Subjective evaluation of generated music is predominantly reliant on assessments provided by human listeners emphasizing their satisfaction. Due to the subjective evaluative nature of music in the real world, even though it tends to be more resource-intensive and less reproducible than objective evaluation, subjective evaluation is still an integral part of the process of AI music generation evaluation. Among the existing generative models, music listening tests and visual analysis are the two most important parts. ### Music Listening Test The music listening test is the most common method in subjective evaluation. Such evaluations are commonly conducted through two approaches: the musical Turing test or subjective query metrics based on modeled compositional theory [25]. 
Generally, a validated music listening test should require these conditions [26]: (1) The experiment was conducted in a controlled environment with specific acoustic characteristics and equipment, (2) The music knowledge level of subjects was evenly distributed, including both music amateurs who are lacking in music knowledge and experts in the field of music composition, (3) Each subject received the same instructions, and (4) Each subject received statistically significant results. #### 3.1.1 Musical Turing Test Musical Turing Test measures the extent to which the generated music is indistinguishable from human-composed music. For example, Nadeem et al. [27] tested 28 users from the aspects of accuracy and preference to evaluate the generator output of a deep learning architecture combined with the proposed musical data. Although 57% of them correctly recognized that the music piece was generated by the computer, some claimed that the decision process was difficult. Ferreira et al. [28] invited 117 participants to differentiate the music composed by humans or five models (three transformers and two RNN models). What this study did good was that it divided all members into three groups according to their experience of classical music. It proved that people with better musical sensitivity had a higher correct rate of distinguishment. To demonstrate statistically significant differences in evaluation results, hypothesis testing is usually performed (e.g., t-test [29], h-test [30], etc.). #### 3.1.2 Subjective Query Metrics Nadeem et al. [27] and Ferreira et al. [28] also ask if the audiences love experimental music by a binary question. However, the answers cannot quantify the degree of their preferences with less representative. Therefore, more metrics should be considered during the evaluation of different models, which need to be specifically explained. In the study conducted by Chu et al. [31], 100 people participated in evaluating their interests in the music performed by different transformer models. This survey ruled out nine parameters (including overall creativity, naturalness, melodiousness, richness, rhythmicity, correctness, structures, and coherence) in a 7-point Likert scale. Take creativity as an example. It was described as the degree of novelty, value, and origin of the music pieces. In this way, participants can evaluate the music more specifically after a quantification process. Compared with the study, Hernandez-Olivan et al. [32] considered the melody, harmony, and rhythm of the music on a 5-point Likert scale. But they designed respective questions for different groups. Specifically, for Figure 1: Overview Structure of The Survey: AI Music Evaluation Methods people with less music knowledge, three parameters can be reflected by two questions. As for professionals with a solid foundation in music theory, they should answer six questions to determine those metrics. What's more, the experience of users should be paid more attention. Like the duration and the number of music pieces, these factors possibly make people fatigued, which may lead to a higher deviation in the results. ### Visual Analysis For visual analysis, the involvement of a music expert is often required. It is up to the expert to analyze the score, chord progression sheet, piano roll, etc., after visualization. For example, Dong et al. [6] analyzed the stability, fluidity, and musicality of melodies generated in chordal and rhythmic patterns in music. 
The waveform and spectrogram of the audio samples are also considered indicators for subjective evaluation. For example, Engel et al. [33] show each note as a "Rainbowgram", which is a visualization technique to show the relationship between time and frequency.
### Summary
In conclusion, we believe that a good subjective evaluation should not only contain a precise design but also consider the users' characteristics. In this way, target metrics can be better determined while participants react in a comfortable environment, which can also contribute to the development of improved algorithms and models in the field. Besides, although the subjective judgment is indispensable because of the artistic subjectivity of music, the resources expended behind it are enormous. At the same time, it is difficult to ensure the reproducibility as well as the stability of the experiments. Therefore, with the help of some objective quantitative indicators, it would be helpful to analyze the quality of music generation in a more scientific way.
## 4 Objective Evaluation
The objective evaluation involves using computational techniques to analyze the music and generate objective measures of its quality. Dong et al. [6] and Sturm [34] have used evaluation metrics based on probabilistic measures such as likelihood and density estimates (especially in the field of image generation [23]), yet whether there is a direct link between good or bad models and music quality is not yet known. Besides, metrics such as model metrics and music metrics are often used. For example, researchers may use metrics such as pitch entropy, chord progression complexity, or rhythmic variance to evaluate the music quality. We discuss the application of these metrics in detail in this section.
### Model-based Metrics
Model-based Metrics refer to the general generative model evaluation metrics that do not contain music domain knowledge. Some common model-based metrics include _training loss_, _precision_, _recall_, _f1 score_, etc. Other metrics like the _chord prediction accuracy_ [35], _style likelihood_ [36], and _reconstruction accuracy_ [37] are also applied for the objective evaluation. Model-based evaluation methods are also limited to specific models or methods without strong universality because the methods and models of different generation systems are very different. Bretan et al. [11] considered a unit to be a variable length number of measures of music, and utilized objective metrics to assess the generative model, such as mean rank and accuracy, by evaluating the rank of the target unit. This is a specific evaluation metric based on model characteristics, not general metrics. Thus, a model-based metric inspired by domain knowledge is not universal but performs well on a particular task, even though its interpretability remains questionable in terms of music quality.
### Music Domain Metrics
Music Domain Metrics (MDM) refers to the evaluation index under the domain knowledge of music, such as volume, pitch, chord, score, etc. Ji et al. [2] categorized these metrics into 4 categories: (1) pitch-related, (2) rhythm-related, (3) harmony-related, and (4) style-related.
#### 4.2.1 Pitch & Rhythm Related Metrics
Widely-used pitch-related and rhythm-related metrics include scale consistency, tone span, consecutive pitch repetitions, qualified rhythm frequency, rhythm variations, etc [38, 39].
For the state-of-the-art metric design, Yang and Alexander [25] propose a set of musicological objective assessment metrics, using which the output of the music generation model can be evaluated and compared. These metrics were validated in experiments and are reproducible. The proposed features include pitch counts, pitch category histograms, pitch shift matrices, pitch spans, average pitch intervals, note counts, average repetition intervals, note length histograms, and note length shift matrices.
#### 4.2.2 Harmony Related Metrics
Harmony-related metrics focus on measuring harmonic consistency, chord histogram entropy, chord coverage, polyphony (how often two tones are played simultaneously), tone span, etc [40]. For example, C-RNN-GAN [38] and JazzGAN [39] used harmony-related metrics to measure the compatibility and musicality of generated outputs.
#### 4.2.3 Style Related Metrics
In terms of style transfer, "Style Fit" (how well the generated music fits the desired style) and "Content Preservation" (how much content it retains from the original) are most commonly mentioned [41]. Cifka et al. [42] proposed a new set of objective evaluation metrics to be used alongside existing metrics. To capture the consistency in the harmonic structure, they measure content preservation by calculating the frame-by-frame cosine similarity between chroma features. For style fit, they collected so-called style profiles [43] to measure how well they are matched by the style transfer outputs.
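To make the flavour of these objective metrics concrete, the following Python/NumPy sketch computes a simple note-level stand-in for two of them: a pitch-class histogram (a pitch-related feature as in Section 4.2.1) and a chroma-cosine content-preservation score in the spirit of the frame-wise comparison of Cifka et al. The note lists and the simplification from audio frames to symbolic pitch classes are illustrative assumptions, not any surveyed system's exact implementation.

```python
# Illustrative sketch of two objective music-domain metrics (NumPy only).
import numpy as np

def pitch_class_histogram(midi_pitches):
    """Normalized 12-bin histogram over pitch classes (C=0, C#=1, ...)."""
    hist = np.bincount(np.asarray(midi_pitches) % 12, minlength=12).astype(float)
    return hist / hist.sum()

def content_preservation(pitches_a, pitches_b):
    """Cosine similarity between the pitch-class (chroma) histograms of two excerpts."""
    a, b = pitch_class_histogram(pitches_a), pitch_class_histogram(pitches_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

original  = [60, 62, 64, 65, 67, 69, 71, 72]      # C major scale (MIDI note numbers)
stylized  = [60, 64, 67, 72, 71, 69, 65, 62, 60]  # same pitch content, reordered
unrelated = [61, 63, 66, 68, 70]                  # different pitch classes

print(content_preservation(original, stylized))   # close to 1.0
print(content_preservation(original, unrelated))  # much lower
```

A score near 1 indicates that two excerpts share essentially the same pitch-class content, while unrelated material scores much lower.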
Besides, since the combination of the two also requires aligning final results, there is still no uniformity in the interpretable migration of the objective assessment compared to the subjective assessment. ### Heuristic Evaluation Framework Dervakos et al. [47] propose a heuristic framework to calculate the frequency of different features by using tools such as the "five-degree circle" to output quantitative scores for each metric. In this framework, the authors define four heuristic objective assessment attributes based on intuition and empirical observations as musicality. However, due to interpretability limitations, the authors still made a subjective assessment to prove the existence of the objective property was meaningful. In the subjective test, over 1,000 users participated in scoring three dimensions: 1) how much they liked the music, 2) how interesting the music was, and 3) the Turing test: whether the composer of the music was a human or a computer. The authors compared the final results of the user survey with the results of their proposed heuristic and eventually found a high degree of similarity in the results between the two, thus demonstrating the significance of the heuristic. ### Summary Subjective + objective evaluation is designed for learning the broad domain of the generated music. Through this method, many work evaluations become more robust. However, we have not yet found a unified assessment paradigm because of the different objectives of the different efforts. The heuristic evaluation framework seems to contribute significantly to mitigating the resource consumption of purely subjective evaluations. While subjective evaluation can provide valuable insights into the general public's perception of creative AI and the evaluation of music, heuristics can be used to evaluate specific features of AI-generated music without the need for comparison between generated and real data. However, the robustness of the method still needs to be compared in parallel with a purely subjective evaluation. This, in turn, gets caught in the trade-off between interpretability and experimental reproducibility. ## 6 Discussion As AI music generation continues to evolve, the methods of evaluating these generated outputs must also adapt to the increasing complexity and creativity of the models. There are several challenges and future directions that we have identified in this field. ### Establishing Standards At present, the evaluation of AI-generated music lacks standardization, which results in a process that is inconsistent and lacks a common reference point for stakeholders, including developers, musicologists, and audiences. The creation of a comprehensive, standardized evaluation system would streamline this process, benefitting all parties involved. The envisioned system would include a set of standard metrics, incorporating both subjective and objective elements, applicable across various AI models and across different music genres. This is vital as different genres of music possess unique characteristics, influencing the type of metrics required for their evaluation. For instance, the complexity of harmonic structure forms a critical component of classical music evaluation. In contrast, the "catchiness" or melodic hooks are often a major focus in the evaluation of pop music. This diversity necessitates the challenge of formulating genre-specific evaluation metrics, allowing for a fair and accurate appraisal of AI-generated music within the context of its intended genre. 
In essence, the development of a standardized evaluation system that accounts for both general musical elements and genre-specific characteristics is a pressing need in the field of AI-generated music. ### Bridging the Gap between Subjective and Objective Evaluation As discussed in Section 5, one of the main challenges lies in bridging the gap between subjective and objective evaluations. While subjective evaluation considers the listeners' personal preferences and emotions, objective evaluation relies on mathematical and computational analysis. The challenge is to find a balance and a correlation between these two methods. Future research could focus on developing methods that can effectively combine these two approaches to provide a comprehensive evaluation. ### Interpretability of Objective Metrics Although objective metrics provide a quantitative measure of the quality of AI-generated music, their interpretability remains an issue. Many of these metrics are based on abstract mathematical concepts that may not necessarily correlate with human perception of music quality. Therefore, it is crucial to develop objective metrics that can accurately represent subjective human perceptions and can be easily interpreted in terms of music quality. ### Evaluating Creativity Evaluating creativity in AI-generated music is a complex task as it involves assessing different criteria in different contexts. Some of these criteria are novelty, originality, and value. Novelty refers to the newness or uniqueness of a musical piece. A composition that sounds distinctly different from existing pieces can be considered novel. However, it's essential to remember that novelty alone does not equate to creativity. For instance, random notes played together might be novel but not necessarily creative or enjoyable. Another criterion is originality: it is closely related to novelty, but it adds an extra layer of refinement. An original piece of music introduces something new while also demonstrating an understanding of existing musical traditions and structures. It should show a level of sophistication and skill, breaking from the norm in a purposeful and artful way. Value is the third critical component of creativity. A creative piece of music should be novel and original with value, which can be emotional, cultural, aesthetic, or intellectual. It might be a piece that resonates deeply with listeners, offers a new perspective, or pushes boundaries in the music world. Current evaluation methods may not fully capture these aspects, and different audiences' definition of creativity varies. Therefore, developing methods to evaluate creativity effectively is a significant challenge. This could involve devising new metrics or modifying existing ones to measure these aspects. ## 7 Conclusions AI music generation is a promising field with significant potential for both creative and technological advancements. The evaluation of AI-generated music, however, is still a challenging and complex task that requires both subjective and objective methods. In this paper, we discussed various evaluation methods, including subjective evaluations like music listening tests and visual analysis, objective evaluations like model-based metrics and music domain metrics, and combined methods. We conducted a comprehensive survey for the evaluation methodologies of AI-generated music; we separated these methods into three parts: (1) subjective evaluation, (2) objective evaluation, and (3) combined evaluation. 
We discussed in detail the advantages and disadvantages of various evaluation methods and provided a future perspective on the evaluation of generative AI in the music domain. This work also offers insights toward the future development of a unified assessment methodology. We also outlined several future directions and challenges in this field, such as establishing standards, bridging the gap between subjective and objective evaluations, and evaluating creativity and different music genres. We believe that addressing these challenges will lead to more reliable and comprehensive evaluation methods for AI-generated music, contributing to the further development of this field.
2301.07248
Deep Learning Enables Reduced Gadolinium Dose for Contrast-Enhanced Blood-Brain Barrier Opening
Focused ultrasound (FUS) can be used to open the blood-brain barrier (BBB), and MRI with contrast agents can detect that opening. However, repeated use of gadolinium-based contrast agents (GBCAs) presents safety concerns to patients. This study is the first to propose the idea of modeling a volume transfer constant (Ktrans) through deep learning to reduce the dosage of contrast agents. The goal of the study is not only to reconstruct artificial intelligence (AI) derived Ktrans images but also to enhance the intensity with low dosage contrast agent T1 weighted MRI scans. We successfully validated this idea through a previous state-of-the-art temporal network algorithm, which focused on extracting time domain features at the voxel level. Then we used a Spatiotemporal Network (ST-Net), composed of a spatiotemporal convolutional neural network (CNN)-based deep learning architecture with the addition of a three-dimensional CNN encoder, to improve the model performance. We tested the ST-Net model on ten datasets of FUS-induced BBB-openings acquired from different sides of the mouse brain. ST-Net successfully detected and enhanced BBB-opening signals without sacrificing spatial domain information. ST-Net was shown to be a promising method of reducing the need for contrast agents when modeling BBB-opening Ktrans maps from time-series Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) scans.
P. Lee, H. Wei, A. N. Pouliopoulos, B. T. Forsyth, Y. Yang, C. Zhang, A. F. Laine, E. E. Konofagou, C. Wu, J. Guo
2023-01-18T01:20:49Z
http://arxiv.org/abs/2301.07248v1
# Deep Learning Enables Reduced Gadolinium Dose for Contrast-Enhanced Blood-Brain Barrier Opening ###### Abstract Focused ultrasound (FUS) can be used to open the blood-brain barrier (BBB), and MRI with contrast agents can detect that opening. However, repeated use of gadolinium-based contrast agents (GBCAs) presents safety concerns to patients. This study is the first to propose the idea of modeling a volume transfer constant (Ktrans) through deep learning to reduce the dosage of contrast agents. The goal of the study is not only to reconstruct artificial intelligence (AI) derived Ktrans images but to also enhance the intensity with low dosage contrast agent T1 weighted MRI scans. We successfully validated this idea through a previous state-of-the-art temporal network algorithm, which focused on extracting time domain features at the voxel level. Then we used a Spatiotemporal Network (ST-Net), composed of a spatiotemporal convolutional neural network (CNN)-based deep learning architecture with the addition of a three-dimensional CNN encoder, to improve the model performance. We tested the ST-Net model on ten datasets of FUS-induced BBB-openings aquired from different sides of the mouse brain. ST-Net successfully detected and enhanced BBB-opening signals without sacrificing spatial domain information. ST-Net was shown to be a promising method of reducing the need of contrast agents for modeling BBB-opening K-trans maps from time-series Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) scans. Blood-brain barrier, CNN, deep learning, dynamic contrast-enhanced magnetic resonance imaging, spatial-temporal longitudinal study. ## I Introduction Systemic therapy options for central nervous system (CNS) diseases, including brain tumors, Alzheimer's, or Parkinson's disease, have been limited due to the presence of the blood-brain barrier (BBB) [1, 11, 12, 13]. The BBB is a unique vascular feature of the CNS [2, 15]. Tight junctions connect adjacent cerebral endothelial cells to the highly regulated transport system of the endothelial cell membrane. Together they form a physiological diffusion barrier that maintains the homeostasis of the brain by protecting it from exogenous and endogenous substances, but also hinders the delivery of therapeutic agents to the brain [14]. Since the BBB prevents over 98% of small-molecule drugs and approximately 100% of large-molecule drugs from entering the brain parenchyma, this makes it a major limiting factor for systemic treatment of CNS diseases [12, 13]. Extensive and ongoing research has been done to optimize drug delivery by overcoming the BBB, including intracranial injections, hyperosmotic solutions, convection-enhanced delivery (CED), and focused ultrasound (FUS) [15, 16]. Several studies have shown that FUS with intravenously injected microbubbles can temporarily induce BBB-openings for non-invasive drug delivery to the brain [2]. FUS treatments are performed using acoustic waves similar to diagnostic ultrasound. However, instead of constructing images from echoes generated by tissue interfaces, FUS uses a transducer that concentrates acoustic waves at a focal point, where the acoustic energy is significantly augmented. This allows the sonication to generate mechanical effects, thermal effects, or both. Optimization of acoustic parameters and MB dosage has been shown to achieve local and reversible BBB-openings without damaging the brain parenchyma in multiple preclinical models, including rodents and non-human primates [17, 18]. 
Clinical advancement of FUS technology has progressed rapidly over the past few years, with clinical trials showing the safety of BBB-openings in patients with brain tumors, amyotrophic lateral sclerosis, and Alzheimer's disease [19], [22]. Multiple clinical trials are currently being conducted to study the safety and feasibility of BBB-opening. Many strategies have been developed to identify the BBB-opening [23]. Among those strategies, FUS-induced BBB-opening is typically detected using Magnetic Resonance Imaging (MRI) with gadolinium-based contrast agents (GBCAs) and a T1-weighted sequence. However, there are limitations to this strategy. For drug delivery, systemic therapies can be given from daily to weekly to monthly, and repeated BBB-opening would theoretically be needed to ensure optimal drug delivery. To validate a BBB-opening, multiple MRI scans with contrast agents would be needed. It has been documented that, with repeated use, GBCAs can accumulate and be retained in body tissues, including the brain. This raises serious concerns for patient safety, as renal impairment-related complications, specifically nephrogenic systemic fibrosis, may occur [4]-[7], [24]. In 2018, the FDA warned that gadolinium is retained in the organs after GBCA-enhanced MRI scans, which poses potential risks in humans based on toxicities observed in preclinical studies. Secondly, routine non-contrast scans are obtained in addition to post-contrast MRI scans. The additional contrast-based sequences can extend MRI scanning time. This can lead to increased costs, patient discomfort, and movement/motion artifacts. To address the potential safety concerns, it is necessary to develop alternative imaging techniques with reduced or no GBCAs. Recently, a deep learning algorithm has been shown to extract diagnostic quality images with a 10-fold lower gadolinium dose than typically used, suggesting its potential to reduce GBCA dose in brain MRI [25]. Further, using a deep contrast algorithm, artificial intelligence (AI) was able to predict regions of contrast enhancement in the absence of a contrast agent, based on pixel-level changes not observable to the human eye. Thus, we hypothesize that with deep-contrast AI, we can generate a computer algorithm to predict full-dose GBCA BBB-openings using low-dose GBCA T1 sequences. Furthermore, these low-contrast scanning sequences can be optimized to minimize overall scan time. Here, we present a proof-of-concept study in mice to demonstrate feasibility before conducting human trials. Figure 1: (A) Timeline of the experimental procedure. Focused ultrasound (FUS) disrupts the blood-brain barrier (BBB) with the injection of microbubbles. After twelve hours, the BBB-opening was stable, and the mouse was placed on a Magnetic Resonance Imaging (MRI) system and scanned to obtain the baseline for the first four acquisitions. We first injected 10 mmol/kg of the gadolinium contrast agent (3.3% of the full dosage) and acquired eighty T1-weighted (T1W) dynamic contrast-enhanced (DCE)-MRI scans. Following the injection of the remaining 97.7% contrast agent (full dose), eighty-four T1W DCE-MRI scans were acquired. (B) Image preprocessing pipeline. We first converted the raw DCE-MRI images from DICOM format to NIfTI format and performed within-subject robust registration. We then generated the volume transfer constant (Ktrans) map through the general kinetic model (GKM). Finally, we extracted the whole brain Ktrans map with the manually labeled brain mask. 
## 2 Methods ### _FUS-induced BBB-opening Protocol_ BBB-opening induced in murine brains by FUS with the administration of microbubbles has been described in detail [3]. Briefly, a single-element, spherical-segment FUS transducer was driven by a function generator through a 50-dB power amplifier. A single-element, pulse-echo transducer was housed within the central core of the FUS transducer and used for passive cavitation detection (PCD) of acoustic emissions. In-house manufactured microbubbles (concentration: 8\(\times\)10\({}^{8}\) bubbles/mL, diameter: 1.37 \(\pm\) 1.02 \(\mu\)m) were diluted in saline to 200 \(\mu\)L and injected intravenously. Sonication was delivered at 0.5 MHz with a peak-negative pressure of 0.3 MPa in bursts of 10 ms length at 5 Hz repetition time over 120 s (600 pulses). ### _DCE-MRI Protocol and Data Acquisitions_ Following the FUS procedure, the mouse was transferred to a Bruker BioSpec 94/20 (field strength, 9.4 T; bore size, 30 cm) horizontal small-animal MRI scanner running ParaVision 6.0.1 software (Bruker BioSpin, Billerica, MA, USA), with an 86-mm inner diameter birdcage 1H volume transmit coil and a 1H mouse-head-only Cryogenic RF coil (CryoProbeTM). Mice were anesthetized using medical air and isoflurane (3% volume for induction, 1.1-1.5% for maintenance at 1 liter/min airflow, via a nose cone). The DCE-MRI images were acquired using a 2-D FLASH T1-weighted sequence (180 \(\times\) 150 \(\times\) 18 \(\times\) 84 matrix size, spatial resolution of 100 \(\times\) 100 \(\mu m^{2}\), slice thickness of 500 \(\mu\)m, TR/TE = 200/2.12 ms) before (i.e., the first four scans) and during the intraperitoneal (IP) injections of the contrast agent Gadodiamide (Gd) (Omniscan; GE Healthcare, Princeton, NJ, USA). A contrast agent was used as a tracer to depict the area of the BBB-opening. We injected the contrast agent at two time points to obtain MRI scans with different volumes of contrast agent. We first injected 10 mmol/kg GBCAs, which is 3.3% of the full dosage of the GBCAs (low dose), then administered the remaining 97.7% GBCAs (full dose). For each of the injections, we acquired four-dimensional DCE T1-weighted anatomical brain MRI images with 18 slices and 48 acquisitions, respectively. The total acquisition time for DCE-MRI was approximately 1 hour. The timeline for FUS and DCE-MRI image acquisition is shown in Fig. 1(a), and a schematic of the image processing pipeline is shown in Fig. 1(b). Figure 2: Proposed ST-Net architecture. The four-dimensional dynamic contrast-enhanced (DCE)-Magnetic Resonance Imaging (MRI) scans were first cropped to 7\(\times\)7\(\times\)48 patches. We then extracted spatial information using a three-dimensional convolutional neural network (CNN) encoder, followed by concatenation of the spatial features with two reference arrays: (1) the average of the DCE-MRI signal before contrast agent injection for each voxel, and (2) the contrast agent concentration in muscle tissue. The size of the output features for each layer is provided in the figure. Each fully connected layer is followed by a leaky ReLU activation. The output from the proposed ST-Net is a volume transfer constant (Ktrans) value, which was reconstructed to acquire a whole-brain Ktrans map. ### Image Preprocessing The raw DCE-MRI images were first converted from DICOM to NIfTI format and within-subject robust registration was performed using the FreeSurfer tool [42]. 
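The conversion and registration step can be scripted; the snippet below is a minimal sketch assuming the `dicom2nifti` package and a local FreeSurfer installation (the directory names are placeholders, the `mri_robust_register` flags should be checked against the installed FreeSurfer version, and this is not the authors' released pipeline).

```python
import subprocess
from pathlib import Path

import dicom2nifti  # pip install dicom2nifti

dicom_dir = Path("raw_dce_dicom")   # placeholder: folder of raw DCE-MRI DICOM series
nifti_dir = Path("dce_nifti")       # placeholder: output folder for NIfTI volumes
nifti_dir.mkdir(exist_ok=True)

# 1) Convert the raw DCE-MRI DICOM series to NIfTI.
dicom2nifti.convert_directory(str(dicom_dir), str(nifti_dir), compression=True)

# 2) Within-subject robust registration of each frame to the first frame,
#    calling FreeSurfer's mri_robust_register (must be on the PATH).
frames = sorted(nifti_dir.glob("*.nii.gz"))
reference = frames[0]
for frame in frames[1:]:
    registered = frame.with_name(frame.name.replace(".nii.gz", "_reg.nii.gz"))
    subprocess.run(
        [
            "mri_robust_register",
            "--mov", str(frame),          # moving image (current DCE frame)
            "--dst", str(reference),      # fixed reference frame
            "--lta", str(frame) + ".lta",  # output transform
            "--mapmov", str(registered),   # resampled, registered frame
            "--satit",                     # estimate outlier saturation automatically
        ],
        check=True,
    )
```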
We used the volume transfer constant (Ktrans) generated by a MATLAB program as the desired ground truth for deep learning [9]. Ktrans denotes the transfer rate from the blood plasma to the extravascular extracellular space of each voxel, which is a voxel-level mapping quantified from the DCE protocol and models the capillary permeability. Therefore, it can be used to detect BBB-opening [34, 35]. The Ktrans map was calculated with two kinetic models, the general kinetic model (GKM) [9, 10, 26, 43] and the reference region model (RRM), to quantify the permeability [9]. In addition, we manually labeled the brain mask with 3DSlicer to extract and train the model with only whole brain (WB) information. The preprocessing pipeline is shown in Fig. 1(b). ### Deep Learning Model #### 4.4.1 Spatial Network Combining spatial and temporal deep learning networks, we designed ST-Net, a voxel-based model to predict the full-dose Gd BBB-opening from low-dose Gd DCE-MRI images. We first cropped each voxel of the WB scan over time to 7x7x48 patches and used a three-dimensional convolutional neural network (CNN) encoder to extract and preserve spatial features. #### 4.4.2 Temporal Network Following the spatial network, we concatenated the output one-dimensional array (64x84) with two other channels, which were used as references: (1) a single value, the average of the four pre-contrast images, broadcast to the same number of frames (48 acquisitions), and (2) the averaged Gd concentration change in the muscle tissue area over time. The concatenated array (66x84) was passed through a temporal network modeled after a previous method, fast-eTofts, proposed by Fang et al. [8]. We first applied a one-dimensional CNN layer in the temporal model to fuse the spatial information with the reference information and extract low-level temporal features. The following two parallel CNN pathways were used to extract long-term (global pathway) and short-term (local pathway) features. Finally, two one-dimensional CNN layers and a fully connected layer were used to fuse the long-term and short-term information and to predict the full-dose Ktrans value for each pixel. Additionally, we added dropout layers after the fully connected layers to prevent model overfitting. We reconstructed the resulting Ktrans values to acquire a 3D WB Ktrans map. The ST-Net architecture is illustrated in Fig. 2. #### 4.4.3 Model Hyperparameters The ST-Net was trained using the Adam optimizer [31], and the loss function was defined as the mean absolute error (MAE) with early stopping at 300 epochs. We tuned the ST-Net over several hyperparameters. To fine-tune the model, we set the network with batch size 512, learning rate 1e-4, and added four layers of a CNN encoder without batch normalization. All the models were trained on three 24 GB NVIDIA Quadro 6000 graphical processing units using PyTorch. ### Dataset Details #### 4.4.4 Training and Testing Dataset Selection We repeatedly selected two mice for testing, and the remaining eight mice for training. The WB voxels of the eight mice were shuffled and split at a ratio of four to one for training and validation, respectively. The cross-validation strategy showed the robustness of the deep learning model. 
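For concreteness, the following is a minimal PyTorch sketch of the spatiotemporal design described in Sections 4.4.1-4.4.3 above. Channel counts, kernel sizes, and the dropout rate are illustrative assumptions rather than the published configuration; only the overall structure (a 3D CNN patch encoder, concatenated reference curves, global and local temporal pathways, and Adam with an L1 loss) follows the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class STNetSketch(nn.Module):
    """Sketch of a spatiotemporal Ktrans predictor (not the authors' exact code).

    Input:  patches (B, 1, 7, 7, T) -- 7x7 spatial neighbourhood over T frames
            refs    (B, 2, T)       -- pre-contrast mean and muscle Gd curve
    Output: (B, 1) predicted Ktrans value for the centre voxel of each patch.
    """

    def __init__(self, frames=48, feat=64):
        super().__init__()
        # 3D CNN encoder: collapses the 7x7 neighbourhood, keeps the time axis.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, (3, 3, 1), padding=(1, 1, 0)), nn.LeakyReLU(),
            nn.Conv3d(16, 32, (3, 3, 1)), nn.LeakyReLU(),
            nn.Conv3d(32, feat, (3, 3, 1)), nn.LeakyReLU(),
            nn.Conv3d(feat, feat, (3, 3, 1)), nn.LeakyReLU(),
        )
        # Fuse the spatial features with the two reference curves.
        self.fuse = nn.Conv1d(feat + 2, feat, kernel_size=3, padding=1)
        # Long-term (dilated) and short-term temporal pathways.
        self.global_path = nn.Conv1d(feat, feat, kernel_size=5, dilation=3, padding=6)
        self.local_path = nn.Conv1d(feat, feat, kernel_size=3, padding=1)
        self.merge = nn.Sequential(
            nn.Conv1d(2 * feat, feat, 3, padding=1), nn.LeakyReLU(),
            nn.Conv1d(feat, 16, 3, padding=1), nn.LeakyReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * frames, 64), nn.LeakyReLU(),
            nn.Dropout(0.2), nn.Linear(64, 1),
        )

    def forward(self, patches, refs):
        z = self.encoder(patches).squeeze(2).squeeze(2)        # (B, feat, T)
        z = F.leaky_relu(self.fuse(torch.cat([z, refs], dim=1)))
        z = torch.cat([F.leaky_relu(self.global_path(z)),
                       F.leaky_relu(self.local_path(z))], dim=1)
        return self.head(self.merge(z))


# Training configuration as described above: Adam with lr 1e-4 and an MAE (L1) loss.
model = STNetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()
```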
#### 4.4.5 Strategies for Removal of Abnormal Values Neither the input DCE-MRI data nor the ground truth Ktrans maps were filtered. All input data for the model (DCE-MRI patches, the averaged pre-contrast scan, and the averaged Gd concentration in muscle) were normalized by the 99th percentile of the averaged pre-contrast scan. The derived ground truth Ktrans maps had some extremely high values due to noise. Therefore, to minimize the effect of GKM model reconstruction errors, only voxels with a Ktrans value in the range of [0, 0.05] 1/min were considered when calculating the loss. #### 4.4.6 BBB-Opening Patch Steps The ST-Net tended to overfit due to the high overlap between BBB-opening patches in the training dataset. Therefore, we set a patch step rate to reduce the overlapping area between input patches. #### 4.4.7 Using Two Regions of Interest as Input The input voxels for the deep learning model were composed of two regions of interest (ROIs) within the WB. We manually delineated the brain area and the BBB-opening region using 3DSlicer. For all the mouse datasets, we chose two multi-slice ROIs, one encompassing all the voxels in the BBB-opening, and the other from normal-appearing brain tissue containing four-fold more voxels than the BBB-opening ROI. ### Statistical Analysis To evaluate the quality of the predicted Ktrans map, we analyzed the similarity between the predicted Ktrans map generated by deep learning and the ground truth Ktrans map derived from the experimental DCE protocol data. To investigate any advantage of adding a spatial network, we compared ST-Net with the modified fast-eTofts, a purely temporal network (T-Net). Additionally, we performed an evaluation on the GKM-derived low-dose image to show the improvement in detecting BBB-opening using deep learning. Noise in the GKM-derived and deep learning-predicted Ktrans images was first removed using a 3D median filter with a local window size of 3x3x3 from the Python library SciPy. The post-processed WB Ktrans maps were then used to visualize and quantify the performance of the algorithms mentioned above (ST-Net, T-Net, and the GKM-derived low-dose image) using the structural similarity index (SSIM) (1) [29], peak signal-to-noise ratio (PSNR) (2), Pearson correlation coefficient (PCC) (3), concordance correlation coefficient (CCC) (4) [30], area under the curve (AUC) [33], and normalized root MSE (NRMSE) (5) [32] metrics. \[SSIM=l^{\alpha}(x,y)c^{\beta}(x,y)s^{\gamma}(x,y) \tag{1}\] \[PSNR=10\cdot\log_{10}\left(\frac{MAX_{x}^{2}}{MSE}\right) \tag{2}\] \[PCC=\frac{\sigma_{xy}}{\sigma_{x}\sigma_{y}} \tag{3}\] \[CCC=\frac{2\sigma_{xy}}{(\mu_{x}-\mu_{y})^{2}+\sigma_{x}^{2}+\sigma_{y}^{2}} \tag{4}\] \[NRMSE=\frac{\sqrt{\frac{1}{N}\Sigma(x-y)^{2}}}{\sqrt{\frac{1}{N}\Sigma x^{2}}} \tag{5}\] Here \(x\) and \(y\) represent the voxels of the ground truth and derived/predicted images. The \(l(x,y)\), \(c(x,y)\), and \(s(x,y)\) in SSIM respectively measure the differences between the luminance, contrast, and structure of the two images, and \(\alpha\), \(\beta\), and \(\gamma\) are three constants. \(MAX_{x}\) and \(MSE\) in PSNR represent the maximum voxel intensity of the ground truth and the mean square error of the two images. \(\mu_{x}\) and \(\mu_{y}\) are the means of the two images, and \(\sigma_{x}\) and \(\sigma_{y}\) are the corresponding standard deviations. \(\sigma_{xy}\) is the covariance and \(N\) is the voxel number within the ROI. Figure 3: Full dose and low dose volume transfer constant (Ktrans) maps obtained from the general kinetic model (GKM) and the predicted Ktrans maps from two neural networks (green box), along with the residual map between the low-dose/predicted map and full-dose ground truth (orange box). Figure 4: Mapping the BBB-opening Ktrans map in a three-dimensional brain volume. An "iron" color scheme is applied in the figure. (L: left; R: right; T: tail; H: head) 
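As a concrete reference for the comparison metrics in Eqs. (2)-(5), a small NumPy sketch is given below; the variable names are placeholders, and SSIM and AUC are omitted because they are typically computed with skimage.metrics.structural_similarity and scikit-learn, respectively.

```python
import numpy as np
from scipy.ndimage import median_filter


def ktrans_metrics(pred, gt, roi):
    """Compute PSNR, PCC, CCC, and NRMSE between a predicted and a ground-truth
    Ktrans map inside a boolean ROI mask, following Eqs. (2)-(5)."""
    # Optional post-processing as described above: 3D median filter, window 3x3x3.
    pred = median_filter(pred, size=3)

    x = gt[roi].astype(np.float64)    # ground-truth voxels
    y = pred[roi].astype(np.float64)  # predicted voxels

    mse = np.mean((x - y) ** 2)
    psnr = 10 * np.log10(x.max() ** 2 / mse)
    pcc = np.corrcoef(x, y)[0, 1]
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    ccc = 2 * cov / ((x.mean() - y.mean()) ** 2 + x.var() + y.var())
    nrmse = np.sqrt(mse) / np.sqrt(np.mean(x ** 2))
    return {"PSNR": psnr, "PCC": pcc, "CCC": ccc, "NRMSE": nrmse}
```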
Both ST-Net and T-Net were trained using five-fold cross-validation, and Student's t-test was performed on the metrics mentioned above. Significant differences are shown in box plots, with the number of * indicating the order of significance (*: p<0.05; **: p<0.01; ***: p<0.001; ****: p<0.0001). ### Animal Model We used ten C57BL/6J mice aged 3-6 months for this study. Mice were scanned using the DCE-MRI protocol described previously. A total of 162 scans were acquired for the ten subjects. ## III Results We compared our proposed ST-Net with low contrast agent dosage Ktrans images derived from the conventional GKM model and with the temporal-only deep learning model, T-Net. The derived/predicted 2D Ktrans images for one testing subject from three different orientations are visualized in Fig. 3. As shown in Fig. 3, the first column shows the full-dose Ktrans images derived by conventional GKM fitting, which were used as ground truth for the deep learning model. The second to fourth columns show low-dose Ktrans images mapped by the GKM model, predictions by T-Net, and predictions by ST-Net, respectively. Figure 5: Box plots visualizing model performance across ten testing mice. Each * indicates the order of significance (*: p<0.05; **: p<0.01; ***: p<0.001; ****: p<0.0001). The following four columns display the residual differences between the full-dose and derived/predicted Ktrans images. Additionally, three-dimensional renderings of the BBB-opening for low dose, full dose, T-Net, and ST-Net are shown in Fig. 4. The quantitative comparison among low dose, T-Net, and ST-Net on all ten testing subjects for the WB and opening area is summarized in Table 1. The average performance with standard deviation is shown in the last row of the table. In the post-processed reconstructed WB Ktrans maps, ST-Net achieved the highest PCC (\(0.759\pm 0.075\)), CCC (\(0.663\pm 0.174\)), and AUC (\(0.775\pm 0.091\)) across ten testing mice. On the other hand, T-Net performed better in SSIM (\(0.984\pm 0.016\)), PSNR (\(22.876\pm 2.582\)), and NRMSE (\(0.685\pm 0.159\)). For the BBB-opening area, the ST-Net model outperformed in every metric (SSIM \(=0.959\pm 0.029\), PSNR \(=28.790\pm 4.366\), PCC \(=0.799\pm 0.055\), CCC \(=0.697\pm 0.187\), AUC \(=0.803\pm 0.040\), and NRMSE \(=0.456\pm 0.175\)). The box plots across the ten testing mice for each metric in the two ROIs are shown in Fig. 5. Fig. 5 also shows that both T-Net and ST-Net have significant differences compared to low-dose Ktrans images in both the WB and BBB-opening-only areas. ST-Net and T-Net also show significant differences in the opening areas for every metric. ## 4 Discussion FUS with intravenous administration of microbubbles has been shown to open the BBB in small animals in-vivo and in clinical trials. This opening can be targeted and is transient. The FUS-enhanced BBB-opening can be validated with MRI, Positron emission tomography (PET), Single-photon emission computed tomography (SPECT), etc., using labeled molecules. Given the limited tolerance of GBCAs, novel approaches are needed for clinical applicability [44]. In this study, we proposed ST-Net, a spatiotemporal CNN deep learning architecture designed to predict full-dose time-series BBB-opening from low-dose T1W MRI. 
We not only successfully investigated the efficacy of detecting BBB-opening with low dosage contrast agent administration but also improved the model performance with an additional 3D CNN. We first validated a deep learning algorithm that can be used to acquire full-dose Ktrans maps while decreasing contrast agent dosage using T-Net. The comparison from three directions in Fig. 3 shows a high similarity between T-Net and the ground truth. Compared to the low-dose-derived Ktrans, T-Net depicts the BBB-opening area and its outline more accurately. However, we noticed that the edges of the BBB-opening in T-Net look noisy in the residual maps. The reason is that T-Net only uses temporal information; therefore, the model cannot differentiate the boundaries between opening and non-opening tissues. The same limitation also explains why the intensity observed within the BBB-opening is lower than in the ground truth. The intensity of the FUS focus point of the BBB-opening in the Ktrans map should be the highest. However, since T-Net only learned the contrast agent concentration changes for each pixel, it cannot detect the intensity difference among adjacent pixels. As a result, we proposed adding a spatial network to share the features across the brain. One of our main contributions is the novelty of adding a spatial network to further enhance the performance of predicting Ktrans while retaining high fidelity. Instead of simply inputting data on the voxel level, we cropped the WB ROI to patches across time and extracted the spatial features for each patch through a three-dimensional CNN encoder. With spatial information, ST-Net was able to learn the brain structure and predict the BBB-opening location and shape with reference to the neighboring voxels. The t-test results in Fig. 5 show that there are significant differences between the ST-Net predicted results and the GKM-derived low-dose BBB-opening across all metrics in both ROIs. The 2D whole-head (WH) Ktrans images overlapped with structural MRI scans in Fig. 3 and the 3D WB volumes in Fig. 4 visualize one of the testing subjects. Both figures show a clear BBB-opening in ST-Net; however, we can barely visualize the opening in the low-dose image. The results demonstrate the efficiency and potential of using ST-Net with a low-dose contrast agent in detecting BBB-opening. The advantages of adding a spatial network in ST-Net include increasing model robustness and improving the prediction at the edges of the BBB-opening. The box plots in Fig. 5 show that the SSIM, PSNR, and CCC significantly increase and the NRMSE significantly decreases within the opening area in ST-Net. The standard deviations of ST-Net for all metrics in both ROIs are the smallest, demonstrating that the spatial network was a crucial element in predicting 3D images by providing spatial information from neighboring voxels. The significant improvement in PSNR for the ST-Net opening area in Fig. 5 shows that ST-Net provides a denoising effect. The statistical improvement can be visualized in the sagittal direction in Fig. 3. The comparison shows that ST-Net predicts the opening boundary better, and the opening edge is less noisy than with the T-Net model. Moreover, the intensity of the BBB-opening in ST-Net was observed to be more similar to the ground truth and matches the structure of the BBB-opening. As a result, adding a spatial network proved to be of high value. ST-Net not only predicted the BBB-opening area accurately, but the normal-appearing brain tissues also showed a high resemblance to the ground truth. 
The BBB-opening might not be induced by FUS in some cases; therefore, the ability to model non-opening areas is as critical as modeling opening areas. Moreover, in practice, the BBB-opening will not be limited to one position; therefore, the BBB-opening locations were varied across the ten datasets in our data to increase data diversity and to simulate realistic clinical application. The non-opened areas were used as a negative control. The 2D WH Ktrans images overlaid on structural MRI scans and the 3D brain volumes for a subject without BBB-opening are shown in the supplementary material. Some future directions may further improve the utility of ST-Net. First, the FUS parameters we used in the experiment were consistent; therefore, the range of the BBB-opening was not much different across subjects. Additionally, although we collected BBB-opening data for both sides of the brain, the locations of the BBB-opening were restricted to the striatum region. Finally, we could also consider MRI images acquired with low-field scanners [40] or state-of-the-art portable or mobile MRI machines [41], which could reduce scanning time or extend imaging beyond the traditional hospital and imaging center boundaries. To apply our method in a clinical setting, we should replicate and validate the entire experiment with a variety of samples, such as conducting high-pressure FUS treatments, collecting BBB-opening data in distinct brain areas, and/or utilizing different types of MRI scanners or MR systems. This study was also limited by the sample diversity. In this study, we only investigated the efficacy of detecting BBB-opening with low dosage contrast agent administration in healthy wild-type mouse brains. However, the long-term goal of this research is to apply the method to those who suffer from CNS disease. As a result, validating our results on brain tumor or Alzheimer's disease mouse models is necessary. Nonetheless, to extend the research to clinical trials, future non-human primate studies should confirm the results reported in the mice studied here. Although the current model is highly automatic, there are some details in the preprocessing pipeline we can improve. The entire deep learning model has been designed to be fully automatic if the DCE-MRI scans and ROI masks are ready. However, we manually labeled the WB and brain muscle regions for the preprocessing step. Even though it is much easier to label muscle maps compared to the reference paper labeling blood vessels, the annotation process is still time-consuming and can vary between researchers. A possible resolution is to train an additional deep learning-based segmentation tool as described in [36] to remove skull artifacts and extract ROIs through a simple U-Net architecture. There are several strategies we can pursue to further improve the performance or efficiency of the model. In ST-Net, we process the spatial information first and then fuse it with the temporal features. We could try to switch the order of the spatial and temporal networks to see if there is any improvement. Furthermore, instead of training spatial and temporal models in sequence, we could design our spatiotemporal model as a two-stream CNN, training spatial and temporal models in parallel and combining the two networks with a late fusion technique. Another strategy is to substitute the CNN with different networks such as an LSTM [37] or a Transformer. 
In addition to revising the model architecture, we could also apply data augmentation and transfer learning when retraining the model. Even though we successfully estimated BBB-opening with reduced GBCAs, GBCAs are still necessary. GBCAs can lead to severe side effects in some patients, especially those suffering from kidney disease. Therefore, it is crucial to eliminate the usage of contrast agents in clinical research by developing another framework that achieves a contrast-free method. In ST-Net, we use DCE-MRI and T1W scans as input. In future work, we can introduce multi-modality non-contrast MRI sequences such as effective T2 (T2*), susceptibility-weighted (SWI), arterial spin labeling (ASL), and diffusion tensor (DTI) imaging. ## 5 Conclusion In conclusion, we validated the hypothesis of implementing a neural network model focusing on the temporal domain with time-series DCE-MRI data to model Ktrans and detect BBB-opening with low-dose GBCAs. Furthermore, we added a spatial CNN network to significantly improve Ktrans-based BBB-opening confirmation performance. We showed the potential of reducing the use of GBCAs and reducing the risk of contrast agent-induced side effects, thereby improving the safety profile of FUS treatments in the brain. Our data are publicly available, and our code can be found on GitHub. ## Acknowledgment This research was funded by the Gary and Yael Fegel Family Foundation, St. Baldrick's Foundation, the Star and Storm Foundation, the Matheson Foundation (UR010590), Swim Across America, a Herbert Irving Cancer Center Support Grant (P30CA013696), Sebastian Strong Foundation, National Institutes of Health Grants (SR01EB009041) and (5R01AG038961), and the ZI Seed Grant for MR Studies Program. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
2303.11253
Zero-Shot Noise2Noise: Efficient Image Denoising without any Data
Recently, self-supervised neural networks have shown excellent image denoising performance. However, current dataset free methods are either computationally expensive, require a noise model, or have inadequate image quality. In this work we show that a simple 2-layer network, without any training data or knowledge of the noise distribution, can enable high-quality image denoising at low computational cost. Our approach is motivated by Noise2Noise and Neighbor2Neighbor and works well for denoising pixel-wise independent noise. Our experiments on artificial, real-world camera, and microscope noise show that our method termed ZS-N2N (Zero Shot Noise2Noise) often outperforms existing dataset-free methods at a reduced cost, making it suitable for use cases with scarce data availability and limited computational resources. A demo of our implementation including our code and hyperparameters can be found in the following colab notebook: https://colab.research.google.com/drive/1i82nyizTdszyHkaHBuKPbWnTzao8HF9b
Youssef Mansour, Reinhard Heckel
2023-03-20T16:40:37Z
http://arxiv.org/abs/2303.11253v3
# Zero-Shot Noise2Noise: Efficient Image Denoising without any Data ###### Abstract Recently, self-supervised neural networks have shown excellent image denoising performance. However, current dataset free methods are either computationally expensive, require a noise model, or have inadequate image quality. In this work we show that a simple 2-layer network, without any training data or knowledge of the noise distribution, can enable high-quality image denoising at low computational cost. Our approach is motivated by Noise2Noise and Neighbor2Neighbor and works well for denoising pixel-wise independent noise. Our experiments on artificial, real-world camera, and microscope noise show that our method termed ZS-N2N (Zero Shot Noise2Noise) often outperforms existing dataset-free methods at a reduced cost, making it suitable for use cases with scarce data availability and limited computational resources. A demo of our implementation including our code and hyperparameters can be found in the following colab notebook. ## 1 Introduction Image denoising is the process of removing distortions from images, to enhance them visually and to reconstruct fine details. The latter is especially important for medical images, where fine details are necessary for an accurate diagnosis. Current state-of-the-art image denoising techniques rely on large data sets of clean-noisy image pairs and often consist of a neural network trained to map the noisy to the clean image. The drawbacks of dataset based methods are that data collection, even without ground truths, is expensive and time-consuming, and second, a network trained on dataset suffers from a performance drop if the test images come from a different distribution of images. These drawbacks motivate research in dataset-free methods. All current zero-shot models are either suitable only for specific noise distributions and need previous knowledge of the noise level [10, 11], require a lot of compute (time, memory, GPU) to denoise an image [14], have poor denoising quality [13], or do not generalise to different noise distributions or levels [15, 16]. We propose a method that builds on the recent Noise2Noise [12] and Neighbour2Neighbour[17] papers and aims to circumvent these issues to reach a good trade-off between denoising quality and computational resources. We make only minimal assumptions on the noise statistics (pixel-wise independence), and do not require training data. Our method does not require an explicit noise model, and is therefore suitable for various noise types and can be employed when the noise distribution or level are unknown. The only assumption we make about the noise is that it is unstructured and has zero mean. In a nutshell, we convolve the noisy test image with two fixed filters, which yields two downsampled images. We next train a lightweight network with regularization to map one downsampled image to the other. Our strategy builds on the recent Noise2Noise [12] and Neighbour2Neighbour[11] papers, however we take those methods one step further by enabling denoising without any training data. Even with an extremely small network and without any training data, our method achieves good denoising quality and often even outperforms large networks trained on datasets. The key attributes of our work are as follows: * **Compute.** Dataset free neural network based algorithms [17, 18] require solving an optimization problem involving millions of parameters to denoise an image. 
The huge parameter count requires large memory storage, advanced GPUs, and long denoising times. In this work we show that our method, which utilizes a simple 2-layer network with only 20k parameters, can often outperform networks with millions of parameters while reducing the computational cost significantly and being easily executable on a CPU. * **Generalisation.** Existing zero-shot methods often do not generalise well. For example, BM3D [19], a classical denoising algorithm, does not generalize well to non-Gaussian noise, and blind spot networks [10][20] (discussed later in detail) fail to denoise well in the regime of low noise levels. Extensive experiments on different noise distributions and noise levels show that our proposed approach can generalise to different conditions better than existing methods. In summary, our proposed method is dataset- and noise-model-free, and achieves a better trade-off between generalization, denoising quality, and computational resources compared to existing zero-shot methods, as displayed in Figure 1. We compare to the standard zero-shot baselines, including BM3D, and the recent neural network-based algorithms DIP [17] and S2S [18]. Only BM3D is faster than our method but achieves poor results on non-Gaussian noise. Only S2S sometimes outperforms our method, but is orders of magnitude slower, often fails on low noise levels [12], and requires ensembling to achieve acceptable performance. ## 2 Related Work Zero-Shot / Dataset-Free Methods. Our method is conceptually very similar to Noise2Fast [19], which also builds on Noise2Noise and Neighbour2Neighbour to achieve dataset-free denoising. Figure 1: Left and middle plots: PSNR scores for Gaussian and Poisson denoising for different noise levels. Note BM3D’s poor performance on Poisson compared to Gaussian noise. Right plot: Time required in seconds to denoise one \(256\times 256\) colour image on CPU and GPU, tested on Poisson noise with \(\lambda=50\). Except for BM3D, all methods have shorter times on GPU. Only S2S in some cases outperforms our method, however it is about 100 times slower. S2S* denotes the ensemble-free version of S2S. However, Noise2Fast uses a relatively large network and requires an early stopping criterion. Our work improves on Noise2Fast by working with a consistency loss that alleviates the need for early stopping, and by using a much smaller network, which saves compute. Specifically, our network is twelve times smaller and a forward pass through it is seven times faster. Our work utilizes a small 2-layer network and achieves competitive quality for image restoration. We show that on grayscale images, our method, despite achieving similar scores to Noise2Fast [11], produces better-quality images. This is likely due to Noise2Fast dropping pixel values when downsampling, whereas our method retains all information. Besides this work, classical non-learning-based methods, such as BM3D [10] and Anscombe [14], work well for Gaussian and Poisson noise, respectively, and require the noise level as an input. Another popular neural network-based technique is DIP (Deep Image Prior) [13] and its variants such as the Deep Decoder [12]. DIP builds on the fact that CNNs have an inductive bias towards natural images, in that they can fit natural images much faster than noise. Therefore, a network trained, with early stopping, to map a random input to the noisy image will denoise the image. 
The denoising performance of DIP is often poor, and is dependent on the number of training epochs, which is hard to determine in advance. Self2Self [15] is another important method that achieves promising results. It utilizes the idea of the blind spot networks (reconstructing masked pixels) on a single image, but with dropout ensembling. However, this method is not computationally efficient, in that it requires long durations to denoise an image. According to the authors, it takes 1.2 hours to denoise one \(256\times 256\) image on a GPU. Compared to other blind spot networks, Self2Self achieves significantly better denoising scores, since it relies on ensembling, i.e., averaging the output of several networks. However, ensemble learning over smoothens the image, causing a loss of some details, despite the improvement in PSNR scores [12]. Similar to almost all supervised and self-supervised methods, both Self2Self and DIP use a UNet [13] or a variant of it as the backbone network in their architectures. A UNet typically has millions of parameters, making it unsuitable for compute limited applications. Our work departs from this scheme, by designing a shallow and simple network with few parameters. Supervised methodsachieve state-of-the-art performance by training a network end-to-end to map a noisy image to a clean one. Networks that work well are CNNs [15, 16], vision transformers [14], or MLP based architectures [13, 12]. Noise2Noise [11] yields excellent performance from training on two noisy images of the same static scene, without any ground truth images. Given that the noise has zero mean, training a network to map one noisy image to another noisy image of the same scene performs as well as mapping to the ground truth. While having access to a pair of noisy images of the same scene is in practice hard to achieve, the Noise2Noise method has motivated further research in self-supervised methods [12] that require only single noisy images. Self-supervised methodsare trained on datasets consisting of only noisy images. Noise2Void [17] and Noise2Self [10] are two blind spot prediction approaches for image denoising. Given a set of noisy images \(\{\mathbf{y}^{i}\}_{1}^{n}\), The idea is to minimize the loss \(\frac{1}{n}\sum_{i=1}^{n}\)\(\mathcal{L}(f_{\boldsymbol{\theta}}(M^{i}(\mathbf{y}^{i})),\mathbf{y}^{i})\), where \(\mathcal{L}\) is a loss function, \(f_{\boldsymbol{\theta}}\) is a network, and \(M^{i}\) is an operator that masks some pixels, hence the name blind spot. Assuming that the neighbouring pixels of a clean image are highly correlated, and that the noise pixels are independent, a network trained to reconstruct a masked pixel, can only predict the signal value from the neighbouring visible pixels, but not the noise. Blind spot networks require long training times and have low denoising quality. Probabilistic variations of such networks [13, 14] converge much faster, and use posterior mean estimation to achieve better quality. Those probabilistic variations of blind spot networks work well for a given artificial noise model, but a significant performance drop was shown when using such methods to denoise real world camera noise, since the natural noise is not well approximated by artificial noise [11]. Recently, several works [10, 11, 12] attempted to use Stein's unbiased risk estimator for Gaussian denoising. Such methods work well only for Gaussian noise and require the noise level to be known in advance. 
A more general framework is Noisier2Noise [13] which works for any known noise distribution. Noise is sampled and added to the noisy images to create noisier images. A network is then trained to map the noisier to the noisy images. However, working with double noisy images distorts the image even further, which degrades performance. The newly proposed Neighbour2Neighbour [11] builds on the Noise2Noise [10] method, where the assumptions are that the noise has zero mean and is pixel-wise independent. Neighbour2Neighbour extends Noise2Noise by enabling training without noisy image pairs. It does so by sub-sampling single noisy images to create pairs of noisy images, where Noise2Noise can be applied. Image sub-sampling is widely used in image processing tasks, such as compression [14] or as an augmentation technique to increase the training data. ## 3 Method Our method builds on the Noise2Noise [10], for training a network on pairs of noisy images, and the Neighbour2Neighbour (NB2NB) [11], which generates such pairs from a single noisy image. Our main idea is to generate a pair of noisy images from a single noisy image and train a small network only on this pair. We start with a brief summary of Noise2Noise and then introduce our method. ### Background: Noise2Noise and Neighbour2Neighbour Supervised denoising methods are typically neural networks \(f_{\mathbf{\theta}}\) that map a noisy image \(\mathbf{y}\) to an estimate \(f_{\mathbf{\theta}}(\mathbf{y})\) of the clean image \(\mathbf{x}\). Supervised denoising methods are typically trained on pairs of clean images \(\mathbf{x}\) and noisy measurements \(\mathbf{y}=\mathbf{x}+\mathbf{e}\), where \(\mathbf{e}\) is noise. We refer to supervised denoising as Noise2Clean (N2C). Neural networks can also be trained on different noisy observations of the same clean image. Noise2Noise (N2N) [10] assumes access to a set of pairs of noisy images \(\mathbf{y}_{1}=\mathbf{x}+\mathbf{e}_{1},\mathbf{y}_{2}=\mathbf{x}+\mathbf{e} _{2}\), where \(\mathbf{e}_{1},\mathbf{e}_{2}\) are independent noise vectors. A network \(f_{\mathbf{\theta}}\) is then trained to minimize the empirical risk \(\frac{1}{n}\sum_{i=1}^{n}\left\|f_{\mathbf{\theta}}(\mathbf{y}_{1}^{i})-\mathbf{y} _{2}^{i}\right\|_{2}^{2}\). This makes sense, since in expectation over such noisy instances, and assuming zero mean noise, training a network in a supervised manner to map a noisy image to another noisy image is equivalent to mapping it to a clean image i.e., \[\operatorname*{arg\,min}_{\mathbf{\theta}}\mathbb{E}\left[\left\|f_{\mathbf{\theta}}( \mathbf{y}_{1})-\mathbf{x}\right\|_{2}^{2}\right]=\operatorname*{arg\,min}_{ \mathbf{\theta}}\mathbb{E}\left[\left\|f_{\mathbf{\theta}}(\mathbf{y}_{1})-\mathbf{y} _{2}\right\|_{2}^{2}\right]. \tag{1}\] The proof is given in the supplementary material. In theory N2N training reaches the same performance as N2C training if the dataset is infinitely large. In practice, since the training set is limited in size, N2N falls slightly short of N2C. For example, N2N training with a UNet on 50k images gives a performance drop of only about 0.02 dB compared to N2C with a UNet. Despite the great performance of N2N, its usability is often limited, since it is difficult to obtain a pair of noisy images of the same static scene. For instance, the object being captured might be non-static, or the lighting conditions change rapidly. 
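To make the Noise2Noise objective concrete, the following is a minimal, hedged PyTorch sketch of a single training step on one noisy pair; `denoiser` stands for any image-to-image network and is not a specific architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def n2n_step(denoiser: nn.Module, optimizer: torch.optim.Optimizer,
             y1: torch.Tensor, y2: torch.Tensor) -> float:
    """One Noise2Noise update: regress the denoised first noisy observation
    onto the second noisy observation of the same scene (no clean target)."""
    optimizer.zero_grad()
    loss = F.mse_loss(denoiser(y1), y2)
    loss.backward()
    optimizer.step()
    return loss.item()
```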
Neighbour2Neighbour (NB2NB) [11] extends N2N and allows training only on a set of single noisy images, by sub-sampling a noisy image to create a pair of noisy images. Similar to N2N, NB2NB exhibits strong denoising performance when trained on many images. ### Zero-Shot Noise2Noise Our work extends Noise2Noise [11] and Neighbour2Neighbour [12] by enabling training on only one single noisy image. To avoid overfitting to the single image, we use a very shallow network and an explicit regularization term. Almost all self- or un-supervised denoising methods, including ours, rely on the premise that a clean natural image has different attributes than random noise. As shown in [12], a noisy image can be decomposed into a pair of downsampled images. Based on the premise that nearby pixels of a clean image are highly correlated and often have similar values, while the noise pixels are unstructured and independent, the downsampled pair of noisy images has similar signal but independent noise. This pair can therefore serve as an approximation of two noisy observations of the same scene, where one observation is used as the input, and the other as the target, as in N2N. Our approach is to first decompose the image into a pair of downsampled images and second train a lightweight network with regularization to map one downsampled image to the other. Applying the so-trained network to a noisy image yields the denoised image. We first explain how we generate the downsampled images, and then how we fit the network. Image Pair Downsampler. The pair downsampler takes as input an image \(\mathbf{y}\) of size \(H\times W\times C\) and generates two images \(D_{1}(\mathbf{y})\) and \(D_{2}(\mathbf{y})\), each of size \(H/2\times W/2\times C\). The downsampler generates those images by dividing the image into non-overlapping patches of size \(2\times 2\), taking an average of the diagonal pixels of each patch and assigning it to the first low-resolution image, then the average of the anti-diagonal pixels and assigning it to the second low-resolution image. See Figure 2 for an illustration of the pair downsampler. The downsampler is implemented with convolutions as follows. The first low-resolution image is obtained by applying a 2D convolution with stride two and fixed kernel to the original image as \(D_{1}(\mathbf{y})=\mathbf{y}\circledast\mathbf{k_{1}}\), and the second image is obtained by applying a 2D convolution with stride two and fixed kernel to the original image as \(D_{2}(\mathbf{y})=\mathbf{y}\circledast\mathbf{k_{2}}\). The convolutions are implemented channel-wise and therefore the downsampling scheme is applicable to any arbitrary number of input channels. Zero-shot image denoising method. Given a test image \(\mathbf{y}\) to denoise, our method is conceptually similar to first fitting a small image-to-image neural network \(f_{\boldsymbol{\theta}}\) to map the first downsampled image \(D_{1}(\mathbf{y})\) to the second one, \(D_{2}(\mathbf{y})\), by minimizing the loss \[\mathcal{L}(\boldsymbol{\theta})=\left\|f_{\boldsymbol{\theta}}(D_{1}(\mathbf{ y}))-D_{2}(\mathbf{y})\right\|_{2}^{2}. \tag{2}\] Figure 2: The Image Pair Downsampler decomposes an image into two images of half the spatial resolution by averaging diagonal pixels of \(2\times 2\) non-overlapping patches. 
In the above example the input is a \(4\times 4\) image, and the output is two \(2\times 2\) images. Once we fitted the network, we can apply it to the original noisy observation to estimate the denoised image as \(\hat{\mathbf{x}}=f_{\hat{\mathbf{\theta}}}(\mathbf{y})\). However, our experiments showed that residual learning, a symmetric loss, and an additional consistency-enforcing term are critical for good performance. We next explain the elements of our loss function. In residual learning, the network is optimized to fit the noise instead of the image. The loss then becomes \[\mathcal{L}(\mathbf{\theta})=\|D_{1}(\mathbf{y})-f_{\mathbf{\theta}}(D_{1}(\mathbf{y} ))-D_{2}(\mathbf{y})\|_{2}^{2}. \tag{3}\] Following [1], where a symmetric loss was used in the context of self-supervised pre-training of a siamese network, we additionally adopt a symmetric loss, which yields the residual loss: \[\mathcal{L}_{\text{res.}}(\mathbf{\theta})=\frac{1}{2}\Big{(}\|D_{1}( \mathbf{y})-f_{\mathbf{\theta}}(D_{1}(\mathbf{y}))-D_{2}(\mathbf{y})\|_{2}^{2}+ \tag{4}\] \[\|D_{2}(\mathbf{y})-f_{\mathbf{\theta}}(D_{2}(\mathbf{y}))-D_{1}( \mathbf{y})\|_{2}^{2}\Big{)}.\] In addition, we enforce consistency by ensuring that first denoising the image \(\mathbf{y}\) and then downsampling it, is similar to what we get when first downsampling \(\mathbf{y}\) and then denoising it, i.e., we consider a loss of the form: \[\mathcal{L}(\mathbf{\theta})=\|D(\mathbf{y})-f_{\mathbf{\theta}}(D(\mathbf{y}))-D( \mathbf{y}-f_{\mathbf{\theta}}(\mathbf{y}))\|_{2}^{2}. \tag{5}\] Again adopting a symmetric loss, the consistency loss becomes: \[\mathcal{L}_{\text{cons.}}(\mathbf{\theta})=\frac{1}{2}\Big{(}\|D_{1 }(\mathbf{y})-f_{\mathbf{\theta}}(D_{1}(\mathbf{y}))-D_{1}(\mathbf{y}-f_{\mathbf{ \theta}}(\mathbf{y}))\|_{2}^{2} \tag{6}\] \[+\|D_{2}(\mathbf{y})-f_{\mathbf{\theta}}(D_{2}(\mathbf{y}))-D_{2}( \mathbf{y}-f_{\mathbf{\theta}}(\mathbf{y}))\|_{2}^{2}\Big{)}.\] Note that for the residual loss, the network only has the downsampled images as input. Only in the consistency loss, the network gets to see the image in full spatial resolution. Including the consistency loss enables better denoising performance and helps to avoid overfitting. It can therefore be seen as a regularizing term. In summary, we minimize the loss \(\mathcal{L}(\mathbf{\theta})=\mathcal{L}_{\text{res.}}(\mathbf{\theta})+\mathcal{L}_{ \text{cons.}}(\mathbf{\theta})\) using gradient descent, which yields the network parameters \(\hat{\mathbf{\theta}}\). With those, we estimate the denoised image as \(\hat{\mathbf{x}}=\mathbf{y}-f_{\hat{\mathbf{\theta}}}(\mathbf{y})\). Note that only the network parameters \(\mathbf{\theta}\) are optimized during the gradient descent updates, since the downsampling operations \(D_{1}\) and \(D_{2}\) are fixed. Convergence typically requires 1k to 2k iterations, which thanks to using a lightweight network takes less than half a minute on a GPU and around one minute on a CPU. NetworkMany supervised and self-supervised methods use a relatively large network, often a UNet [11]. Instead, we use a very simple two-layer image-to-image network. It consists of only two convolutional operators with kernel size 3 \(\times\) 3 followed by one operator of 1\(\times\)1 convolutions. This network has about 20k parameters, which is small compared to typical denoising networks. An exact comparison of the network sizes can be found in section 4.4. There are no normalization or pooling layers. 
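Putting the pieces above together, the following is a compact PyTorch sketch of the whole zero-shot procedure: the fixed-kernel pair downsampler, a lightweight residual network, the symmetric residual and consistency losses, and the final denoising step. The hidden width, learning rate, and iteration count are assumptions consistent with the description rather than the released hyperparameters; the linked colab notebook contains the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def pair_downsampler(img):
    """Split img (B, C, H, W) into two half-resolution images by averaging the
    diagonal / anti-diagonal pixels of each non-overlapping 2x2 patch."""
    c = img.shape[1]
    k1 = torch.tensor([[[[0.5, 0.0], [0.0, 0.5]]]], device=img.device).repeat(c, 1, 1, 1)
    k2 = torch.tensor([[[[0.0, 0.5], [0.5, 0.0]]]], device=img.device).repeat(c, 1, 1, 1)
    d1 = F.conv2d(img, k1, stride=2, groups=c)  # channel-wise fixed-kernel convolution
    d2 = F.conv2d(img, k2, stride=2, groups=c)
    return d1, d2


class TinyDenoiser(nn.Module):
    """Lightweight network (two 3x3 convs plus one 1x1 conv); the hidden width of
    48 channels is an assumption chosen to land near ~20k parameters."""

    def __init__(self, channels=3, width=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.LeakyReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(),
            nn.Conv2d(width, channels, 1),
        )

    def forward(self, x):  # the network predicts the noise (residual learning)
        return self.net(x)


def zsn2n_loss(f, y):
    """Symmetric residual loss (Eq. 4) plus symmetric consistency loss (Eq. 6)."""
    d1, d2 = pair_downsampler(y)
    # Residual loss on the downsampled pair.
    loss_res = 0.5 * (F.mse_loss(d1 - f(d1), d2) + F.mse_loss(d2 - f(d2), d1))
    # Consistency: denoise-then-downsample should match downsample-then-denoise.
    dn1, dn2 = pair_downsampler(y - f(y))
    loss_cons = 0.5 * (F.mse_loss(d1 - f(d1), dn1) + F.mse_loss(d2 - f(d2), dn2))
    return loss_res + loss_cons


def denoise(y, iters=2000, lr=1e-3):
    """Fit the tiny network to the single noisy image y (B, C, H, W, float tensor)
    and return the denoised estimate y - f(y)."""
    f = TinyDenoiser(channels=y.shape[1]).to(y.device)
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = zsn2n_loss(f, y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return y - f(y)
```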
The low parameter count and simple structure enable fast denoising even when deployed on a CPU. In the ablation studies we show that using a UNet instead of a lightweight network leads to overfitting and much worse denoising performance. ## 4 Experiments We compare our denoising algorithm (ZS-N2N) to several baselines. The baselines include dataset based methods, as well as other zero-shot methods. For the dataset based methods, we include both supervised (with clean images) and self-supervised (only noisy images) methods. We test all methods on artificial and real-world noise. We provide ablation studies in the supplementary material. The results highlight the dependency of dataset based methods on the dataset they are trained on and suggest that given a small training set, they are outperformed by dataset free ones. Furthermore, the experiments show that methods based on noise models achieve good performance for the specific noise model, but do not generalise to other distributions. Concerning the dataset and noise model free methods, our proposed method is either on par with or better than other baselines on Gaussian, Poisson, and real-world camera and microscope noise. Our method only falls short of Self2Self [14] on high noise levels; however, it requires only \(\frac{1}{200}\) of the denoising time of Self2Self and 2% of its memory. Moreover, Self2Self's performance on low noise levels is insufficient. Therefore, considering denoising quality, generalisation, and computational resources, our method achieves a better trade-off compared to existing methods as shown in Figure 1. ### Baselines We compare to Noise2Clean (N2C) with a UNet, which is the current state-of-the-art denoising algorithm. There exist several other networks that perform on par with the UNet, such as DnCNN [13] and RED30 [12], but the UNet is orders of magnitude faster, since it is not very deep, and has a multi-resolution structure. The UNet is therefore the standard choice in all recent denoising papers [10, 11, 12, 13]. For the self-supervised methods, we compare to Neighbour2Neighbour (NB2NB) [13] and Noise2Void (N2V) [11]. We exclude the methods that require an explicit noise model, such as [10, 13, 14], since these methods work well on synthetic denoising tasks for the given noise distribution, but fail to generalize to unknown noise distributions or real-world noise [13, 12]. This is due to the fact that the synthetic noise is insufficient for simulating real camera noise, which is signal-dependent and substantially altered by the camera's imaging system. Regarding the zero-shot methods, which are most similar to ours, we compare to the deep learning based algorithms DIP [15] and Self2Self (S2S) [14], and also to the classical algorithm BM3D [1]. Note that apart from our method (and BM3D), all baselines use a U-Net or a variation of it as a denoising backbone. The performance of DIP is very sensitive to the number of gradient descent steps. We used the ground truth images to determine the best early stopping iteration. The DIP results can therefore be seen as an over-optimistic estimate of the method's performance. For a fair comparison, we report the results of the best performing model for the other baselines. A comparison of the sensitivity of the methods to the number of optimization steps can be found in the supplementary material. The original implementation of S2S uses an ensemble of multiple networks, i.e., averaging the outputs of several networks. All other baselines do not utilize ensembling or averaging. 
For a fair comparison, we additionally report the results of S2S without any ensembling, which we denote by S2S*. S2S denotes the original implementation with an ensemble of 50 networks. ### Synthetic Noise The dataset based methods (N2C, NB2NB, N2V) are trained on 500 colour images from ImageNet [10]. All methods are tested on the Kodak24\({}^{1}\) and McMaster18 [15] datasets. All training and test images are center-cropped to patches of size 256 \(\times\) 256. Footnote 1: [http://rok.us/graphics/kodak/](http://rok.us/graphics/kodak/) We examine Gaussian and Poisson noise with noise levels \(\sigma\) and \(\lambda\), respectively. We consider the fixed noise levels \(\sigma,\lambda=10,25,50\). The \(\sigma\) values for Gaussian noise correspond to pixel values in the interval [0,255], while the \(\lambda\) values for Poisson noise correspond to values in the interval [0,1]. For the dataset based methods, we also consider blind denoising during training with the range of noise levels \(\sigma,\lambda\in[10,50]\). During training, a \(\sigma,\lambda\) value is sampled uniformly from the given range for each image in each training epoch, unlike the fixed noise levels, where all training images are contaminated with the same noise level. Blind denoising is what is used in practice, since an exact noise level is typically not given, but rather a range of noise levels. In table 1, we present the denoising performance of the different methods. For the dataset based methods, "\(\sigma,\lambda\) known" denotes that the network trained on that exact noise level is used for testing, while "unknown" denotes blind denoising, where the network trained on the range of noise levels [10,50] is used for testing. BM3D requires as input the value of the noise level. For Gaussian denoising the known \(\sigma\) value was used, while for Poisson denoising the noise level was estimated using the method in [15]. Note that ZS-N2N, DIP, and S2S do not utilize any prior information on the noise distribution or level. As seen from the results, the dataset based methods often fall slightly short of the dataset free methods. This is due to the fact that they were only trained on 500 images, whereas they reach good performance when trained on larger datasets. In the supplementary material, we show that when N2C is trained on 4000 images, it outperforms all other baselines and its performance can keep improving with more training data. Another drawback of dataset \begin{table} \begin{tabular}{|c c c c|c c|c c c|} \hline Noise & \multicolumn{2}{c|}{Method} & \multicolumn{3}{c|}{Kodak24} & \multicolumn{3}{c|}{McMaster18} \\ \hline \hline \multirow{6}{*}{Gaussian} & \multirow{6}{*}{_N2C_} & \multirow{6}{*}{_N2C_} & \(\sigma\) known?
& \(\sigma=10\) & \(\sigma=25\) & \(\sigma=50\) & \(\sigma=10\) & \(\sigma=25\) & \(\sigma=50\) \\ \cline{3-10} & & & yes & 33.45 & 28.27 & 25.47 & 33.03 & 28.46 & **25.86** \\ & & & no & 32.16 & 28.18 & 24.45 & 31.97 & 28.26 & 24.78 \\ & & & yes & 33.01 & 27.90 & 25.02 & 32.63 & 28.01 & 25.25 \\ & & & no & 31.79 & 27.80 & 24.15 & 31.19 & 27.85 & 23.95 \\ & & & yes & 30.19 & 26.21 & 24.07 & 30.95 & 26.50 & 23.94 \\ & & & no & 28.95 & 26.03 & 23.19 & 29.64 & 26.31 & 22.67 \\ \cline{2-10} & & & ZS-N2N (ours) & - & 33.69 & **29.07** & 24.81 & 34.21 & 28.80 & 24.02 \\ & & DIP & - & 32.28 & 27.38 & 23.95 & 33.07 & 27.61 & 23.03 \\ & & S2S & - & 29.54 & 28.39 & **26.22** & 30.78 & 28.71 & 25.03 \\ & & S2S* & - & 26.93 & 26.29 & 24.83 & 27.64 & 26.48 & 23.79 \\ & & BM3D & yes & **33.74** & 29.02 & 25.51 & **34.51** & **29.21** & 24.51 \\ \hline \hline \multirow{6}{*}{Poisson} & \multirow{6}{*}{_N2C_} & \multirow{6}{*}{_N2C_} & \(\lambda\) known? & \(\lambda=50\) & \(\lambda=25\) & \(\lambda=10\) & \(\lambda=50\) & \(\lambda=25\) & \(\lambda=10\) \\ \cline{3-10} & & & yes & 29.42 & 27.49 & 26.25 & 29.89 & 28.20 & 26.42 \\ \cline{3-10} & & & no & 28.92 & 27.14 & 23.13 & 28.62 & 27.51 & 24.32 \\ \cline{3-10} & & & yes & 29.19 & 27.01 & 25.71 & 29.41 & 27.79 & 25.95 \\ \cline{3-10} & & & no & 28.53 & 26.88 & 23.60 & 28.03 & 27.66 & 24.58 \\ \cline{3-10} & & & yes & 27.73 & 25.55 & 23.77 & 27.86 & 25.65 & 23.47 \\ \cline{3-10} & & & no & 27.04 & 25.28 & 21.93 & 26.34 & 25.52 & 22.07 \\ \cline{3-10} & & & ZS-N2N (ours) & - & **29.45** & 27.52 & 24.92 & **30.36** & 28.41 & 25.75 \\ \cline{3-10} & & DIP & - & 27.51 & 25.84 & 23.81 & 28.73 & 27.37 & 24.67 \\ \cline{3-10} & & S2S & - & 28.89 & **28.31** & **27.29** & 30.11 & **29.40** & **27.71** \\ \cline{3-10} & & & S2S* & - & 26.75 & 26.40 & 25.63 & 27.55 & 27.24 & 26.39 \\ \cline{3-10} & & BM3D & no & 28.36 & 26.58 & 24.20 & 27.33 & 24.77 & 21.59 \\ \hline \end{tabular} \end{table} Table 1: PSNR scores in dB for Gaussian and Poisson denoising. Best result is in **bold**, second best result is underlined. The dataset based methods are _italicized_. Note DIP’s mediocre scores and BM3D’s performance drop between Gaussian and Poission noise. S2S has significantly lower scores in low noise as seen with \(\sigma=10\) and its ensemble free version S2S* has inadequate performance. Denoised samples can be found in the supplementary material. based methods is that they are sensitive to the data they are trained on. They experience a performance drop when trained on a range of noise levels as opposed to a specific noise level as the test set. Regarding the zero-shot methods, DIP exhibited worse scores in all simulations. BM3D is tailored to work well for Gaussian denoising, where the exact noise variance is known and required as input. However, its performance dropped for Poisson noise, where the noise level was estimated. ZS-N2N and S2S do not rely on a specific noise model and therefore work consistently well for both Gaussian and Poisson noise. However, S2S suffers from at least two drawbacks. The first is it heavily relies on ensembling to achieve good scores as seen by comparing the results of S2S with S2S*. Despite improving the scores, ensembling oversmoothens the image causing a loss in some visual features [1]. Note that all other baselines are ensemble free. The second drawback is that it performs worse than all other baselines on low noise levels, as seen in the Gaussian noise with \(\sigma=10\). 
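As a brief note on the evaluation protocol behind Table 1, the synthetic corruption and the PSNR scores can be reproduced with a short snippet like the one below. The clean-image range of [0, 1] and the exact clipping-free formulation are assumptions based only on the noise-level conventions stated above (\(\sigma\) on the [0, 255] pixel scale, \(\lambda\) on the [0, 1] scale).

```python
import torch

def add_gaussian_noise(x, sigma):
    # x: clean image with values in [0, 1]; sigma is given on the [0, 255] scale.
    return x + torch.randn_like(x) * (sigma / 255.0)

def add_poisson_noise(x, lam):
    # x: clean image in [0, 1]; lam is the Poisson noise level on the [0, 1] scale.
    return torch.poisson(lam * x) / lam

def psnr(x_hat, x, data_range=1.0):
    # Peak signal-to-noise ratio in dB, the metric reported in Table 1.
    mse = torch.mean((x_hat - x) ** 2)
    return 10.0 * torch.log10(data_range ** 2 / mse)
```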
Considering that DIP performs poorly, that BM3D only works well for Gaussian noise, and that S2S's performance without ensembling and on low noise levels is unsatisfactory, our method, ZS-N2N, is the only dataset free denoising algorithm that performs well on different noise distributions and levels. ### Real-World Noise Camera noise: Following [15], we evaluate on the PolyU dataset [16], which consists of high-resolution images from various scenes captured by 5 cameras from the 3 leading camera brands: Canon, Nikon, and Sony. We also consider the SIDD [1], which consists of images captured by several smartphone cameras under different lighting conditions and noise patterns. Since the computational cost for running S2S is high, we randomly choose 20 images from both datasets to test on. The SIDD validation set has images of size \(256\times 256\). For consistency, we center-crop the PolyU images to patches of size \(256\times 256\). The results are shown in table 2. All methods perform similarly except for BM3D and the ensemble free version of S2S, which exhibit a notable performance drop. Microscope noise: We additionally evaluate on the Fluorescence Microscopy dataset [14], which contains real grayscale fluorescence images obtained with commercial confocal, two-photon, and wide-field microscopes and representative biological samples such as cells, zebrafish, and mouse brain tissues. We pick random images from the test set to test on. We also compare to Noise2Fast (N2F) [13], for which code for denoising grayscale images is available. The results are depicted in table 3. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Dataset & ZS-N2N & DIP & S2S & S2S* & BM3D \\ \hline PolyU & 36.92 & **37.07** & 37.01 & 33.12 & 36.11 \\ \hline SIDD & 34.07 & **34.31** & 33.98 & 30.77 & 28.19 \\ \hline \end{tabular} \end{table} Table 2: Denoising PSNR in dB on real world camera noise. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Image & ZS-N2N & DIP & S2S & S2S* & BM3D & N2F \\ \hline Photon BPAE & 30.73 & 29.22 & 30.90 & 29.49 & 27.19 & **30.93** \\ Photon Mice & 31.42 & 30.01 & **31.51** & 29.99 & 29.48 & 31.07 \\ Confocal BPAE & 35.85 & 35.51 & 31.01 & 29.54 & 33.23 & **36.01** \\ \hline Average & **32.67** & 31.58 & 31.14 & 29.67 & 29.97 & **32.67** \\ \hline \end{tabular} \end{table} Table 3: PSNR in dB on real world microscope noise. Our method and Noise2Fast achieve similar scores and slightly outperform the other baselines. Despite the similarity in scores, when inspecting the denoised images visually, we see differences: our method produces visually sharper images and preserves slightly more details, while the Noise2Fast images are relatively smooth. This is most noticeable on images with fine details, such as MRI images; see Figure 3 for a knee image from the fastMRI dataset [20]. The blurriness in the Noise2Fast images is likely due to the downsampling scheme used, which drops some pixel values, and the ensembling performed to obtain the final image estimate, which oversmoothens the image [4]. Our method, on the other hand, preserves all pixel values during downsampling, and is ensemble free. ### Computational Efficiency In this section we focus on the computational efficiency. We consider the denoising time and the memory requirements represented by the number of network parameters. Since in some applications a GPU is not available [1], we additionally consider the denoising time on a CPU. The GPU tested is a Quadro RTX 6000 and the CPU is an Intel Core i9-9940X 3.30GHz.
Figure 3: Visual comparison between our method and Noise2Fast for denoising Gaussian noise on a knee MRI. Both methods achieve similar PSNR, but notice how the center and left edge are blurry and oversmooth in Noise2Fast. Our method produces a sharper image with less loss of details. In table 4 we display the time required to denoise one colour image of size \(256\times 256\) at inference, as well as the total number of trainable parameters of a model. The dataset based methods are trained for long durations, but after training, the network parameters are fixed, and inference is almost instantaneous, since it is just a forward pass through the model. The time taken for denoising is therefore negligible compared to the zero-shot methods, whose parameters are optimized for each test image separately. In the original implementation of S2S, the authors report a denoising time of 1.2 hours for a \(256\times 256\) colour image on GPU. However, we noticed that only half of the gradient update iterations are needed for convergence. We therefore report only half of their GPU time. Concerning the denoising time, dataset based methods are the fastest, since a forward pass through a fixed network requires only milliseconds. Regarding the deep learning based zero-shot methods, ZS-N2N is significantly more computationally efficient. Specifically, on CPU it is 200 times and 35 times faster than S2S and DIP, respectively, and has only 2% and 1% of their memory requirements. Only the classical BM3D is computationally more efficient than ZS-N2N. ### Discussion Dataset based methods typically achieve state-of-the-art results, but our experiments revealed two of their shortcomings: they do not perform well when trained on small datasets, and their performance drops when the test data differs from the training data, as seen by varying the noise levels. This highlights the importance of dataset free denoising algorithms. Methods that rely on an explicit model of the noise distribution, such as Noisier2Noise [14] and Anscombe [13], or those tailored to work well for specific distributions, such as BM3D, do not generalize well to other distributions. Their performance therefore degrades when the noise distribution is unknown, or the noise level must be estimated. This is evident from BM3D's competitive performance on Gaussian noise, but its failure to keep up with the other baselines on Poisson and real-world noise. These findings highlight the advantage of noise model free techniques. Regarding the three dataset free and noise model free methods considered, DIP was often lagging behind S2S and ZS-N2N, despite using the ground truths to find the best possible early stopping iteration. S2S's performance without ensembling is inadequate, and even with ensembling, it does not work well on low noise levels. Moreover, it requires more than 0.5 hours to denoise an image on a GPU and 4.5 hours on a CPU. Except for ZS-N2N, all deep learning based baselines have millions of parameters, making them computationally expensive. Considering ZS-N2N's ability to generalize to various denoising conditions with relatively fast denoising time, very few parameters, and CPU compatibility, we can conclude that it offers a good trade-off between denoising quality and computational resources. ## 5 Conclusion We proposed a novel zero-shot image denoising algorithm that does not require any training examples or knowledge of the noise model or level.
Our work uses a simple 2-layer network, and allows denoising in a relatively short period of time even when executed without a GPU. The method can perform well on simulated noise as well as real-world camera and microscope noise, and achieves a good trade-off between generalization, denoising quality and computational resources compared to existing dataset free methods. \begin{table} \begin{tabular}{|c||c|c|c||c|c|c|c|} \hline Method & N2C & NB2NB & N2V & ZS-N2N & DIP & S2S & BM3D \\ \hline GPU time & - & - & - & 20 sec. & 3 min. & 35 min. & 4 sec. \\ CPU time & - & - & - & 80 sec. & 45 min. & 4.5 hr. & 4 sec. \\ Network size & 3.3M & 1.3M & 2.2M & 22k & 2.2M & 1M & - \\ \hline \end{tabular} \end{table} Table 4: Computational Resources. **First and Second Rows:** Time taken to denoise one image on average on GPU and CPU. The time for the dataset based methods is discarded, since it is negligible. BM3D does not benefit from the GPU, as there is no optimization involved. **Bottom Row:** Number of parameters of a network. ## Acknowledgements The authors are supported by the Institute of Advanced Studies at the Technical University of Munich, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456465471, 464123524, the German Federal Ministry of Education and Research, and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
2301.00527
Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data
In this paper, we learn a diffusion model to generate 3D data on a scene-scale. Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., categorical distribution, to assign multiple objects into semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deploying. To the best of our knowledge, our work is the first to apply discrete and latent diffusion for 3D categorical data on a scene-scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion
Jumin Lee, Woobin Im, Sebin Lee, Sung-Eui Yoon
2023-01-02T05:00:11Z
http://arxiv.org/abs/2301.00527v1
# Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data ###### Abstract In this paper, we learn a diffusion model to generate 3D data on a scene-scale. Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., categorical distribution, to assign multiple objects into semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deploying. To the best of our knowledge, our work is the first to apply discrete and latent diffusion for 3D categorical data on a scene-scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model. Our code and models are available at [https://github.com/zoomin-lee/scene-scale-diffusion](https://github.com/zoomin-lee/scene-scale-diffusion). ## 1 Introduction Learning to generate 3D data has received much attention thanks to its high performance and promising downstream tasks. For instance, a 3D generative model with a diffusion probabilistic model [2] has shown its effectiveness in 3D completion [2] and text-to-3D generation [1, 3]. While recent models have focused on 3D object generation, we aim beyond a single object by generating a 3D scene with multiple objects. In Fig. 1(b), we show a sample scene from our generative model, where we observe the plausible placement of the objects, as well as their correct shapes. Compared to the existing object-scale model [1] (Fig. 1(a)), our scene-scale model can be used in a broader application, such as semantic scene completion (Sec. 4.3), where we complete a scene given a sparse LiDAR point cloud. We base our scene-scale 3D generation method on a diffusion model, which has shown remarkable performance in modeling complex real-world data, such as realistic 2D images [4, 5, 6] and 3D objects [1, 2, 3]. We develop and evaluate diffusion models learning a scene-scale 3D categorical distribution.
First, we utilize categorical data for a voxel entity, since we have multiple objects, in contrast to existing work that generates a single object. Second, we extend discrete diffusion models to learn this scene-scale categorical distribution, and we validate that a latent diffusion model can reduce the computation cost. In addition, we learn a conditional distribution with our diffusion model to perform semantic scene completion (SSC), so that it can generate a reasonable scene in a realistic scenario with a sparse and partial observation. Lastly, we show the effectiveness of our method in terms of the unconditional and conditional (SSC) generation tasks on the CarlaSC dataset [9] (Sec. 4). In particular, we show that our generative model can outperform a discriminative model in the SSC task. ## 2 Related Work ### Semantic Scene Completion Leveraging 3D data for semantic segmentation has been studied from different perspectives.
Vision sensors (e.g., RGB-D camera and LiDAR) provide depth information from a single viewpoint, giving more information about the world. One of the early approaches is using an RGB-D (i.e., color and depth) image with a 2D segmentation map [10]. In addition, using data in a 3D coordinate system has been extensively studied. 3D semantic segmentation is the extension of 2D segmentation, where a classifier is applied to point clouds or voxel data in 3D coordinates [11, 12]. One of the recent advances in 3D semantic segmentation is semantic scene completion (SSC), where a partially observable space - observed via an RGB-D image or point clouds - should be densely filled with class labels [13, 14, 15, 16]. In SSC, a model gets the point cloud obtained from one viewpoint; thus, it contains multiple partial objects (e.g., one side of a car). Then, the model not only reconstructs the unobserved shape of the car but also labels it as a car. Here, the predictions about the occupancy and the semantic labels can mutually benefit [17]. Due to the partial observation, filling in occluded and sparse areas is the biggest hurdle. Thus, a generative model is effective for 3D scene completion, as in 2D completion tasks [18, 19]. Chen et al. [20] demonstrate that generative adversarial networks (GANs) can be used to improve the plausibility of a completion result. However, a diffusion-based generative model has yet to be explored for 3D semantic segmentation maps. We speculate that using a diffusion model has good prospects, thanks to the larger size of the latent and the capability to deal with high-dimensional data. In this work, we explore a diffusion model in the context of 3D semantic scene completion. Diffusion models have been rapidly growing and they perform remarkably well on real-world 2D images [21]. We therefore delve into diffusion models to generate 3D semantic segmentation maps, and we hope to provide the research community a useful road map towards generating 3D semantic scene maps. ### Diffusion Models Recent advances in diffusion models have shown that a deep model can learn more diverse data distributions via a diffusion process [5]. A diffusion process is introduced to adopt a simple distribution (e.g., Gaussian) to learn a complex distribution [4]. In particular, diffusion models show impressive results for image generation [6] and conditional generation [22, 23] at high resolution compared to GANs. GANs are known to suffer from the mode collapse problem and struggle to capture complex scenes with multiple objects [24]. On the other hand, diffusion models have the capacity to escape mode collapse [6] and generate complex scenes [23, 25], since likelihood-based methods achieve better coverage of the full data distribution. Diffusion models have been studied to a large extent for high-dimensional continuous data. However, they often lack the capacity to deal with discrete data (_e.g._, text and segmentation maps), since the discreteness of the data is not fully covered by continuous representations. To tackle such discreteness, discrete diffusion models have been studied for various applications, such as text generation [7, 8] and low-dimensional segmentation map generation [7]. Since both continuous and discrete diffusion models estimate the density of image pixels, a higher image resolution means higher computation. To address this issue, latent diffusion models [23, 26] operate the diffusion process on a latent space of lower dimension.
To work on the compressed latent space, a Vector-Quantized Variational Auto-Encoder (VQ-VAE) [27] is employed. Latent diffusion models consist of two stages: VQ-VAE and diffusion. VQ-VAE trains an encoder to compress the image into a latent space. Equipped with VQ-VAE, autoregressive models [28, 29] have shown impressive performance. Recent advances in latent diffusion models further improve the generative performance by ameliorating the unidirectional bias and accumulated prediction error in existing models [23, 26]. Our work introduces an extension of discrete diffusion models for high-resolution 3D categorical voxel data. Specifically, we show the effectiveness of a diffusion model in terms of unconditional and conditional generation tasks, where the condition is a partial observation of a scene (_i.e._, SSC). Further, we propose a latent diffusion model for 3D categorical data to reduce the computation load caused by high-resolution segmentation maps. ### Diffusion Models for 3D Data Diffusion models have been used for 3D data. Until recently, research has been mainly conducted for 3D point clouds with _xyz_-coordinates. PVD [2] applies continuous diffusion on point-voxel representations for object shape generation and completion without additional shape encoders. LION [3] uses latent diffusion for object shape completion (_i.e_., conditional generation) with additional shape encoders. In this paper, we aim to learn 3D categorical data (_i.e_., 3D semantic segmentation maps) with a diffusion model. The study of object generation has shown promising results, but as far as we know, our work is the first to generate a 3D scene with multiple objects using a diffusion model. Concretely, our work explores discrete and latent diffusion models to learn a distribution of volumetric semantic scene segmentation maps. We develop the models in an unconditional and a conditional generation setting; the latter can be used directly for the SSC task. ## 3 Method Our goal is to learn a data distribution \(p(\mathbf{x})\) using diffusion models, where each data point \(\mathbf{x}\sim p(\mathbf{x})\) represents a 3D segmentation map described with the one-hot representation. 3D segmentation maps are samples from the data distribution \(p(\mathbf{x})\), which is the categorical distribution \(\text{Cat}(k_{0},k_{1},\cdots,k_{M})\) with \(M+1\) probabilities for the free label \(k_{0}\) and the \(M\) main categories. The discrete diffusion models can learn the data distribution by recovering the noised data, which is destroyed through successive transitions of the labels [8]. Our method aims to learn a distribution of voxelized 3D segmentation maps with discrete diffusion (Sec. 3.1). Specifically, it includes unconditional and conditional generation, where the latter corresponds to the SSC task. In addition, we explore a latent diffusion model for 3D segmentation maps (Sec. 3.2). ### Discrete Diffusion Models Fig. 2(a) summarizes the overall process of discrete diffusion, consisting of a forward process and a reverse process; the former gradually adds noise to the data and the latter learns to denoise the noised data. In the forward process of the discrete diffusion, an original segmentation map \(\mathbf{x}_{0}\) is gradually corrupted into a \(t\)-step noised segmentation map \(\mathbf{x}_{t}\) with \(1\leq t\leq T\). Each forward step can be defined by a Markov uniform transition matrix \(Q_{t}\)[8] as \(\mathbf{x}_{t}=\mathbf{x}_{t-1}Q_{t}\).
Based on the Markov property, we can derive the \(t\)-step noised segmentation map \(\mathbf{x}_{t}\) directly from the original segmentation map \(\mathbf{x}_{0}\), \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\), with a cumulative transition matrix \(\bar{Q}_{t}=Q_{1}Q_{2}\cdots Q_{t}\): \[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\text{Cat}(\mathbf{x}_{t};p=\mathbf{x}_{0}\bar{Q}_{t}). \tag{1}\] In the reverse process parametrized by \(\theta\), a learnable model is used to reverse a noised segmentation map by \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). Specifically, we use a reparametrization trick [5] to make the model predict a denoised map \(\tilde{\mathbf{x}}_{0}\) and subsequently obtain the reverse process \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\): \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\tilde{\mathbf{x}}_{0})p_{\theta}(\tilde{\mathbf{x}}_{0}|\mathbf{x}_{t}), \tag{2}\] \[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\tilde{\mathbf{x}}_{0})=\frac{q(\mathbf{x}_{t}|\mathbf{x}_{t-1},\tilde{\mathbf{x}}_{0})q(\mathbf{x}_{t-1}|\tilde{\mathbf{x}}_{0})}{q(\mathbf{x}_{t}|\tilde{\mathbf{x}}_{0})}. \tag{3}\] We optimize a joint loss that consists of the KL divergence of the forward process \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\) from the reverse process \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), and, as an auxiliary loss, of the original segmentation map \(q(\mathbf{x}_{0})\) from the reconstructed one \(p_{\theta}(\tilde{\mathbf{x}}_{0}|\mathbf{x}_{t})\): \[\mathcal{L}=D_{KL}(\,q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\,\|\,p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\,)+w_{0}D_{KL}(\,q(\mathbf{x}_{0})\,\|\,p_{\theta}(\tilde{\mathbf{x}}_{0}|\mathbf{x}_{t})\,), \tag{4}\] where \(w_{0}\) is an auxiliary loss weight. Unlike existing discrete diffusion models [7, 8], our goal is to learn the distribution of 3D data. Thus, to better handle 3D data, we use a point cloud segmentation network [30] with modifications for discrete data and time embedding. Conditional generation. We propose discrete diffusion for Semantic Scene Completion (SSC) with conditional generation. SSC jointly estimates a scene's complete geometry and semantics, given a sparse occupancy map \(\mathbf{s}\). Thus, it introduces a condition into Eq. 2, resulting in: \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{s})=q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\tilde{\mathbf{x}}_{0})p_{\theta}(\tilde{\mathbf{x}}_{0}|\mathbf{x}_{t},\mathbf{s}), \tag{5}\] where \(\mathbf{s}\) is a sparse occupancy map. We give the condition by concatenating a sparse occupancy map \(\mathbf{s}\) with a corrupted input \(\mathbf{x}_{t}\). Figure 2: Overview of (a) Discrete Diffusion Models and (b) Latent Diffusion Models. Discrete diffusion models conduct the diffusion process in voxel space, whereas latent diffusion models operate the diffusion process in latent space.
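For concreteness, the forward corruption of Eq. (1) with a uniform transition matrix can be sketched in a few lines of PyTorch. The exact form of \(Q_t\) (keep a label with probability \(1-\beta_t\), otherwise resample it uniformly) and the \(\beta_t\) schedule are assumptions following common multinomial-diffusion formulations, not a statement of the paper's exact choice.

```python
import torch

def uniform_transition(K, beta_t):
    # Assumed uniform transition matrix: with prob. (1 - beta_t) keep the label,
    # with prob. beta_t resample it uniformly over the K classes.
    return (1.0 - beta_t) * torch.eye(K) + beta_t * torch.full((K, K), 1.0 / K)

def q_xt_given_x0(x0_onehot, betas, t):
    # x0_onehot: (..., K) one-hot float voxel labels; betas: per-step noise rates.
    K = x0_onehot.shape[-1]
    Q_bar = torch.eye(K)
    for s in range(t):                        # cumulative transition Q_1 Q_2 ... Q_t
        Q_bar = Q_bar @ uniform_transition(K, betas[s])
    probs = x0_onehot @ Q_bar                 # Cat(x_t; p = x_0 * Q_bar_t), Eq. (1)
    idx = torch.distributions.Categorical(probs=probs).sample()
    return torch.nn.functional.one_hot(idx, K).float()
```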
### Latent Diffusion Models Fig. 2b provides an overview of latent diffusion on 3D segmentation maps. Latent diffusion models project the 3D segmentation maps into a smaller latent space and operate the diffusion process on the latent space instead of the high-dimensional input space. A latent diffusion model benefits from a lower training computational cost and faster inference by processing diffusion in a lower dimensional space. To encode a 3D segmentation map into a latent representation, we use a Vector Quantized Variational AutoEncoder (VQ-VAE) [27]. VQ-VAE extends the VAE by adding a discrete learnable codebook \(E=\{\mathbf{e}_{n}\}_{n=1}^{N}\in\mathbb{R}^{N\times d}\), where \(N\) is the size of the codebook and \(d\) is the dimension of the codes. The encoder \(\mathcal{E}\) encodes 3D segmentation maps \(\mathbf{x}\) into a latent \(\mathbf{z}=\mathcal{E}(\mathbf{x})\), and the quantizer \(VQ(\cdot)\) maps the latent \(\mathbf{z}\) to a quantized latent \(\mathbf{z}_{q}\), which is the closest codebook entry \(\mathbf{e}_{n}\) (a minimal sketch of this quantization step is given below). Note that the latent \(\mathbf{z}\in\mathbb{R}^{h\times w\times z\times d}\) has a smaller spatial resolution than the segmentation map \(\mathbf{x}\). Then the decoder \(\mathcal{D}\) reconstructs the 3D segmentation maps from the quantized latent, \(\tilde{\mathbf{x}}=\mathcal{D}(VQ(\mathcal{E}(\mathbf{x})))\). The encoder \(\mathcal{E}\), the decoder \(\mathcal{D}\), and the codebook \(E\) can be trained end-to-end using the following loss function: \[\mathcal{L}_{VQVAE}=-\sum_{k}w_{k}\mathbf{x}_{k}\log(\tilde{\mathbf{x}}_{k})+\|sg(\mathbf{z})-\mathbf{z}_{q}\|_{2}^{2}+\|\mathbf{z}-sg(\mathbf{z}_{q})\|_{2}^{2}, \tag{6}\] where \(w_{k}\) is a class weight and \(sg(\cdot)\) is the stop-gradient operation. Training the latent diffusion model is similar to that of discrete diffusion: discrete diffusion models diffuse between labels, whereas latent diffusion models diffuse between codebook indices using the Markov uniform transition matrix \(Q_{t}\)[8]. ## 4 Experiments In this section, we empirically study the effectiveness of the diffusion models on 3D voxel segmentation maps. We divide the following sub-sections into the learning of the unconditional data distribution \(p(\mathbf{x})\) (Sec. 4.2) and the conditional data distribution \(p(\mathbf{x}|\mathbf{s})\) given a sparse occupancy map \(\mathbf{s}\) (Sec. 4.3); note that the latter corresponds to semantic scene completion (SSC). ### Implementation Details **Dataset.** Following prior work [9], we employ the CarlaSC dataset - a synthetic outdoor driving dataset - for training and evaluation. The dataset consists of 24 scenes in 8 dynamic maps under low, medium, and high traffic conditions. The splits of the dataset contain 18 training, 3 validation, and 3 test scenes, which are annotated with 10 semantic classes and a free label. Each scene with a resolution of \(128\times 128\times 8\) covers a range of \(25.6\,\mathrm{m}\) ahead of and behind the car, \(25.6\,\mathrm{m}\) to each side, and \(3\,\mathrm{m}\) in height. **Metrics.** Since SSC requires predicting the semantic label of a voxel and an occupancy state together, we use mIoU and IoU as SSC and VQ-VAE metrics. The mIoU measures the intersection over union averaged over all classes, and the IoU evaluates scene completion quality, regardless of the predicted semantic labels. **Experimental settings.** Experiments are deployed on two NVIDIA GTX 3090 GPUs with a batch size of 8 for diffusion models and 4 for VQ-VAE. Our models follow the same training strategy as multinomial diffusion [7]. We set the hyper-parameters of the diffusion models with the number of time steps \(T=100\). For VQ-VAE, we set the codebook \(E=\{\mathbf{e}_{n}\}_{n=1}^{N}\in\mathbb{R}^{N\times d}\) with codebook size \(N=1100\), code dimension \(d=11\), and quantized latent \(\mathbf{z}_{q}\in\mathbb{R}^{32\times 32\times 2\times d}\). For the diffusion architecture, we slightly modify the encoder-decoder structure in Cylinder3D [30] for time embedding and the discreteness of the data.
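As referenced above, the nearest-neighbour quantization at the VQ-VAE bottleneck can be sketched as follows. The straight-through gradient estimator shown here is the usual VQ-VAE trick and is an assumption, since the paper does not spell out this detail.

```python
import torch

def vector_quantize(z, codebook):
    # z: (B, h, w, z_dim, d) latent from the encoder; codebook: (N, d) learnable entries.
    flat = z.reshape(-1, z.shape[-1])                     # (M, d) latent vectors
    dists = torch.cdist(flat, codebook)                   # (M, N) pairwise distances
    idx = dists.argmin(dim=1)                             # nearest codebook index per vector
    z_q = codebook[idx].reshape(z.shape)                  # quantized latent z_q
    # Straight-through estimator: copy gradients from z_q back to z (assumed, standard choice).
    z_q_st = z + (z_q - z).detach()
    return z_q_st, idx.reshape(z.shape[:-1])
```

The returned indices are what the latent diffusion model diffuses over, while the quantized latent feeds the decoder.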
For the VQ-VAE architecture, we also use the encoder-decoder structure of Cylinder3D [30], but with the vector quantizer module. ### 3D Segmentation Maps Generation We use the discrete and the latent diffusion models for 3D segmentation map generation. Fig. 3 shows the qualitative results of the generation. As seen in the figure, both the discrete and latent models learn the categorical distribution, as they produce a variety of reasonable scenes. Note that our models are learned on a large-scale data distribution, a 3D scene with multiple objects; this is worth noting since recent 3D diffusion models for point clouds have operated on an object scale [31, 32, 2, 3]. In Tab. 1, we compare the training and sampling times of the models for the different resolutions on which each diffusion model operates. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Resolution} & Training & Sampling \\ & & (time/epoch) & (time/img) \\ \hline D-Diffusion & 128\(\times\)128\(\times\)8 & 19m 48s & 0.883s \\ \hline \multirow{3}{*}{L-Diffusion} & 32\(\times\)32\(\times\)2 & 7m 37s & 0.499s \\ & 16\(\times\)16\(\times\)2 & 4m 41s & 0.230s \\ \cline{1-1} & 8\(\times\)8\(\times\)2 & 4m 40s & 0.202s \\ \hline \hline \end{tabular} \end{table} Table 1: **Computation time comparison** between discrete diffusion models and latent diffusion models for 3D segmentation map generation. 'D-Diffusion' and 'L-Diffusion' denote discrete diffusion models and latent diffusion models, respectively. 'Resolution' is the resolution of the space in which the diffusion process operates. Latent diffusion models process diffusion on a lower-dimensional latent space and, as a result, show faster training and sampling times. Compared to the discrete diffusion, the latent diffusion tends to show shorter training and inference times. This is because the latent diffusion models compress the data into a smaller latent, so the time decreases as the compression rate increases. In particular, compared to discrete diffusion, which performs the diffusion process in voxel space, \(32\times 32\times 2\) latent diffusion has a 2.6 times faster training time for one epoch and a 1.8 times faster sampling time for generating one image. Ablation study on VQ-VAE. Latent diffusion models consist of two stages: the VQ-VAE compresses 3D segmentation maps to a latent space, and then the discrete diffusion model is applied to the codebook indices of the latent. Therefore, the performance of the VQ-VAE may set the upper bound for the final generation quality. We therefore conduct an ablation study on the VQ-VAE, adjusting the resolution of the latent space \(h\times w\times z\) and the codebook capacity \(N\) while keeping the code dimension \(d\) fixed. Concretely, we compress the 3D segmentation maps from \(128\times 128\times 8\) to \(32\times 32\times 2\), \(16\times 16\times 2\), and \(8\times 8\times 2\) with four different codebook sizes \(N\in\{220,550,1100,2200\}\). The quantitative comparison is shown in Tab. 2. The bigger the codebook size is, the higher the performance is, but it saturates around 1,100. That is because most of the codes are not updated, and the update of the codebook can lapse into a local optimum [33]. The resolution of the latent space has a significant impact on performance. As the resolution of the latent space becomes smaller, it cannot contain all the information of the 3D segmentation map.
Setting the resolution to \(32\times 32\times 2\) with a codebook size of 1,100 strikes a good balance between efficiency and fidelity. ### Semantic Scene Completion We use a discrete diffusion model for conditional 3D segmentation map generation (_i.e._, SSC). As a baseline model against the diffusion model, we train a network with an identical architecture by discriminative learning without a diffusion process. We optimize the baseline with a loss term \(\mathcal{L}=-\sum_{k}w_{k}\mathbf{x}_{k}\log(\tilde{\mathbf{x}}_{k})\), where \(w_{k}\) is a weight for each semantic class. We visualize results from the baseline and our discrete diffusion model in Fig. 4. Despite the complexities of the networks being identical, our discrete diffusion model improves mIoU (_i.e._, class-wise IoU) by up to 5.89%p over the baseline model, as shown in Tab. 4. In particular, our method achieves outstanding results on small objects and less frequent categories like 'pedestrian', 'pole', 'vehicles', and 'other'. The qualitative results in Fig. 4 better demonstrate the improvement. In Tab. 3, we compare our model with existing SSC models whose network architectures and training strategies are specifically built for the SSC task. Nonetheless, our diffusion model outperforms LMSCNet [16] and SSCNet [17], in spite of its simpler architecture and training strategy. Although MotionSC [9] shows a slightly better result, we speculate that the diffusion probabilistic model can be improved by extensive future research dedicated to this field. ## 5 Conclusion In this work, we demonstrate the extension of the diffusion model to scene-scale 3D categorical data beyond generating a single object. We empirically show that our models have impressive generative power to craft various scenes through a discrete and latent diffusion process. Additionally, our method provides an alternative view for the SSC task, showing superior performance compared to a discriminative counterpart. We believe that our work can be a useful road map for generating 3D data with a diffusion model.
2302.01369
Laser Ranging Based Intelligent System for Unknown Environment Mapping
This work describes the implementation of a simple and computationally efficient Intelligent Navigation System (INS) for autonomous systems used in areas where human access is impossible. The system uses Laser Range Finder (LRF) readings as input, making it suitable for mobile platform implementation. The INS pre-processes the LRF readings to remove noise and determines an obstacle-free path for mapping. The system's localization method uses a similarity transform and particle filter. The system was tested in artificially generated environments and emulated in real-time with real-environment data. The system was then implemented in a Raspberry Pi3 on a 3WD Omni-directional mobile platform and tested in real environments. The system was able to generate an accurate 2D map of the area. The proposed methodology was shown to be efficient through a comparative analysis of execution time.
T. H. M. N. C. Thelasingha, U. V. B. L. Udugama, E. M. S. P. Ekanayake, G. M. R. I. Godaliyadda, M. P. B. Ekanayake, B. G. L. T. Samaranayake, J. V. Wijayakulasooriya
2023-02-01T04:30:50Z
http://arxiv.org/abs/2302.01369v1
# Laser Ranging Based Intelligent System for Unknown Environment Mapping ###### Abstract Autonomous systems are used in many applications like unknown terrain mapping, reconnaissance, and exploration in areas where human access is impossible. They require an accurate and practically realizable Intelligent Navigation System (INS) which can handle the complexity of the environment and is ready for mobile platform implementation. Also, a method of error-free localization is a major requirement for the practical realization of such a system. This work outlines the implementation of a system, simple enough to be implemented on a mobile platform, focused on providing a logical and computationally efficient algorithm which uses measurements from a Laser Range Finder (LRF) as input. Usage of LRFs for mapping is a trending research area, in contrast to the usage of ultrasonic sensors, which suffer from errors like cross talk and specular reflections, or vision systems, which are power hungry. As the LRF readings too contain some noise due to reflectivity problems, methods were developed for pre-processing the measurements. The INS analyses the data spectrum intelligently to determine the obstacle-free path that should be traversed, which in turn enables the mobile platform to canvas the whole environment and construct the map. Also, localization methods have been implemented through the usage of a similarity transform and a particle filter. The conceptual design was simulated in Python(tm) on Windows(tm) and tested for robustness in artificially generated environments. Then the system was emulated in Python(tm) with real environment data fed from the LRF in real time. It was then implemented at the embedded system level on a Raspberry Pi3(tm) on a 3WD omni-directional mobile platform and tested in real environments. Also, the separately running localization algorithm has been used to correct the generated map and the position of the platform. The system was able to generate a complete two-dimensional map of the locale accurately. A quantitative justification of the proposed methodology is presented in a comparative analysis of the execution time. Intelligent Navigation, LIDAR, Autonomous Mapping, Monte Carlo SLAM ## 1 Introduction Environment mapping has been a key element in endeavours like space exploration, excavations, disaster relief and reconnaissance applications, and automation is highly essential when such tasks are carried out in areas with restricted human access. Mapping an unknown environment autonomously has been an engineering problem that many researchers have focused on [1][2] for years. For it to be successful, development has to be done on many fronts. Initially, a correct sensing method has to be implemented to sense the geometry of the environment. Much of the published work includes the use of RADARs [3], SONARs [4], vision systems and Laser Range Finders (LRFs) [5]. Methods like RADAR and SONAR are convenient for simple distance sensing, but when considered for applications like mapping, their measurements contain much noise. Although vision sensors provide a more accurate and vivid data spectrum of the environment, the computational power required makes them less feasible for mobile platform implementation. In this work, a low-cost LRF [6] has been used as the distance sensor. A high-end sensor like the Microsoft Kinect(tm) would render more accurate data, but to keep the implementation realizable, a simple sensor has been selected.
Another major problem is the method of traversing the unknown area. Various approaches like mobile robots and quad-rotors [7] are possible solutions. Considering the accuracy and convenience in positioning and manoeuvrability, an omni-directional 3WD mobile platform [8] has been used as the mapping agent in this work. It has been developed completely in house, and its size and instant mobility in any direction, due to its omni-directional nature, have allowed it to traverse many complex environments effectively. A state-feedback linearizing nonlinear controller has been implemented so that the platform can adjust itself to a given reference coordinate and orientation. Additionally, a convenient sensor has been selected and a method of motion has been developed. However, most approaches lack a correct navigation system [1] in which the agent can be guided through the obstacles in the environment so that it canvases the whole area. Obstacle detection [9] and traversable space identification methods [10] have already been developed, and algorithms for finding the minimum cost path have existed for some time [11]. But an approach to intelligently analyse the data from the distance measurements and decide the direction to be explored is still a requirement. Hence, a separate navigation algorithm has been developed which analyses the information spectrum from the sensor and decides the path to be taken by the mobile platform to create a complete 2D overlay of the locale. Here, to construct the map, the mobile platform traverses the area as guided by the navigation algorithm, and its current position and orientation are computed and fed to the navigation system in order to find the direction it should travel in the next moment. However, errors in the measurements and actuation are always possible, and the encoder-based position is prone to errors due to wheel slipping and data transmission errors. Hence, for correct localization and to provide a correction for the map data generated from the LRF, a separately running simultaneous localization and mapping (SLAM) [12] algorithm based on a Monte Carlo approach has also been developed. It uses the LRF sweeps from short time frames while the agent traverses its trajectory and corrects the encoder-based position estimate in real time. The combined implementation of the above three aspects, namely the hardware setup (omni-directional mobile platform and the LRF), the navigation and path planning algorithm, and the SLAM algorithm, has allowed a successful attempt at generating a 2D overlay of an unknown, obstacle-filled environment. For extending the 2D map to a complete 3D map, incorporating a vision system would be an effective approach; further research in this work would include such a combination. ## 2 The Hardware Implementation ### The Omni Directional Mobile Platform The omni-directional mobile platform is the practical realization on which the proposed work has been implemented. It has been designed and constructed completely in house. A separate embedded system architecture has been developed so that the control algorithms and the sensor fusion algorithms can be implemented on general-purpose micro controllers. Figure 1: The Practical Realization. Figure 2: The complete embedded system architecture.
The electronics of the system, including the motor control electronics and other auxiliary systems, have been implemented on a single board with sensor modules. Here, it is to be noted that the usage of GPS is avoided, as the scenarios the equipment is developed for, like explorations and excavations, do not guarantee the availability of GPS at all times. An accurate estimation of the position and orientation of the mapping agent (pose or state) is obtained by fusing data from three different sensors that detect the orientation. The following sensors were used: a gyroscope (MPU6050), a compass (HMC5883L) and optical shaft encoders (OSE) (model E50050030) attached to the wheels. To fuse the data from the above sensors, a centralized Kalman filter (CKF) was implemented on a general-purpose micro controller. As the odometrical measurement system is coupled with an inertial measurement unit and a compass, the accuracy of the position estimate is improved considerably while the system moves under different motion conditions like slipping and jerky motion. As the control strategy, a nonlinear state linearization controller has been implemented on the main micro controller so that the platform can be guided along any given trajectory while following a given orientation profile. ### The Laser Range Finder (LRF) Data Processing The LRF used has a Neato laser sensor [6] driven by the XV-11 LIDAR controller. This LRF is a popular low-cost range finder compared to others on the market and has been used for many research and experimental applications. Although its accuracy is much higher than that of SONAR or simple vision systems, it also has some deviations in the measured distances. The error characteristics of the LRF are shown in Fig. 3. The laser sensor works by projecting a pulse stream of laser beams around itself and measuring the time each pulse takes to reflect back. The distance to the reflection point can be accurately calculated (to 1 mm) from the time of flight. The sensor then encodes the distance data into a packet and forwards it to the LIDAR controller. The data packets are then decoded in the controller and relayed through the serial port up to the host device, which is the embedded system that the Intelligent Navigation System (INS) is implemented on. As the percentage error is small for distances less than 3 m, the LRF measurement can be used as an accurate estimate of distance. A separate method has been developed to mitigate the noise in the LRF measurements, most of which was due to reflectivity effects on the object surfaces and the variation of the rotational speed of the LRF. To prevent overlapping data points and incorrect angle measurements, a PID controller is implemented in the XV LIDAR controller to regulate the rotational speed of the sensor to the optimum speed of 200 rpm, at which the device can send out one distance data point per degree of rotation. Many of the reflectivity distortions could be rectified by sampling over multiple realizations; here about five realizations were used, considering the speed of operation and the power consumption. A main capability that the system was intended to possess was its ability to be versatile and dynamically adaptive to any mobile platform. Hence, at the scanning step, the LRF is powered ON only at the scanning moment and otherwise powered OFF, so that less power is consumed in the case of a mobile implementation. ## 3 The Intelligent Navigation System The Intelligent Navigation System (INS) is the entity which analyses the data from the sensors and directs the mobile platform to the necessary place to explore [17].
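Before walking through the INS procedure, the LRF pre-processing described in the previous section can be sketched briefly. The snippet below fuses the roughly five per-degree realizations into one cleaned sweep; the median-based fusion, the zero-encoding of invalid returns, and the maximum-range cutoff are illustrative assumptions rather than the exact filtering used on the platform.

```python
import numpy as np

def preprocess_sweeps(sweeps, max_range_mm=5000):
    # sweeps: array of shape (n_realizations, 360) with one distance (mm) per degree.
    # Invalid returns (reflectivity dropouts) are assumed to be encoded as 0.
    data = np.asarray(sweeps, dtype=float)
    data[(data <= 0) | (data > max_range_mm)] = np.nan   # mask implausible readings
    # Combine the realizations per angle; the median suppresses reflectivity outliers.
    return np.nanmedian(data, axis=0)                     # one cleaned distance per degree
```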
The INS was run for the simulation environment in Fig. 5(a) and the result was generated in the steps shown in Fig. 5(b) to Fig. 5(h). Figure 4: A Polar LRF Sweep of 360°. Figure 3: The Error Characteristic of LRF. It should be noted that the anchor points where the laser sweeps are done are named 'nodes', and nodes which the agent can traverse to and from each other are called 'neighbours'. The procedure of the INS can be analysed through the following steps. 1) Step 01: All the storage classes and data structures are initialized and the current position and orientation are read. A node is created at the current position, labelled N1 in Fig. 5(b). (If this is not the first node, the new node is also added as a neighbour of the previous node and vice versa.) Scanning is done for 360° initially because the agent has to localize and decide which direction to explore. The list of distance data per degree is obtained from the LRF. 2) Step 02: Next, the gaps and the walls should be correctly identified. Gap points are selected as data points whose distance from the agent is longer than a given length, because those places can be assumed to be places where new information about the environment can be expected. The distance data is analysed to classify the points as walls or gaps, and at the end of each classification its properties (width and centroid) are calculated. Here in the example, a large gap is detected from the scan at N1, hence the agent is directed towards its centroid. The scanning is then done again at node N2, where two gaps are detected. In Fig. 5(c), it can be seen that the agent has been directed towards the minimum width gap first, which is N2G2; this approach is justified in the latter part of the document. It should also be noted that the scanning is done forward biased after the first scan: it is a 300° scan, ±150° around the agent's heading direction. In Fig. 5(d), it can be seen that the agent has travelled up to node N6, and further to node N7 in Fig. 5(e). 3) Step 03: It can be observed that direct travelling from N2 to N7 is possible. Hence, those two nodes should be neighbours. Also, it can be noticed that a scan down towards N2 from N7 is not necessary, as the whole environment in between the nodes has already been mapped through the scans done at the respective nodes. Hence, to handle these kinds of situations, a separate 'neighbour identification' is done as follows. A.) If any two nodes are closer than twice the scan range, the whole area in between the nodes can be assumed to be explored; hence the above situation occurs. B.) Each gap of the corresponding two nodes is considered and checked to see whether it intersects the line joining the two nodes. Such gaps are set as explored gaps, as there is no new unexplored data inside gaps which lie between known nodes. C.) Also, if the widths of those gaps are larger than the size of the agent, they can be travelled through. Hence, the corresponding nodes are considered neighbours of each other (node to node travelling is possible). After the above analysis, the agent is directed towards N7G1 because that is the information-lacking area in the map. But the INS has now learned that there is a path available to N2 from N7, without travelling through or exploring any gap between them. This is tracked by constantly updating the travel cost map from each and every node to the current node.
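To illustrate the gap/wall classification of Step 02 above, the following minimal sketch groups consecutive scan bearings whose range exceeds a threshold into gaps and reports each gap's width and central bearing. The threshold value and the choice of the middle bearing as the "centroid" are illustrative assumptions.

```python
import numpy as np

def find_gaps(ranges_mm, gap_threshold_mm=3000):
    # ranges_mm: cleaned 360-element polar sweep, one distance per degree.
    gaps, current = [], []
    for angle, r in enumerate(ranges_mm):
        if np.isnan(r) or r > gap_threshold_mm:   # far or missing return -> candidate gap point
            current.append(angle)
        elif current:
            gaps.append(current)
            current = []
    if current:
        gaps.append(current)
    summaries = []
    for g in gaps:
        summaries.append({"width_deg": len(g),           # angular width of the gap
                          "centroid_deg": g[len(g) // 2]})  # central bearing to steer towards
    return summaries
```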
The cost of travel (distance) is calculated to each node Figure 5: Simulation of Navigation Algorithm from the current node, through the neighbouring nodes. The minimum cost, unexplored node is selected and it is analysed to find the minimum width unexplored gap in its corresponding gaps. But the gap should be wide enough so that it can be traversed through. The minimum cost unexplored node obviously is the node where the agent is currently positioned. But if it does not contain any unexplored gaps, means a dead end. Then the INS intelligently decides to travel to the minimum cost node instead of just backtracking to find an unexplored gap. (This minimum cost node is the first approach and the narrowest gap first approach is justified in next section) 4) Step 04: In the next step, Fig. 5(f) after agent has explored the upper part of the map, at node N13, it has decided through the above explained process in step 03, to go back to unexplored node N2 to the gap N2G1. As of now the INS knows the path from N7 to N2, hence it does not unnecessarily backtrack, but travels through middle part of the environment to node N2 as observable from Fig. 5(g). The minimum cost path is found to the selected node employing the Dijkstra's Shortest path algorithm [11]. The reference point to travel is generated for the agent to follow the path intended. After node N2, it explores another unexplored gap at N3 and finishes up mapping as in Fig. 5(h) The algorithm has been implemented so that it uses minimum amount of external complex libraries. Also, program loops have been avoided as much as possible and functions have been implemented so that it will be faster in execution. Conditional switches have been used to handle errors caused by data dependencies. The lesser processing power needed and the usage of simple functions and libraries have made it much more feasible to implement the system on a mobile platform with limited resources. ## 4 Improvement of Mapping Efficacy When the efficacy of mapping is considered, two key factors are the accuracy of the map and the time it takes to map the whole area. The accuracy of the map highly depends on the accuracy of the LRF measurements and also correct estimation of current position relative to the locale. The control strategy deployed in the agent will determine the accurate positioning of the platform to the coordinates directed by the INS. Hence to correctly localize the agent a special SLAM algorithm has been employed. It uses the data from the LRF and compute necessary corrections needed for the position of the agent as well as the generated map. Apart from the above approaches, the INS has been developed on several logics that improves the mapping time. one is the scanning method. Initially at the agents first scan of the environment, it is done for a full circle of 360deg, so that the agent can localize itself and correctly decide that where should it explore first. But the scans followed after the initiation are done forward biased around \(\pm\)150deg around the heading direction. This approach limits the agent of collecting existing information again in the area it just explored and keep the information spectrum forward biased. Another approach is the continuously updated cost of travel (distance) map of the nodes. If there are no gaps to explore in the current node the traditional approach is to backtrack and find the unexplored gaps in the past nodes according to the order of traversing. 
## 4 Improvement of Mapping Efficacy

When the efficacy of mapping is considered, two key factors are the accuracy of the map and the time it takes to map the whole area. The accuracy of the map depends strongly on the accuracy of the LRF measurements and on a correct estimation of the current position relative to the locale. The control strategy deployed in the agent determines how accurately the platform reaches the coordinates directed by the INS. Hence, to correctly localize the agent, a dedicated SLAM algorithm has been employed: it uses the data from the LRF and computes the corrections needed for the position of the agent as well as for the generated map.

Apart from the above approaches, the INS has been developed around several rules that improve the mapping time. One is the scanning method. The agent's first scan of the environment covers a full circle of 360°, so that the agent can localize itself and correctly decide where to explore first. The scans that follow are forward biased, covering ±150° around the heading direction. This prevents the agent from re-collecting information from the area it has just explored and keeps the information spectrum forward biased.

Another approach is the continuously updated cost-of-travel (distance) map of the nodes. If there are no gaps to explore at the current node, the traditional approach is to backtrack and search the past nodes for unexplored gaps in the order in which they were traversed. In the proposed INS, the next node to explore is instead decided by analysing the cost of travel to the nodes and selecting the unexplored node with minimum cost. This is particularly effective in complex environments, where two or more paths exist for node-to-node traversal; such paths are identified through the 'neighbour identification' process explained in the previous section. Hence, if the agent has found a cheaper path to a past node, its path planning will use that path instead of simply backtracking. This can be seen clearly in the example in Fig. 5, where a path from N2 to N7 has been found without an explicit scan.

A further approach towards improving mapping efficacy is to explore the minimum-width gap first. After selecting the node to travel to as explained above, a gap has to be selected at that node. This is done under the general assumption that, in the real world, a smaller gap is more likely to converge and reach a dead end than a wider gap. Hence, the minimum-width gap is explored before heading out into the wider area, where the probability of convergence is much lower. As a quantitative analysis of the efficacy, Figure 6 compares the execution times of the INS under the 'min. gap first' and 'max. gap first' approaches for five test environments. From the comparison it can be concluded that the min-gap-first approach is an efficient method of exploration.

Figure 6: INS execution time comparison

## 5 Localization - SLAM

Accurate localization, i.e. knowing the exact position, is critical to the mapping task. As existing GPS systems provide an accuracy of only a few metres and fail indoors, a SLAM approach has been utilized for localization [16]. Although the extended Kalman filter is a popular choice for SLAM [13], its high computational complexity led to the use of solutions based on similarity transforms and on the particle filter (Monte Carlo Localization) [14][15] in this work. In both methods, encoder readings are used to estimate the position, and the LRF data are then used to correct the position estimate. The relevant features/objects in the LRF data are found using differentiation.

Figure 7: Feature Identification

### Similarity Transform

This method is based on finding an optimized parameter estimate for the transformation that moves the agent back to its correct position. It has been assumed that the erroneous localization occurs when the agent travels on curved paths, and the detected feature positions are used for the correction. Correspondences between erroneous object positions \(p_i\) and previously determined positions \(\eta_i\) of the same objects are found, and the parameters of the similarity transform are optimized recursively to minimise its error. The transform is

\[\lambda R\,p_{i}+t=\eta_{i}; \tag{1}\]

\[R\ (\textit{rotation matrix})=\begin{bmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{bmatrix},\qquad t\ (\textit{translation vector})=(t_{x},t_{y}).\]

Equation (1) is optimized by minimizing the sum of absolute errors. The determined transform is then used to remap the objects and correct the position, as shown in Fig. 8: panels (a) and (b) show how the objects have been remapped and the agent's path corrected.

Figure 8: Applying Similarity Transform
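For illustration, a closed-form least-squares estimate of the similarity transform in Equation (1) can be computed from matched feature positions as below. This sketch uses the standard SVD-based (Umeyama-style) solution, whereas the system described here minimises the error recursively; the function name and the point layout are assumptions.

```python
import numpy as np

def fit_similarity_2d(p, q):
    """Least-squares (lambda, R, t) such that lambda * R @ p_i + t ~= q_i,
    where p are the erroneous feature positions and q the previously
    determined positions of the same features. Illustrative sketch only."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = len(p)
    mp, mq = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - mp, q - mq
    cov = qc.T @ pc / n                       # cross-covariance of the centred sets
    u, d, vt = np.linalg.svd(cov)
    s = np.array([1.0, np.sign(np.linalg.det(u) * np.linalg.det(vt))])
    rot = u @ np.diag(s) @ vt                 # proper rotation (det = +1)
    scale = (d * s).sum() / ((pc ** 2).sum() / n)
    t = mq - scale * rot @ mp
    return scale, rot, t
```

Applying the returned \((\lambda, R, t)\) to the erroneous feature positions remaps them onto their previously determined positions and, by the same transform, corrects the agent's estimated pose.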
### Particle Filter (Fast SLAM)

The major drawback of the similarity-transform method is the difficulty of converging on an optimized parameter set in real time. Hence, another method based on Bayesian filtering, the particle filter, has been used. Particle-filter SLAM [15] is a better solution here, as its convergence rate is very high and its probabilistic inference also addresses the uncertainty in the LRF. Initially it is assumed that the agent may be anywhere, with any orientation. The position estimate is then computed from the density of the particles, each of which represents a position with an orientation. This is a two-step process.

**A).** Prediction step: particles representing the position are generated by iterative sampling from a normal distribution,

\[\textit{for}\ i=1\ \textit{to}\ n:\quad\textit{sample}\ x^{(i)}\sim N(\mu,\sigma^{2}). \tag{2}\]

The particles are then translated with the aid of the applied control signal and resampled using the resampling-wheel technique, which gives particles at higher-weight positions a larger chance of being chosen (the weighting is done in the correction step):

\[\textit{for}\ i=1\ \textit{to}\ n:\quad\textit{sample}\ x_{t}^{(i)}\sim p\big(x_{t}\mid x_{t-1}^{(i)},U_{t}\big). \tag{3}\]

**B).** Correction step: each particle is weighted by how well the obtained measurement matches the particle's position. Over time this suppresses the outliers, and the filter quickly converges to the actual path. Samples are drawn with probability \(w_{t}^{(i)}\), for \(i=1\) to \(n\), where

\[w_{t}^{(i)}\propto p\big(z_{t}\mid x_{t}^{(i)}\big),\qquad p\big(z_{t}\mid x_{t}^{(i)}\big)=\prod_{j}p\big(z_{j}\mid x_{t}^{(i)}\big),\qquad p\big(z_{j}\mid x_{t}^{(i)}\big)=p(d-\hat{d})\,p(\alpha-\delta)=N\big(d-\hat{d};0,\sigma_{d}^{2}\big)\,N\big(\alpha-\delta;0,\sigma_{\alpha}^{2}\big), \tag{4}\]

with \(z\) the LRF measurement, \(d-\hat{d}\) the error in distance and \(\alpha-\delta\) the error in orientation. After these calculations,

\[\textit{position estimate}=\frac{1}{n}\sum_{i}x_{i},\qquad\textit{orientation estimate}=\tan^{-1}\!\left(\frac{\sum_{i}\sin\vartheta_{i}}{\sum_{i}\cos\vartheta_{i}}\right). \tag{5}\]

Through the above process, a particle-filter SLAM can be implemented. A much more accurate position estimate is obtained through this approach, as seen in Fig. 8.

Figure 8: Applying the Particle Filter
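A minimal Monte Carlo localization cycle corresponding to Equations (2)-(5) might look as follows. The motion and measurement models, the noise values and the landmark-based measurement format are assumptions made for the sketch, not the models used on the actual system.

```python
import math, random

def normal_pdf(err, sigma):
    return math.exp(-0.5 * (err / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pf_step(particles, control, measurements, landmarks, sigma_d=0.05, sigma_a=0.02):
    """One predict / correct / resample cycle. particles: list of [x, y, theta];
    control: (dx, dy, dtheta) from the encoders; measurements: {feature_id:
    (range, bearing)} from the LRF; landmarks: {feature_id: (x, y)}."""
    # A) Prediction: apply the control input with sampled noise (Eqs. 2-3)
    pred = [[x + control[0] + random.gauss(0, sigma_d),
             y + control[1] + random.gauss(0, sigma_d),
             th + control[2] + random.gauss(0, sigma_a)]
            for x, y, th in particles]

    # B) Correction: weight each particle by the measurement likelihood (Eq. 4)
    weights = []
    for x, y, th in pred:
        w = 1.0
        for fid, (d_meas, a_meas) in measurements.items():
            lx, ly = landmarks[fid]
            d_hat = math.hypot(lx - x, ly - y)
            a_err = (a_meas - (math.atan2(ly - y, lx - x) - th) + math.pi) % (2 * math.pi) - math.pi
            w *= normal_pdf(d_meas - d_hat, sigma_d) * normal_pdf(a_err, sigma_a)
        weights.append(w)

    # Resampling wheel: draw particles in proportion to their weights
    n, resampled = len(pred), []
    idx, beta, w_max = random.randrange(n), 0.0, max(weights)
    for _ in range(n):
        beta += random.uniform(0, 2 * w_max)
        while beta > weights[idx]:
            beta -= weights[idx]
            idx = (idx + 1) % n
        resampled.append(list(pred[idx]))

    # Pose estimate (Eq. 5): mean position, circular mean for the orientation
    x_est = sum(p[0] for p in resampled) / n
    y_est = sum(p[1] for p in resampled) / n
    th_est = math.atan2(sum(math.sin(p[2]) for p in resampled),
                        sum(math.cos(p[2]) for p in resampled))
    return resampled, (x_est, y_est, th_est)
```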
Figure 9: The Test Environment and Generated 2D Map

## 6 Conclusion

This work has focussed on finding a simple and practically realizable solution to the engineering problem of constructing a map of an unknown environment. In the proposed approach, a low-cost laser range finder (LRF) was selected as the sensor because of its high accuracy and simplicity compared with traditional ultrasonic sensors and complex vision systems. Methods were developed for correcting the distortions in and acquiring the data from the LRF; however, any convenient sensor capable of producing an accurate distance-data spectrum could easily replace it. As the hardware platform, a 3WD omnidirectional mobile platform was chosen for its convenient manoeuvrability in any intended direction. To guide it towards the information gaps in the data and to provide optimal path planning, a dedicated navigation algorithm was implemented; key features such as the 'smallest gap first' approach and the 'intelligent neighbour identification' process keep the mapping efficacy high. Because of the errors in the encoder-based position measurement of the system, a simultaneous localization and mapping (SLAM) algorithm running in parallel was developed: both a similarity-transform-based approach and a Monte Carlo method (particle filter) were examined, and the particle-filter-based FastSLAM was implemented on the system. Taken together, the above approaches enable a simple and rapidly realizable solution to the problem of environment mapping. Although a specific hardware set-up is considered in this work, the developed algorithms are readily adaptable to other hardware combinations. As a further improvement, if a suitable sensor such as a vision system is available, the 2D map generated through the proposed approach could conveniently be extended to a complete 3D map of the locale.

## Acknowledgement

This research work has been hosted and supported by the Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Sri Lanka.
2308.15347
Masquerade: Simple and Lightweight Transaction Reordering Mitigation in Blockchains
Blockchains offer strong security guarantees, but cannot protect users against the ordering of transactions. Players such as miners, bots and validators can reorder various transactions and reap significant profits, called the Maximal Extractable Value (MEV). In this paper, we propose an MEV aware protocol design called Masquerade, and show that it will increase user satisfaction and confidence in the system. We propose a strict per-transaction level of ordering to ensure that a transaction is committed either way even if it is revealed. In this protocol, we introduce the notion of a "token" to mitigate the actions taken by an adversary in an attack scenario. Such tokens can be purchased voluntarily by users, who can then choose to include the token numbers in their transactions. If the users include the token in their transactions, then our protocol requires the block-builder to order the transactions strictly according to token numbers. We show through extensive simulations that this reduces the probability that the adversaries can benefit from MEV transactions as compared to current practices.
Arti Vedula, Shaileshh Bojja Venkatakrishnan, Abhishek Gupta
2023-08-29T14:42:43Z
http://arxiv.org/abs/2308.15347v1
# Masquerade: Simple and Lightweight Transaction Reordering Mitigation in Blockchains ###### Abstract. Blockchains offer strong security gurarantees, but cannot protect users against the ordering of transactions. Players such as miners, bots and validators can reorder various transactions and reap significant profits, called the Maximal Extractable Value (MEV). In this paper, we propose an MEV aware protocol design called Masquerade, and show that it will increase user satisfaction and confidence in the system. We propose a strict per-transaction level of ordering to ensure that a transaction is committed either way even if it is revealed. In this protocol, we introduce the notion of a "token" to mitigate the actions taken by an adversary in an attack scenario. Such tokens can be purchased voluntarily by users, who can then choose to include the token numbers in their transactions. If the users include the token in their transactions, then our protocol requires the block-builder to order the transactions strictly according to token numbers. We show through extensive simulations that this reduces the probability that the adversaries can benefit from MEV transactions as compared to existing current practices. ## 1. Introduction Blockchains offer a decentralized and transparent platform for recording transactions with strong security guarantees. Originally conceived as a payment system, in recent years smart contracts in blockchains have enabled the development of a plethora of decentralized applications (dapps) in myriad service sectors. Among these, dapps developed for financial use cases--such as lending and borrowing, stablecoins, exchanges, insurance etc.--have emerged as a popular alternative to centralized financial institutions. Colloquially referred to as de-fi (for decentralized finance), financial smart contracts are projected to grow to a half a trillion market (Bradbury et al., 2017). Concurrent with the growth of defi dapps, transaction reordering attacks have become commonplace in blockchains. This is when a malicious actor attempts to confirm its own transaction ahead of a victim transaction on the blockchain _after_ observing the victim transaction. A frontrunning attack is problematic to the victim as it can unfairly diminish the value earned by the victim in the transaction. For example, a victim looking to purchase tokens available at a lower-than-market price at a decentralized exchange may not get that price if the attacker purchases those tokens first. Unfortunately, blockchains by design facilitate transaction reordering attacks: unconfirmed transactions submitted by users are publicly visible in a'mempool' and transactions can be prioritized for confirmation by increasing their transaction fees. Outside of financially hurting a victim, reordering attacks can seriously affect the security of the blockchain consensus protocol especially in proof-of-stake systems. It is reported that in 2022 alone more than 300 million dollars were unfairly gained through reordering attacks (Bradbury et al., 2017). With such fortune at stake, it is feasible for an attacker to bribe block proposers to act in an unethical way or even deviate from protocol (Krishnan et al., 2017). Preventing transaction reordering attacks has therefore become an active area of research with several mitigation techniques proposed so far (Krishnan et al., 2017). In practice, Ethereum has favored an approach where transaction order is determined by (trusted) third-party entities called builders. 
A builder collects transactions from end-users, orders them within a block and sends the block to a proposer for inclusion within the blockchain. Separating out the functionalities of ordering transactions in a block (building) and publishing a block (proposing) protects block proposers from being influenced to rearrange transactions. However, reordering attacks still very much happen at the builder level. We note that not all builders practice or tolerate reordering attacks, but a vast majority of them do. In our work we present Masquerade, a simple and lightweight solution to preventing transaction reordering in blockchains. The key idea in Masquerade is the use of a token-number system (as at a takeout restaurant or the drivers' license agency) for deciding transaction ordering. In Masquerade, a user first purchases a token with an assigned token number for a refundable fee. Subsequently the user can use this token to make a transaction. Transactions are ordered in the block in an increasing order of the token numbers of tokens associated with transactions. Once a token is used with a transaction, the funds paid for purchasing the token are returned to the user. The key intuition for why Masquerade works is that at the time of a token purchase, an attacker has no way of knowing what the purchased token will be used for in the future. An attacker is therefore forced to purchase as many tokens as possible, which places a significant financial stress on the attacker. Under a formal model of the system, if the total funds available to an attacker is a fraction \(\sigma<1\) of the total funds available to honest users, we show that the fraction of transactions that can be front run is at most \(\sigma\). Moreover, Masquerade guarantees that attacker to user wealth ratio diminishes over time making the fraction of transactions front run asymptotically go to zero. Experiments on synthetic and real-world transaction data support our theoretical observations. We also note that Masquerade is robust to transaction reordering attacks occurring on token-purchase transactions submitted by users. Deploying Masquerade in practice requires minimal changes to the consensus protocol--blocks must be verified to ensure transactions are in increasing order of their token numbers. Issuance and maintenance of tokens can be easily implemented as a smart contract. In contrast, existing proposals enforcing some notion of a fair-ordering of transactions (Kal In centralized finance markets (CeFi) such as Binance[(1)], CoinBase[(15)], Kraken [(22)] etc, transaction ordering is strictly monitored by an authority, and adversaries are punished. A major problem with these CeFi protocols is that the users are required to relinquish control of their assets to the service provider and trust the operator while transacting. A canonical example of a DeFi app is a decentralized exchange (DEX), which is a platforms that allows trading through smart contracts on the blockchain. DEX transactions are particularly susceptible to front running attacks by adversaries, as there is a lack of any regulatory body that can punish these malpractices. Further, clients who have been manipulated by this exchange have no way of recovering their lost wealth. An Automated Market Maker(AMM) is a smart contract used in DEXs to facilitate the trading of digital assets without the need for traditional order books or other intermediaries. 
These AMMs use some predetermined formulas to determine the prices of assets based on their supply and demand within a liquidity pool. AMMs were introduced as an alternative to traditional order book exchanges, where buyers and sellers interact directly to set the prices of assets. An AMM contains a liquidity pool of atleast two assets, where it allows users to deposit an asset and withdraw the other based on the determined exchange value. One of the most commonly used AMMs is the Constant Product Market Maker(CPMM) algorithm. This algorithm is used in DEXs like Uniswap[(6)] and PancakeSwap[(5)]. In the CPMM algorithm, the product of the quantities of two assets in a pool remains constant, which helps determine the price when one asset is traded for another. Simply, if an exchange has \(x\) and \(y\) units of currency in their reserve, \(xy=k\) denotes the price of the asset using the exchange. ### MEV Attacks Currently, in blockchains such as Ethereum, adversaries frequently monitor the mempool to search for a certain kind of MEV transactions which would yield reward at the expense of increased cost to the user. We explain three major attacks, benefits to the adversary, and the cost to the user below. 1. Sandwich Attack: In this case, an attacker attempts to monitor mempool for pending transactions trading large assets that may cause price fluctuations. Let a user transaction be \(\mathtt{txn}_{u}\) that is exchanging currency \(C_{1}\) to \(C_{2}\), denoted by \(C_{1}\to C_{2}\). An attacker \(a\) then submits two transactions, say \(\mathtt{txn}_{a}\) and \(\mathtt{txn}^{\prime}_{a}\), with the same transaction amount as the user \(u\). In \(\mathtt{txn}_{a}\), the attacker exchanges \(C_{1}\to C_{2}\), and in \(\mathtt{txn}^{\prime}_{a}\) exchanges \(C_{2}\to C_{1}\). A sandwich attack happens if \(\mathtt{txn}_{a}\), \(\mathtt{txn}_{u}\) and \(\mathtt{txn}^{\prime}_{a}\) are included in the same block \(\mathcal{B}\) and in this order. 2. Arbitrage MEV: An arbitrage occurs when users are trading assets across different exchanges. Let a user transaction be \(\mathtt{txn}_{u}\) that is trading \(\alpha_{u}\) amount of currency \(C_{1}\) with fiat money across two different exchanges \(E_{1}\) and \(E_{2}\) with different exchange rates \(P_{C_{1},E_{1}}\) and \(P_{C_{1},E_{2}}\), respectively. A successful arbitrage occurs if price \(\alpha_{u}P_{C_{1},E_{1}}+g_{u}<\alpha_{u}P_{C_{1},E_{2}}\), for gas fees \(g_{u}\). In this case, the attacker \(a\) monitors the public mempool for an arbitrage opportunity and attempts to front-run \(\mathtt{txn}_{u}\) with \(\mathtt{txn}_{a}\) with \(\alpha_{a}=\alpha_{u}\), gas fee \(g_{a}>g_{u}\), so that \(\alpha_{a}P_{C_{1},E_{1}}+g_{a}<\alpha_{a}P_{C_{1},E_{2}}\). In this case, the arbitrage benefit to the user is diminished and the adversary gains from front-running such a transaction. 3. Liquidation MEV: A liquidation attack occurs on a loan taken out by a user. Let a user borrow \(\alpha_{C_{1}}\) units of currency \(C_{1}\), for a price of \(\alpha_{C_{1}}C_{1}\). The user in exchange offers a collateral of units \(\alpha_{C_{2}}\) of currency \(C_{2}\) for a price of \(\alpha_{C_{2}}C_{2}\geq\alpha_{C_{1}}C_{1}\). This attack occurs, when the collateral \(\alpha_{C_{2}}C_{2}\) no longer covers the value of the debt. Let the exchange rate at the time of borrowing be \(C_{2}=\beta_{r}C_{1}\). For the loan to be healthy, \(\beta>1\). 
Due to price fluctuations, it is possible that \(\frac{\alpha_{C_{2}}C_{2}}{\alpha_{C_{1}}C_{1}}<1\) which means that the loan is now under-collateralized. At this time, the collateral is available to users to purchase at a low rate. An adversary now frontruns the purchase of the collateral \(\alpha_{C_{2}}C_{2}\), and is able to accrue a profit on this transaction by selling it later at a higher price. ### MEV Mitigation: Status Quo Ethereum uses a proposer-builder separation architecture for mitigating the negative impacts of MEV on blockchain security (Han et al., 2017). Under this architecture, the process for the formation of a block is as follows. Users submit transactions either by broadcasting publicly over the blockchain network, or by sending privately to a third-party reputable entity called a builder (Flashbots, BeaverBuild, Builder0x69, BloXroute etc.). Transactions made include an appropriate amount of transaction fees depending on the priority desired for the transaction. The builder collects transactions and orders them to create a candidate block which it then advertises to the proposer of that time slot. Builder strive to construct blocks with high aggregate transaction fees, as a portion of the transactions fees goes to the builder. Competing builders advertise blocks to the proposer. The proposes chooses the block containing the most amount of fees for publication on the blockchain. ## 3. Model We consider time is divided in to discrete rounds, with one block produced during each round. Our model consists of two actors, a user and an adversary as described in the following. **User:** The user in our model represents collectively all honest users in the system. The essence of our results does not change even if we explicitly consider multiple users in the model. We assume the user is honest and follows protocol. The user seeks to make MEV transactions without getting front run to maximize its profits. We define an "MEV transaction" as a DeFi transaction from which value can be extracted by the adversary through a reordering attack. The user is interested in making at most one MEV transaction each round. The profit gained by the user upon making a MEV transaction successfully (i.e., without getting front run) and the profits lost when a front running attack happens are discussed in SS3.1 below. At the beginning of the experiment, the user has a net wealth of \(W_{u}[0]\). **Adversary:** The adversary in our model is an entity that seeks to make profit by attacking the user's MEV transactions. We primarily consider front running attacks in our work, though the results extend to back running and sandwich attacks as well. During an MEV attack, the adversary gains precisely the value lost by the user on the transaction. At the beginning of the experiment, we assume the adversary has a net wealth of \(W_{a}[0]\). We assume the total wealth \(W_{a}[0]\) of the adversary is lower than the total wealth of the user \(W[0]\) initially by a factor of \(\sigma<1\). This is a reasonable assumption, as the security of Proof-of-Stake consensus followed by Ethereum relies on such an assumption as well. For any round \(r\), we let \(W_{u}[r],W_{a}[r]\) respectively denote the total wealth of the user and adversary respectively at the beginning of round \(r\). Note that since the user can make at most one MEV transaction each round, the adversary can also front run at most one MEV transaction each round. 
We do not consider regular (i.e., non-MEV) transactions made by the user or the adversary in our model. The adversary has complete knowledge of the internal state of the user at any time. Unless the proposed consensus protocol prohibits it, the adversary can front run any transaction submitted by the user during a round with its own transaction. The blockchain network also contains builders, relays and validators, but for our problem we do not consider them to be an essential part of the network dynamics, and omit their roles. We define a transaction as \(\texttt{txn}_{u}[r]\) for a transaction made by a user at round \(r\), and \(\texttt{txn}_{a}[r]\) for a transaction made by an adversary. ### Rewards We assume that the profit made by an honest user on an MEV transaction is \(\eta\), of which he loses \(f\eta\) if an adversary manages to front run the transaction. In practice, users can specify a slippage parameter to control their MEV loss which relates to the \(f\) in our model (Kang et al., 2018). Thus, the rewards to the user and the adversary, respectively, \(h_{u}[r],h_{a}[r]\) in round \(r\) can be defined as follows: \[h_{u}[r] =\begin{cases}\eta-f\eta&\text{if MEV transaction is front run}\\ \eta&\text{otherwise,}\end{cases} \tag{1}\] \[h_{a}[r] =\begin{cases}f\eta&\text{if MEV transaction is front run}\\ 0&\text{otherwise.}\end{cases} \tag{2}\] We ignore the costs incurred by gas fees to the user and the adversary. In today's Ethereum (referred to as "current protocol" in the paper), we assume a user's MEV transaction always get front run which leads to a profit of \(\eta-f\eta\) per round for the user. This is a reasonable assumption, as a user today either issues its transaction publicly and gets attacked, or issues its transaction privately to a builder by paying hefty fees. Either way the value the user rightfully must gain in the transaction is lost in today's Ethereum. After \(R\) rounds, the total wealth accumulated by the user and the adversary in the current protocol from MEV transactions is given by: \[W_{u} =W_{u}[0]+\sum_{r=1}^{R}h_{u}[r] \tag{4}\] \[=W_{u}[0]+(\eta-f\eta)R \tag{3}\] \[W_{a} =W_{a}[0]+\sum_{r=1}^{R}h_{a}[r] \tag{6}\] \[=W_{a}[0]+f\eta R \tag{5}\] Thus, we see that an honest user is losing out on atleast \(f\eta R\) profits on having made an MEV transaction every round for \(R\) rounds. In the real world scenario, they end up losing even more money when multiple MEV transactions are part of a block. ### Problem Statement Our objective is to design a transaction ordering protocol that prevents MEV attacks and maximizes the total wealth of the after \(R\) rounds. Or, equivalently, we would like to reduce the fraction of transactions that are successfully attacked by the adversary. The solution space we explore must obey the following constraints. First, we do not want to introduce any significant modifications to the consensus protocol keeping in mind the difficulties involved in implementation. Any solution we propose must be implementable with just a few lines of code, either at the consensus or execution layers. We also avoid use of computationally expensive cryptographic algorithms due to their complexity of implementation. Finally, we would like our method to be resistant to attacks without the usage of any trusted third parties. ## 4. 
Proposed Transaction Ordering Protocol We introduce our transaction ordering protocol, called Masquerade, which is a decentralized protocol with minimal changes to the current consensus protocol and no reliance on external trusted parties. In our protocol, we modify a percentage of transactions \(\texttt{txn}_{u}\), \(\texttt{txn}_{a}\) to include a new parameter called "token number" with them. These transactions will then be ordered strictly according to the token number accompanying the transaction. As a result, even if the content of the MEV transaction is made aware to an attacker, they are unable to frontrun it without the relevant token number required to actually frontrun the transaction. We add two new kinds of transactions called "token purchase transaction" and "tokenized transaction" that we will describe in more detail below: ### Token Purchase Transaction A token purchase transaction is simply a transaction that the user makes in order to receive token number \(T\) that can be used for future transactions. A single token purchase transaction can be used to specify any number of tokens desired, as long as sufficient funds are available for the token. A single token purchase costs \(y\) units. A token purchase transaction is considered successful, if it has been included as a part of the main chain. A user, or adversary cannot specify the token number they would like to purchase. Token numbers are issued independently, by a token issuing algorithm. If a user uses a valid token, to make a valid tokenized transaction, the token is considered to be spent, and the cost of purchasing that particular token \(y\) is refunded back to the user. Each token can only be used once, but a token that has been purchased once never expires, and can be used in the future. ### Tokenized Transaction A tokenized transaction for round \(r\), \(\texttt{txn}_{u}[r]\), \(\texttt{txn}_{a}[r]\) is a transaction that is accompanied by a valid token number \(T_{u}[r]\), \(T_{a}[r]\) for the user and adversary respectively. A valid token number is one, that has been confirmed and included on the main chain in the previous rounds \(r-1\). A user can only use valid tokens to make a tokenized transaction. Tokenized transactions are strictly ordered in ascending order, with transaction \(\texttt{txn}_{u}[r]\) being executed earlier than \(\texttt{txn}_{a}[r]\), if \(T_{u}[r]<T_{a}[r]\). Further, all tokenized transactions precede non-tokenized transactions. The process of formation of the block now follows the following procedure: * At the beginning of the round, a user can take the following actions: * all users can make non-tokenized regular transactions or non-tokenized MEV transactions. * users can make the desired token purchase requests. * users can use previously purchased tokens to make tokenized MEV transactions, or tokenized regular transansactions. * All transactions are verified to ensure they are valid. * The validator for round \(r\) then collects \(N\) valid tokenized transactions, and orders them based on token numbers. A tokenized transaction \(\texttt{txn}_{u}[r]\) is strictly ordered before \(\texttt{txn}_{a}[r]\) if \(T_{u}[r]<T_{a}[r]\). * The validator also collects non-tokenized transactions and creates a block based on the highest rewards that can be extracted. * Finally, at the end of the round, the validator updates the state of the Blockchain and publishes the block. 
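As a concrete illustration of the ordering rule above, a block builder's arrangement of one round's candidate transactions can be sketched as follows. Representing transactions as dictionaries with an optional 'token' field is an assumption made for the example, not part of the protocol specification.

```python
def order_block(transactions, valid_tokens):
    """Tokenized transactions carrying a valid (previously confirmed, unspent)
    token come first, in strictly ascending token number; everything else
    follows. `valid_tokens` is the set of token numbers already confirmed on
    chain in earlier rounds. Illustrative sketch only."""
    tokenized = sorted(
        (t for t in transactions
         if t.get("token") is not None and t["token"] in valid_tokens),
        key=lambda t: t["token"])
    non_tokenized = [t for t in transactions
                     if t.get("token") is None or t["token"] not in valid_tokens]
    return tokenized + non_tokenized

def block_obeys_ordering(block):
    """Validator-side check: tokenized transactions precede all others and
    their token numbers are strictly increasing."""
    tokens = [t["token"] for t in block if t.get("token") is not None]
    head = [t["token"] for t in block[:len(tokens)] if t.get("token") is not None]
    return head == tokens and all(a < b for a, b in zip(tokens, tokens[1:]))
```

Verifying a proposed block therefore requires only this one additional check on top of the existing validity rules, in line with the few-lines-of-code constraint stated in the problem statement.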
It is important to note, here, that the token purchase transaction, like any other transaction, has the ability to get attacked. An adversary is able to frontrun an honest user's token purchase transaction, to get a lower token number than the user. ### Rewards for the proposed protocol During round \(r\), there are only \(l\) tokens available that can be purchased. We assume a reasonable, fixed user policy \(\pi_{u}\) and adversary policy \(\pi_{a}\). We would like to find the maximum possible reward that can be achieved by the adversary in a fixed user policy scenario, which is strictly less than what the adversary gains in the current scenario. Let the user and adversary each purchase \(\mathsf{X}_{u}[r],\mathsf{X}_{a}[r]\) number of tokens respectively. As each token costs \(y\) units, the total costs incurred by them is \(y\mathsf{X}_{u}[r],y\mathsf{X}_{a}[r]\) Let each block only allow \(N\) tokenized transactions. Let us define \(\mathsf{M}_{u}[r]\) to be a function that defines whether an MEV transaction is made by a user or not i.e \[\mathsf{M}_{u}[r]=\begin{cases}1&\text{if MEV transaction is made}\\ 0&\text{otherwise}\end{cases} \tag{7}\] Similarly, we define \(\mathsf{M}_{a}[r]\) to be a function that defines whether an MEV transaction is attacked by an adversary or not i.e \[\mathsf{M}_{a}[r]=\begin{cases}1&\text{if MEV transaction is attacked}\\ 0&\text{otherwise}\end{cases} \tag{8}\] Let us define \(\mathsf{F}_{r}\) to be a function that defines whether an MEV transaction made by the user is frontrun or not i.e \[\mathsf{F}_{r}=\begin{cases}1&\text{if MEV transaction is frontrun}\\ 0&\text{otherwise}\end{cases} \tag{9}\] The rewards to the user and the adversary \(h_{u}[r],h_{a}[r]\) in round \(r\) can be defined as follows: \[h_{u}[r]=\begin{cases}-y\mathsf{X}_{u}[r]+\eta-f\eta+y&\text{if MEV transaction is frontrun}\\ -y\mathsf{X}_{u}[r]+\eta+y&\text{otherwise}\end{cases} \tag{10}\] \[h_{a}[r]=\begin{cases}-y\mathsf{X}_{a}[r]+f\eta+y&\text{if MEV transaction is frontrun}\\ -y\mathsf{X}_{a}[r]&\text{otherwise}\end{cases} \tag{11}\] Now, both the user and adversary start with an initial wealth \(W_{u}[0],W_{a}[0]\). If, we assume that each block has only a single MEV transaction, after \(R\) rounds, the maximum wealth that can be accumulated by both the user and the adversary from MEV transactions are: \[W_{u} =W_{u}[0]+\max\sum_{r=1}^{R}h_{u}[r]\] \[=W_{u}[0]+\max\sum_{r=1}^{R}\eta-f\eta\mathsf{F}_{r}+y\mathsf{M} _{u}[r]-y\mathsf{X}_{u}[r] \tag{12}\] \[W_{a} =W_{a}[0]+\max\sum_{r=1}^{R}h_{a}[r]\] \[=W_{a}[0]+\max\sum_{r=1}^{R}(f\eta+y)\mathsf{F}_{r}-y\mathsf{X}_ {a}[r] \tag{13}\] We now introduce a randomly chosen fixed user policy \(\pi_{u}\), as shown in Algorithm 1. At the end of each round, a user purchases a single token, which depletes their wealth. When the threshold of this wealth is less than a small threshold \(\tau\), the user begins to spend their tokens. This ensures that there is sufficient time for user to collect tokens to spend so that they are not frontrun by adversary tokens. Finally, the user will always take the MEV opportunity presented to them as long as they have an appropriate token. We assume that an adversary can simply frontrun any non-tokenized transactions. As a result, the user solely makes tokenized MEV transactions. Further, as the user is not aware of any tokens that are held by the adversary, they always use their lowest available token. 
``` 1:Inputs: \(W_{u}[r],y,\tau,\tilde{H}_{u}[r]\subseteq\tilde{H}[r]\) 2:if\(W_{u}[r]>y\)then 3:\(X_{u}[r]=1\) 4:endif 5:if\(W_{u}[r]\leq\tau\)then 6:\(M_{u}[r]=1\) 7:\(T_{u}[r]=\tilde{H}_{u}[r]\) 8:else 9:\(M_{u}[r]=0\) 10:\(T_{u}[r]=\infty\) 11:endif ``` **Algorithm 1** User Policy \(\pi_{u}\) The adversary policy \(\pi_{a}\) on the other hand, is more powerful. As a worst case scenario, we assume that the adversary has full control over the token number assignment, and may reorder the token purchase transactions as they please. Thus, the adversary is aware of all tokens that the user has. We also allow the adversary to monitor user transactions, and the ability to frontrun these transactions. Since the user only makes tokenized MEV transactions, the adversary is able to frontrun these transactions, if they have tokens that are smaller in number than user tokens. This policy is detailed in Algorithm 2. The adversary can choose any policy, however we show in Section 5, that this indeed is the best policy that can be taken. ``` 1:Inputs: \(y,W_{a}[r],T_{u}[r],M_{u}[r],\tilde{H}_{a}[r]\subseteq\tilde{H}[r]\) 2:\(X_{a}[r]=\lfloor\frac{W_{a}[r]}{y}\rfloor\) 3:if\(M_{u}[r]=1\)then 4:if\(T_{u}[r]<\infty\)then 5:\(T_{a}[r]=\max T\in\tilde{H}_{a}[r]\) s.t. \(T<T_{u}[r]\) 6:else 7:\(T_{a}[r]=\infty\) 8:endif 9:\(M_{a}[r]=0\) 10:\(T_{a}[r]=\infty\) 11:endif ``` **Algorithm 2** Adversary Policy \(\pi_{a}\) ## 5. Analysis To ease analysis, we divide time in to epochs as defined in the following. The first epoch begins at round \(r=0\) and ends when the user has completed the initial token purchasing as in Algorithm 1. Equivalently, the first epoch lasts until the user's wealth \(W_{u}[r]\) drops below threshold \(\tau\). Each subsequent epoch begins immediately after the epoch prior to it ends. An epoch ends when the following two conditions are satisfied: 1. the user has utilized all of the tokens purchased in the previous epoch; 2. the available wealth \(W_{u}[r]\) of the user drops below \(\tau\). The above conditions lead to a well-defined notion of an epoch as the user always utilizes the earliest available token for a MEV transaction. Tokens in one epoch, therefore, are completely utilized before tokens in the next epoch are utilized. We also define a terminal epoch in which parties utilize tokens from the previous epoch, but make no new token purchases during the epoch. The game ends after the terminal epoch. We assume the game lasts for \(k>0\) epochs. Restricting the game to \(k\) epochs captures the intuition that in practice, a player is interested in optimizing her rewards over a fixed time horizon, e.g., the lifespan of the player, the next five years etc. Note that our definition of an epoch is tied to the user's behavior described in Algorithm 1. The definition is independent of the adversary's behavior. Let \(\tilde{H}[e]\) be the set of tokens purchased by either the user or the adversary during epoch \(e\). Further, let \(\tilde{H}_{u}[e]\subseteq\tilde{H}[e]\) and \(\tilde{H}_{a}[e]\subseteq\tilde{H}[e]\) be the tokens purchased by the user and adversary, respectively, during \(e\). \(\tilde{W}_{a}[e]\) and \(\tilde{W}_{u}[e]\) denote, respectively, the total wealth of the adversary and user at the end of epoch \(e\). The total wealth of a party includes the wealth available for spending and the funds locked up in the form of tokens. We call a policy \(\pi_{a}\) followed by the adversary as balanced if the following holds true. 
**Property 1** (Balanced policy): _An adversary policy \(\pi_{a}\) is balanced if for each non-terminal epoch \(e\), there exists an injective mapping \(m_{e}:\tilde{H}_{a}[e]\rightarrow\tilde{H}_{u}[e]\) such that \(m_{e}(T)>T\) for all \(T\in\tilde{H}_{a}[e]\) and \(|\tilde{H}_{a}[e]|=\lfloor\tilde{W}_{a}[e]/y\rfloor\)._ With a balanced policy, an adversary can successfully utilize each purchased token to front run some victim transaction. The adversary also maximally utilizes its wealth to purchase tokens. We call a specific epoch \(e\) as balanced if there exists an injective mapping \(m_{e}:\tilde{H}_{a}[e]\rightarrow\tilde{H}_{u}[e]\) such that \(m_{e}(T)>T\) for all \(T\in\tilde{H}_{a}[e]\) and \(|\tilde{H}_{a}[e]|=\lfloor\tilde{W}_{a}[e]/y\rfloor\). We now show that the adversary policy \(\pi_{a}\) described in Algorithm 2 is balanced. **Theorem 1**: _For \(W_{a}[0]<\sigma W_{u}[0]\) where \(0<\sigma<\frac{1}{2}\) is a security parameter, \(f<\frac{1-\sigma-\epsilon}{1+\sigma}\), and \(W_{a}[0]>\frac{y^{2}}{\eta\epsilon}\) where \(\epsilon<(yf+f^{2}\eta)/(\eta(1-f))\) is a parameter and \(\tau<\epsilon W_{u}[0]\) the adversary's policy \(\pi_{a}\) as described in Algorithm 2 is balanced._ The proof is by induction. During the initial epoch, the adversary purchases \(\lfloor W_{a}[0]/y\rfloor\) tokens first following which the user purchases \(\lceil(W_{u}[0]-\tau)/y\rceil\) tokens as per the adversary policy \(\pi_{a}\) and user policy \(\pi_{u}\) in Algorithm 1. By assumptions in Theorem, \(W_{a}[0]<\sigma W_{u}[0]<\frac{W_{u}[0]}{2}\) and therefore \(\lfloor W_{a}[0]/y\rfloor<\lceil(W_{u}[0]-\tau)/y\rceil\). The injective mapping \(m_{0}:\tilde{H}_{a}[0]\rightarrow\tilde{H}_{u}[0]\) can simply be Figure 1: Illustration of token purchases with time. \(m_{0}(i)=\lfloor W_{a}[0]/y\rfloor+i\) for all \(1\leq i\leq\lfloor W_{a}[0]/y\rfloor\). The total wealth of the user \(\tilde{W}_{u}[0]\) and adversary \(\tilde{W}_{a}[0]\) at the end of the initial epoch are still \(W_{u}[0]\) and \(W_{a}[0]\) respectively. Now, consider an epoch \(e\) that is not the initial or the terminal epoch. Supposing the previous epoch \(e-1\) is balanced, i.e., satisfies the property that there exists an injective function \(m_{e-1}:\tilde{H}_{a}[e-1]\to\tilde{H}_{u}[e-1]\) with \(m_{e-1}(T)>T\) for all \(T\in\tilde{H}_{a}[e-1]\). Also suppose the mapping \(m_{e-1}\) is such that there are at least \(c\lceil\tilde{W}_{a}[e-1]/y\rceil\) user tokens at the end of epoch \(e-1\) without a preimage in \(m_{e-1}\) where \(c>(f+\epsilon)/(1-f-\epsilon)\), i.e., \[|\{T^{\prime}\in\tilde{H}_{u}[e-1]:T^{\prime}>\max_{T:T\in\tilde{W}_{a}[e-1]}m _{e-1}(T)\}|>c\left\lceil\frac{\tilde{W}_{a}[e-1]}{y}\right\rceil. \tag{14}\] Equation (14) implies there exist at least \(c\lceil\tilde{W}_{a}[e-1]/y\rceil\) user transactions that are not front run in the end of epoch \(e\). Further, suppose that the total wealth of the parties are such that \(\tilde{W}_{a}[e-1]<\sigma\tilde{W}_{u}[e-1]\). We will show that there exists an injective function \(m_{e}:\tilde{H}_{a}[e]\to\tilde{H}_{u}[e]\) with \(m_{e}(T)>T\) for all \(T\in\tilde{H}_{a}[e]\) and \(\tilde{W}_{a}[e]<\sigma\tilde{W}_{u}[e]\). There are \(\lfloor\tilde{W}_{a}[e-1]/y\rfloor\) tokens purchased by the adversary during epoch \(e-1\). Since epoch \(e-1\) is balanced, during epoch \(e\) the adversary can use each of its tokens in \(e-1\) to successfully front run transactions. 
After each successful front running attack, the adversary replaces its used token in epoch \(e-1\) by purchasing a new token in epoch \(e\). We call such a token the adversary purchases as a _replacement_ token. Additionally, the adversary can use the profits gained from front running to purchase tokens whenever the adversary's balance exceeds \(y\). We call such a token as a _bonus_ token. Note that after each transaction the user submits, the user also purchases a replacement token and possibly bonus tokens. We define replacement token and bonus token for the user analogous to those for the adversary. Per our model, the replacement and bonus tokens of the adversary are purchased before the replacement and bonus tokens of the user after each front running attack. To construct an injective mapping for epoch \(e\), first note that the replacement token purchased by the adversary after each successful front run can be mapped to the replacement token purchased by the user after the front run. Thus, \(\lfloor\tilde{W}_{a}[e-1]/y\rfloor\) of the adversary's tokens can be mapped to higher tokens in epoch \(e\). However, in addition to replacement tokens the adversary also purchases \(\lfloor\lfloor\tilde{W}_{a}[e-1]/y\rfloor f\eta/y\rfloor\) bonus tokens in epoch \(e\), which need to be mapped. For this, consider round \(r^{*}\) when the last front run attack is performed by the adversary in epoch \(e\). We observe that, by policy \(\pi_{a}\) all of the adversary's bonus tokens are purchased by round \(r^{*}+1\) in epoch \(e\) (Figure 1). Transactions submitted by the user after round \(r^{*}\) are not attacked. By assumption, at least \(\lceil\tilde{W}_{a}[e-1]/y\rceil\) transactions are submitted by the user after round \(r^{*}+1\). Each of those transactions generate a reward of \(\eta\) for the user. Therefore, the user is able to purchase at least \(\lfloor\lceil\tilde{W}_{a}[e-1]/y\rceil(y+\eta)/y\rfloor\) tokens at the end of epoch \(e\). To complete our mapping, we map the \(\lfloor\lfloor\tilde{W}_{a}[e-1]/y\rfloor f\eta/y\rfloor\) bonus tokens of the adversary to the first \(\lfloor\lfloor\tilde{W}_{a}[e-1]/y\rfloor f\eta/y\rfloor\) user tokens purchased after round \(r^{*}\). This is possible since \[\epsilon <\frac{yf+f^{2}\eta}{\eta(1-f)}\Rightarrow\frac{\eta\epsilon+f \eta}{y+\eta}<\frac{f}{1-f}\Rightarrow\frac{\frac{y^{2}}{\tilde{W}_{a}[e-1]} +f\eta}{y+\eta}<\frac{f}{1-f}\] \[\Rightarrow\frac{\frac{y^{2}}{\tilde{W}_{a}[e-1]}+f\eta}{y+\eta} <c\Rightarrow y^{2}<\tilde{W}_{a}[e-1](cy+c\eta-f\eta)\] \[\Rightarrow\tilde{W}_{a}[e-1]f\eta <c\tilde{W}_{a}[e-1](y+\eta)-y^{2}\Rightarrow\frac{\tilde{W}_{a }[e-1]f\eta}{y^{2}}<\frac{c\tilde{W}_{a}[e-1](y+\eta)}{y^{2}}-1\] \[\Rightarrow\lfloor\lfloor\tilde{W}_{a}[e-1]/y\rfloor f\eta/y\rfloor< \lfloor c\lceil\tilde{W}_{a}[e-1]/y\rceil(y+\eta)/y\rfloor. \tag{15}\] Thus, we have showed the existence of an injective mapping \(m_{e}:\tilde{H}_{a}[e]\rightarrow\tilde{H}_{u}[e]\) with \(m_{e}(T)>T\) for all \(T\in\tilde{H}_{a}[e]\). The number of user tokens at the end of epoch \(e\) who which a preimage on \(m_{e}\) does not exist is at least \(\lfloor\lceil\tilde{W}_{a}[e-1]/y\rceil(y+\eta)/y\rfloor-\lfloor\lfloor\tilde{ W}_{a}[e-1]/y\rfloor f\eta/y\rfloor\). In the following we show that this quantity is at least \(\lceil\tilde{W}_{a}[e]/y\rceil\). 
We have \[\frac{f+\epsilon}{1-f-\epsilon} <c\] \[\Rightarrow\frac{f+\frac{y^{2}}{\eta W_{a}[e-1]}}{1-f-\frac{y^{2 }}{\eta\tilde{W}_{a}[e-1]}} <c\] \[\Rightarrow f\eta+\frac{y^{2}}{W_{a}[\tilde{e}-1]} <c\eta(1-f)-\frac{cy^{2}}{\tilde{W}_{a}[e-1]}\] \[\Rightarrow cy+cf\eta+\frac{cy^{2}}{\tilde{W}_{a}[e-1]} <cy+c\eta-\frac{y^{2}}{W_{a}[\tilde{e}-1]}-f\eta\] \[\Rightarrow c\frac{\tilde{W}_{a}[e-1]}{y}\left(1+\frac{f\eta}{y} \right)+c <\frac{c\tilde{W}_{a}[e-1](y+\eta)}{y^{2}}-1-\frac{\tilde{W}_{a}[e-1] f\eta}{y^{2}}\] \[\Rightarrow c\tilde{W}_{a}[e]/y+c <c\lceil\tilde{W}_{a}[e-1]/y\rceil(y+\eta)/y-1-\lfloor\tilde{W}_{a }[e-1]/y\rfloor f\eta/y\] \[\Rightarrow c\lceil\tilde{W}_{a}[e]/y\rceil<\lfloor c\lceil\tilde{W}_{a }[e-1]/y\rceil(y+\eta)/y\rfloor-\lfloor\lfloor\tilde{W}_{a}[e-1]/y\rfloor f \eta/y\rfloor \tag{16}\] It only remains to show that \(\tilde{W}_{a}[e]<\sigma\tilde{W}_{a}[e]\). We have, \[\frac{\tilde{W}_{a}[e-1]+\left\lfloor\frac{\tilde{W}_{a}[e-1]}{y }\right\rfloor f\eta}{\tilde{W}_{u}[e-1]+\left\lfloor\frac{\tilde{W}_{a}[e-1] }{y}\right\rfloor(1-f)\eta+\left(\left\lceil\frac{\tilde{W}_{u}[e-1]-\tau}{y }\right\rceil-\left\lfloor\frac{\tilde{W}_{a}[e-1]}{y}\right\rfloor\right) \eta}<\sigma\] \[\Leftarrow\frac{\tilde{W}_{a}[e-1]+\left\lfloor\frac{\tilde{W}_{ a}[e-1]}{y}\right\rfloor f\eta}{\tilde{W}_{u}[e-1]+\left\lfloor\frac{\tilde{W}_{a}[e-1]}{y }\right\rfloor(1-f)\eta+\left(\left\lceil\frac{\tilde{W}_{u}[e-1]-\tau}{y} \right\rceil-\left\lfloor\frac{\tilde{W}_{a}[e-1]}{y}\right\rfloor\right)\eta }<\frac{\tilde{W}_{a}[e-1]}{\tilde{W}_{u}[e-1]}\] \[\Leftarrow\frac{\tilde{W}_{a}[e-1]+\left\lfloor\frac{\tilde{W}_{ a}[e-1]}{y}\right\rfloor f\eta}{\tilde{W}_{a}[e-1]}<\frac{\tilde{W}_{u}[e-1]+\left\lfloor \frac{\tilde{W}_{a}[e-1]}{y}\right\rfloor(1-f)\eta+\left(\left\lceil\frac{ \tilde{W}_{u}[e-1]-\tau}{y}\right\rceil-\left\lfloor\frac{\tilde{W}_{a}[e-1] }{y}\right\rfloor\right)\eta}{\tilde{W}_{u}[e-1]}\] \[\Leftarrow\frac{\tilde{W}_{a}[e-1]+\frac{\tilde{W}_{a}[e-1]}{y}f\eta}{ \tilde{W}_{a}[e-1]}<\frac{\tilde{W}_{u}[e-1]+\left\lfloor\frac{\tilde{W}_{a}[e-1 ]}{y}\right\rfloor(1-f)\eta+\left(\frac{\tilde{W}_{u}[e-1]-\tau}{y}-\frac{ \tilde{W}_{a}[e-1]}{y}\right)\eta}{\tilde{W}_{u}[e-1]}\] \[\Leftarrow 1+\frac{f\eta}{y}<1+\left\lfloor\frac{\tilde{W}_{a}[e-1 ]}{y}\right\rfloor\frac{(1-f)\eta}{\tilde{W}_{u}[e-1]}+\frac{1-\frac{\tau}{ \tilde{W}_{u}[e-1]}}{y}\eta-\frac{\tilde{W}_{a}[e-1]}{\tilde{W}_{u}[e-1]y}\eta\] \[\Leftarrow\frac{f\eta}{y}+\frac{\sigma\eta}{y}<\left\lfloor \frac{\tilde{W}_{a}[e-1]}{y}\right\rfloor\frac{(1-f)\eta}{\tilde{W}_{u}[e-1]}+ \frac{1-\frac{\tau}{\tilde{W}_{u}[e-1]}}{y}\eta\] \[\Leftarrow\frac{f\eta}{y}+\frac{\sigma\eta}{y}<\left(\frac{ \tilde{W}_{a}[e-1]}{y}-1\right)\frac{(1-f)\eta}{\tilde{W}_{u}[e-1]}+\frac{1- \frac{\tau}{\tilde{W}_{u}[e-1]}}{y}\eta\] \[\Leftarrow f+\sigma<\frac{(\tilde{W}_{a}[e-1]-y)(1-f)}{\tilde{W}_ {u}[e-1]}+\frac{1-\frac{\tau}{\tilde{W}_{u}[e-1]}}{y}\eta\] \[\Leftarrow f\left(1+\frac{\tilde{W}_{a}[e-1]-y}{\tilde{W}_{u}[e -1]}\right)<\frac{\tilde{W}_{a}[e-1]-y}{\tilde{W}_{u}[e-1]}+1-\frac{\tau}{ \tilde{W}_{u}[e-1]}\] \[\Leftarrow f\left(1+\sigma-\frac{y}{\tilde{W}_{u}[e-1]}\right)< -\sigma+1-\frac{\tau}{\tilde{W}_{u}[e-1]}\] \[\Leftarrow f<\frac{1-\sigma-\frac{\tau}{\tilde{W}_{u}[e-1]}}{1+ \sigma-\frac{y}{\tilde{W}_{u}[e-1]}}\] \[\Leftarrow f<\frac{1-\sigma-\epsilon}{1+\sigma} \tag{17}\] where we have used \(\tilde{W}_{a}[e-1]>y^{2}/(\eta\epsilon)>y\) and \(\tau<\epsilon\tilde{W}_{u}[e-1]\). The proof follows by induction. 
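To make the parameter constraints of Theorem 1 concrete, the following snippet checks one hypothetical assignment of values, chosen only so that every condition holds simultaneously; these are illustrative numbers, not the values used in the experiments of Section 6.

```python
# Hypothetical values: the adversary starts with a fifth of the user's wealth.
y, eta, f, sigma, eps = 80.0, 100.0, 0.3, 0.2, 0.1
W_u0, W_a0, tau = 10_000.0, 1_000.0, 500.0

assert 0 < sigma < 0.5 and W_a0 < sigma * W_u0          # wealth-ratio assumption
assert eps < (y * f + f**2 * eta) / (eta * (1 - f))     # eps = 0.1 < 0.471...
assert f < (1 - sigma - eps) / (1 + sigma)              # f = 0.3 < 0.583...
assert W_a0 > y**2 / (eta * eps)                        # 1000 > 640
assert tau < eps * W_u0                                 # 500 < 1000
print("all conditions of Theorem 1 are satisfied")
```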
Lemma 1.: _The total wealth of an adversary, following \(\pi_{a}\), after \(k\) epochs is upper bounded as_ \[\tilde{W}_{a}[k]\leq W_{a}[0]\left(1+\frac{f\eta}{y}\right)^{k}. \tag{18}\] Proof.: The total wealth of the adversary at the end of the initial epoch, \(\tilde{W}_{a}[0]\), is \(W_{a}[0]\). Under policy \(\pi_{a}\), the adversary is able to front run \(\lfloor\tilde{W}_{a}[0]/y\rfloor\) many user transactions in the first epoch. Therefore, the total wealth of the adversary at the end of the first epoch is (under what assumptions?) \[\tilde{W}_{a}[1]=W_{a}[0]+\left\lfloor\frac{W_{a}[0]}{y}\right\rfloor f\eta \leq W_{a}[0]\left(1+\frac{f\eta}{y}\right). \tag{19}\] From Equation (19), the total wealth of the adversary at the end of the second epoch is at most \[\tilde{W}_{a}[2]\leq W_{a}[0]\left(1+\frac{f\eta}{y}\right)+\left\lfloor\frac {W_{a}[0]\left(1+\frac{f\eta}{y}\right)}{y}\right\rfloor f\eta\leq W_{a}[0] \left(1+\frac{f\eta}{y}\right)^{2}. \tag{20}\] Continuing this way, the total wealth of the adversary at the of the \(k\)-th epoch is at most \[\tilde{W}_{a}[k]\leq W_{a}[0]\left(1+\frac{f\eta}{y}\right)^{k}. \tag{21}\] **Lemma 2**: _The total wealth of the user at the end of \(k\) epochs is lower bounded as_ \[\tilde{W}_{u}[k]\geq W_{u}[0]\left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y} \right)^{k}-\frac{\tau\eta}{y}\left(\frac{\left(1+\frac{\eta}{y}-\frac{\sigma f \eta}{y}\right)^{k}-1}{\left(\frac{\eta}{y}-\frac{\sigma f\eta}{y}\right)} \right). \tag{22}\] Next, the total wealth of the user at the end of the initial epoch is \(W_{u}[0]\). During the first epoch, the user makes \(\left\lceil\frac{W_{u}[0]-\tau}{y}\right\rceil\) transactions bringing \(\left\lceil\frac{W_{u}[0]-\tau}{y}\right\rceil\eta\) additional value in to the system. From Equation (19), the total wealth of the user at the end of the first epoch is \[\tilde{W}_{u}[1] \geq W_{u}[0]+W_{a}[0]+\left\lceil\frac{W_{u}[0]-\tau}{y}\right\rceil \eta-W_{a}[0]\left(1+\frac{f\eta}{y}\right)\] \[\geq W_{u}[0]+W_{a}[0]+\left(\frac{W_{u}[0]-\tau}{y}\right)\eta-W_{ a}[0]\left(1+\frac{f\eta}{y}\right)\] \[= W_{u}[0]\left(1+\frac{\eta}{y}\right)-W_{a}[0]\frac{f\eta}{y}- \frac{\tau\eta}{y}\] \[\geq W_{u}[0]\left(1+\frac{\eta}{y}\right)-W_{u}[0]\sigma\frac{f \eta}{y}-\frac{\tau\eta}{y}\] \[= W_{u}[0]\left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y}\right)- \frac{\tau\eta}{y}. \tag{23}\] Similarly, the total wealth of the user at the end of the second epoch is \[\tilde{W}_{u}[2] \geq \tilde{W}_{u}[1]+\tilde{W}_{a}[1]+\left\lceil\frac{\tilde{W}_{u }[1]-\tau}{y}\right\rceil\eta-\tilde{W}_{a}[1]\left(1+\frac{f\eta}{y}\right) \tag{24}\] \[\geq \tilde{W}_{u}[1]\left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y} \right)-\frac{\tau\eta}{y}\] \[\geq W_{u}[0]\left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y}\right)^ {2}-\frac{\tau\eta}{y}\left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y}+1\right).\] Repeating the above for \(k\) times, we have \[\tilde{W}_{u}[k]\geq W_{u}[0]\left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y} \right)^{k}-\frac{\tau\eta}{y}\left(\frac{\left(1+\frac{\eta}{y}-\frac{\sigma f \eta}{y}\right)^{k}-1}{\left(\frac{\eta}{y}-\frac{\sigma f\eta}{y}\right)} \right). \tag{25}\] **Theorem 3**: _Over epochs, the percentage of user transactions front run in each epoch goes to zero._ The number of user transactions front run during epoch \(e\) is \(\left\lceil\tilde{W}_{a}[e-1]/y\right\rfloor\). The total number of transactions made by the user during epoch \(e\) is \(\left\lceil(\tilde{W}_{u}[e]-\tau)/y\right\rceil\). 
Since each of the adversary's tokens can be used to front run a transaction, the fraction of transactions that are front run during the epoch is given by \[\frac{\lfloor\tilde{W}_{a}[e-1]/y\rfloor}{\lceil(\tilde{W}_{u}[e-1]-\tau)/y \rceil}\leq\frac{\tilde{W}_{a}[e-1]/y}{(\tilde{W}_{u}[e-1]-\tau)/y}=\frac{ \tilde{W}_{a}[e-1]}{(\tilde{W}_{u}[e-1]-\tau)}. \tag{26}\] From Lemma 1 and 2 we have the percentage of front run transactions as at most \[\frac{W_{a}[0]\left(1+\frac{f\eta}{y}\right)^{e-1}}{W_{u}[0]\left(1+\frac{ \eta}{y}-\frac{\sigma f\eta}{y}\right)^{e-1}-\frac{\tau\eta}{y}\left(\frac{ \left(1+\frac{\eta}{y}-\frac{\sigma f\eta}{y}\right)^{e-1}-1}{\left(\frac{ \eta}{y}-\frac{\sigma f\eta}{y}\right)}\right)-\tau}. \tag{27}\] Taking limit of Equation (27) as \(e\rightarrow\infty\), we have \[\lim_{e\rightarrow\infty}\frac{W_{a}[0]}{W_{u}[0]\left(\frac{y+\eta-\sigma f \eta}{y+f\eta}\right)^{e-1}-\frac{\tau}{1-\sigma f}\left(\left(\frac{y+\eta- \sigma f\eta}{y+f\eta}\right)^{e-1}-\frac{y^{e-1}}{(y+f\eta)^{e-1}}\right)- \frac{\tau y^{e-1}}{(y+f\eta)^{e-1}}}\] \[=\lim_{e\rightarrow\infty}\frac{W_{a}[0]}{\left(W_{u}[0]-\frac{ \tau}{1-\sigma f}\right)\left(\frac{y+\eta-\sigma f\eta}{y+f\eta}\right)^{e-1 }+\frac{\tau}{1-\sigma f}\frac{y^{e-1}}{(y+f\eta)^{e-1}}-\frac{\tau y^{e-1}}{ (y+f\eta)^{e-1}}}=0, \tag{28}\] since \(y+\eta-\sigma f\eta>y+f\eta\iff f<1/(1+\sigma)\) which is true, and \(W_{u}[0]>\tau/(1-\sigma f)\). Theorem 4 ().: _The total wealth earned by the adversary after \(k\) epochs under policy \(\pi_{a}\) is at most a factor \(\frac{y}{y-\eta e}\) away from the total reward under any optimal policy._ (Proof in Appendix A) ## 6. Experiments We perform extensive experiments in order to show the results of the tokenized transaction protocol under different conditions. The experiments are performed by fixing user policy as described in Algorithm 1 and intelligent adversary policy, as per Algorithm 2. We run the experiment for \(10000\) rounds and use the following values of \(f=0.8,\eta=100\) for our simulations. We assume that only a single MEV transaction is included in the block. The user starts with an initial wealth of \(w_{\text{USER}}=1000\$\) and the adversary starts with \(\frac{w_{\text{USER}}}{2}\). \(y=80\) is our chosen token cost. ### Transaction Models We compare the results of our experiment, to the current status-quo, that contains transactions that are being ordered by a relay such as flashbots, and provided to a validator. Here, \(f=1\), since the transactions are always attacked by the adversary. We also compare our results to an ideal scenario, where no MEV transaction is attacked. We use the final wealth of the user as an indicator to determine the long term feasibility of our policy. We also consider the percentage of transactions that are frontrun by the adversary, in order to track the mitigation of the MEVs from our method. We consider the wealth of all parties in the system based on the following model configurations: * Constant \(\eta\): This state is when each block only has a single MEV transaction, and the user and adversary gain a constant, deterministic reward \(\eta\) for making this transaction. If an adversary attacks the transaction, they earn a constant reward of \(f\eta\), while the user earns \(\eta-f\eta\). * Stochastic \(\eta\): This state is when each block has a single MEV transaction, but the user and adversary gain a reward that is stochastic. 
* Fatal frontrunning: While making MEV transactions, there are some attacks that cannot be blocked if they are frontrun, this is called fatal frontrunning. In this case, we include Type 1 and Type 2 transactions, where Type 1 are those transactions that can be protected (such as swaps), and Type 2 are those that cannot (such as arbitrage attacks and liquidation). * Real world \(\eta\): In this case, we consider real values of MEV profits that have been extracted from the Ethereum blockchain. ### Tokenization on constant \(\eta\) In this case, we consider the wealth gained by the user and the adversary based on constant \(\eta=100\), as shown in Figure 2. We see in Figure 2, that the total user wealth, while using Masquerade is comparable to the ideal scenario, one in which the adversary does not exist. The status-quo represents the current protocol, where all MEV transactions are frontrun by the adversary. We see, that on average, only 30% of transactions are attacked, which is a significant improvement, compared to all transactions being attacked by adversary. We see, that when the user waits for a certain period of time, they are able to beat the adversary and protect their transaction. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline Method & \multicolumn{2}{c}{Wealth of User} & \multicolumn{2}{c}{Wealth of Adversary} & \multicolumn{1}{c|}{\% Frontrun} & \multicolumn{1}{c|}{\% Backrun} \\ & Initial & Final & Initial & Final & & \\ \hline Status Quo & 1000 & 103840.0 & 500 & 411560.0 & 100 & 100 \\ Token MEV & 1000 & 390240.0 & 500 & 125160.0 & 29.81 & 30.90 \\ Ideal case & 1000 & 515400.0 & 0 & 0 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1. Improvements in transaction unstealibility using tokens Figure 2. (a) Shows the percentage of transactions successfully attacked by the adversary every 1000 rounds. (b) The accumulation of the wealth of users and the adversaries when using Masquerade (our method) v/s the current status quo. ### Tokenization on stochastic \(\eta\) In this case, we consider the wealth gained by the user and the adversary for a stochastic \(\eta\) based on samples drawn from a Gaussian distribution, as shown in Figure 3. We also consider a heavy tailed Cauchy distribution as shown in Figure 3. This is a more interesting case, as the value earned by stealing the transaction is vastly different. The adversary now, no longer can predict what value each MEV transaction that can be potentially made in the future holds. They are only aware of current token request transactions, and tokenized MEV transactions. Now, the adversary has to carefully decide if they would like to use their best token to attack the current tokenized MEV transaction, or wait in case a better prospect in the future shows up. If the adversary chooses not to attack, the user benefits directly. If the adversary attacks, they may lose out on a future transaction. To capture this, we consider simple modifications to Algorithm 1 where the user decides to use their lowest token if the value of \(\eta>100\), and second lowest token otherwise. Similarly, in Algorithm 2, the adversary decides whether to use their best token in the current round, or the next. Here too, we see that our algorithm performs quite well, improving the user's wealth compared to current status quo. We also see that the adversary is unable to attack all transactions, and even misses out on crucial, high value MEVs. 
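The constant-\(\eta\) experiment can be re-implemented in outline with the simplified round-by-round simulation below, which follows Algorithms 1 and 2 with one MEV opportunity per round and worst-case token numbering in favour of the adversary. The threshold \(\tau\) and several sequencing details of the simulator are assumptions here, so the percentage this sketch prints will not match Table 1 exactly; it is meant only to show the shape of the experiment.

```python
import heapq

ETA, F, Y, TAU, ROUNDS = 100.0, 0.8, 80.0, 100.0, 10_000   # TAU is an assumed threshold
w_u, w_a = 1000.0, 500.0              # spendable balances (token purchases lock funds)
u_tokens, a_tokens = [], []           # token numbers held by the user / adversary
next_token, made, frontrun = 0, 0, 0

for _ in range(ROUNDS):
    # Token purchases; worst case: the adversary's purchases are numbered first
    n_a = int(w_a // Y)
    for _ in range(n_a):
        heapq.heappush(a_tokens, next_token); next_token += 1
    w_a -= n_a * Y
    if w_u > Y:                                        # Algorithm 1: buy one token
        heapq.heappush(u_tokens, next_token); next_token += 1
        w_u -= Y

    # Algorithm 1: spend the lowest token on an MEV transaction once wealth <= TAU
    if w_u <= TAU and u_tokens:
        made += 1
        t_u = heapq.heappop(u_tokens)
        w_u += Y                                       # token fee refunded on use
        lower = [t for t in a_tokens if t < t_u]       # Algorithm 2: can we front-run?
        if lower:
            a_tokens.remove(max(lower)); heapq.heapify(a_tokens)
            w_a += Y + F * ETA                         # refund plus extracted value
            w_u += ETA - F * ETA
            frontrun += 1
        else:
            w_u += ETA

print(f"front-run {frontrun} of {made} MEV transactions "
      f"({100 * frontrun / max(made, 1):.1f}%)")
```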
The challenge for the adversary lies in guessing which transactions are most profitable and attacking them successfully. If the adversary waits too long in a phase without using tokens, they miss out on purchasing early tokens in the next phase. The user can generally exploit this in the next phase by using later tokens for low-value transactions and early tokens for high-value transactions. Several other efficient user policies are possible in this setting. Figure 3. User and adversary wealth for stochastic values of \(\eta\), drawn from Gaussian and Cauchy distributions. ### Fatal frontrunning attacks In a fatal frontrunning attack, the attacker's transaction executes while the user's transaction fails. This happens mostly in the case of arbitrages or loan liquidations. If the user discovers an arbitrage opportunity and issues a transaction, the adversary can observe the user's transaction, duplicate it, and issue its own transaction with a higher gas fee. In this case, the adversary's transaction executes while the user receives no profit; no matter what slippage the user specifies, the adversary captures the full profit. This leads us to two types of transactions: Type 1 transactions, which involve exchanges whose outcome can be controlled with \(f\) and \(\eta\), and Type 2 transactions, which are fatal, such as liquidations and arbitrage. Creating an MEV transaction is computationally expensive for the user: the more compute the user can afford, the higher-value MEV transactions they can find. If the user is not adequately rewarded for the compute spent, they will look for computationally cheaper MEV transactions, which in our experiments corresponds to low-\(\eta\) transactions. Thus, for Type 2 transactions, the user only engages with small \(\eta\). We ran our experiment with fatal frontrunning attacks included and find that we can prevent only a small percentage of these attacks; even so, the total wealth obtained by the user remains better than under the status quo. When fatal frontrunning attacks occur about 50% of the time, users are successfully attacked 70% of the time. This number decreases when we slightly modify Algorithm 1 so that the user only engages with \(\eta<100\) for Type 2 transactions, which reduces the percentage of frontruns significantly. In Figure 4, we include fatal frontrunning attacks 50% and 30% of the time and observe that users are still able to grow their wealth. ### Tokenization on real-world MEV rewards Here, we consider real-world MEV values extracted from the Ethereum blockchain and use a procedure similar to the stochastic-\(\eta\) case. Here too, Masquerade performs better than current practice. Figure 5 shows the total wealth of the users in this system, along with the successful attacks. ### Token Epochs For ease of analysis, we consider an alternate arrangement of the proposed model, as described in Section 5, in which token purchase and token spending are separated into epochs. This arrangement is equivalent to our method, in which tokens are purchased and spent continuously over a number of rounds, as visualized in Figure 6. We observe that the continuous arrangement lags the epoch-based one by a number of rounds. This is because the continuous case also allows the user to make non-MEV transactions, which the adversary does not attack; as a result, there is a slight delay in reaching the same amount of wealth, but the two arrangements are otherwise equivalent. 
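The following toy calculation illustrates this observation under simplifying assumptions (a fixed per-round reward and a fixed lag); it is only meant to show why a lagged copy of the same accumulation process differs from the original by a constant, as observed in Figure 6, and is not a model of the full protocol.

```python
def cumulative_wealth(rounds, eta=100.0, lag=0):
    """Toy accumulation process: one MEV reward of eta per round, except that
    the first `lag` rounds earn nothing (e.g. they are spent purchasing tokens
    or making non-MEV transactions)."""
    wealth, history = 0.0, []
    for r in range(rounds):
        if r >= lag:
            wealth += eta
        history.append(wealth)
    return history

epoch_separated = cumulative_wealth(40, lag=0)   # purchase and spending split into epochs
continuous      = cumulative_wealth(40, lag=5)   # continuous arrangement, delayed by a few rounds

# Once both processes are earning, their difference is constant: the continuous
# curve is simply a shifted copy of the epoch-separated one.
print({epoch_separated[r] - continuous[r] for r in range(5, 40)})   # {500.0}
```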
### Ablation studies We perform multiple ablation studies on the token mechanism for the users and the adversary, varying the token cost, adding token expiration, and removing token refunds, as described in the following scenarios: #### 6.7.1. Token Costs We consider token costs ranging from extremely cheap to extremely expensive. Since token costs are eventually returned to the users in the system, they do not influence the total wealth of the system. Cheaper tokens are affordable to more users, whereas expensive tokens are affordable to fewer users, which reduces the number of tokenized transactions that can be made in a single round; the adversary can exploit this when users can afford only a small number of tokens. We find that when tokens are reasonably priced, the adversary is unable to frontrun or backrun the user even when the user starts with a small amount of wealth. Figure 4. The wealth of the adversary when fatal frontrunning attacks are included (a) 50% and (b) 30% of the time, for \(f=0.1\), \(0.8\), and \(0.9\). Figure 5. Masquerade performs well even on real-world Ethereum data. #### 6.7.2. Token with Expiration In this case, tokens expire after a set number of rounds. If the user does not use a token within this limit, it expires and the user cannot recoup the cost spent on it. As a consequence, it is generally in the best interest of both the user and the adversary not to hoard tokens and to spend them before they expire. #### 6.7.3. Token with no Refund In this case, the user does not receive a refund when they make a tokenized transaction. Here, tokens need to be cheap in order to incentivize users to make tokenized transactions. ## 7. Related Work Common types of MEV include frontrunning, backrunning, and sandwich attacks. An adversary generally frontruns a transaction by offering a higher transaction fee so that their transaction executes before that of the honest user. Backrunning follows a similar strategy, except that the adversary's transaction is placed immediately after the user's. In a sandwich attack, the adversary places a pair of transactions, one right before and one right after the user's transaction, artificially manipulating asset prices and profiting at the user's expense. Without appropriate mitigation strategies, MEV poses a significant threat to users, causes network congestion, and may even lead to centralization. Several countermeasures to MEV have been proposed; we list the most popular classes of solutions below: * MEV Auction Platforms: MEV auction platforms are commonly used in post-merge Ethereum. The idea is that builders such as Flashbots (Brock et al., 2018), BloXroute (BloXroute, 2018), and MEV-Boost (Brock et al., 2018) assemble blocks from transactions they receive from users and the public mempool, as well as transactions the builder itself inserts to generate MEV. A user can specify their transaction ordering preferences to miners in exchange for compensation. 
These assembled blocks are then submitted to a block proposer along with a promised profit. This is a "commit-and-reveal" strategy, i.e., the proposer cannot see the ordering of the transactions or the block contents; they can only see the promised profit and some other metadata. The block proposer then selects the block they wish to propose. MEV auction platforms attempt to create a more competitive and efficient environment in which users have control over how their transactions are ordered and capture some of the value that would otherwise go to miners. These platforms can be beneficial for users who want to ensure their transactions are executed in a specific way or who want to benefit from MEV themselves. A key issue with this method is the reliance on a "trusted builder", although, in theory, any builder can be part of the network. Figure 6. The user wealth and the adversary wealth are equal at different rounds/epochs; the curves are simply shifted, and their difference is constant. * Time-based Ordering Solutions: These solutions establish an order over transactions by defining a set of properties that a transaction must satisfy in order to be included in a block. An example of this class of solutions is Hedera (Hedera, 2017), which uses the median to establish the timestamp of a received transaction. Fair ordering means that \(\mathtt{txn}_{u}\) is executed before \(\mathtt{txn}_{a}\) if the median received time of \(\mathtt{txn}_{u}\) is less than the median received time of \(\mathtt{txn}_{a}\). A problem with using median timestamps in this way is that they are susceptible to manipulation by the adversary. Other examples of receive-order fairness are Themis (Hedera, 2017) and Aequitas (Hedera, 2017), which require that if at least \(\gamma\) of the nodes receive transaction \(\mathtt{txn}_{u}\) before \(\mathtt{txn}_{a}\), then \(\mathtt{txn}_{u}\) must be included no later than \(\mathtt{txn}_{a}\). Wendy (Wededula et al., 2018) approaches this issue with relative fairness: if there exists some time \(t\) at which all honest nodes have seen transaction \(\mathtt{txn}_{u}\), and they only saw transaction \(\mathtt{txn}_{a}\) after \(t\), then \(\mathtt{txn}_{u}\) is executed before \(\mathtt{txn}_{a}\). This class of solutions exposes users to risk that depends on node latencies. * Content-agnostic Ordering: This class of solutions does not impose a particular ordering on transactions, as long as the ordering is independent of their content. Most of these algorithms encrypt all transactions, or use a trusted third party to hide them, wait for the transactions to be committed to the blockchain, and finally release the secret key to reveal the transactions and check their validity. An example is TEX (Hedera, 2017), used in a proof-of-work setting, where the user encrypts their transaction using timelock puzzles, with the understanding that the attacker cannot solve the puzzles faster than the user. This relies on a trusted custodian and thus requires users to regularly check that the custodian is not misbehaving. Tesseract (Hedera, 2017) requires the user to trust additional secure hardware, a Trusted Execution Environment (TEE), which is responsible for encrypting transactions and releasing them after they have been committed to a block. 
Most of these solutions remove ordering privileges from miners, but they introduce additional trust assumptions that may reduce decentralization in the network. ## 8. Conclusion In this paper, we have shown that the utility obtained under the tokenization system is better than the current utility, and we have suggested a possible reform of the system that requires no trusted builder or proposer. The token system gives the user a powerful way to mislead the adversary in the real world: the adversary faces the choice of spending a token now or saving it for a later transaction that may or may not provide better value. The adversary only knows that a token has been purchased and cannot decipher which token will be used for which transaction until it is too late. We have shown a large improvement in transaction unstealability for users in the system, thus improving the overall fairness of the network. One major drawback of Masquerade is that it does not completely eliminate MEV attacks. To a large extent, our algorithm prevents user losses and stops transaction-order manipulation by removing a significant amount of miner privilege; however, blind frontrunning is still possible in the system. An extension of this work is to look for lightweight, scalable solutions that build on top of Masquerade to completely eliminate MEV attacks. Another possible extension is to include dynamic behavior by the adversary and the user in response to the current policies.